• 0 Posts
  • 17 Comments
Joined 5 months ago
Cake day: November 20th, 2024





  • I don’t think overheating would cause random corruption (the CPU should throttle down when overheating, then shut down if the temperature gets too high even when throttled, but it should never produce an incorrect result of any computation), and the RAM will surely run at the standard 2133 MT/s speed on default settings - OP says they reset the BIOS settings to default between CPU swaps.



  • Nah, the kernel isn’t that important for apps - you can replace the kernel and update the massive Android framework to work with the new one relatively easily (you will need some Linux compatibility for native code that does syscalls on its own - see the sketch below - but that’s pretty much it; even WSL1 could do that).

    The real compatibility problem is all the APIs and system apps provided by Google that have no reasonable alternative in AOSP. Look at how incomplete projects like microG (an open-source reimplementation of Google Play Services) still are, and their only goal is to provide app compatibility on unofficial ROMs without installing the proper Google services.
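
    For anyone wondering what “native code that does syscalls on its own” looks like, here’s a minimal C sketch (plain Linux, nothing Android-specific assumed): it bypasses the libc wrapper and asks the kernel directly, and that raw interface is exactly the part a replacement kernel would have to keep compatible.

    ```c
    // Minimal "raw" Linux syscall from native code - the kind of call
    // that skips every userspace framework and talks to the kernel
    // directly through its syscall interface.
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void) {
        // syscall() invokes the kernel by syscall number, bypassing
        // the usual libc wrapper (getpid() in this case).
        long pid = syscall(SYS_getpid);
        printf("pid via raw syscall: %ld\n", pid);
        return 0;
    }
    ```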


  • Sure, but I don’t see how any of that disproves the current “M$ supremacy” among “normies” - the fact is that people who couldn’t care less about how their computers work will have a much easier time using Windows (and probably macOS) than any Linux distro. You don’t have to worry that some software won’t be available because of your choice of OS, and if you ever have a problem, it’s easy to find help.

    I haven’t used Windows in a decade on my personal computers, but as long as these two things hold true, it will always be my recommended OS for people who simply don’t care - I’m not going to spend my time doing free IT support for everyone I know and then get blamed every time something doesn’t work.









  • That’s more of a storage thing - RAM does much smaller transfers. For example, DDR5 memory has two independent 32-bit (4-byte) sub-channels with a minimum burst length of 16 transfers per “operation”, so it moves 64 bytes at once (or more). And CPUs don’t waste memory bandwidth by transferring more than absolutely necessary, as memory is often the bottleneck even without writing full pages.
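
    To make those numbers concrete, a throwaway C snippet with the figures above (assuming the standard DDR5 sub-channel layout):

    ```c
    #include <stdio.h>

    int main(void) {
        // DDR5: each module is split into two independent sub-channels,
        // each 32 bits wide, with a minimum burst length of 16 transfers.
        int width_bits   = 32;
        int burst_length = 16;

        // One burst on one sub-channel - conveniently one cache line.
        int bytes_per_burst = width_bits / 8 * burst_length;
        printf("bytes per burst per sub-channel: %d\n", bytes_per_burst); // 64
        return 0;
    }
    ```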

    The page size is relevant for memory protection (where the CPU stops the program and hands control back to the operating system if the program tries to do something it’s not allowed to do with that memory) and for virtual memory (implemented by the same hardware, though the two are theoretically independent concepts). The operating system has to maintain a table describing which memory the program may access and how, and with bigger pages that table can be much smaller (at the cost of wasted space if the program only needs a small amount of memory of a given kind).
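
    And a rough illustration of the “bigger pages, smaller table” trade-off (entry counts only - real x86-64 page tables are multi-level, but the scaling per mapped region is the same):

    ```c
    #include <stdio.h>

    int main(void) {
        // How many page-table entries does a 1 GiB mapping need?
        long long mapping    = 1LL << 30; // 1 GiB mapped by the program
        long long small_page = 4LL << 10; // 4 KiB pages
        long long huge_page  = 2LL << 20; // 2 MiB pages

        printf("4 KiB pages: %lld entries\n", mapping / small_page); // 262144
        printf("2 MiB pages: %lld entries\n", mapping / huge_page);  // 512

        // The flip side: with 2 MiB pages, even a tiny allocation of a
        // given kind still occupies a whole 2 MiB page.
        return 0;
    }
    ```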



  • My two cents: the only time I had an issue with Btrfs, it refused to mount until I ran a FS repair tool (and it was fine afterwards - and I knew which files needed to be checked for possible corruption). When I had an issue with ext4, I didn’t know about it until I tried to access an old file and it was 0 bytes - a completely silent corruption I only discovered probably months after it actually happened.

    Both filesystems failed, but one at least notified me about it, while the second just “pretended” everything was fine while it ate my data.
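
    For the curious, the mechanism behind that difference is data checksumming: Btrfs stores a checksum for every data block and verifies it on read, while ext4 doesn’t checksum file contents. A toy C sketch of the idea (a hypothetical block layout and hash, not Btrfs’s actual on-disk format):

    ```c
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    // Toy model: a data block with its checksum stored alongside.
    struct block {
        uint8_t  data[4096];
        uint32_t csum; // written together with the data
    };

    // FNV-1a hash standing in for a real block checksum (CRC32C etc.).
    static uint32_t checksum(const uint8_t *p, size_t n) {
        uint32_t h = 2166136261u;
        for (size_t i = 0; i < n; i++)
            h = (h ^ p[i]) * 16777619u;
        return h;
    }

    int main(void) {
        struct block b = {0};
        memcpy(b.data, "important data", 14);
        b.csum = checksum(b.data, sizeof b.data); // "write" path

        b.data[100] ^= 0x01; // simulate silent bit rot on disk

        // "read" path: verify before handing the data to the application.
        if (checksum(b.data, sizeof b.data) != b.csum) {
            fprintf(stderr, "checksum mismatch - refusing to return bad data\n");
            return 1;
        }
        puts("data ok");
        return 0;
    }
    ```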