Apple forked WebKit from KDE’s KHTML back in 2001. For all intents and purposes, they didn’t switch to it; they developed it.
Counterpoint: SMS shouldn’t exist, and RCS is our best shot at replacing it right now
ARM still needs a custom kernel and completely different drivers to even boot, because every manufacturer can implement it completely differently.
Dunno why you’re getting downvoted, this is correct. ARM makes it comparatively very expensive to maintain an OS across a variety of CPU models. The specialization required by each Cortex revision (and beyond that, each manufacturer adaptation) is too intense for a world trying to conserve resources.
x86 hardware is standardized in a way where you don’t need to port an OS to each machine; it just runs with generic drivers.
That being said, I’m honestly shocked your friend doesn’t run into issues. Several ISA extensions have been released for x86 since the Core 2 Duo days, and I have to imagine software incompatibilities appear semi-frequently. Running Windows 10 on that can’t be a good experience.
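This is exactly why careful software probes for extensions at runtime instead of assuming them. A minimal sketch using the GCC/Clang builtins (picking AVX2 as an arbitrary example of a post-Core-2 extension; the fallback strings are just illustrative):

```cpp
#include <cstdio>

int main() {
    __builtin_cpu_init();  // populate the CPU feature flags (GCC/Clang, x86 only)

    // A Core 2 Duo predates AVX entirely, so both checks fail there and
    // well-behaved software falls back to its SSE-era baseline path.
    if (__builtin_cpu_supports("avx2")) {
        std::puts("AVX2 available: taking the fast path");
    } else if (__builtin_cpu_supports("avx")) {
        std::puts("AVX available: taking the middle path");
    } else {
        std::puts("no AVX: falling back to the baseline path");
    }
}
```

Software that skips this check and unconditionally executes AVX instructions is exactly what dies with an illegal-instruction fault on hardware that old.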
If there were an option presented to users, once the device got below 80% battery health, to slow down the system to make daily battery life longer
This isn’t why they did it. Degraded Li-ion batteries cannot sustain their rated voltage at high currents due to increased internal resistance. Sufficiently undervolted CPUs/memory cells produce errors (specifically bit flips), which can rather quickly lead to memory corruption and a crash.
Reducing the CPU frequency (thereby reducing the peak current draw) is practically necessary in the face of a degraded battery. Various laptops were infamous for not doing this; the result was a ~20-30 minute battery life, as the voltage drop became too great once the battery charge dropped below 80-90%. Within the context of a smartphone, neglecting to use the remaining 80-90% would make it basically useless.
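To put rough numbers on it (every figure below is made up purely for illustration, loosely in the range of a single Li-ion cell): the voltage under load is V_load = V_oc − I·R_internal, so as internal resistance climbs with age, capping peak current (i.e. CPU frequency) is the only lever left to stay above the brownout threshold.

```cpp
#include <cstdio>

int main() {
    // Hypothetical single-cell Li-ion figures, purely illustrative.
    const double v_open_circuit = 3.7;  // volts, at partial charge
    const double v_brownout     = 3.0;  // below this, the SoC starts misbehaving
    const double peak_current   = 3.0;  // amps at full CPU frequency
    const double throttled      = 1.2;  // amps with the frequency cap applied

    // V_load = V_oc - I * R_internal; R_internal rises as the cell ages.
    for (double r_internal : {0.05, 0.15, 0.30}) {
        double v_full   = v_open_circuit - peak_current * r_internal;
        double v_capped = v_open_circuit - throttled   * r_internal;
        std::printf("R=%.2f ohm: full speed %.2f V (%s), throttled %.2f V (%s)\n",
                    r_internal,
                    v_full,   v_full   >= v_brownout ? "ok" : "BROWNOUT",
                    v_capped, v_capped >= v_brownout ? "ok" : "BROWNOUT");
    }
}
```

With these made-up numbers, the aged cell (0.30 ohm) browns out at full speed but stays comfortably above the threshold once throttled, which is the whole trade-off in one line of arithmetic.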
What Apple (and the rest of the smartphone industry, at this point) really needs to do is make their batteries replaceable.
Until Cloudflare responds to the post, it is IMO most beneficial to assume that the OP is being truthful and forthright. Doing so puts pressure on Cloudflare to either clarify or rectify the situation, whereas treating Cloudflare as though they are above suspicion accomplishes nothing.
After all, OP is very much the little guy here.
I’m shocked that this needs a /s, wow
Gas demand is absolutely price sensitive lol
That’s actually not what I was referring to, although the unified memory architecture is certainly more power efficient for workloads that mix CPU and GPU work. The cost of transferring to/from dedicated GPU memory is (unsurprisingly) quite large.
Here is a great article on the topic. Basically, x86 spends a comparatively enormous amount of energy ensuring that its strong memory guarantees are not violated, even in cases where such violations would not affect program behavior. As it turns out, the majority of modern multithreaded programs only occasionally rely on these guarantees, and including special (expensive) instructions to provide these guarantees when necessary is still beneficial for performance/efficiency in the long run.
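To make that concrete, here’s a minimal C++ sketch of the acquire/release pattern those “special (expensive) instructions” implement. On ARM, only the annotated operations pay for ordering (they compile to ldar/stlr), while relaxed accesses stay freely reorderable; on x86, the hardware enforces strong ordering on essentially every load/store whether the program needs it or not:

```cpp
#include <atomic>
#include <thread>
#include <cstdio>

std::atomic<int>  data{0};
std::atomic<bool> ready{false};

void producer() {
    data.store(42, std::memory_order_relaxed);
    // Release: everything above must be visible before 'ready' flips.
    // On x86 this ordering is (mostly) free because every store already
    // behaves like a release store under TSO; on ARM it costs one stlr,
    // and only here, where the program actually asked for it.
    ready.store(true, std::memory_order_release);
}

void consumer() {
    // Acquire: pairs with the release store, so 'data' is guaranteed visible.
    while (!ready.load(std::memory_order_acquire)) { /* spin */ }
    std::printf("%d\n", data.load(std::memory_order_relaxed));  // prints 42
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
}
```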
For additional context, the special sauce behind Apple’s Rosetta 2 is that the M family of SoCs actually implements an x86-style (TSO) memory-model mode that is selectively enabled when executing dynamically translated multithreaded x86 programs.
It is always quite amusing to see a billion-dollar corporation beaten at its own game :)
More information/context, if you’re curious:
Rosetta 2 in particular isn’t full emulation because the API is the same for both architectures; it is only dynamic ISA translation. I expect that Prism will be slightly closer to full emulation; there is simply no way Microsoft will reimplement all of the legacy Windows APIs on ARM.
WINE is a great example of something that is also not a full emulator, but for the opposite reason: it does not perform any ISA translation or hardware emulation, but rather only syscall (API) translation.
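As a toy illustration of what syscall/API translation (and nothing else) means: the application’s own x86 instructions run natively, and only the platform entry points get rerouted. The CreateFileCompat/CloseHandleCompat names below are hypothetical; this is the shape of the idea, not how WINE is actually implemented:

```cpp
// Toy "API translation" layer: guest application code runs natively;
// only the platform API calls are re-routed onto POSIX equivalents.
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

using HANDLE = int;  // stand-in for a Windows-style opaque handle

// Hypothetical shim: a Win32-flavored signature implemented over POSIX open().
HANDLE CreateFileCompat(const char* path, bool writable) {
    return open(path, writable ? (O_WRONLY | O_CREAT) : O_RDONLY, 0644);
}

void CloseHandleCompat(HANDLE h) { close(h); }

int main() {
    HANDLE h = CreateFileCompat("/tmp/demo.txt", true);
    if (h >= 0) {
        write(h, "hi\n", 3);  // the "application logic" itself is untouched
        CloseHandleCompat(h);
        std::puts("translated API call succeeded");
    }
}
```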
Oh yeah, clearly I did not read the article well. Still, it doesn’t mean what you think it does.
First, Yuzu is more of an alternative API implementation than an emulator in this setup. The stock Switch OS and API implementation have been entirely replaced with Linux and the Yuzu implementation of the API. Given recent performance uplifts in the Linux kernel, I’m not surprised that Linux+Yuzu beats the first-party implementation.
Second, the use of the word “emulation” in the above thread is really a misnomer: Rosetta 2, Prism and the like all perform what is called dynamic ISA translation. Yuzu need not perform ISA translation when running on ARM hardware.
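For a sense of what dynamic ISA translation looks like, here’s a toy sketch: a made-up two-instruction guest ISA translated into host closures, with a translation cache so each guest location is translated only once. Real translators like Rosetta 2 and Prism emit actual host machine code in basic blocks, but the translate-once-then-reuse shape is the same:

```cpp
#include <cstdint>
#include <cstdio>
#include <functional>
#include <unordered_map>
#include <vector>

// Made-up guest ISA: 0x01 = add immediate to accumulator, 0x02 = halt.
struct GuestInsn { uint8_t opcode; int32_t imm; };

using HostBlock = std::function<void(int64_t& acc, bool& halted)>;

// Translation cache: guest PC -> already-translated host code.
std::unordered_map<size_t, HostBlock> cache;

HostBlock translate(const GuestInsn& insn) {
    switch (insn.opcode) {
        case 0x01: return [imm = insn.imm](int64_t& acc, bool&) { acc += imm; };
        case 0x02: return [](int64_t&, bool& halted) { halted = true; };
        default:   return [](int64_t&, bool& halted) { halted = true; };
    }
}

int main() {
    std::vector<GuestInsn> program = {{0x01, 40}, {0x01, 2}, {0x02, 0}};
    int64_t acc = 0;
    bool halted = false;
    for (size_t pc = 0; pc < program.size() && !halted; ++pc) {
        auto it = cache.find(pc);
        if (it == cache.end())  // translate once, reuse on every later visit
            it = cache.emplace(pc, translate(program[pc])).first;
        it->second(acc, halted);
    }
    std::printf("accumulator = %lld\n", (long long)acc);  // prints 42
}
```

Note there’s no hardware being modeled here at all, which is why calling this “emulation” is the misnomer: the guest’s instructions are rewritten for the host, and everything else runs natively.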
So if any developer wants to support modern devices, they have to port to that new hardware.
See, you say that, but it doesn’t seem like Rosetta 2 is going anywhere any time soon, which means developers aren’t pressured to port their software to ARM.
This article fails to mention the single biggest differentiator between x86 and ARM: their memory models. Considering the sheer amount of everyday software that is going multithreaded, this is a huge issue, and the reason why ARM drastically outperforms x86 running software like modern web browsers.
Apple controls the whole ecosystem on Macs.
In what sense? The vast majority of macOS software is downloaded/installed from the internet, just like Windows.
I don’t see it working because the Windows APIs are a dozen self-oxidizing dumpster fires scattered into the wind, but that’s a different story.
Yuzu can exhibit superior performance because the Switch is rocking the Tegra X1 from 2015. Yuzu absolutely cannot beat the Switch with contemporary hardware and/or comparable power consumption.
Hah, cool fantasy bro. GPT-9’s first output was
As an AI, I cannot predict whether humans can solve climate change. Is there anything else you would like help with?
Alignmentmaxxed
I really hope that on-device AI becomes competitive soon. It’s nice to see that on-device is the direction large portions of the industry are going, but cloud AI just uses way too much energy. Not to mention the resources required to manufacture millions of large-die GPUs.
It’s probably naive to think that the corporations that created this problem will solve it, but it honestly seems like the most feasible path forward in the near term. I certainly don’t expect the world’s governments to be effective at regulating AI any time soon.
IIRC dude went home and played Civ all night
I mean, nobody intrinsically cares how many competitors there are, so long as all the content can be retrieved from a single source. Of course that doesn’t mean people wouldn’t care if a single company were to abuse their monopoly, e.g. by charging unreasonable rates or forcing ads (looking at you, cable).
It’s worth remembering that monopolies aren’t inherently illegal in the U.S., or anywhere else really; it’s not against the law to have the best product by a mile, nor should it be. What’s illegal is anticompetitive abuse, which in this case would look like signing exclusive rights to all content and then providing a shitty service.
lol
This is not indicative of how well RCS will work as its widespread adoption continues to mature. I do understand your frustration; I’d just expect the growing pains to last a while longer. Remember how shitty USB-C was for the first few years?