

How was he allowed to publish this on AJ?


demands for interoperability with non-Apple products
the rules were not applied to Samsung
I wonder if Samsung owns a walled garden where their products aren’t interoperable with those from other manufacturers.


I expected him to say something celebratory, or at least to make light of it. The only thing he said is that the ‘MAGA gang’ are capitalising on his death.


dark corners of the internet. “Clearly there was a lot of gaming going on,”
deep, dark internet, the Reddit culture,
Well this part seems trustworthy.


Don’t you have that backwards? Without TSMC’s outstanding technology, the island’s value decreases, both for China and for the USA. Conventional wisdom is that reduced tensions also reduce the risk of war.


300i https://www.bilibili.com/video/BV15NKJzVEuU/
M4 https://github.com/itsmostafa/inference-speed-tests
It’s comparable to an M4, and at most a single order of magnitude faster than a ~1000 euro 9960X, not multiple. And if we’re considering the option of buying used, since this is a brand-new product that’s less available in western markets, a CPU-only build with an EPYC and more RAM will probably be a better local LLM computer for the cost of two of these plus a basic computer.
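For anyone who wants to sanity-check these comparisons themselves: decode speed for a dense model is roughly memory bandwidth divided by model size, since every weight gets read once per generated token. A rough back-of-envelope sketch - all the bandwidth figures below are illustrative placeholders, not measured numbers for any of these products:

```python
# Back-of-envelope decode-speed ceiling for dense-model inference,
# assuming generation is memory-bandwidth-bound (every weight is read
# once per generated token). All numbers are illustrative placeholders,
# not benchmarks.

def tokens_per_second(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper bound: bandwidth divided by bytes touched per token."""
    return bandwidth_gb_s / model_size_gb

# Hypothetical bandwidth figures - substitute measured numbers before
# drawing conclusions about the 300i, M4, or any EPYC build.
systems = {
    "accelerator with LPDDR4X (example)": 200.0,  # GB/s
    "M4-class unified memory (example)": 120.0,   # GB/s
    "8-channel EPYC (example)": 180.0,            # GB/s
}

model_size_gb = 40.0  # e.g. a ~70B model quantised to ~4 bits

for name, bw in systems.items():
    print(f"{name}: ~{tokens_per_second(bw, model_size_gb):.1f} tok/s ceiling")
```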


That’s still faster than your expensive RGB XMP gamer-RAM DDR5 CPU-only system, and, depending on what you’re running, you can saturate the buses independently, doubling the speed and matching a 5060 or thereabouts. I disagree that you can categorise the speed as negating the capacity; they’re different axes. You can run bigger models on this. Smaller models will run faster on a cheaper Nvidia. You aren’t getting 5080 performance and 6x the RAM for the same price, but I don’t think that’s a realistic ask either.


I’m not saying you can deploy these in place of Nvidia cards where the tooling is built with Nvidia in mind. I’m saying that if you’re writing code, you can do machine learning projects without CUDA, including training.
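A minimal sketch of the kind of thing I mean, using PyTorch’s device-agnostic API - nothing below touches CUDA, and “mps” is just one example of a non-Nvidia backend:

```python
# Minimal training step that never touches CUDA. "mps" (Apple silicon)
# is just an example of a non-Nvidia backend; plain CPU works the same.
import torch
from torch import nn

device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

model = nn.Linear(16, 1).to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

x = torch.randn(64, 16, device=device)  # toy batch
y = torch.randn(64, 1, device=device)

for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()  # autograd is backend-agnostic
    opt.step()

print(f"final loss on {device}: {loss.item():.4f}")
```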


I agree with your conclusion, but these are LPDDR4X, not DDR4 SDRAM; it’s significantly faster. The lack of fans should also be seen as a positive, since it means they’re confident the cards aren’t going to melt. It costs them very little to add visible active cooling to a 1000+ euro product.


You can run llama.cpp on CPU. LLM inference doesn’t need any features that only GPUs have, which is why it’s possible to make even simpler NPUs that can still run the same models; GPUs just tend to be faster. If the GPU in question is not faster than an equally priced CPU, you should use the CPU (better OS support).
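For example, the llama-cpp-python bindings let you pin everything to the CPU explicitly - the model path below is just a placeholder for whatever GGUF file you have locally:

```python
# CPU-only inference through the llama-cpp-python bindings.
# n_gpu_layers=0 keeps every layer on the CPU.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/example-7b-q4_k_m.gguf",  # placeholder path
    n_gpu_layers=0,  # force CPU-only execution
    n_threads=8,     # roughly match your physical core count
)

out = llm("Explain memory bandwidth in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```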
Edit: I looked at a bunch of real-world prices and benchmarks, and read the manual from Huawei, and my new conclusion is that this is the best product on the market if you want to run a model at modest speed that doesn’t fit in 32GB but does in 96GB. Running multiple in parallel seems to range from unsupported to working poorly, so you should only expect to use one.
Original rest of the comment, made with the assumption that this was slower than it is, but had better drivers:
The only benefit to this product over a CPU is that you can slot in multiple of them and they parallelise without needing to coordinate anything with the OS. It’s also a very linear cost increase as long as you have the PCIe lanes for it. A home user with enough money for one or two of these would be much better served spending it on a fast CPU and 256GB of system RAM.
If not AI, then what use case do you think this serves better?


CUDA is not equivalent to AI training. Nvidia offers useful developer tools for using their hardware, but you don’t have to use them. You can train on any GPU or even a CPU. The projects you’ve looked at (?) just chose CUDA because it was the best fit for the hardware they had on hand, and they were able to tolerate the vendor lock-in.
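A quick way to see that CUDA is just one backend among several in PyTorch (note that on ROCm builds, AMD GPUs also report through the torch.cuda namespace):

```python
# CUDA is one backend among several; the CPU path is always there.
import torch

print("CUDA:", torch.cuda.is_available())          # Nvidia (AMD too on ROCm builds)
print("MPS: ", torch.backends.mps.is_available())  # Apple silicon

# Pick whatever accelerator exists and fall back to CPU - the training
# code itself doesn't need to change.
device = (
    "cuda" if torch.cuda.is_available()
    else "mps" if torch.backends.mps.is_available()
    else "cpu"
)
print("training would run on:", device)
```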


260 km/h is still good. As long as it’s significantly faster than driving, it’s deserving of some praise.


Does it have any sort of on-board NPU to make it AI-oriented?


‘Unalive’ started being widely used around 2020-2021.


Also the idea that NATO caused the Bosnian Genocide is laughable. The bombing is the only reason it stopped.

https://thegrayzone.com/2025/08/04/us-ethnic-cleansing-serbs-croat/


Though he was named directly and is still just an economist.
Well, he’s a famous guy who knows a lot of important people, and he has directly worked with the CIA and the US government before. And Davel wasn’t saying “The Lancet published this conspiracy”, but that the guy whom The Lancet - which is very trustworthy - trusts and says is qualified to weigh in is independently pushing it. So Jeffrey was borrowing his authority from having been associated with The Lancet, and he has some authority already from his celebrity status and previous work, not necessarily from his hard skills (economics).
That’s a far cry from “CIA bioweapon” like the OOP believes.
Okay, I believe that. I haven’t read the article and don’t want to weigh in; I haven’t investigated the claim, so anything I add can only be nonsense. I wasn’t pushing a Fort Detrick bioweapon conspiracy. I just wanted to clarify Davel’s comment because I didn’t feel your objection to it was fair - or, if it was just a question with no position, then I answered the question. Feel free to discuss the article with Davel, since you have both read it.


The authority being highlighted wasn’t Jeffrey Sachs on his own, but The Lancet.


I’d think there’d be at least as much pro-China content on a Chinese platform.
There is a lot of Chinese state-sponsored propaganda on TikTok, as well as pro-China speech by unaffiliated users. But this is just as true of YouTube and Twitter. (The state-sponsored propaganda I’m highlighting isn’t particularly insidious; it’s things like student exchanges, paid travel bloggers, or Chinese news outlets spending budget to create English-language content.) You aren’t more likely to come across it on TikTok than on western platforms, because China doesn’t control the algorithm. TikTok was forked off a Chinese product, but it’s controlled by Oracle and the USA in terms of tuning and moderation. The Chinese just collect rent.
Now if your angle isn’t that TikTok pushes those things, but just that kids use it and kids are impressionable, then I don’t have any objections to what you’re saying. I haven’t seen any Chinese state-sponsored content that plays well with kids, but I wouldn’t expect to either, since my recommendation feed looks different (and I don’t use TikTok).


Would it really be a shock to anyone if a global superpower spread propaganda to be viewed more positively by people around the world? Russia and the USA do it all the time. Why wouldn’t China?
Completely reasonable.
I’d say it’s quite likely.
But this isn’t reasonable at all, and it’s what you’re trying to defend.
Lemmy has no value. It’s a waste of resources. Your assertion wasn’t that China has propaganda. I know they do; there are hundreds of officially disclosed initiatives. Your assertion is that Lemmy users aren’t genuine.
You also implied that TikTok - a platform globally moderated by the USA - is a hotbed for PRC propaganda, which isn’t reasonable either.
Yes, but why Lemmy? There is more mature forum software out there. Your proposal doesn’t benefit from federation, and it doesn’t benefit from a UI optimised for discussing external links. You can of course continue using your real-ID account and interact with the Fediverse that way, but no one else is doing real-ID, so I don’t see that as a positive.