

Even the small local AI niche hates ChatGPT, heh.
Clickbait.
There may be thought in a sense.
An analogy might be a static biological “brain” custom-grown to predict a list of possible next words in a block of text. It’s thinking, sorta. Maybe it could even recognize itself in a mirror. That doesn’t mean it’s self-aware, though: it’s an unchanging organ.
And if one wants to go down the rabbit hole of “well, there are different types of sentience, the lines blur,” yada yada, with the endpoint of that being to treat these things like they are…
All ML models are static tools.
For now.
It’s about 5000 headlines back :(
Unfortunately, most are moving to Discord :(
There’s plenty of good journalism eking by out there, it’s just buried by feeds and spam.
making the most with what you have
That was, indeed, the motto of ML research for a long time. Just hacking out more efficient approaches.
It’s people like Altman who introduced the idea of not innovating and just scaling up what you already have. Hence many in the research community know he’s full of it.
Oh, and to answer this specifically: Nvidia has been used in ML research forever. It goes back to at least 2008 and stuff like the desktop GTX 280 and early CUDA. Maybe earlier.
Most “AI accelerators” are basically the same thing these days: overgrown desktop GPUs. They have pixel shaders, ROPs, video encoders and everything, with the one partial exception being the AMD MI300X and beyond (which are missing ROPs).
CPUs were used, too. In fact, Intel made specific server SKUs for giant AI users like Facebook. See: https://www.servethehome.com/facebook-introduces-next-gen-cooper-lake-intel-xeon-platforms/
Machine learning has been a field for years, as others said, yeah, but Wikipedia would be a better place to expand on the topic. In a nutshell, it’s largely about predicting outputs based on trained input examples.
It doesn’t have to be text. For example, astronomers use it to find certain kinds of objects in raw data feeds. Object recognition (identifying things in pictures with little bounding boxes) is an old art at this point. Series prediction models are a thing, and LanguageTool uses a tiny model to detect commonly confused words for grammar checking. And yes, image hashing is another, though not entirely machine learning based. IDK what Tineye does in their backend, but there are some more “oldschool” approaches using more traditional programming techniques, generating signatures for images that can be easily compared in a huge database.
You’ve probably run ML models in photo editors, your TV, your phone (voice recognition), desktop video players or something else without even knowing it. They’re tools.
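If it helps make “predict outputs from trained input examples” concrete, here’s a tiny sketch using scikit-learn; the dataset and classifier are arbitrary picks for illustration, not anything those products actually run:

```python
# Toy example: train on labeled examples, then predict outputs for unseen inputs.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

digits = load_digits()  # small 8x8 images of handwritten digits, with labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)                      # "trained input examples"
print(model.predict(X_test[:5]))                 # predicted outputs for new inputs
print("accuracy:", model.score(X_test, y_test))
```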
Separately, image similarity metrics (like LPIPS or SSIM) that measure the difference between two images as a number (where, say, 1 would be a perfect match and 0 totally unrelated) are common components in machine learning pipelines. These are not usually machine learning based themselves, barring a few exceptions like LPIPS (which runs images through a pretrained network) and VMAF (which Netflix developed for video).
Text embedding models do the same with text. They are ML models.
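For a rough illustration of both (assuming scikit-image and sentence-transformers are installed; the embedding model name is just a common small one I picked, not anything specific):

```python
import numpy as np
from skimage import data
from skimage.metrics import structural_similarity as ssim
from sentence_transformers import SentenceTransformer

# Image similarity: SSIM of a picture vs. a noisy copy of itself (~1.0 = near identical).
img = data.camera()
noisy = np.clip(img.astype(np.int16) + np.random.normal(0, 20, img.shape), 0, 255).astype(np.uint8)
print("SSIM:", ssim(img, noisy, data_range=255))

# Text similarity: cosine similarity between embedding vectors from a small ML model.
model = SentenceTransformer("all-MiniLM-L6-v2")
a, b = model.encode(["the cat sat on the mat", "a kitten rests on a rug"])
print("cosine:", float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))))
```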
LLMs (aka models designed to predict the next ‘word’ in a block of text, one at a time, as we know them) in particular have an interesting history, going back to (if I even remember the name correctly) BERT in Google’s labs. There were also tiny LLMs people ran on personal GPUs before ChatGPT was ever a thing, like the infamous Pygmalion 6B roleplaying bot, a finetune of GPT-J 6B. They were primitive and dumb, but it felt like witchcraft back then (before AI Bro marketers poisoned the well).
Other people.
Make connections in your little circle/tribe; make people happy. It’s our biology, it’s what we evolved to do, and it’s what you leave behind.
Isn’t that a textbook Fourth Amendment case?
I know they supposedly have some kind of holding period, and this has been happening to minorities forever. Technically the mother requested she remain with her children, and there’s no mention of her citizenship status in any of the reporting, other than that she likely had a legal visa or something. But she was denied counsel and held. A congresswoman and her office are witnesses.
It feels so dramatic that you’d think the ACLU or someone would jump on it as a test case.
Like, not as a personal dig, but the overwhelming number of people complaining about the DNC just aren’t up to date on what’s happening
Fair point! I am not up to date TBH.
I guess I’m pretty jaded too. The DNC getting things together!? What is this?
That’s optimistic.
It’s assuming the Dem Party doesn’t sabotage their own candidates. It’s assuming they don’t campaign like it’s 1960 again. It’s assuming social media will somehow be reined in.
It’s assuming there will even be a fair environment for an election, instead of the government (and whoever’s conflated with them) putting thumbs on the scales kinda like Hungary, or worse. It doesn’t take much pressure to sway elections in environments this polarized.
In the future, when we’re transcendent tentacled robofurries doing poly in virtual space, on drugs (think Yivo from Futurama), we will look back in confusion at why so many people hate homosexuality so much. Like… don’t they have other things to worry about?
Or humanity will be all dead, I guess.
And I’m talking about the mega conservatives protesting this; at least the Vatican is baby stepping and trying to minimize their cruelty.
Funny thing is, Teslas already have something more sophisticated. They could pipe FSD’s diagnostics to a HUD as a more polished, standard ‘overlay’ for the driver, literally running on the car’s own hardware. You’d think Tesla execs would know about that, since it’s literally their business and predates the LLM craze.
…But no.
This is so stupid.
To me, “AI” in a car would be like highlighting pedestrians in a HUD, or alerting you if an unknown person messes with the car, or maybe adjusting mood lighting based on context. Or safety features.
…Not a chatbot.
I’m more “pro” (locally hostable, task specific) machine learning than like 99% of Lemmy, but I find the corporate obsession with cloud instruct textbots bizarre. It would be like every food corp living and breathing succulents. Cacti are neat, but they don’t need to be strapped to every chip bag, every takeout, every pack of forks.
A lot, but less than you’d think! Basically an RTX 3090/Threadripper system with a lot of RAM (192GB?)
With this framework, specifically: https://github.com/ikawrakow/ik_llama.cpp?tab=readme-ov-file
The “dense” part of the model can stay on the GPU while the experts can be offloaded to the CPU, and the whole thing can be quantized to ~3 bits average, instead of 8 bits like the full model.
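Rough napkin math on why that matters (weights only, no KV cache; the parameter counts are approximate public figures, and real quant mixes vary per tensor, so treat it all as ballpark):

```python
# Approximate weight sizes for a DeepSeek-class MoE at different bit widths.
GIB = 1024**3

def weight_gib(params: float, bits_per_weight: float) -> float:
    return params * bits_per_weight / 8 / GIB

total_params = 671e9   # ~671B total parameters (approximate)
active_params = 37e9   # only ~37B are active per token (approximate)

print(f"8-bit full model:        {weight_gib(total_params, 8):.0f} GiB")    # ~625 GiB
print(f"~3-bit average:          {weight_gib(total_params, 3):.0f} GiB")    # ~234 GiB
print(f"~2.5-bit average:        {weight_gib(total_params, 2.5):.0f} GiB")  # ~195 GiB
print(f"active weights @ ~3-bit: {weight_gib(active_params, 3):.0f} GiB")   # ~13 GiB touched per token
```

That’s why the aggressive 2-3 bit quants are the target for a 24GB GPU plus a big-RAM box, and why offloading the experts to the CPU is tolerable: only a small slice of the weights actually gets read for each token.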
That’s just a hack for personal use, though. The intended way to run it is on a couple of H100 boxes, serving many, many, many users at once. LLMs run more efficiently when they serve requests in parallel: e.g., generating tokens for 4 users isn’t much slower than generating them for 2, and DeepSeek explicitly architected it to be really fast at scale. It is “lightweight” in a sense.
…But if you have a “sane” system, it’s indeed a bit large. The best I can run on my 24GB VRAM system are 32B-49B dense models (like Qwen 3 or Nemotron), or ~70B mixture-of-experts models (like the new Hunyuan 70B).
DeepSeek, now that is a filtered LLM.
The web version has a strict filter that cuts it off. Not sure about API access, but raw Deepseek 671B is actually pretty open. Especially with the right prompting.
There are also finetunes that specifically remove China-specific refusals. Note that Microsoft actually added safety training to “improve its risk profile”:
https://huggingface.co/microsoft/MAI-DS-R1
https://huggingface.co/perplexity-ai/r1-1776
That’s the virtue of being an open-weights LLM. Overfiltering is not a problem; you can tweak it to do whatever you want.
Grok losing the guardrails means it will be distilled internet speech deprived of decency and empathy.
Instruct LLMs aren’t trained on raw data.
It wouldn’t be talking like this if it were just trained on randomized, augmented conversations, or even mostly Twitter data. They cherry-picked “anti-woke” data to placate Musk real quick, and the result effectively drove the model crazy. It has all the signatures of a bad finetune: specific overused phrases, common obsessions, going off-topic, and so on.
…Not that I don’t agree with you in principle. Twitter is a terrible source for data, heh.
Nitpick: it was never ‘filtered’
LLMs can be trained to refuse excessively (which is kinda stupid and has been shown to make them dumber), but the correct term is ‘biased’. If it were filtered, it would literally give empty responses for anything deemed harmful, or at least noticeably take some time to retry.
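A hypothetical sketch of the distinction, with made-up function names (this is not how any specific product is wired up):

```python
# A true output filter sits *outside* the model and blanks flagged responses.
# Bias, by contrast, lives inside the weights: the model still answers, just slanted.
def fake_generate(prompt: str) -> str:
    # Stand-in for the LLM itself (and whatever biases were trained into it).
    return f"model answer to: {prompt}"

def deemed_harmful(text: str) -> bool:
    # Stand-in for a separate moderation pass (keyword list, classifier, etc.).
    blocked_terms = ["example-blocked-topic"]
    return any(term in text.lower() for term in blocked_terms)

def filtered_reply(prompt: str) -> str:
    draft = fake_generate(prompt)
    if deemed_harmful(draft):
        return ""       # a real filter gives you nothing (or a canned refusal)
    return draft

print(filtered_reply("anything"))
```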
They trained it to praise Hitler, intentionally. They didn’t remove any guardrails. Not that Musk acolytes would know any different.
It’s as if IG is controlled by a billionaire so cowardly and manipulative that both the far left and the far right hate his guts.