Yeah, there’s a mysticism that’s sprung up around LLMs, as if they’re some magic black box rather than a well-understood construct, to the point where you can buy books on Amazon on how to write one from scratch.
It’s not like ChatGPT or Claude appeared from nowhere; the people who built them give talks about them all the time.
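To give a sense of what “from scratch” means here: the core operation of a transformer, self-attention, fits in a couple dozen lines. A rough illustrative sketch in NumPy (single head, no layer norm, residuals, or MLP block, so nothing like a production model):

```python
# Rough sketch of single-head causal self-attention, the core op inside an LLM.
# Real models add multi-head splits, residuals, layer norm, and MLP blocks,
# but none of those pieces are mysterious either.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, Wq, Wk, Wv):
    """tokens: (seq_len, d_model); Wq, Wk, Wv: (d_model, d_head)."""
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])   # how strongly each token attends to each other token
    mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores[mask] = -1e9                       # causal mask: no peeking at future tokens
    return softmax(scores) @ v                # weighted mix of value vectors

rng = np.random.default_rng(0)
seq_len, d_model, d_head = 5, 16, 8
tokens = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_head)) for _ in range(3))
print(self_attention(tokens, Wq, Wk, Wv).shape)  # (5, 8)
```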
What a load of horseshit lol
EDIT: Sorry, I’ll expand. When AI researchers give talks about how AI works, they say things like, “on a fundamental level, we don’t actually know what’s going on.”
Also, even if there really are books on how to write one from scratch, the basic understanding of what happens deep within the neural network is still a “magic black box”. They’ll crack it open eventually, but not yet.
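And the distinction is concrete: the training recipe (gradient descent on a loss) is fully understood, while the weights it produces are not. A toy sketch of that gap, using a tiny hand-rolled network on XOR rather than an LLM, but the situation is the same in kind:

```python
# The training procedure is textbook: define a loss, follow its gradient.
# What you can't do afterwards is read meaning off the learned numbers.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])                 # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

for _ in range(5000):                                  # plain gradient descent
    h = sigmoid(X @ W1 + b1)                           # hidden layer
    p = sigmoid(h @ W2 + b2)                           # prediction
    dz2 = (p - y) / len(X)                             # grad of cross-entropy loss at the output
    dz1 = (dz2 @ W2.T) * h * (1 - h)                   # backprop through the hidden layer
    W2 -= lr * (h.T @ dz2); b2 -= lr * dz2.sum(0)
    W1 -= lr * (X.T @ dz1); b1 -= lr * dz1.sum(0)

print(np.round(p.ravel(), 2))  # usually ~[0, 1, 1, 0]: it learned XOR
print(np.round(W1, 2))         # every weight visible and exact, yet *why these
                               # numbers* solve XOR takes real reverse-engineering
```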
The idea some people have that AI is simple, stupid, and a passing fad is naive.
If these AI researchers really have no idea how these things work, then how could they possibly improve the models or the techniques?
For example, they now claim that after recent upgrades these LLMs can “reason” about problems. How did they go and add that capability if the whole thing is a black box?