LLMs are famously NOT understood, even by the scientists creating them. We’re still learning how they process information.
Moreover, we most definitely don’t know how human intelligence works, or how close or far we are from replicating it. I suspect we’ll be really disappointed by the human mind once we figure out the fundamentals of intelligence.
They most definitely are understood. The basics of what they’re doing don’t change. Garbage in, garbage out.