The “superintelligence” in question: the same old tech, but with a larger context window, which will make it hallucinate a bit less often.
I love the way you phrased that. Accurate.
Not really. The headline is garbage, but that statement is nowhere near accurate unless you know nothing about the actual topic.
Anyone who has spent any time with AI knows that it can easily lose track of context and essentially “hallucinate” a backstory to fill in whatever it lost.
I have experienced this while discussing specific parts of a story I was writing. When I asked the AI to recall a certain detail, it made the entire thing up but swore it remembered it correctly.
I believe in this case the breakthrough is the ability to reason out math.
Explain then pls.
Not really.
If its name is Q*, then it is likely a combination of Q-learning and A* search, which would suggest an approach closer to DeepMind’s AlphaZero than to a transformer-based LLM.
In that context, getting it to solve high-school-level math questions is pretty nuts.
Though the details matter, and right now all the articles discussing it are missing them, so we’ll have to wait and see.
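To make that speculation concrete: nobody outside OpenAI has confirmed what Q* actually is, but one literal reading of “Q-learning plus A* search” could look something like the toy sketch below. It trains a tabular Q-function on a tiny grid world, then reuses the learned values as the heuristic in an A*-style best-first search. The grid, rewards, and function names are all invented for illustration; AlphaZero itself pairs a learned network with Monte Carlo tree search, not this.

```python
# Toy illustration only: NOT OpenAI's Q*, just one literal reading of
# "Q-learning + A* search" (learn a value table, then use it to steer search).
import heapq
import random

SIZE = 4                                        # 4x4 grid, start (0, 0), goal (3, 3)
GOAL = (SIZE - 1, SIZE - 1)
ACTIONS = [(0, 1), (1, 0), (0, -1), (-1, 0)]    # right, down, left, up


def step(state, action):
    """Deterministic move; bumping a wall leaves the agent in place."""
    r, c = state
    dr, dc = action
    nr, nc = r + dr, c + dc
    return (nr, nc) if 0 <= nr < SIZE and 0 <= nc < SIZE else state


def train_q(episodes=2000, alpha=0.5, gamma=0.9, epsilon=0.1):
    """Plain tabular Q-learning: reward is -1 per step, 0 on reaching the goal."""
    q = {(r, c): [0.0] * len(ACTIONS) for r in range(SIZE) for c in range(SIZE)}
    for _ in range(episodes):
        state = (0, 0)
        while state != GOAL:
            if random.random() < epsilon:                        # explore
                a = random.randrange(len(ACTIONS))
            else:                                                # exploit
                a = max(range(len(ACTIONS)), key=lambda i: q[state][i])
            nxt = step(state, ACTIONS[a])
            reward = 0.0 if nxt == GOAL else -1.0
            q[state][a] += alpha * (reward + gamma * max(q[nxt]) - q[state][a])
            state = nxt
    return q


def q_guided_search(q, start=(0, 0)):
    """A*-style best-first search using -max_a Q(s, a) as the distance-to-go
    heuristic, so states the learned Q-table rates highly get expanded first."""
    frontier = [(0.0, 0.0, start, [start])]      # (f = g + h, g, state, path)
    seen = set()
    while frontier:
        _, cost, state, path = heapq.heappop(frontier)
        if state == GOAL:
            return path
        if state in seen:
            continue
        seen.add(state)
        for action in ACTIONS:
            nxt = step(state, action)
            if nxt in seen:
                continue
            h = -max(q[nxt])                     # learned value as heuristic
            heapq.heappush(frontier, (cost + 1 + h, cost + 1, nxt, path + [nxt]))
    return None


if __name__ == "__main__":
    q_table = train_q()
    print(q_guided_search(q_table))              # a path of states from (0, 0) to (3, 3)
```

The point isn’t the grid world; it’s the shape of the idea: a learned value function steering a classical search. That is also the loose sense in which the AlphaZero comparison gets made, and it is very different from a transformer just predicting the next token.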