If these AI researchers really have no idea how these things work, then how can they possibly improve the models or techniques?
Like, how can they claim after recent upgrades that these LLMs can now "reason" about problems? How did they actually go and add that capability if the whole thing is a black box?