AbuTahir@lemm.ee to Technology@lemmy.world · English · edited 3 hours ago
Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well. (archive.is)
313 comments · cross-posted to: apple_enthusiast@lemmy.world
MangoCats@feddit.it · English · 6 hours ago
My impression of LLM training and deployment is that it's actually massively parallel in nature - which could be implemented one instruction at a time - but isn't in practice.
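The parallelism the comment describes can be illustrated with a toy matrix-vector product (a minimal sketch, not actual LLM code): every output element of a neural-net layer is independent of the others, so the same math can be computed either one multiply-accumulate at a time or as a single vectorized operation that hardware fans out across many execution units.

```python
import numpy as np

# Toy "layer": y = W @ x, with made-up sizes for illustration.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
x = rng.standard_normal(8)

# Sequential: one scalar multiply-accumulate per instruction.
y_seq = np.zeros(4)
for i in range(4):
    for j in range(8):
        y_seq[i] += W[i, j] * x[j]

# Parallel-friendly: one vectorized call, which BLAS or a GPU
# executes across many lanes at once.
y_par = W @ x

print(np.allclose(y_seq, y_par))  # same result either way
```

Both strategies produce identical results; the difference is only in how the work is scheduled, which is why training in practice uses the vectorized form.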