AbuTahir@lemm.ee to Technology@lemmy.world · English · edited, 19 minutes ago
Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well. (archive.is)
Cross-posted to: apple_enthusiast@lemmy.world
Knock_Knock_Lemmy_In@lemmy.world · English · 5 hours ago
A well-trained model should consider both types of lime. Failure is likely down to temperature and other model settings. This is not a measure of intelligence.
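For readers unfamiliar with the temperature setting the comment refers to: temperature rescales a model's output logits before sampling, so a low value makes the model near-deterministic (it almost always picks its top candidate) while a high value lets lower-probability answers through. The sketch below is a minimal illustration of that mechanism; the function name and the toy logits are invented for the example and do not come from any particular model's API.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Sample a token index from logits after temperature scaling.

    Low temperature sharpens the distribution toward the top-scoring
    token; high temperature flattens it, letting unlikely tokens through.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Toy logits for three candidate answers; index 0 scores highest.
logits = [2.0, 1.0, 0.1]

# Near-zero temperature: the top candidate wins essentially every time.
cold_picks = [sample_with_temperature(logits, temperature=0.05) for _ in range(100)]
print(cold_picks.count(0))  # almost always 100
```

At temperature 1.0 the same logits would spread the picks across all three candidates, which is why two runs of the same prompt can disagree — the point the comment is making about failures being a settings issue rather than evidence about reasoning.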