Actually, a lot of non-LLM AI development (and even LLMs, in a sense) is based fundamentally on concepts of negative and positive reinforcement.
In such setups… pain and pleasure are essentially the scoring rubric for a generated strategy, and fairly often, in group scenarios, something resembling mutual trust and concern for others, an ‘empathy’, arises as a stable strategy… especially if agents can detect or are made aware of the pain or pleasure of other agents, and if the goals can only be reliably achieved through cooperation.
This really shouldn’t be surprising… our own human (mammalian, really) empathy is fundamentally just a biological sort of ‘answer’ to the same sort of ‘question.’
It is actually quite possible to base an AI more fundamentally on a simulation of empathy than on a simulation of expansive knowledge.
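To make that concrete, here’s a toy sketch (my own illustration, nothing like a production training setup): two bandit-style Q-learners play an iterated prisoner’s dilemma, and each agent’s reinforcement signal is its own payoff plus an `empathy` fraction of the other agent’s payoff… i.e. it ‘feels’ some share of the other’s pain or pleasure.

```python
import random

# (my move, their move) -> my payoff; C = cooperate, D = defect
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def train(empathy, rounds=20000, alpha=0.1, eps=0.1):
    # One action-value table per agent (stateless game, so just a value per move).
    q = [{"C": 0.0, "D": 0.0}, {"C": 0.0, "D": 0.0}]
    for _ in range(rounds):
        moves = []
        for i in range(2):
            if random.random() < eps:                 # explore
                moves.append(random.choice("CD"))
            else:                                     # exploit
                moves.append(max(q[i], key=q[i].get))
        raw = [PAYOFF[moves[0], moves[1]], PAYOFF[moves[1], moves[0]]]
        for i in range(2):
            # 'Empathy' as reward shaping: my payoff plus a share of the other's.
            shaped = raw[i] + empathy * raw[1 - i]
            q[i][moves[i]] += alpha * (shaped - q[i][moves[i]])
    return [max(t, key=t.get) for t in q]

random.seed(0)
for e in (0.0, 1.0):
    print(f"empathy={e}: learned strategies = {train(e)}")
```

With empathy at 0 this typically learns mutual defection; push it past ~2/3 (where 3 + 3e beats the 5 you get from exploiting a cooperator) and mutual cooperation becomes the dominant, stable strategy. The payoff numbers and the shaping trick are standard textbook fare here, not taken from any specific system.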
Unfortunately, the people in charge of throwing human money at LLM AI are largely narcissistic sociopaths… so of course they chose to emulate themselves, not the basic human empathy that they lack.
Their wealth only exists and is maintained through their construction and refinement of elaborate systems for confusing, misdirecting, and destroying the broad empathy of normal humans.
At the end of the day, LLM/AI/ML/etc. is still just a glorified computer program. It also happens to be absolutely terrible for the environment.
Insert “fraction of our power” meme here
Yes, they’re all computer programs; no, they’re not all as spectacularly energy-, water-, and money-intensive, or as reliant on mass plagiarism, as LLMs.
AI is a much, much more varied field of research than just LLMs… or rather, it was, until the entire industry decided to go all in on what, five years ago, was just one of many radically different approaches… to the point that people now basically think ‘AI’ and ‘LLM’ mean the same thing.