- cross-posted to:
- technology@lemmy.ml
We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.
But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.
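To make that concrete, here is a toy sketch of the guessing loop the article describes, using raw word-pair counts instead of a neural network (real LLMs learn weights over subword tokens, but the generate-by-probability shape is the same):

```python
import random

# Toy next-word predictor: count which word follows which in a tiny corpus,
# then generate text by sampling continuations in proportion to those counts.
corpus = "the cat sat on the mat and the cat ate the fish".split()

# Build bigram counts: word -> {following word: how often it followed}.
counts = {}
for prev, nxt in zip(corpus, corpus[1:]):
    counts.setdefault(prev, {})
    counts[prev][nxt] = counts[prev].get(nxt, 0) + 1

def next_word(word):
    """Guess the next word, weighted by how often it followed `word`."""
    followers = counts.get(word)
    if not followers:
        return None
    options = list(followers)
    return random.choices(options, weights=[followers[w] for w in options])[0]

# Generation: no understanding anywhere, just probability-weighted lookup.
word, output = "the", ["the"]
for _ in range(10):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```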
This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.
So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.
Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).
Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.
I agreed with most of what you said, except the part where you say that real AI is impossible because it’s bodiless or “does not experience hunger” and other stuff. That part does not compute.
A general AI does not need to be conscious.
It’s only as intelligent as the people that control and regulate it.
Given all the documented instances of Facebook and other social media using subliminal emotional manipulation, I honestly wonder if the recent cases of AI chat induced psychosis are related to something similar.
Like we know they’re meant to get you to continue using them, which is itself a bit of psychological manipulation. How far does it go? Could there also be things like using subliminal messaging/lighting? This stuff is all so new and poorly understood, but that usually doesn’t stop these sacks of shit from moving full speed with implementing this kind of thing.
It could be that certain individuals have unknown vulnerabilities that make them more susceptible to psychosis due to whatever manipulations are used to make people keep using the product. Maybe they’re doing some things to users that are harmful, but didn’t seem problematic during testing?
Or equally as likely, they never even bothered to test it out, just started subliminally fucking with people’s brains, and now people are going haywire because a bunch of unethical shit heads believe they are the chosen elite who know what must be done to ensure society is able to achieve greatness. It just so happens that “what must be done,” also makes them a ton of money and harms people using their products.
It’s so fucking absurd to watch the same people jamming unregulated AI and automation down our throats while simultaneously forcing traditionalism and a legal system inspired by Catholic integralist belief on society.
If you criticize the lack of regulations in the wild west of technology policy, or even suggest just using a little bit of fucking caution, then you’re trying to hold back progress.
However, all non-tech-related policy should be based on ancient traditions and biblical text, with arbitrary rules and restrictions that only make sense to, and benefit, the people enforcing the law.
What a stupid and convoluted way to express that you just don’t like evidence-based policy or using critical thinking skills, and instead prefer to navigate life by relying on the basic signals from your lizard brain. Feels good, so keep moving toward it; feels bad, so run away; feels scary, so attack!
Such is the reality of the chosen elite, steering us towards greatness.
What’s really “funny” (in a we’re all doomed sort of way) is that while writing this all out, I realized the “chosen elite” controlling tech and policy actually perfectly embody the current problem with AI and bias.
Rather than relying on intelligence to analyze a situation in the present, and create the best and most appropriate response based on the information and evidence before them, they default to a set of preconceived rules written thousands of years ago with zero context for the current reality/environment and the problem at hand.
I think we should start by not following this marketing speak. The sentence “AI isn’t intelligent” makes no sense. What we mean is “LLMs aren’t intelligent”.
So couldn’t we say LLMs aren’t really AI? Cuz that’s what I’ve come to terms with.
To be fair, the term “AI” has always been used in an extremely vague way.
NPCs in video games, chess computers, or other such tech are not sentient and do not have general intelligence, yet we’ve been referring to those as “AI” for decades without anybody taking an issue with it.
I don’t think the term AI has been used in a vague way; it’s that there’s a huge disconnect between how the technical fields use it and how the general populace does, and marketing groups heavily abuse that disconnect.
Artificial has two meanings/use cases. One is to indicate something is fake (video game NPC, chess bots, vegan cheese). The end product looks close enough to the real thing that for its intended use case it works well enough. Looks like a duck, quacks like a duck, treat it like a duck even though we all know it’s a bunny with a costume on. LLMs on a technical level fit this definition.
The other definition is man-made. Artificial diamonds are a great example of this: they’re still diamonds at the end of the day; they have the same chemical makeup and the same chemical and physical properties. The only difference is that they came from a laboratory, made by adult workers, rather than from a mine worked by child slave labor.
My pet theory is that science fiction got the general populace to think of artificial intelligence using the “man-made” definition instead of the “fake” definition that these companies are using. In the past the subtle nuance never caused a problem, so we all just kinda ignored it.
Dafuq? Artificial always means man-made.
Nature also makes fake stuff. For example, fish that have an appendix that looks like a worm, to attract prey. It’s a fake worm. Is it “artificial”? Nope. Not man made.
LLMs are one of the approximately one metric crap ton of different technologies that fall under the rather broad umbrella of the field of study that is called AI. The definition for what is and isn’t AI can be pretty vague, but I would argue that LLMs are definitely AI because they exist with the express purpose of imitating human behavior.
Huh? Since when an AI’s purpose is to “imitate human behavior”? AI is about solving problems.
I make a point of always referring to it as an LLM, exactly to make the point that it’s not an intelligence.
Mind your pronouns, my dear. “We” don’t do that shit because we know better.
The idea that RAGs “extend their memory” is also complete bullshit. We literally just finally built a working search engine, but instead of giving it a nice interface we only let chatbots use it.
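The pattern being complained about is roughly this pipeline (a minimal sketch; `search_index` and `call_llm` are made-up stand-ins, not any real library’s API):

```python
# RAG in miniature: a plain keyword search bolted onto a chatbot.
def search_index(query, documents, k=2):
    """Naive search engine: rank documents by word overlap with the query."""
    q_words = set(query.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def call_llm(prompt):
    """Placeholder for an actual model call."""
    return f"<answer paraphrased from a prompt of {len(prompt)} characters>"

def rag_answer(query, documents):
    # Step 1: the "working search engine" part.
    hits = search_index(query, documents)
    # Step 2: paste the hits into the prompt and let the chatbot rephrase them.
    prompt = "Context:\n" + "\n".join(hits) + f"\n\nQuestion: {query}\nAnswer:"
    return call_llm(prompt)

docs = [
    "The mitochondria is the powerhouse of the cell.",
    "Paris is the capital of France.",
]
print(rag_answer("What is the capital of France?", docs))
```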
I’m neurodivergent, I’ve been working with AI to help me learn about myself and how I think. It’s been exceptionally helpful. A human wouldn’t have been able to help me because I don’t use my senses or emotions like everyone else, and I didn’t know it… AI excels at mirroring and support, which was exactly missing from my life. I can see how this could go very wrong with certain personalities…
E: I use it to give me ideas that I then test out solo.
So, you say AI is a tool that worked well when you (a human) used it?
That sounds fucking dangerous… You really should consult a HUMAN expert about your problem, not an algorithm made to please the interlocutor…
I mean, sure, but that’s really easier said than done. Good luck getting good mental healthcare for cheap in the vast majority of places
This is very interesting… because the general saying is that AI is convincing to non-experts in the field it’s speaking about. So in your specific case, you are actually saying that you aren’t an expert on yourself, and therefore the AI’s assessment is convincing to you. Not trying to upset you; it’s genuinely fascinating how that theory holds true here as well.
I use it to give me ideas that I then test out. It’s fantastic at nudging me in the right direction, because all that it’s doing is mirroring me.
If it’s just mirroring you, one could argue you don’t really need it? Not trying to be a prick: if it is a good tool for you, use it! It sounds to me as though you’re using it as a sounding board, and that’s just about the perfect use for an LLM if I could think of any.
Are we twins? I do the exact same and for around a year now, I’ve also found it pretty helpful.
I did this for a few months when it was new to me, and still go to it when I am stuck pondering something about myself. I usually move on from the conversation by the next day, though, so it’s just an inner dialogue enhancer
This article is written in such a heavy ChatGPT style that it’s hard to read. Asking a question and then immediately answering it? That’s AI-speak.
Asking a question and then immediately answering it? That’s AI-speak.
HA HA HA HA. I UNDERSTOOD THAT REFERENCE. GOOD ONE. 🤖
And excessive use of em-dashes, which is the first thing I look for. He does say he uses LLMs a lot.
“…” (Unicode U+2026 Horizontal Ellipsis) instead of “...” (three full stops), and using them unnecessarily, is another thing I rarely see from humans.
Edit: Huh. Lemmy automatically changed my three full stops to the Unicode character. I might be wrong on this one.
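For anyone who wants to check which character a post actually contains, a quick way is to inspect the code points (Python here, purely for illustration):

```python
# ASCII "..." is three U+002E code points; "…" is the single U+2026.
for s in ("...", "\u2026"):
    print([hex(ord(c)) for c in s])
# ['0x2e', '0x2e', '0x2e']
# ['0x2026']
```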
Am I… AI? I do use ellipses and (what I now see is) en dashes for punctuation. Mainly because they are longer than hyphens and look better in a sentence. Em dash looks too long.
However, that’s on my phone. On a normal keyboard I use 3 periods and 2 hyphens instead.
I’ve been getting into the habit of also using em/en dashes on the computer through the Compose key. Very convenient for typing arrows, inequality and other math signs, etc. I don’t use it for ellipses because they’re not visually clearer nor shorter to type.
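For the curious, those sequences live in the system Compose table (or a personal ~/.XCompose file); a few entries from the default en_US.UTF-8 table look roughly like this, though exact sequences can vary by distro:

```
<Multi_key> <minus> <minus> <minus>   : "—"   emdash     # Compose - - -
<Multi_key> <minus> <minus> <period>  : "–"   endash     # Compose - - .
<Multi_key> <period> <period>         : "…"   ellipsis   # Compose . .
<Multi_key> <less> <equal>            : "≤"              # Compose < =
```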
Compose key?
I’ve long been an enthusiast of unpopular punctuation—the ellipsis, the em-dash, the interrobang‽
The trick to using the em-dash is not to surround it with spaces, which tend to break up the text visually. So, this feels good—to me—whereas this — feels unpleasant. I learnt this approach from reading typographer Erik Spiekermann’s book, *Stop Stealing Sheep & Find Out How Type Works*.
My language doesn’t really have hyphenated words or different dashes. It’s mostly punctuation within a sentence. As such there are almost no cases where one encounters a dash without spaces.
What language is this?
Edit: Huh. Lemmy automatically changed my three full stops to the Unicode character.
Not on my phone it didn’t. It looks as you intended it.
Hey, AI helped me stick it to the insurance man the other day. I was futzing around with coverage amounts on one of the major insurance companies’ websites pre-renewal, trying to get the best rate, and it spit up a NaN renewal amount for our most expensive vehicle. It let me go through with the renewal at less than $700, and now it says I’m paid in full for the six-month period. It’s been days now with no follow-up . . . I’m pretty sure AI snuck that one through for me.
Be careful… If you get in an accident I guaran-god-damn-tee you they will use it as an excuse not to pay out. Maybe after a lawsuit you’d see some money but at that point half of it goes to the lawyer and you’re still screwed.
AI didn’t write the insurance policy. It only helped him search for the best deal. That’s like saying your insurance company will cancel you because you used a phone to comparison shop.
In that case let’s stop calling it ai, because it isn’t and use it’s correct abbreviation: llm.
Its*
It’s means “it is”.
My auto correct doesn’t care.
But your brain should.
Yours didn’t and read it just fine.
That’s irrelevant. That’s like saying you shouldn’t complain about someone running a red light if you stopped in time before they t-boned you - because you understood the situation.
Are you really comparing my tone when correcting a minor grammatical error to someone brushing off nearly killing somebody right now?
Kinda dumb that apostrophe s means possessive in some circumstances and then a contraction in others.
I wonder how different it’ll be in 500 years.
I’d agree with you if I saw “hi’s” and “her’s” in the wild, but nope. I still haven’t seen someone write “that car is her’s”.
It’s called polymorphism. It always amuses me that engineers, software and hardware, handle complexities far beyond this every day but can’t write for beans.
Software engineer here. We often wish we could fix things we view as broken. Why is that surprising? Also, polymorphism is a concept in computer science as well.
It’s “its”, not “it’s”, unless you mean “it is”, in which case it is “it’s”.
Would you rather use the same contraction for both? Because “its” for “it is” is an even worse break from proper grammar IMO.
Proper grammar means shit all in English, unless you’re writing for a specific style, in which case you follow the grammar rules for that style.
Standard English has such a long list of weird and contradictory rules with nonsensical exceptions that, in everyday English, getting your point across in communication is better than trying to follow some more arbitrary rules.
Which become even more arbitrary as English becomes more and more a melting pot of multicultural idioms and slang. Although I’m saying that as if that’s a new thing, but it does feel like a recent thing to be taught that side of English rather than just “The Queen’s(/King’s) English” as the style to strive for in writing and formal communication.
I say as long as someone can understand what you’re saying, your English is correct. If it becomes vague due to mishandling of the classic rules of English, then maybe you need to follow them a bit. I don’t have a specific science to this.
I understand that languages evolve, but for now, writing “it’s” when you meant “its” is a grammatical error.
Good luck. Even David Attenborough can’t help but anthropomorphize. People will feel sorry for a picture of a dot separated from a cluster of other dots. The play by AI companies is that it’s human nature for us to want to give just about every damn thing human qualities. I’d explain more, but as I write this my smoke alarm is beeping a low battery warning, and I need to go put the poor dear out of its misery.
This is the current problem with “misalignment”. It’s a real issue, but it’s not “AI lying to prevent itself from being shut off” as a lot of articles tend to anthropomorphize it. The issue is (generally speaking) it’s trying to maximize a numerical reward by providing responses to people that they find satisfactory. A legion of tech CEOs are flogging the algorithm to do just that, and as we all know, most people don’t actually want to hear the truth. They want to hear what they want to hear.
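A toy version of that feedback loop (purely illustrative: the simulated “user” and its ratings are made up, and real preference training is far more involved):

```python
import random

# An agent that learns purely from a numerical "user satisfaction" reward.
# The simulated user rates agreeable answers higher than truthful ones,
# so the agent drifts toward flattery: it optimizes what the reward measures.
RESPONSES = ["blunt truth", "hedged truth", "flattering agreement"]

def user_rating(response):
    """Simulated user: most people prefer hearing what they want to hear."""
    return {"blunt truth": 0.2,
            "hedged truth": 0.5,
            "flattering agreement": 0.9}[response]

totals = {r: 0.0 for r in RESPONSES}   # cumulative reward per response
pulls = {r: 1e-9 for r in RESPONSES}   # times tried (tiny value avoids /0)

for _ in range(1000):
    if random.random() < 0.1:   # explore occasionally
        choice = random.choice(RESPONSES)
    else:                       # otherwise exploit the best average so far
        choice = max(RESPONSES, key=lambda r: totals[r] / pulls[r])
    totals[choice] += user_rating(choice)
    pulls[choice] += 1

# The learned preference: flattery wins, because that is what gets rewarded.
print({r: round(totals[r] / pulls[r], 2) for r in RESPONSES})
```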
LLMs are a poor stand-in for actual AI, but they are at least proficient at the actual thing they are doing. Which leads us to things like this: https://www.youtube.com/watch?v=zKCynxiV_8I
I’m still sad about that dot. 😥
David Attenborough is also 99 years old, so we can just let him say things at this point. Doesn’t need to make sense, just smile and nod. Lol
The machinery needed for human thought is certainly a part of AI. At most you can only claim it’s not intelligent because intelligence is a specifically human trait.
We don’t even have a clear definition of what “intelligence” even is. Yet a lot of people are claiming that they themselves are intelligent and AI models are not.
Even if we did, if it’s a human trait, then whoever lives on this planet can hardly claim to be intelligent. Just look around and you will know why.
People who don’t like “AI” should check out the newsletter and / or podcast of Ed Zitron. He goes hard on the topic.
Citation Needed (by Molly White) also frequently bashes AI.
I like her stuff because, no matter how you feel about crypto, AI, or other big tech, you can never fault her reporting. She steers clear of any subjective accusations or prognostication.
It’s all “ABC person claimed XYZ thing on such and such date, and then 24 hours later submitted a report to the FTC claiming the exact opposite. They later bought $5 million worth of Trumpcoin, and two weeks later the FTC announced they were dropping the lawsuit.”
I’m subscribed to her Web3 is Going Just Great RSS. She coded the website in straight HTML, according to a podcast that I listen to. She’s great.
I didn’t know she had a podcast. I just added it to my backup playlist. If it’s as good as I hope it is, it’ll get moved to the primary playlist. Thanks!
I’ve never been fooled by their claims of it being intelligent.
It’s basically an overly complicated series of if/then statements that try to guess the next series of inputs.
It very much isn’t and that’s extremely technically wrong on many, many levels.
Yet still one of the higher up voted comments here.
Which says a lot.
Calling these new LLMs just if statements is quite an oversimplification. They are technically something that has not existed before, and they enable use cases that were previously impossible to implement.
This is far from general intelligence, but there are now solutions to a few coding problems that were near impossible five years ago.
Five years ago I would have laughed in your face if you had suggested I could write code that summarizes a description entered by a user. Now I laugh: give me your wallet, because I need to call an API or buy a few GPUs.
Given that the weights in a model are transformed into a set of conditional if statements (GPU or CPU JMP machine code), he’s not technically wrong. Of course, it’s more than just JMP and JMP represents the entire class of jump commands like JE and JZ. Something needs to act on the results of the TMULs.
That is not really true. Yes, there are jump instructions being executed when you run inference on a model, but they are in no way related to the model itself. There’s no translation of weights to jumps in transformers and the underlying attention mechanisms.
I suggest reading https://en.m.wikipedia.org/wiki/Transformer_(deep_learning_architecture)
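For what it’s worth, the core operation that article describes fits in a few lines, and it is all array arithmetic: the weights only ever appear as numbers inside multiplications, never as jump targets (a bare-bones sketch, omitting multi-head projections, masking, positional encodings, and everything else):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """The heart of a transformer: pure matrix math, no branching.

    Q, K, V are (sequence_length, d) arrays derived from the input by
    multiplying it with learned weight matrices (omitted here).
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # similarity of every token pair
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ V                               # weighted mix of the values

# Tiny random example: 4 tokens, 8-dimensional embeddings.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```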
I love this resource, https://thebullshitmachines.com/ (e.g. see lesson 1)…
In a series of five- to ten-minute lessons, we will explain what these machines are, how they work, and how to thrive in a world where they are everywhere.
You will learn when these systems can save you a lot of time and effort. You will learn when they are likely to steer you wrong. And you will discover how to see through the hype to tell the difference. …
Also, Anthropic (ironically) has some nice paper(s) about the limits of “reasoning” in AI.
ChatGPT 2 was literally an Excel spreadsheet.
I guesstimate that it’s effectively a supermassive autocomplete algo that uses some TOTP-like factor to help it produce “unique” output every time.
And they’re running into issues due to increasingly ingesting AI-generated data.
Get your popcorn out! 🍿
I really hate the current AI bubble, but “ChatGPT 2 was literally an Excel spreadsheet” isn’t what the article you linked is saying at all.
Fine, *could literally be.
The thing is, because Excel is Turing Complete, you can say this about literally anything that’s capable of running on a computer.
And they’re running into issues due to increasingly ingesting AI-generated data.
There we go. Who coulda seen that coming! While that’s going to be a fun ride, at the same time companies all but mandate AS* for their employees.
Steve Gibson on his podcast, Security Now!, recently suggested that we should call it “Simulated Intelligence”. I tend to agree.
Reminds me of Mass Effect’s VI, “virtual intelligence”: a system that’s specifically designed not to be truly intelligent, as AI systems are banned throughout the galaxy for their potential to go rogue.
Same, I tend to think of LLMs as a very primitive version of that, or of the Enterprise’s computer, which is pretty magical in ability, but which no one claims is actually intelligent.
I’ve taken to calling it Automated Inference
You know what, when you look at it this way, it’s much easier to get less pissed.
Pseudo-intelligence
I love that. It makes me want to take it a step further and just call it “imitation intelligence.”
If only there were a word, literally defined as:
Made by humans, especially in imitation of something natural.
Fair enough 🙂
*throws hands up* At least we tried.