We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which token (a word, or a fragment of one) will come next in a sequence, based on the data it has been trained on.
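
In code terms, that guessing loop is tiny. Here is a toy sketch in Python, with a hand-made lookup table standing in for billions of learned weights (illustrative only; a real transformer computes these probabilities rather than storing them):

```python
import random

# Toy bigram "language model": for each token, the probability of the
# next token, as estimated from training text. Real LLMs do the same
# next-token prediction over tens of thousands of tokens, with the
# probabilities computed by billions of learned parameters.
NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "mat": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.4, "ran": 0.6},
    "sat": {"on": 0.9, "down": 0.1},
    "on": {"the": 1.0},
}

def generate(token, steps=5):
    out = [token]
    for _ in range(steps):
        probs = NEXT_TOKEN_PROBS.get(out[-1])
        if probs is None:  # no learned continuation: stop
            break
        tokens = list(probs)
        # Sample the next token in proportion to its probability:
        # "guessing what comes next", nothing more.
        out.append(random.choices(tokens, weights=[probs[t] for t in tokens])[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the mat"
```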

This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

Philosopher David Chalmers calls the question of how physical processes in the body and brain give rise to subjective experience the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal mental states with sensory and bodily signals (changes in heart rate, sweating and much more).

Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

https://archive.ph/Fapar

  • Geodad@lemmy.world · 61↑ 27↓ · 1 day ago

    I’ve never been fooled by their claims of it being intelligent.

    It’s basically an overly complicated series of if/then statements that tries to guess what comes next in a series of inputs.

    • kromem@lemmy.world · 30↑ 7↓ · 17 hours ago

      It very much isn’t and that’s extremely technically wrong on many, many levels.

      Yet still one of the higher up voted comments here.

      Which says a lot.

      • El Barto@lemmy.world · 3↑ · 3 hours ago

        I’ll be pedantic, but yeah. It’s all transistors all the way down, and transistors are pretty much chained if/then switches.

      • Blue_Morpho@lemmy.world · 4↑ · edited · 7 hours ago

        Given that the weights in a model are transformed into a set of conditional if statements (GPU or CPU JMP machine code), he’s not technically wrong. Of course, it’s more than just JMP; JMP stands in here for the entire class of jump instructions, like JE and JZ. Something needs to act on the results of the TMULs.
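
        As a minimal sketch of what “acting on the results” looks like (plain Python, illustrative only; real inference runs fused GPU kernels), here is one layer: the multiply-accumulate is where the weights live, and the ReLU activation afterwards is the closest thing the data meets to an if/then, usually computed branchlessly as max(0, x):

        ```python
        # Illustrative single layer: matrix multiply, then ReLU.
        def layer(inputs, weights, biases):
            outputs = []
            for row, b in zip(weights, biases):  # one output neuron per weight row
                # Multiply-accumulate: this is what the tensor units do.
                acc = sum(w * x for w, x in zip(row, inputs))
                # ReLU: a branchless max, not a JE/JZ over the weights.
                outputs.append(max(0.0, acc + b))
            return outputs

        print(layer([1.0, 2.0], [[0.5, -1.0], [2.0, 0.25]], [0.0, -1.0]))  # -> [0.0, 1.5]
        ```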

          • Blue_Morpho@lemmy.world · 1↑ · 2 hours ago

            > That is not really true. Yes, there are jump instructions being executed when you run inference on a model, but they are in no way related to the model itself.

            The model is data. It needs to be operated on to get information out. That means lots of JMPs.

            If someone said viewing a gif is just a bunch of if-elses, that would also be true. The fact that the data in the gif isn’t itself a bunch of if-elses isn’t relevant.

            Executing LLMs is particularly JMP-heavy. That’s why you need massive amounts of fast RAM: caching doesn’t help them.
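
            To make the gif analogy concrete, a sketch in Python (the pixel values and threshold are made up for illustration): the data contains no control flow at all; every if/else belongs to the code that interprets it, and that is where the compiled JMP-class instructions live:

            ```python
            # Data, like model weights or the bytes of a gif: no control flow in here.
            pixels = [0, 255, 0, 128]

            rendered = []
            for p in pixels:             # loop control: a conditional jump once compiled
                if p > 127:              # the viewer's if/else, not the gif's
                    rendered.append("#")
                else:
                    rendered.append(".")
            print("".join(rendered))     # -> ".#.#"
            ```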

            • tmpod@lemmy.pt · 1↑ · 1 hour ago

              You’re correct, but that’s like saying manufacturing a car is just bolting and soldering a bunch of stuff together. It’s technically true to some degree, but it’s very disingenuous to make such a statement unless you’re being ironic. If you’re making these claims seriously, you’re either incompetent or acting in bad faith.

              I think there is a lot wrong with LLMs and how the public at large uses them, and even more so with how companies are developing and promoting them. But to spread misinformation and pollute an already overcrowded space with junk is irresponsible at best.

      • Hotzilla@sopuli.xyz · 1↑ · edited · 6 hours ago

        Calling these new LLMs just if statements is quite an oversimplification. They are technically something that has not existed before, and they enable use cases that were previously impossible to implement.

        This is far from general intelligence, but there are now solutions to a few coding problems that were near impossible five years ago.

        Five years ago I would have laughed in your face if you had suggested I could write code that summarizes a description typed in by a user. Now I laugh and say: give me your wallet, because I need to call an API or buy a few GPUs.
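
        The “call an API” route, sketched in Python; the endpoint, request fields and response field below are hypothetical stand-ins rather than any particular provider’s real API:

        ```python
        import requests  # third-party HTTP client: pip install requests

        API_URL = "https://api.example.com/v1/summarize"  # hypothetical endpoint

        def summarize(text: str, api_key: str) -> str:
            """Send user-supplied text to a (hypothetical) hosted LLM, return the summary."""
            resp = requests.post(
                API_URL,
                headers={"Authorization": f"Bearer {api_key}"},
                json={"input": text, "max_words": 50},  # hypothetical request schema
                timeout=30,
            )
            resp.raise_for_status()
            return resp.json()["summary"]  # hypothetical response field
        ```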

        • JcbAzPx@lemmy.world · 2↑ · 3 hours ago

          I think the point is that this is not the path to general intelligence. This is more like cheating on the Turing test.

    • adr1an@programming.dev · 16↑ · edited · 24 hours ago

      I love this resource, https://thebullshitmachines.com/ (e.g. see lesson 1):

      > In a series of five- to ten-minute lessons, we will explain what these machines are, how they work, and how to thrive in a world where they are everywhere.
      >
      > You will learn when these systems can save you a lot of time and effort. You will learn when they are likely to steer you wrong. And you will discover how to see through the hype to tell the difference. …

      Also, Anthropic (ironically) has some nice paper(s) about the limits of “reasoning” in AI.

      • aesthelete@lemmy.world · 25↑ · edited · 1 day ago

        I really hate the current AI bubble, but “ChatGPT 2 was literally an Excel spreadsheet” isn’t at all what the article you linked is saying.

      • A_norny_mousse@feddit.org · 4↑ 1↓ · edited · 1 day ago

        > And they’re running into issues due to increasingly ingesting AI-generated data.

        There we go. Who coulda seen that coming! That’s going to be a fun ride, while at the same time companies all but mandate AS* for their employees.