Sometimes it can be hard to tell if we’re chatting with a bot or a real person online, especially as more and more companies turn to this seemingly cheap way of providing customer support. What are some strategies to expose AI?

  • zappy@lemmy.ca
    arrow-up
    14
    ·
    1 year ago

    Generally: a very short-term memory span, so have longer conversations, as in more messages. An inability to recognize concepts/nonsense. Hardcoded safeguards. An extremely consistent (and typically correct) writing style. The use of the Oxford comma always makes me suspicious ;)

    • hallettj@beehaw.org
      arrow-up
      9
      ·
      1 year ago

      Oh no - I didn’t realize my preference for the Oxford comma might lead to trouble! I am a fan. When that Vampire Weekend song comes on I always whisper, “me…”

      • chinpokomon@lemmy.ml
        arrow-up
        9
        ·
        1 year ago

        Someone on Reddit once thought I was a bot because I use proper grammar. 12 years of comment history would have demonstrated otherwise, but it wasn’t a battle worth fighting.

    • tikitaki@kbin.social
      arrow-up
      4
      ·
      1 year ago

      very short term memory span so have longer conversations as in more messages

      Really, this is a matter of practicality, not capability. If someone were to give an LLM more context, it would be able to hold very long conversations. It’s just very expensive to do so at any large scale - so, for example, OpenAI’s API puts a maximum token length on requests.

      There are ways to stretch this, such as using a vector database to turn your 8,000-token limit (or what have you) into a much longer effective limit. This is how you preserve context.
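
      The retrieval idea can be sketched like this - a toy bag-of-words similarity standing in for a real embedding model and vector database, with all names made up for illustration:

```python
from collections import Counter
import math

def embed(text):
    # Toy stand-in for a real embedding model: a bag-of-words count vector.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(history, query, k=2):
    # Instead of stuffing the whole history into the prompt, pull back
    # only the k past messages most similar to the current query.
    q = embed(query)
    ranked = sorted(history, key=lambda m: cosine(embed(m), q), reverse=True)
    return ranked[:k]

history = [
    "please never mention the word apple in your answers",
    "what is the tallest mountain on earth",
    "tell me about fruit-based pies",
]
print(retrieve(history, "what pies are made from fruit", k=1))
```

      With a real vector database the history can grow far beyond the token limit, since only the retrieved snippets count against each request.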

      When you talk to ChatGPT in the web browser, it’s basically sending a call to its own API and re-sending the last few messages (or what it thinks is most important in the last few messages) but that’s inherently lossy. After enough messages, context gets lost.

      But a company like OpenAI, which doesn’t have to worry about token limits, can in theory run bots that hold as much context as necessary. So while your advice is good in a practical sense (most chatbots you run into will have those limits for financial reasons), it is in theory possible to have a chatbot without them, and against such a bot this strategy would not work.
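
      In practice the window works roughly like this minimal sketch - the whitespace word count and the budget number are crude stand-ins for a real tokenizer and the model’s actual limit:

```python
def count_tokens(text):
    # Crude stand-in for a real tokenizer: one "token" per word.
    return len(text.split())

def fit_to_budget(messages, budget):
    # Keep the newest messages and drop the oldest until the total fits.
    kept, total = [], 0
    for msg in reversed(messages):
        tokens = count_tokens(msg)
        if total + tokens > budget:
            break
        kept.append(msg)
        total += tokens
    return list(reversed(kept))

chat = [
    "do not use the word apple",     # oldest - first to be dropped
    "what is the capital of france",
    "name some popular fruit pies",  # newest
]
print(fit_to_budget(chat, budget=11))
```

      Once the early instruction falls outside the budget, the bot answers as if it was never given - which is exactly the long-conversation tell.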

      • zappy@lemmy.ca
        arrow-up
        3
        ·
        1 year ago

        The problem isn’t the memory capacity; even though the LLM can store the information, it’s about prioritization/weighting. For example, if I tell ChatGPT not to include a word (for example, apple) in its responses, then ask it some questions, then ask it about popular fruit-based pies, it will tend to pick the “better” answer of including apple pie over the rule I gave it a while ago about not using the word apple. We do want decaying weights on memory, because most of the time old information isn’t as relevant, but it’s one of those things that needs optimization. Imo we’re going to get to the point where the optimal parameters for maximizing “usefulness” to the average user are different enough from what’s needed to pass someone intentionally testing the AI. Mostly because we know from other AI (like Siri) that people don’t actually need that much saved context to find them helpful.
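
        The prioritization effect can be mimicked with an explicit recency decay - purely illustrative, since real models don’t apply a per-message decay factor like this:

```python
def recency_weights(n_messages, decay=0.5):
    # Index 0 is the oldest message; each step back in time
    # multiplies its influence by the decay factor.
    return [decay ** (n_messages - 1 - i) for i in range(n_messages)]

chat = [
    "never say the word apple",      # the rule, given long ago
    "small talk about the weather",
    "small talk about sports",
    "what are popular fruit pies?",  # the newest message
]
for msg, w in zip(chat, recency_weights(len(chat))):
    print(f"{w:.3f}  {msg}")
```

        The old rule ends up with a fraction of the newest message’s influence, so “apple pie” tends to win over “never say apple”.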

        • tikitaki@kbin.social
          arrow-up
          3
          arrow-down
          1
          ·
          edit-2
          1 year ago

          The reason is that the web browser chatgpt has a maximum amount of data per request. This is so they can minimize cost at scale. So for example you ask a question and tell it not to include a word. What happens is your question gets sent like this:

          {'context': 'user asking question', 'message': {user question here} }

          then it gives you a response and you ask it another question. typically if it’s a small question the context is saved from one message to another.

          {'context': 'user asking question - {previous message}', 'message': {new message here} }

          so it literally just copies the previous message until it reaches the maximum token length

          however there’s a maximum # of words that can be in the context + message combined, therefore the context is limited. after a certain number of words input into chatgpt, it will start dropping things. it does this with a method that tries to figure out which words are “most important”, but this is inherently lossy. it’s like a jpeg - it gets blurry in order to save data.

          so for example if you asked “please name the best fruit to eat, not including apple” and then maybe on the third or fourth question the “context” in the request becomes

          'context': 'user asking question - user wanted to know best fruit'

          it would cut off the “not including apple” bit in order to save space

          but here’s the thing - that exists in order to save space and processing power. it’s necessary at a large scale because millions of people could be talking to chatgpt and it couldn’t handle all that.

          BUT if chatgpt wanted some sort of internal request that had no token limit, then everything would be saved. it would turn from a lossy jpeg into a png file. chatgpt would have infinite context.

          this is why i think, for someone who wants to keep context (i’ve been trying to develop specific applications where context is necessary), the chatgpt api just isn’t worth it.
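
          the lossy part can be faked with an aggressively naive summarizer - purely illustrative, the real system is much smarter about picking what to keep, but just as lossy in spirit:

```python
def summarize(message, keep=5):
    # Naive compression: keep only the first few words of a past message.
    words = message.split()
    return " ".join(words[:keep]) + ("..." if len(words) > keep else "")

original = "please name the best fruit to eat, not including apple"
context = summarize(original)
print(context)
print("apple" in context)
```

          the constraint at the tail of the request is exactly what gets compressed away.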

          • zappy@lemmy.ca
            arrow-up
            2
            ·
            1 year ago

            I’m trying to tell you that limited context is a feature, not a bug; even other bots, like Replika, do the same thing. Even when all past data is stored server-side and available, it won’t matter, because you need to reduce the weighting of old information - otherwise you prevent significant change in the output values (and get less and less change as the history grows larger). Time decay of information is important to making these systems useful.

            • tikitaki@kbin.social
              arrow-up
              1
              ·
              1 year ago

              give an example please, because i don’t see how in normal use the weighting would matter at a significant scale based on the massive volume of training data

              any interaction the chatbot has with one person is dwarfed by the total amount of text the AI has consumed through training. it’s like saying Sagittarius A* gets changed over time by adding in a few planets. while definitely true, it’s going to be a very small effect

              • zappy@lemmy.ca
                arrow-up
                1
                ·
                1 year ago

                That’s kind of the point, and it’s how this differs from a human. A human is going to weight local/recent contextual information as much more relevant to the conversation, because they’re actively learning and storing the information (our brains work on more of an associative-memory basis than a temporal one). With our current models, however, that’s simulated by decaying weights over the data stream. So when you get conflicts between contextually correct and “globally” correct output, the global answer has a tendency to win out, and that’s the more obvious tell. Remember, you can’t actually make changes to the model as a user without active learning. Thus the model will always eventually return to its original behaviour, as long as you can fill up the memory.