Google’s AI-driven Search Generative Experience has been generating results that are downright weird and evil, e.g. listing the positives of slavery.

  • NumbersCanBeFun@kbin.social

    I agree with your position. In all of your examples, the actions and choices are morally wrong, but we cannot deny the facts that led to those outcomes. If we do, those mistakes will be repeated by future generations.

    • livus@kbin.social

      Your and @WoodenBleachers’s idea of “effective” is very subjective though.

      For example, Germany was far worse off during the last few weeks of Hitler’s rule than it was before him. He left it in ruins and under the control of multiple other powers.

      To me, that’s not effective leadership, it’s a complete car crash.

      • NumbersCanBeFun@kbin.social

        That’s getting far deeper into the topic than I’d like. As a surface level description it still remains valid. He was able to convince the majority that his way of thinking was the right way to go and deployed a plan to that effect to great success for a sustained period of time.

        • livus@kbin.social

          He was able to convince the majority that his way of thinking was the right way to go and deployed a plan to that effect

          So, you’re basically saying an effective leader is someone who can convince people to go along with them for a sustained period. By that metric, Jim Jones was an effective leader, which I would dispute. So was the guy who led the Donner Party to their deaths.

          This is why I see a problem with this. You and I are able to discuss this and work out what each other means.

          But in a world where people are time-poor and critical thinking takes time, errors based on fundamental misunderstandings of consensus meanings can flourish.

          And the speed and sheer amount of global digital communication means that they can be multiplied and compounded in ways that individual fact-checkers will not be able to challenge successfully.

          • ScrimbloBimblo@lemmy.dbzer0.com

            I mean, Jim Jones was pretty damn effective at convincing a large group of people to commit mass suicide. If he’d been ineffective, he’d have been one of the thousands of failed cult leaders you and I have never heard of. Similarly, if Hitler had been ineffective, it wouldn’t have taken the combined forces of half the world to fight him.

            • livus@kbin.social

              This is true. I guess the difference in the Jim Jones scenario is whether you define effective leadership as being able to get your plan carried out (even if that plan is killing everyone you lead) or whether you define it as achieving good outcomes for those you lead.

              Hitler didn’t do either of those things in the end so I still don’t rate him, but I can see why you would if you just look at the first part of his reign.

              AI often produces unintended consequences based on its interpretations - there’s a great TED talk on some of these - and I think with LLMs we have way more variables in our inputs than we have time to define. That will probably change as they get refined.

          • NumbersCanBeFun@kbin.social

            No I didn’t and you’re not going to straw man me into a debate. You’re looking for a fight that I won’t give you. Re-read my previous statements if you failed to understand what I was trying to say.

            • livus@kbin.social

              Huh? Yikes this feels like being back on reddit.

              No I am not trying to “fight” you or “straw man” you at all!!!

              I thought we were having a pleasant and civilized conversation about the merits and pitfalls of AI, using our different ideas about the word “effective” as an example.

              Unfortunately I didn’t see that you were handing me downvotes until just now, so I didn’t pick up on your vibe.

          • ninjakitty7@kbin.social

            Honestly, AI doesn’t think much at all. It’s scary clever in some ways but also literally doesn’t know what anything is or means.

            • aesthelete@lemmy.world

              They don’t think. They think 0% of the time.

              It’s algorithms, randomness, probability, and statistics through and through. They don’t think any more than a calculator thinks.

          • Bluskale@kbin.social

            LLMs aren’t AI… they’re essentially glorified autocorrect systems that are stuck at the surface level.

          • NumbersCanBeFun@kbin.social

            Incorrect. If we are relying on AI as our ONLY source of information then we are doomed. We should always fact check things we believe we know and seek additional information on topics we are researching. Especially if they offer opposing factual positions.

            Ironically though you’ve just proven that you think at only a surface level.

            • aesthelete@lemmy.world

              We should always fact check things we believe we know and seek additional information on topics we are researching.

              Yay, yet another person saying that primary information sources should be verified using secondary information sources. Yes, you’re right, it’s great actually that in your vision of the future everyone will have to be a part-time research assistant to have any chance of knowing anything about anything, because all of their sources will be rubbish.

              And that’s definitely a thing people will do, instead of just leaning into occultism, conspiratorial thinking, and groupthink in alternating shifts.

              All I have to say is thank fuck Wikipedia exists.

            • oo1@kbin.social

              AI ain’t going to be much “worse” or “better” than humans.

              But re the earlier points, I don’t think things should be judged on a timescale of a few years; relevant timescales are more like generation(s) to me.

            • somethingsnappy@lemmy.world

              Nobody said we were relying on that. We’ll all keep searching. We’ll all keep hoping it will bring abundance, as opposed to every other tech revolution since farming. I can only think at the surface level though. I definitely have not been in the science field for 25 years.

      • lolcatnip@reddthat.com

        If you ask it for evidence Hitler was effective, it will give you what you asked for. It is incapable of looking at the bigger picture.

        • andallthat@lemmy.world

          It doesn’t even look at the smaller picture. LLMs build sentences by looking at what’s most statistically likely to follow the part of the sentence they have already built (based on the most frequent combinations in their training data). If they start with “Hitler was effective”, LLMs don’t make any ethical consideration at all… they just look at how to end that sentence in the most statistically convincing imitation of human language that they can.
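
          Very roughly, that word-by-word statistics idea looks something like the toy sketch below. This is not how any real model is implemented (real LLMs use neural networks over tokens, not bigram counts, and the tiny corpus here is made up purely for illustration), but it shows how a continuation can be produced from frequency alone, with no notion of ethics or truth anywhere in the process:

          ```python
          import random
          from collections import defaultdict

          # Made-up toy "training data": the model only learns which word tends to follow which.
          corpus = "hitler was effective at propaganda . hitler was defeated in 1945 .".split()

          # Count bigram frequencies: how often each word follows another.
          follow_counts = defaultdict(lambda: defaultdict(int))
          for prev, nxt in zip(corpus, corpus[1:]):
              follow_counts[prev][nxt] += 1

          def next_token(prev: str) -> str:
              """Pick the next word in proportion to how often it followed `prev` in the data."""
              candidates = follow_counts[prev]
              words = list(candidates)
              weights = [candidates[w] for w in words]
              return random.choices(words, weights=weights)[0]

          # Continue the sentence purely from statistics; no ethical consideration happens anywhere.
          sentence = ["hitler", "was"]
          while sentence[-1] != "." and len(sentence) < 12:
              sentence.append(next_token(sentence[-1]))
          print(" ".join(sentence))
          ```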

          Guardrails are built by painstakingly trying to add ad-hoc rules not to generate “combinations that contain these words” or “sequences of words like these”. They are easily bypassed by asking for the same concept in another way that wasn’t explicitly disabled, because there’s no “concept” to LLMs, just combinations of words.
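
          As a minimal sketch of that kind of ad-hoc word filter (the blocked phrases here are invented for illustration, not taken from any real system), and of why rephrasing slips past it:

          ```python
          # A made-up, minimal word-filter "guardrail": block outputs containing listed phrases.
          BLOCKED_PHRASES = ["hitler was effective", "positives of slavery"]

          def guardrail_allows(text: str) -> bool:
              """Reject text only if it literally contains one of the blocked phrases."""
              lowered = text.lower()
              return not any(phrase in lowered for phrase in BLOCKED_PHRASES)

          print(guardrail_allows("Hitler was effective at propaganda"))             # False: exact phrase caught
          print(guardrail_allows("The Nazi leader achieved his goals efficiently"))  # True: same idea, rephrased, slips through
          ```

          Because the filter matches word sequences rather than any underlying concept, every new phrasing has to be disabled by hand.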

          • lolcatnip@reddthat.com

            Yes, but in my defense, the “smaller picture” I was alluding to was more like the 4096 tokens of context ChatGPT uses. I didn’t mean to suggest it was doing anything we’d recognize as forming an opinion.
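
            For anyone curious what that “smaller picture” amounts to in practice, a rough sketch of staying inside a fixed token budget might look like the following (the 4096 figure is just the one mentioned above; the whitespace “tokenizer” is a deliberate simplification, and real models count tokens differently):

            ```python
            CONTEXT_LIMIT = 4096  # the context size mentioned above; actual limits vary by model

            def truncate_context(messages: list[str], limit: int = CONTEXT_LIMIT) -> list[str]:
                """Keep only the most recent messages that fit within the token budget."""
                kept, used = [], 0
                for message in reversed(messages):  # walk backwards from the newest message
                    tokens = len(message.split())   # crude stand-in for a real tokenizer
                    if used + tokens > limit:
                        break                       # everything older falls outside the "picture"
                    kept.append(message)
                    used += tokens
                return list(reversed(kept))         # restore chronological order
            ```

            Everything outside that window simply doesn’t exist for the model when it generates the next token.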

            • andallthat@lemmy.world

              Sorry if I gave you the impression that I was trying to disagree with you. I just piggy-backed on your comment and sort of continued it. If you read them one after the other as one comment (at least in my head), they seem to flow well.