• xor@lemmy.blahaj.zone

    Okay, so let’s do a thought experiment, and take off all the safeguards.

    Oops, you made:

    • a bomb design generator
    • an involuntary pornography generator
    • a CP generator

    Saying “don’t misuse it” isn’t enough to stop people from misusing it.

    And that’s just with ChatGPT. AI isn’t just a question-and-answer machine; I suggest you read about “the paperclip maximiser” as a very good example of how misalignment of a general-purpose AI can go horribly wrong.

    • El Barto@lemmy.world

      I was going to say that a sufficiently determined individual would find this information regardless. But the difference here is that making it so easily accessible would increase the risk of someone doing something reaaaally stupid by a factor of 100. Yikes.

        • El Barto@lemmy.world

          For you and many others, sure, it wouldn’t be complicated. But the world is vast, and the environment you’re in is very specific to you. Many other kids may have phones, but they’re not in the same environment as you or me.

          A non-sciency kid will have a hard time doing whatever their edgy mind wants, unless an AI guides them mini-step by mini-step.

          • PsychedSy@sh.itjust.works

            I don’t think AI, especially chat bots, will be more useful than a YouTuber. It’s not particularly easy to make powerful explosives, and gunpowder is kind of trash for bombs. I’d imagine ChatGPT would blow up more curious kids than it would aid, lol

    • PsychedSy@sh.itjust.works

      I mean, half of that is DeviantArt, and you can look up how to make explosives on YouTube chem channels or in books. It’s not hard to rig up a custom detonator if you can get the energetics.

    • Socsa@sh.itjust.works

      ChatGPT was very far from the first publicly available generative AI. It didn’t even do images at first.

      Also, there are plenty of YouTube channels which show you how to make all sorts of extremely dangerous explosives already.

      • xor@lemmy.blahaj.zone

        But the concern isn’t which generative AI came first. Their “idea” was that AIs of all types, including generalised ones, should just be released as-is, with no further safeguards.

        That ignores the fact that OpenAI doesn’t only develop text-generation AIs. A generalised AI can do horrifying things, even just through accidental misconfiguration (see the paperclip maximiser example).

        But even a purely text-trained model like ChatGPT can be coaxed into generating non-text data with the right prompting.

        Even in that example, one can’t dig up those sorts of videos without, at minimum, leaving a trail. But an unrestricted pretrained model can be distributed and run locally, and used without a trace to generate any content it’s capable of generating.
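        To make the “run locally” point concrete, here’s a minimal sketch of offline inference, assuming the Hugging Face transformers library and a hypothetical model directory that has already been copied to disk; once the weights are local, nothing needs to touch the network.

        ```python
        # Minimal sketch: offline text generation from a locally stored model.
        # Assumes the Hugging Face `transformers` library is installed and the
        # model weights already live in ./my-local-model (hypothetical path).
        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_dir = "./my-local-model"

        # local_files_only=True prevents any network lookup; everything runs on-device.
        tokenizer = AutoTokenizer.from_pretrained(model_dir, local_files_only=True)
        model = AutoModelForCausalLM.from_pretrained(model_dir, local_files_only=True)

        inputs = tokenizer("Write a short poem about paperclips.", return_tensors="pt")
        outputs = model.generate(**inputs, max_new_tokens=50)
        print(tokenizer.decode(outputs[0], skip_special_tokens=True))
        ```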

        And with a generalised AI, the only constraint on the prompt “kill everybody except me” becomes available compute.