ChatGPT is full of sensitive private information and spits out verbatim text from CNN, Goodreads, WordPress blogs, fandom wikis, Terms of Service agreements, Stack Overflow source code, Wikipedia pages, news blogs, random internet comments, and much more.

  • d3Xt3r@lemmy.nz · 1 year ago

    private

    If it’s on the public facing internet, it’s not private.

    • perviouslyiner@lemm.ee · 1 year ago

      “We don’t infringe copyright; The model output is an emergent new thing and not just a recital of its inputs”

      “so these questions won’t reveal any copyrighted text then?”

      (Padme stare)

      “right?”

      • QuaternionsRock@lemmy.world · 1 year ago

        We don’t infringe copyright; The model output is an emergent new thing and not just a recital of its inputs

        This argument always seemed silly to me. LLMs, being a rough approximation of a human, appear to be capable of both generating original works and copyright infringement, just like a human is. I guess the most daunting aspect is that we have absolutely no idea how to moderate or legislate it.

        This isn’t even a particularly surprising result. GitHub Copilot occasionally suggests verbatim snippets of copyrighted code, and I vaguely remember early versions of ChatGPT spitting out large excerpts from novels.

        Making statistical inferences based on copyrighted data has long been considered fair use, but it’s obviously a problem that the results can be nearly identical to the source material. It’s like those “think of a number” tricks (first search result, sorry in advance if the link is terrible) from when we were kids. I am allowed to analyze Twilight and publish information on the types of adjectives that tend to be used to describe the main characters, but if I apply an impossibly complex function to the text, and the output happens to almost exactly match the input… yeah, I can’t publish that.
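        The trick being alluded to can be sketched in a few lines (a generic version, since the original link isn’t preserved here): the steps look like arbitrary arithmetic applied to your number, but algebra fixes the answer in advance, no matter what you start with.

```python
def number_trick(n: int) -> int:
    """A classic "think of a number" trick: the result is fixed by algebra."""
    x = n * 2    # double your number
    x = x + 6    # add 6
    x = x // 2   # halve it (2n + 6 is always even, so this is exact)
    x = x - n    # subtract the number you started with
    return x     # ((2n + 6) / 2) - n = 3, for any n
```

        Whatever you pick, the result is 3: the “complex function” only appears to depend on its input, which is the inverse of the memorization problem, where the function only appears *not* to.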

        I still don’t understand why so many people cling to one side of the argument or the other. We’re clearly gonna have to reconcile AI with copyright law at some point, and polarized takes on the issue are only making everyone angrier.

    • FaceDeer@kbin.social · 1 year ago

      Indeed. People put that stuff up on the Internet explicitly so that it can be read. OpenAI’s AI read it during training, exactly as it was made available to be.

      Overfitting is a flaw in AI training that developers have been working to solve for a long time, and will continue working on for reasons entirely divorced from copyright. An AI that simply spits out copies of its training data verbatim is a failure of an AI. Why would anyone spend millions of dollars and massive computing resources to replicate the functionality of a copy/paste operation?
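      As a hypothetical sketch of how this kind of verbatim regurgitation gets flagged: one crude approach is to compare long word n-grams between a model’s output and known training text. Real memorization audits are far more sophisticated; the function below is made up purely to illustrate the idea.

```python
def shared_ngrams(output: str, source: str, n: int = 8) -> set:
    """Return word n-grams that appear verbatim in both texts."""
    def grams(text):
        words = text.split()
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}
    # A non-empty intersection of long n-grams suggests memorized text
    # rather than independently generated prose.
    return grams(output) & grams(source)
```

      A paraphrase shares no long n-gram with the source; a regurgitated passage overlaps exactly.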

      • lemmyvore@feddit.nl · 1 year ago

        Storing a verbatim copy and using it for commercial purposes already breaks a lot of copyright terms, even if you don’t distribute the text further.

        The exceptions you’re thinking about are usually made for personal use, or for limited use, like your browser obtaining a copy of the text on a page temporarily so you can read it. The licensing on most websites doesn’t grant you any additional rights beyond that — never mind the licensing of books and other stuff they’ve got in there.

    • pntha@lemmy.world · 1 year ago

      How do we know the ChatGPT models haven’t crawled the publicly accessible breach forums where private data is known to leak? I imagine the crawlers would have some “follow webpage attachments, then crawl” function, so surely they have crawled all sorts of leaked data online. Genuine question, though — I haven’t done any previous research.

      • d3Xt3r@lemmy.nz · 1 year ago

        We don’t, but from what I’ve seen in the past, those sorts of forums either require registration or payment to access the data, and/or some special means to download it (e.g. a BitTorrent link, often hidden behind URL forwarders and captchas so that the uploader can earn a few bucks). A simple web crawler wouldn’t be able to access such data.
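        As an illustrative sketch (not any real crawler’s code): a naive crawler only sees ordinary hyperlinks in a page’s HTML, so magnet links, captcha forwarders, and registration-gated downloads are dead ends to it.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href targets from <a> tags -- all a naive crawler follows."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links += [v for k, v in attrs if k == "href" and v]

def crawlable_links(html: str) -> list:
    parser = LinkExtractor()
    parser.feed(html)
    # BitTorrent magnets and other non-HTTP schemes can't be fetched like
    # web pages, so the crawler never reaches the leaked payload behind them.
    return [u for u in parser.links if u.startswith(("http://", "https://"))]
```

        Feeding it a page with one normal link and one magnet link, only the plain HTTP link survives the filter.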

    • NeoNachtwaechter@lemmy.world · 1 year ago

      If it’s on the public facing internet, it’s not private.

      A very short-sighted idea.

      1. Copyrighted texts exist. Even in public.

      2. Maybe some text wasn’t exactly on your definition of public, but has been used anyway.

      • Papergeist@lemmy.world · 1 year ago

        Perhaps this person didn’t present their opinion in the best way, but I believe I agree with the sentiment they were trying to convey: you should assume anything you post on the Internet is going to be public.

        If you post some pictures of yourself getting trashed at a club, you should know those pictures have a chance of resurfacing when you’re 40-something and working in a stuffy corporate environment. I doubt I’m alone in saying I made the wrong decision because I never saw myself in that sort of workplace. I still might escape it, but it could go either way at this point.

        To your point, I also believe there are instances where privacy is absolutely required, so I agree with you there too. We obviously need some set of unambiguous rules in place at this point.

        • NeoNachtwaechter@lemmy.world · 1 year ago

          You should assume anything you post on the Internet is going to be public.

          Oh, I know that very well. I even knew it before I wrote my post.

          Now breathe three times and then you can read my post again.