We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision.

https://arxiv.org/abs/2311.07590

  • 0ops@lemm.ee · 11 months ago

    Actually, most models already do some form of filtering AFAIK, but I don’t know how comparable it is to our sensory system. CNNs, for example, are loosely modeled on how our visual system works. The short of it is that image data passes through a few layers, with each node in the next layer aggregating the data from several nodes in the previous one (usually a 3x3 grid). Each layer has filters that determine each node’s output, and those filters have to be trained so that, collectively, they recognize specific patterns in the data, like a dog. Source: lecture notes and homework from my applied neural networks class
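    To make that layering concrete, here is a minimal sketch of the idea (my own illustration in PyTorch; the layer sizes, class count, and 32x32 input size are made-up assumptions, not from the comment): stacked 3x3 convolutions where each output node aggregates a 3x3 grid from the layer before, and the filters are the trainable part.

    ```python
    # Illustrative sketch only: a tiny CNN with 3x3 filters, as described above.
    import torch
    import torch.nn as nn

    class TinyCNN(nn.Module):
        def __init__(self, num_classes: int = 2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1),   # each output node aggregates a 3x3 grid of the RGB input
                nn.ReLU(),
                nn.MaxPool2d(2),                               # downsample 32x32 -> 16x16
                nn.Conv2d(16, 32, kernel_size=3, padding=1),   # deeper layer combines earlier features into larger patterns
                nn.ReLU(),
                nn.MaxPool2d(2),                               # 16x16 -> 8x8
            )
            # assumes 32x32 input images, so the feature map is 32 channels of 8x8
            self.classifier = nn.Linear(32 * 8 * 8, num_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            x = self.features(x)                 # the learned filters do the pattern recognition
            return self.classifier(x.flatten(start_dim=1))

    # Example: a batch of four 32x32 RGB images -> per-class scores
    scores = TinyCNN()(torch.randn(4, 3, 32, 32))
    print(scores.shape)  # torch.Size([4, 2])
    ```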

    • grabyourmotherskeys@lemmy.world · 11 months ago

      This sounds like what I was learning 20-some years ago. The hardware and software are better (and easier!) now, and the compute is so, so much better. Back then I priced out a terabyte data server with some colleagues using off-the-shelf hardware: $10k CDN. :)

      Edit: point being, we are now seeing things that were predicted almost a century ago, but it takes time to build all the infrastructure. That pace is accelerating. The next ten years are going to be wild.

      • 0ops@lemm.ee · 11 months ago

        I’m only finishing the class now, and it’s pretty wild to hear “We’re only covering this model to help you understand a fundamental concept; the model itself is ancient and obsolete” when said model came out in 2018. Wild.