Did nobody really question the suitability of language models for designing war strategies?

  • huginn@feddit.it · 9 months ago

    To be fair, they’re not accidentally good enough: they’re intentionally good enough.

    That’s where all the salary money went: finding people who could build them intentionally.

    • SlopppyEngineer@lemmy.world · 9 months ago

      GPT-2 was just a bullshit generator. It was like a politician trying to explain something they know nothing about.

      GPT-3 was just a bigger version of GPT-2. It was the same architecture but with more nodes and data, as far as I followed the research. But that one could suddenly do a lot more than the previous version, so in that sense it happened by accident. And then the AI scene exploded.

      • Limitless_screaming@kbin.social · 9 months ago

        “It was the same architecture but with more nodes and data”

        So the architecture just needed more data to generate useful answers. I don’t think that was an accident.
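
A minimal sketch of the “same architecture, more nodes and data” point from the comments above, assuming the published GPT-2 XL and GPT-3 175B hyperparameters; the `TransformerConfig` class and the per-block parameter formula are illustrative approximations, not anyone’s actual code:

```python
# Sketch: the jump from GPT-2 to GPT-3 viewed as a change of hyperparameters only.
# Figures are the published GPT-2 XL and GPT-3 175B settings; the parameter
# count is a rough approximation (it ignores embeddings, biases, layer norms).

from dataclasses import dataclass


@dataclass
class TransformerConfig:
    n_layers: int   # stacked decoder blocks
    d_model: int    # width of the residual stream
    n_heads: int    # attention heads per block
    n_ctx: int      # context window in tokens

    def approx_params(self) -> int:
        # ~12 * d_model^2 parameters per block (attention + MLP projections)
        return 12 * self.n_layers * self.d_model ** 2


gpt2_xl = TransformerConfig(n_layers=48, d_model=1600, n_heads=25, n_ctx=1024)
gpt3 = TransformerConfig(n_layers=96, d_model=12288, n_heads=96, n_ctx=2048)

for name, cfg in [("GPT-2 XL", gpt2_xl), ("GPT-3 175B", gpt3)]:
    print(f"{name}: ~{cfg.approx_params() / 1e9:.1f}B params, {cfg}")
```

Nothing in the block structure differs between the two rows; what the thread is debating is whether the capabilities that showed up at the larger scale were anticipated or a surprise.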