
  • Your “probably not” argument gets thinner with every major AI update.

    Right, but I’m talking about whether they’re already using it, not whether they will in the future. It’s certainly interesting to speculate about, though. I don’t think we really know for sure how good it will get, or how fast.

    Something interesting that’s come up is scaling laws. Empirically, a model’s error rate falls as a smooth power law in compute, dataset size, and parameter count, and those three so far appear to set a floor on how low it can go, regardless of the model’s architecture. Dataset size and model size also appear to need scaling up in tandem to avoid over- or under-fitting. It’s possible, although not guaranteed, that we’re discovering fundamental laws about pattern recognition. Or maybe it’s just a limitation of our current approach.
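    As a rough illustration, this is the parametric form the Chinchilla paper (Hoffmann et al., 2022) fit to that data. The coefficients below are their published fits; treat the outputs as ballpark figures, not exact predictions:

    ```python
    # Chinchilla-style scaling law: L(N, D) = E + A / N^alpha + B / D^beta
    # Coefficients are the fits published in Hoffmann et al. (2022); the
    # numbers are illustrative, not exact predictions.

    def predicted_loss(N: float, D: float) -> float:
        """Predicted pretraining loss for N parameters trained on D tokens."""
        E = 1.69                # irreducible term: the floor scale can't remove
        A, alpha = 406.4, 0.34  # parameter-count term
        B, beta = 410.7, 0.28   # dataset-size term
        return E + A / N**alpha + B / D**beta

    # Scaling N and D in tandem beats pouring everything into parameters:
    print(predicted_loss(70e9, 1.4e12))  # ~1.94 (Chinchilla: 70B params, 1.4T tokens)
    print(predicted_loss(280e9, 300e9))  # ~1.99 (Gopher: 4x the params, fewer tokens)
    ```

    The irreducible E term is why these fits read like a hard floor, and the two separate denominator terms are why parameters and data have to grow together.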


  • AlphaGo was designed entirely within the universe of Go. It is fundamentally tied to the game; a game with simple rules and nothing but rule-following patterns to analyze. So it can make good Go moves because it has been trained on good Go moves: the original version learned from human expert games and was then refined through self-play, and AlphaGo Zero later dropped the human games entirely and trained purely on simulated self-play (the rough shape of that loop is sketched below).
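    To make “trained on simulated games” concrete, here’s a toy version of an AlphaGo Zero-style self-play loop. The board, moves, and “network” are random stubs; only the loop structure is the point:

    ```python
    # Toy shape of an AlphaGo Zero-style self-play loop (Silver et al., 2017).
    # Everything game-specific is stubbed out with random placeholders; the
    # point is that the only training signal is the outcome of simulated games.
    import random

    def play_game(policy):
        """Simulate one game with the current policy; return moves and winner."""
        moves = [policy(turn) for turn in range(50)]  # stub: state = turn number
        return moves, random.choice([+1, -1])         # stub: coin-flip winner

    def improve(policy, games):
        """Stub for the training step: a real implementation would update
        network weights toward the moves that appeared in winning games."""
        return policy

    policy = lambda state: random.randrange(19 * 19)  # random init, 19x19 board
    for iteration in range(100):
        games = [play_game(policy) for _ in range(25)]
        policy = improve(policy, games)  # no human data anywhere in the loop
    ```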

    ChatGPT is trained the same way, but on human speech. It is very, very good at producing human-sounding text. That requires it to mimic our speech patterns, which means its mimicry will resemble coherent thought, but it isn’t; the training objective is just predicting the next token (a toy version is sketched below). In short, ChatGPT is not trained to make political decisions. If you’ve seen the paper where they ask it to run a vending machine company, you can see some of the issues with trying to force it to make real-world decisions like running a political campaign.
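    Here is that objective in miniature: a next-word counter. Real models use neural networks over far longer contexts, but the training signal is the same kind of thing, i.e. predict what usually comes next:

    ```python
    # Toy next-token predictor: the ChatGPT training objective in miniature.
    # It learns surface statistics of the corpus and nothing else; there is
    # no goal beyond "which word tends to follow this one."
    from collections import Counter, defaultdict

    corpus = ("the senator voted for the bill because "
              "the senator liked the bill").split()

    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def predict(word: str) -> str:
        """Return the word most often seen after `word` in training."""
        return follows[word].most_common(1)[0][0]

    print(predict("the"))      # 'senator' (ties break by first occurrence)
    print(predict("senator"))  # 'voted': fluent continuation, zero understanding
    ```

    Fluent output falls straight out of that objective; a goal like “win the campaign” never enters the training signal.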

    You could train an AI specifically to make political campaign decisions, but I’m not aware of a good dataset you could use for it.

    Could AI have been used to help run a campaign? Yes. Would it have been better than humans doing it? Probably not.