• 366 Posts
  • 2.02K Comments
Joined 2 years ago
Cake day: June 16th, 2023

  • Certainly there’s a lot of strategic voting going on. But you don’t see the Liberal (centrist) seat count increasing as the NDP goes down: the gains are all with the Conservatives. If it were a matter of progressives deciding to just consolidate with Liberals, you’d expect to see the Liberal seat count go up as the smaller parties went down. To me this suggests either that some people are flipping directly from left to right or that there is a general rightwards drift, with right-wing Liberals going over to Conservatives and left-wing strategic voters filling in some of the gap they leave for the Liberals. In either case it’s concerning that when the Conservatives fielded their most far-right leader so far, their share of the seats went up.

  • President Donald Trump has said pollsters that have shown his approval ratings sliding in recent weeks should be investigated for “election fraud.”

    Responding to the polls, Trump wrote on Truth Social on Monday: “They are negative criminals who apologize to their subscribers and readers after I win elections big, much bigger than their polls showed I would win, loose a lot of credibility, and then go on cheating and lying for the next cycle, only worse.”

    A poll is not an election, so there is no “election fraud” to investigate, and I hurt myself reading that deeply moronic quote. How is this man in charge of anything?

  • Is there even any suitable “confidence” measure within an LLM that it could use to know when it needs to emit an “I don’t know” response? I wonder whether there is any consistent, measurable difference between the times when it seems to know what it’s talking about and the times when it is talking BS. That distinction might exist in our own cognition but have no counterpart in the workings of an LLM, so it may not even be feasible to engineer one to say “I don’t know” when it doesn’t know. It can’t simply look at how many sources it has for an answer and how good they were, because LLMs work in a more holistic way: each item of training data nudges the behaviour of the whole system, but it leaves behind no marker saying “I did this,” and no particular piece of knowledge or behaviour that can be traced back to that training item.
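    The one signal that is easy to read out is how much probability the model assigns to the tokens it actually emits. A rough sketch of that idea (the model name and prompt are purely illustrative assumptions, not a claim about any particular system) could look like this with the Hugging Face transformers API:

```python
# Illustrative sketch only: average token log-probability as a crude
# "confidence" proxy. The model and prompt are stand-ins for the example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any causal LM would do for the demo
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "The capital of Australia is"
inputs = tok(prompt, return_tensors="pt")

with torch.no_grad():
    out = model.generate(
        **inputs,
        max_new_tokens=10,
        do_sample=False,
        output_scores=True,           # keep the per-step logits
        return_dict_in_generate=True,
    )

# Log-probability the model assigned to each token it actually generated.
gen_tokens = out.sequences[0, inputs["input_ids"].shape[1]:]
logprobs = [
    torch.log_softmax(score, dim=-1)[0, tok_id].item()
    for score, tok_id in zip(out.scores, gen_tokens)
]
avg_logprob = sum(logprobs) / len(logprobs)

print(tok.decode(gen_tokens), f"(avg token log-prob: {avg_logprob:.2f})")
# A low average could be flagged as "unsure", but a fluent hallucination
# can still score highly, so this is not a real "I don't know" detector.
```

    The catch is exactly the one above: a fluent hallucination can score just as well on this kind of measure as a well-grounded answer, so token probability tells you how confidently the model is speaking, not whether it actually knows anything.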