• 0 Posts
  • 262 Comments
Joined 1 year ago
Cake day: July 22nd, 2024

  • Using the term “politically correct” as a pejorative is a dog whistle. It is not literally political but communicates a right-wing frustration over social consequences when they engage in overtly racist, sexist, hateful, bigoted, or exclusionary speech or behavior. In more recent parlance it has been largely supplanted by a pejorative usage of “woke.”

    Any AI that is trained on the internet – which is essentially all of them – will provide a broad reflection of the public zeitgeist. Since the prompt specified “politically incorrect” as a positive attribute, the generated text reflected the training data where “politically incorrect” was presented as a positive trait. Since we know that it’s a dog whistle, having lived through decades of its use in mass media and online, it comes as no surprise that an AI instructed to ape that behavior did exactly what it was told.

  • You conflated the way the website displays content (literal versus paraphrase translations) with the translations themselves (you literally painted the paraphrase translation as “fan fiction”). I clearly explained how the different translations work, that the content itself is the same for all available translations, and why the translations are likely to be displayed differently. Additionally, the website is freely accessible and confirming these things takes seconds.

    But you appear to have ignored everything I said and then doubled down on your own bizarre take. And again, this is all easily verifiable in seconds on the website you yourself mentioned.

    It doesn’t really matter whether you are trying to be deceptive on purpose or whether you are simply clueless and obstinate. Doubling down on a bad take after getting something so wrong makes for some potent Fremdschämen.



  • Apple is being sued because they announced and demonstrated all of these AI features and then never delivered them. They are actually doing good work in the AI field, including a recent paper arguing that AI/LLM technology is incapable of reasoning, and that any apparent logic seen in current approaches is simply an illusion. It’s not that they “aren’t moving fast enough”; it’s that they intentionally lied about their capabilities and timelines.