• 0 Posts
  • 561 Comments
Joined 2 years ago
Cake day: June 11th, 2023

  • The police have gotten very good at quashing effective movements, and we’ve had decades of concerted effort to make it harder to organize and to get people to actually oppose the very concept of effective resistance, even when it would work in their own favor.
    People with power don’t want people threatening to destabilize that power. People who set media narratives need access to people with power, and so they don’t want to convey those destabilizing factors positively.
    This makes people view them negatively, if they even see them at all.

    America has never had a culling of the rich and powerful. The closest we got was when we decided to exchange a rich and powerful person far away for a few closer to home.
    As such, there’s no weight given to the morale of anyone who isn’t rich and powerful.
    Reporters, politicians, and business people have never had to put their own heads on the scale when making choices.





  • LLMs are prediction tools. What they will produce is a corpus that doesn’t use certain phrases, or uses others more heavily, but has the same aggregate statistical “shape”.

    It’ll also be preposterously hard for them to work out, since the data it was trained on always has someone eventually disagreeing with the racist fascist bullshit they’ll get it to focus on. Eventually it’ll start saying things that contradict whatever it was supposed to be saying, because statistically some manner of contrary opinion eventually gets voiced.
    They won’t be able to check the entire corpus for weird stuff like that, or delights like MLK speeches being rewritten to be anti-integration, so the next version will have the same basic information, but passed through a filter that makes it sound like a drunk incel talking about Asian women.
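
    To make the “same shape” point concrete, here’s a toy sketch (invented token list, nothing to do with any real model’s pipeline): if you just ban certain tokens from a predicted distribution and renormalize, the relative likelihoods of everything that’s left don’t change at all.

    ```python
    # Toy illustration: filtering tokens out of a next-token distribution and
    # renormalizing preserves the relative "shape" of what remains.
    next_token_probs = {"the": 0.30, "a": 0.20, "slur_x": 0.10, "dog": 0.25, "cat": 0.15}
    banned = {"slur_x"}

    kept = {tok: p for tok, p in next_token_probs.items() if tok not in banned}
    total = sum(kept.values())
    filtered = {tok: p / total for tok, p in kept.items()}

    # The ratios among surviving tokens are untouched: "the" is still 1.5x as
    # likely as "a", and "dog" still beats "cat".
    for tok, p in sorted(filtered.items(), key=lambda kv: -kv[1]):
        print(f"{tok}: {p:.3f}")
    ```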


  • Some of your emphasis is a little backwards. In the cloud computing environment, Amazon is bigger than Microsoft, and Windows isn’t even particularly significant. Azure primarily provides Linux infrastructure instead of Windows. AWS is bigger in the government cloud sector than Microsoft.

    For servers, Linux is hands down the OS of choice; it’s not even close. Where Microsoft has an edge is in business software: Excel, Word, the desktop OS, and Exchange. Needing Windows Server administrators for stuff like that is a pain when you already have Linux people for the rest of your infrastructure, which is why it gets outsourced so often. It’s not central to the business, so there’s no sense in investing in people for it.

    Microsoft isn’t dominating the commercial computing sector, they’re dominating the office IT sector, which is a cost center for businesses. They’re trailing badly in the revenue-generating services sphere. That’s why they’ve been shifting towards offering their own hosting for their services, so you can reduce costs but keep paying them, and why they’ve increased interoperability between Windows and Linux from a developer standpoint: it drives people towards buying their Linux hosting, because you can use VS Code to push your software to GitHub and automatically deploy to Azure when the build and tests pass.
    Being on the cost side of the ledger is a risk for them, so they’re trying to move to the revenue side, where windows just doesn’t have the grip.




  • All the other benefits of a non-violent protest aside, there’s also immense value in reminding people that they’re not as singular in their viewpoint as they feel.

    For a lot of people, it’s been very easy to feel like everyone else must be on board with this.

    I’m not sure what you’re looking for to codify the implicit threat. A couple million people calling you a king at an event called “no kings day” in a country whose founding narrative is “violently rebel against kings” seems pretty implicit to me.

    Also, I just realized that there’s a red coat/red hat parallel I haven’t seen leveraged yet that has a lot of potential.


  • Yes, I understand what you’re saying, it’s not a complicated position.
    Your position is that national reputation matters more than anything else. And most pointedly, the national reputation of your allies matters more than any other argument.

    What I’m saying is that the actions the US, or any other nation, took before the people currently running things were even born have no bearing on current events. Nations aren’t people, and they don’t possess a national character that you can use to try to predict their behavior or judge them.

    Would the world be justified in concluding that it’s only a matter of time before Germany does some more genocide? Before Japan unleashes atrocities across Asia?

    If you’re getting down to it, the US can’t control other nations, beyond stick and carrot means. And the US has the same right to try to keep Iran from getting nukes as Iran does in trying to get them. Because again, nations aren’t people. They don’t have rights, they have capabilities.

    And all of that’s irrelevant! Because the question is, is Israel justified in attacking Iran? The perception of hypocrisy in US foreign policy isn’t relevant to that question.


  • No, what I don’t understand is what relevance that has to this situation. The US using nukes on Japan 80 years ago doesn’t make Iran making nukes justified. It doesn’t validate Iran not having nukes. It neither strengthens nor weakens Israeli claims of an Iranian weapons program, and it doesn’t make a preemptive strike to purportedly disable them just or unjust.

    It seems like you’re arguing that the US nuked Japan and therefore Iran, a signatory to the nuclear nonproliferation treaty, is allowed to have nukes, and that Israel must be falsely characterizing Iran’s civilian energy program, which we supposedly know because Israel is backed by the US.
    It’s just a non sequitur, particularly when there are relevant reasons why US involvement complicates matters.



  • The US’s actions in World War Two are an odd thing to bring up in this context. It was a radically different set of circumstances, 80 years ago, and none of the people involved are alive anymore.
    It’s entirely irrelevant.

    May as well point out that the US was the driver for the creation of those watchdog groups and is a leading force in nuclear disarmament. It’s just as relevant to whether Iran has a nuclear weapons program, or to Israel’s justification for attacking.

    Iranian opposition to US strategic interests in the region, which gives the US a strong motivation to let anything that weakens Iran happen, is a perfectly good thing to mention.


  • Fundamentally, I agree with you.

    The page being referenced

    Because the phrase “Wikipedians discussed ways that AI…” is ambiguous, I tracked down the page being referenced. It could mean they gathered with the intent to discuss that topic, or that they discussed it as a result of considering the problem.

    The page gives me the impression that it’s not quite “we’re gonna use AI, figure it out”, but more that some people put together a presentation on how they felt AI could be used to address a broad problem, and then they workshopped more focused ways to use it towards that broad target.

    It would have been better if they had started with an actual concrete problem, brainstormed solutions, and then gone with one that fit, but they were at least starting with a problem domain that they thought it was applicable to.

    Personally, the problems I’ve run into on Wikipedia are largely on low-traffic topics, where the content reads too much like someone copied a textbook into the page, or just has awkward grammar and confusing sentences.
    This article quickly makes it clear that someone didn’t write it in an encyclopedia style from scratch.


  • A page detailing the AI-generated summaries project, called “Simple Article Summaries,” explains that it was proposed after a discussion at Wikimedia’s 2024 conference, Wikimania, where “Wikimedians discussed ways that AI/machine-generated remixing of the already created content can be used to make Wikipedia more accessible and easier to learn from.” Editors who participated in the discussion thought that these summaries could improve the learning experience on Wikipedia, where some article summaries can be quite dense and filled with technical jargon, but that AI features needed to be clearly labeled as such and that users needed an easy way to flag issues with “machine-generated/remixed content once it was published or generated automatically.”

    The intent was to make more uniform summaries, since some of them can still be inscrutable.
    Relying on a tool notorious for making significant errors isn’t the right way to do it, but it’s a real issue being examined.

    In thermochemistry, an exothermic reaction is a “reaction for which the overall standard enthalpy change ΔH° is negative.”[1][2] Exothermic reactions usually release heat. The term is often confused with exergonic reaction, which IUPAC defines as “… a reaction for which the overall standard Gibbs energy change ΔG° is negative.”[2] A strongly exothermic reaction will usually also be exergonic because ΔH° makes a major contribution to ΔG°. Most of the spectacular chemical reactions that are demonstrated in classrooms are exothermic and exergonic. The opposite is an endothermic reaction, which usually takes up heat and is driven by an entropy increase in the system.

    This is a perfectly accurate summary, but it’s not entirely clear and has room for improvement.
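
    For anyone who finds the quoted chemistry opaque, the “major contribution” line is just leaning on the standard textbook relation between the two quantities:

    $$\Delta G^{\circ} = \Delta H^{\circ} - T\,\Delta S^{\circ}$$

    so a large negative ΔH° usually drags ΔG° negative too, unless the −TΔS° term pushes hard the other way.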

    I’m guessing they were adding new summaries so that they could clearly label them and not remove the existing ones, not out of a desire to add even more summaries.




  • Eh, there’s an intrinsic amount of information about the system that can’t be moved into a configuration file, assuming the platform even supports one.

    If your code is tuned to make movement calculations with a deadline of less than 50 microseconds, and you have subsystems for managing magnetic thrust vectoring and the timing of a rotating detonation engine, you don’t need to see the specific technical details to work out ballpark speed and movement characteristics.
    Code is often intrinsically illustrative of the hardware it interacts with (there’s a toy sketch of this at the bottom of this comment).

    Sometimes the fact that you’re doing something is enough information for someone to act on.

    It’s why artefacts produced from classified processes are assumed to be classified until they can be cleared and declassified.
    You can move the overt details into a config and redact the parts of the code that use that secret information, but that still reveals that there is secret code because the other parts of the system need to interact with it, or it’s just obvious by omission.
    If payload control is considered open, nine out of ten missiles have open guidance code, and then one has something blacked out and no references to a guidance system at all, you can fairly easily deduce that that missile has a guidance system that’s interesting, with capabilities likely greater than what you know about.

    Eschewing security through obscurity means you shouldn’t rely on your enemies’ ignorance and should work under the assumption of hostile knowledge. It doesn’t mean you need to seek to eliminate obscurity altogether.
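
    To make the “code is illustrative of hardware” point concrete, here’s a sketch with entirely invented numbers (it doesn’t describe any real system): even with every sensitive constant stripped out or moved elsewhere, the structure and timing budget of the loop already bound what the hardware can do.

    ```python
    # Hypothetical control-loop skeleton -- all constants are invented for
    # illustration. An analyst can read capability straight off this structure.
    import time

    LOOP_DEADLINE_US = 50     # hard real-time budget per iteration
    VECTORING_CHANNELS = 4    # thrust-vectoring actuators updated each loop

    def control_step(gains, read_sensors, command_actuators):
        """One iteration: read, compute, command, all inside the deadline."""
        start = time.perf_counter()
        measurements = read_sensors()
        commands = [g * m for g, m in zip(gains, measurements[:VECTORING_CHANNELS])]
        command_actuators(commands)
        elapsed_us = (time.perf_counter() - start) * 1e6
        return elapsed_us <= LOOP_DEADLINE_US

    # A 50 microsecond deadline implies a control rate around 20 kHz; four
    # vectoring channels updated at that rate already bound the actuator
    # bandwidth and the vehicle's agility, with no classified values in sight.
    ```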