• 0 Posts
  • 99 Comments
Joined 1 year ago
Cake day: June 14th, 2023


  • And I think staying home was stupid. But at the end of the day, I actually don’t care much about Harris losing…Trump winning is what I’m upset about. And sure, there are strategic and comms failings by the Harris campaign that made room for that eventuality, but fundamentally the metastasized cancer is personified by the large number of people who specifically elected this piece of shit. I refuse to absolve voters of culpability for their fucking votes, and that’s where the ultimate responsibility lies.



  • Bud, I’m too tired to care about arguing about reality with you. The entire settler movement and Likudnik side of Israel is fully aligned with and happy about Trump. The disappearance of Gaza and the West Bank entirely was extremely unlikely under Harris and is very likely under Trump. As is actual US troops side by side with the IDF in the Levant, in a very obvious, very different way than they have been so far. A lot of people, perhaps an entire people, are very probably going to die.









  • If he loses, I think we see the fractures in the Republican party really start to cascade, even more than they have since the last election. It’ll still be a stewpot of regressive, reactionary, fascist-fellating bullshit, but their goose-stepping will be out of sync.

    Meanwhile, a collapse of the Republican party opens the door for a similar restructuring of the Democratic party, which has for decades now been a big tent party for people who aren’t braindead but is otherwise pretty ideologically disparate. And before you wah wah wah FPTP, keep in mind that all RCV initiatives in the US have arisen out of Democratic party affiliated groups.


  • AI in health and medtech has been around and in the field for ages. However, two persistent challenges make rollout slow--and they’re not going anywhere, because of the stakes at hand.

    The first is just straight regulatory. Regulators don’t have a very good or very consistent working framework to apply to these technologies, but that’s in part due to how vast the field is in terms of application. The second is somewhat related to the first but is also very market driven: the explainability of outputs. Regulators generally want it, of course, but customers (i.e., doctors) also don’t just want predictions/detections--they want and need to understand why a model “thinks” what it does (a toy sketch of what that can look like is at the end of this comment). Doing that in a way that does not itself require significant training in the data and computer science underlying the particular model and architecture is often pretty damned hard.

    I think it’s an enormous oversimplification to say modern AI is just “fancy signal processing”--unless all inference, including that done by humans, is also just signal processing. Modern AI applies rules it is given, explicitly or by virtue of complex pattern identification, to inputs to produce outputs according to those “given” rules. Now, what no current AI can really do is synthesize new rules uncoupled from the act of pattern matching. Effectively, a priori reasoning is still largely out of scope, but the reality is that it simply isn’t necessary for an enormous portion of the value proposition of “AI” to be realized.
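    As promised above, here is a purely illustrative sketch of the explainability point--nothing from any real medtech product. The model, feature names, and data are all made up; the “explanation” is just each feature’s contribution for a linear model, which is about the simplest form an explanation can take:

    ```python
    # Toy illustration only: a linear model with made-up clinical feature names,
    # where an "explanation" is each feature's contribution (coefficient * value).
    # Real medtech models are far more complex, which is exactly why producing
    # explanations a clinician can act on is genuinely hard.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    feature_names = ["lesion_size_mm", "border_irregularity", "patient_age"]

    # Synthetic stand-in for real clinical data.
    X = rng.normal(size=(200, 3))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

    model = LogisticRegression().fit(X, y)

    def explain(sample):
        """Return the predicted probability plus each feature's contribution,
        ranked by magnitude, so the output isn't just a bare number."""
        prob = model.predict_proba(sample.reshape(1, -1))[0, 1]
        contributions = model.coef_[0] * sample
        ranked = sorted(zip(feature_names, contributions),
                        key=lambda kv: abs(kv[1]), reverse=True)
        return prob, ranked

    prob, ranked = explain(X[0])
    print(f"P(positive finding) = {prob:.2f}")
    for name, contribution in ranked:
        print(f"  {name}: {contribution:+.2f}")
    ```

    Even this toy version only tells a doctor which inputs pushed the score, not whether the underlying reasoning is clinically sound--and for deep models you don’t even get contributions this cheaply.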


  • Summary judgment is not a thing separate from a lawsuit. It’s literally a standard filing made in nearly every lawsuit (even if just as a hail mary). You referenced “beyond a reasonable doubt” earlier. That’s also not the standard used in (US) civil cases--the standard there is typically preponderance of the evidence.

    I’m also not sure what you mean by “court approved documentation.” Different jurisdictions approach contract law differently, but courts don’t “approve” most contracts--parties allege there was a binding contractual agreement, present their evidence to the court, and a mix of judge and jury determines whether, under the jurisdiction’s laws, an enforceable agreement was formed and how it can be enforced (i.e., are the obligations severable, what are the damages, etc.).


  • There’s plenty you could do if no label was produced with a sufficiently high confidence. These are continuous systems, so the idea of “rerunning” the model isn’t that crazy, but you could pair that with an automatic decrease in speed to generate more frames, stop the whole vehicle (safely, of course), divert its path, and I’m sure plenty more that an actual domain and subject matter expert might come up with--or a whole team of them.

    But while we’re on the topic, it’s not really right to even label these outputs as confidence values--they’re just output weightings associated with the respective labels. We’ve sort of decided they vaguely match up to something approximating confidence, but they aren’t based on a ground truth like I’m understanding your comment to imply--they derive entirely from the trained model weights and their confluence. Don’t really have anywhere to go with that thought beyond the observation itself.
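    To make the fallback idea in the first half of that comment concrete, here’s a rough, hypothetical sketch--the labels, the 0.6 floor, and the vehicle hooks are all invented for illustration, not taken from any real perception stack:

    ```python
    # Rough sketch of threshold-based fallback on a single frame's classifier
    # scores. A real stack would also filter across frames and fuse sensors.
    import numpy as np

    LABELS = ["pedestrian", "cyclist", "vehicle", "debris"]
    CONFIDENCE_FLOOR = 0.6  # arbitrary cutoff for this sketch

    def softmax(logits):
        # Normalized output weights, not calibrated probabilities.
        e = np.exp(logits - np.max(logits))
        return e / e.sum()

    def handle_frame(logits, vehicle):
        """Commit to the top label only if it clears the floor; otherwise slow
        down to buy more frames and flag the region for re-evaluation."""
        scores = softmax(np.asarray(logits, dtype=float))
        best = int(np.argmax(scores))
        if scores[best] >= CONFIDENCE_FLOOR:
            return LABELS[best], scores[best]
        vehicle.reduce_speed()
        vehicle.request_reevaluation()
        return None, scores[best]

    class DummyVehicle:
        def reduce_speed(self):
            print("reducing speed to generate more frames")
        def request_reevaluation(self):
            print("flagging region for re-detection on upcoming frames")

    label, score = handle_frame([0.2, 0.1, 0.3, 0.15], DummyVehicle())
    print(label, f"(top score {score:.2f})")
    ```

    The point of the sketch is only that “no label clears the bar” can be an actionable state rather than an error; what the right bar and the right fallback are is exactly the kind of thing that needs those domain experts.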




  • Maybe I’m wrong, and definitely correct me if so, but I thought the Houthis formed well before the Saudi-led effective genocide in Yemen. In fact, isn’t the current conflict the result of the Houthis basically couping the preceding government? If that’s the case, it doesn’t make much sense to characterize them as a resistance or reactionary force to anything external.