

Sure, but plenty of journalists use the em-dash. That’s where LLMs got it from originally. It alone is not a signature of LLM use in journalistic articles (I’m not calling this CTO guy a journalist, to be clear)


I mean… has anyone other than the company that made the tool said so? Like from a third party? I don’t trust that they’re not just advertising.


I’ve heard that these tools aren’t 100% accurate, but your last point is valid.


Can you fly out to my MIL every time her router breaks and fix it for her?
Edit: holy shit, your edits are insane


Ah, yes, the classic “no u”


LLMs, by design, cannot achieve consciousness. Big tech would like you to believe otherwise, though.
Sure, some other “AI chain” might in the future. But that’s not where the money pit is right now. The US economy put all of its eggs into the LLM basket. LLMs are not deterministic. They do not think. They are predictive, statistical models, and nothing more.
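To make “predictive, statistical” concrete, here’s roughly what a decoding loop boils down to. This is a toy Python sketch; the vocabulary and scoring function are made up, since a real model computes those scores with billions of learned parameters, but the loop itself is the same idea: score every candidate token, turn the scores into probabilities, sample one, repeat.

```python
import math
import random

# Toy vocabulary and a stand-in for the neural network's scores.
vocab = ["the", "cat", "sat", "on", "mat"]

def fake_logits(context):
    # Hypothetical scoring function; a real model derives these
    # scores from its learned weights and the full context.
    return [len(tok) + 0.1 * len(context) for tok in vocab]

def softmax(logits):
    # Convert raw scores into a probability distribution.
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

context = ["the"]
for _ in range(5):
    probs = softmax(fake_logits(context))
    # The next token is *sampled* from a distribution, not computed
    # as a single guaranteed answer, which is why the output isn't
    # deterministic.
    context.append(random.choices(vocab, weights=probs)[0])

print(" ".join(context))
```

Nothing in that loop knows or thinks anything; it’s statistics all the way down.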


This might be purely mathematical and algorithmic.
There’s no might here. It is not conscious. It doesn’t know anything. It doesn’t do anything without user input.
That ““study”” was released by the creators of Claude, Anthropic. Anthropic, like other LLM companies, derives its entire income from the idea that LLMs are conscious and can think better than you can. The goal, like with all of their published ““studies””, is to get more VC money and more paying users. If you start to think about it that way every time they say something like “the model resorted to blackmail when we threatened to turn it off”, it’s easy to see through their bullshit.


Off topic, but your use of the thorn is not helping you resist LLMs; it only makes your comments difficult to read for those with screen readers. The thorn is easily countered during training through various methods, and on top of that, the large language models you’re trying to counter have already been trained on knowledge about the thorn. Consistently swapping two single characters might actually make it easier for LLMs to understand the thorn (in other words, you could be training models to just “know” that thorn = th). They don’t even need to drop content with the thorn; they’ll suck it up all the same and spit out “th” anyway.
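To show what “easily countered during training” means in practice, here’s a minimal sketch of the kind of normalization pass a data pipeline could run before training (hypothetical Python; real cleaning pipelines are far more elaborate):

```python
# Minimal sketch of a pre-training normalization pass. A single
# lookup table is enough to undo the thorn substitution before
# the model ever sees it.
THORN_MAP = str.maketrans({"þ": "th", "Þ": "Th"})

def normalize(text: str) -> str:
    return text.translate(THORN_MAP)

print(normalize("Þe model never sees þe thorn"))
# prints: The model never sees the thorn
```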
Don’t link me to the big-AI-funded Anthropic study about small-dataset poisoning, because that is not what you’re doing by consistently changing one thing and posting otherwise factual information. To better achieve your goal of poisoning the well, your time would be better spent setting up fake websites that put crawlers into tarpits. That gives the models gibberish, makes crawlers waste time, and creates more “content” than you ever could manually.
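And a tarpit can be embarrassingly simple. Here’s a rough sketch using only Python’s standard library (the word list and link scheme are invented for illustration): every URL resolves to freshly generated nonsense that links to more nonsense, so a crawler can wander forever.

```python
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

WORDS = ["lorem", "flarn", "quux", "zybex", "ipsum", "grommet"]

def gibberish(n=200):
    # Procedurally generated junk text for the model to ingest.
    return " ".join(random.choice(WORDS) for _ in range(n))

class Tarpit(BaseHTTPRequestHandler):
    def do_GET(self):
        # Every path returns a fresh page of nonsense that links to
        # five more random paths, so the crawl never terminates.
        links = "".join(
            f'<a href="/{random.randrange(10**9)}">more</a> '
            for _ in range(5)
        )
        body = f"<html><body><p>{gibberish()}</p>{links}</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body.encode())

    def log_message(self, *args):
        pass  # keep the console quiet

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), Tarpit).serve_forever()
```

A real tarpit would also drip the response out slowly (a sleep between chunks) so each crawler connection wastes as much time as possible.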
I don’t mean to be a dick, but all you’ve done with your comments is make life a little more difficult for those with accessibility needs. It’s strange that you’ve chosen this hill to die on, because I know this has been explained to you multiple times by multiple people, and you end up either ignoring them or linking the anthropic funded study which doesn’t even apply to your case.


My bank apps work on Graphene with the exploit protection compatibility mode enabled, even the ones that require the Play Integrity API.


It should be noted that while you can do this, it can increase your attack surface, defeating a lot of the point of Graphene. Before I started using Graphene, I was a huge fan of rooting and getting full control of my device, so I definitely understand the appeal. But I don’t think I would root Graphene myself, automated or otherwise.


My fucking neighbors just keep letting their outdoor-only cats reproduce, then get mad when they meow for kibble. It’s the worst. My county has a well-funded TNR and last-litter program, but they don’t give a shit. I’ve done TNR on at least 8 or 9 cats and they just keep coming.
These aren’t even completely wild, feral cats or anything (since the shitty neighbors feed them from time to time), although I’m sure their offspring have produced some feral cats by now. It’s horrible.


Downvote me all you want
Ok


They make countertop dishwashers that connect to your sink. Still better than washing by hand, imo.


Who’s Charlie Kirk?


Agreed on all points, but especially #1. Fuck Nestle. Every time I buy a new product at the grocery store, I check to make sure it’s not made by Nestle or a subsidiary of Nestle.


If the House flips, though, then the Epstein investigation, along with other things, can proceed, and the House can block bills before they go to the Senate or the president. So while it won’t fix anything outright, it can stop more damage.


Everyone who eats and drinks chemicals will eventually die!


If you’re talking about AWS, AWS does much more than just cloud storage.
The AI we’ve had for over 20 years is not an LLM. LLMs are a different beast. This is why I hate the “AI” generalization. Yes, there are useful AI tools. But that doesn’t mean that LLMs are automatically always useful. And right now, I’m less concerned about the obvious hallucinations that LLMs constantly produce, and more concerned about the hype cycle that is causing a bubble. This bubble will wipe out savings and retirements and leave people starving. That’s not to mention the people, right now, being glazed up by these LLMs and falling into a sort of psychosis.
The execs causing this bubble say a lot of things similar to what you’re saying (with a lot more insanity, of course). They generalize, lumping all of the different, actually very useful tools (such as models used in cancer research) together with LLMs. This is what allows them to equate the very useful, well-studied, and well-tested models with LLMs. Basically, because some models and tools have had real impact, that must mean LLMs are just as useful, and we should definitely be melting the planet to feed more copyrighted, stolen data into them at any cost.
That usefulness is yet to be proven in any substantial way. Sure, I’ll grant that they can be situationally useful for things like writing new functions in existing code. They can be moderately useful for generating ideas for projects. But they are not useful for finding facts or the truth, and unfortunately, that is what the average person uses them for. They are also nowhere near able to replace software devs, engineers, accountants, etc., primarily because of how they are built: to hallucinate a result that looks statistically correct.
LLMs also will not become AGI; they are simply not capable of that. I know you’re not claiming otherwise, but the execs who say things similar to your last paragraph are claiming exactly that. I want to point out who you’re helping by saying what you’re saying.
I don’t have dozens, but I have 3. Those three are close family members. Do you think people don’t invite their parents or in-laws to their Plex server?