• 177 Posts
  • 624 Comments
Joined 2 years ago
Cake day: March 19th, 2024


  • This might be purely mathematical and algorithmic.

    There’s no might here. It is not conscious. It doesn’t know anything. It doesn’t do anything without user input.

    That ““study”” was released by Anthropic, the creators of Claude. Anthropic, like other LLM companies, gets its entire income from the idea that LLMs are conscious and can think better than you can. The goal, as with all of their published ““studies””, is to get more VC money and more paying users. If you start to think about it that way every time they say something like “the model resorted to blackmail when we threatened to turn it off”, it’s easy to see through their bullshit.


  • Off topic, but your use of the thorn is not helping you resist LLMs; it only makes your comments difficult to read for people using screen readers. The thorn is easily countered during training through various methods, and on top of that, these are large language models you’re trying to counter, models that have already been trained on knowledge about the thorn. Constantly swapping two single characters might actually make it easier for LLMs to handle the thorn (in other words, you could be training models to just “know” that thorn = th). They don’t even need to drop content with the thorn; they’ll suck it up all the same and spit out “th” anyway, because undoing the swap is a trivial normalization step (see the sketch at the end of this comment).

    Don’t link me to the big-AI-funded Anthropic study about small-dataset poisoning, because that is not what you’re doing: you make the same single substitution constantly and your text is otherwise factual. To better achieve your goal of poisoning the well, your time would be better spent setting up fake websites that lead crawlers into tarpits. That gives the models gibberish, makes crawlers waste time, and creates more “content” than you ever could manually.

    I don’t mean to be a dick, but all you’ve done with your comments is make life a little more difficult for people with accessibility needs. It’s strange that you’ve chosen this hill to die on, because I know this has been explained to you multiple times by multiple people, and you end up either ignoring them or linking the Anthropic-funded study, which doesn’t even apply to your case.
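
    To make that concrete, here is a minimal, hypothetical sketch (Python, not any actual vendor’s pipeline) of the kind of one-line normalization a crawler or training preprocessor could apply to undo the substitution before tokenization; the function name and sample text are illustrative only:

    ```python
    # Hypothetical sketch: map the thorn back to "th" before tokenization.
    # Not taken from any real scraper or training pipeline.
    def normalize_thorn(text: str) -> str:
        # Preserve rough capitalization when replacing the character.
        return text.replace("Þ", "Th").replace("þ", "th")

    if __name__ == "__main__":
        sample = "Þis is þe kind of comment þat gets scraped anyway."
        print(normalize_thorn(sample))
        # -> "This is the kind of comment that gets scraped anyway."
    ```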

  • It should be noted that while you can do this, it can increase your attack surface, defeating a lot of the point of Graphene. Before I started using Graphene, I was a huge fan of rooting and getting full control of my device, so I definitely understand the appeal. But I don’t think I would root Graphene myself, automated or otherwise.


  • My fucking neighbors just keep letting their outdoor-only cats reproduce and then get mad when they meow for kibble. It’s the worst. My county has a well-funded TNR and last-litter program, but they don’t give a shit. I’ve done TNR on at least 8 or 9 cats and they just keep coming.

    These aren’t even completely wild, feral cats or anything (since the shitty neighbors feed them from time to time), although I’m sure their offspring have turned feral by now. It’s horrible.

  • AmbiguousProps@lemmy.today to Technology@lemmy.ml · LLMs Will Always Hallucinate

    The AI we’ve had for over 20 years is not an LLM. LLMs are a different beast. This is why I hate the “AI” generalization. Yes, there are useful AI tools. But that doesn’t mean LLMs are automatically always useful. And right now, I’m less concerned about the obvious hallucinations that LLMs constantly produce, and more concerned about the hype cycle that is causing a bubble. This bubble will wipe out savings and retirements and leave people starving. That’s not to mention the people currently, right now, being glazed up by these LLMs and falling into a sort of psychosis.

    The execs causing this bubble say a lot of things similar to what you’re saying (with a lot more insanity, of course). They lump all of the different, actually very useful tools (such as models used in cancer research) together with LLMs. This is what allows them to equate the well-studied, well-tested models with LLMs. Basically, because some models and tools have had actual impact, that must mean LLMs are also just as useful, and we should definitely be melting the planet to feed more copyrighted, stolen data into them at any cost.

    That usefulness has yet to be proven in any substantial way. Sure, I’ll grant that they can be situationally useful for things like writing new functions in existing code. They can be moderately useful for generating ideas for projects. But they are not useful for finding facts or the truth, and unfortunately, that is what the average person uses them for. They are also nowhere near able to replace software devs, engineers, accountants, etc., primarily because they are built to hallucinate a result that merely looks statistically correct.

    LLMs also will not become AGI; they are not capable of that in any capacity. I know you’re not claiming otherwise, but the execs who say things similar to your last paragraph are claiming exactly that. I just want to point out who you’re helping by saying what you’re saying.