I am so happy I have an account on here, even if some people can be quite abrasive
People on Tildes (a walled-garden Reddit alternative) love to reminisce about the days of small forum communities. Maybe we need to bring them back.
I have a feeling this place and other decentralized social media will be banned in the near future. Look at what’s happening to TikTok. You either bend the knee or you get axed. It’s why the other social media giants bent the knee: they understand the writing on the wall. There’s more going on behind the scenes that they don’t share with us. I think we’re sort of watching a quiet coup.
If social media becomes decentralized we might even gain traction reversing some of the brainwashing on the masses. The current giants are just propaganda machines. Always have been, but it’s now blatant and obvious. They don’t even care to hide it.
Let’s call it by its name: neofeudalism/technofeudalism
In the same way that email has been decentralized from the get-go, social media could have been equally decentralized. I don’t mean the older PHP forums, but something different that would allow people to reconnect with others and maintain contacts.
Hey, that’s us!
Tech Broligarchy*
There’s another alternative, which is no social media at all. There is no particular problem that it solved. If it disappeared, would your quality of life be worse in any way?
Word.
Preaching to the choir!
It might be good to reiterate (in part) why we’re all in here.
Removed by mod
Agreed. But we need a solution against bots just as much. There’s no way the majority of comments in the near future won’t just be LLMs.
A decentralized authentication system that supports pseudonymous handles. The authentication system would have optional verification levels.
So I wouldn’t know who you are but I would know that you have verified against some form of id.
The next step would then be attributes, one of which is your real name, but also country of birth, race, gender, and other immutable attributes that can be used but not polled.
So I could post that I am Bob living in Arizona and I was born in Nepal, and those would be tagged as verified, but someone couldn’t reverse that and query those attributes if I chose to post without revealing those bits of data.
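A minimal sketch of how per-attribute verification could work: a trusted issuer signs each (handle, attribute, value) triple separately, so a user can reveal exactly the attributes they choose and a verifier can check only what was revealed. This uses HMAC as a stand-in for a real signature scheme (a production system would need public-key or anonymous credentials so verifiers don't hold the issuer's key); all names here are hypothetical.

```python
import hashlib
import hmac
import secrets

ISSUER_KEY = secrets.token_bytes(32)  # held by the hypothetical ID authority

def issue_attribute(handle: str, name: str, value: str) -> str:
    """Issuer signs one (handle, attribute, value) triple independently."""
    msg = f"{handle}|{name}|{value}".encode()
    return hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest()

def verify_attribute(handle: str, name: str, value: str, tag: str) -> bool:
    """Check one revealed attribute; unrevealed attributes stay unknown."""
    expected = hmac.new(ISSUER_KEY, f"{handle}|{name}|{value}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

# "bob" chooses to reveal only his country of birth, nothing else.
tag = issue_attribute("bob", "birth_country", "Nepal")
assert verify_attribute("bob", "birth_country", "Nepal", tag)
assert not verify_attribute("bob", "birth_country", "France", tag)
```

Because each attribute is tagged independently, there is nothing to "reverse": a post carrying only the `birth_country` tag leaks nothing about name, race, or gender.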
Closed instances with vetted members, there’s no other way.
Too high of a barrier to entry is doomed to fail.
Programming.dev does this and is the tenth largest instance.
Techy people are a lot more likely to jump through a couple of hoops for something better, compared to your average Joe who isn’t even aware of the problem
Techy people are a lot more likely to jump through hoops because that knowledge/experience makes it easier for them, they understand it’s worthwhile, or because it’s fun. If software can be made easier for non-techy people and there are no downsides, then of course that ought to be done.
Ok, now tell the linux people this.
It’s not always obvious or easy to make what non-techies will find easy. Changes could unintentionally make the experience worse for long-time users.
I know people don’t want to hear it, but can we expect non-techies to meet techies halfway by leveling their tech skill tree a bit?
Yeah that was kinda my point
10th largest instance being like 10k users… we’re talking about the need for a solution to help pull the literal billions of users from mainstream social media
There isn’t a solution. People don’t want to pay for something that costs huge resources, so their attention becoming the product that’s sold is inevitable. They also want to doomscroll slop; it’s mindless and mildly entertaining. The same way tabloid newspapers were massively popular before the internet and gossip mags exist despite being utter horseshite. It’s what people want.

Truly fighting it would require huge benevolent resources: a group willing to finance a manipulative and compelling experience and then not exploit it for ad dollars, pushing educational things instead or something. Facebook, Twitter etc. are enshittified, but they still cost huge amounts to run. And for all their faults, at least they’re a single point where illegal material can be tackled. There isn’t a proper corollary for this in decentralised solutions once things scale up.

It’s better that free, decentralised services stay small so they can stay under the radar of bots and bad actors. When things do get bigger, gated communities probably are the way to go. Perhaps until there’s a social media not-for-profit that’s trusted to manage identity, one that people don’t mind contributing costs to. But that’s a huge undertaking. One day, hopefully…
They also want to doomscroll slop; it’s mindless and mildly entertaining. The same way tabloid newspapers were massively popular before the internet and gossip mags exist despite being utter horseshite. It’s what people want.
The same analogy is applicable to food.
People want to eat fast food because it’s tasty, easily available, and cheap. Healthy food is hard to come by, needs time to prepare, and might not always be tasty. We have the concepts of nutrition taught at school and people still want to eat fast food. We have to do the same thing with social/internet literacy at school, and I’m not sure whether even that will be enough.
We have a human vetted application process too and that’s why there’s rarely any bots or spam accounts originating from our instance. I imagine it’s a similar situation for programming.dev. It’s just not worth the tradeoff to have completely open signups imo. The last thing lemmy needs is a massive influx of Meta users from threads, facebook or instagram, or from shitter. Slow, organic growth is completely fine when you don’t have shareholders and investors to answer to.
The bar is not particularly high with lemmy and that is a focused community.
People aren’t (generally) being made aware of the injustice on the other side of the planet while they are asking a question about C#.
It’s how most large forums ran back in the day and it worked great. Quality over quantity.
@a1studmuffin @ceenote the only reason these massive Web 2.0 platforms achieved such dominance is because they got huge before governments understood what was happening and then claimed they were too big to follow basic publishing law or properly vet content/posters. So those laws were changed to give them their own special carve-outs. We’re not mentally equipped for social networks this huge.
I disagree, I think we’re built for social networks that huge. The problems happen when money comes into the equation. If we lived in a world without price tags, and resources went where they needed to go instead of to whoever has the most money, and we were free to experiment with new lifestyles and ideas, we would thrive with a huge and diverse social network. Money is like a religious mind-virus that triggers psychopathy and narcissism in human beings by design, yet we believe in it like it’s a force of nature, like God or something. A new enlightenment is happening, all thanks to huge social networks allowing us to express our nature; it’s the institutions of control that aren’t equipped to handle such a breakdown of social barriers (like the printing press and the Protestant revolution, or the indigenous critiques before the Enlightenment period).
I dunno man. Discord has thousands of closed servers that are doing great.
If we’re talking about breaking tech oligarchs hold on social media, no closed server anywhere comes close as a replacement to meta or Twitter.
We’re talking about the need for a system with the reach of a mainstream Facebook/Insta/Twitter etc., something a majority of people have access to.
I.e. of the scale where someone can go “Hey, I bet my aunt that I haven’t talked to in 15 years might be on here, let me check”. Not a common occurrence in a closed-off Discord community.
Also, note that this doesn’t fully solve the primary problem: you’re still at the whims and control of a single point of failure. Discord Inc. could at any point in time decide to spy on closed rooms, censor any content they dislike, etc…
I question if we really need spaces like that anymore. But I see where you are coming from.
I was definitely only thinking about social places like Lemmy and Discord. Not networking places like Facebook and LinkedIn.
It really feels like there are zero solutions available. I’m at a point where I realize that all social networks have major negative impacts on society. And I can’t imagine anything fixing it that isn’t going back to smaller, local, and private. Maybe we don’t need places where you can expect everyone to be there.
When we can expect everyone on the planet to be present in a network, the conflict and vitriol will be perpetual. We are not mature enough, or on the same page enough, as a species to not resort to mudslinging.
Could do something like Discord. Rather than communities, you have “micro-instances” existing on top of the larger instance, and communities existing within the micro-instances. And of course, make micro-instances easy to create.
There might be clever ways of doing this: having volunteers help with the vetting process, or allowing a certain number of members per day plus a queue, then vetting them along the way…
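The daily-cap-plus-queue idea could be sketched as a simple FIFO where a fixed number of applicants per day get handed off to volunteer vetters; the class and method names here are made up for illustration.

```python
from collections import deque

class SignupQueue:
    """Admit at most `daily_cap` applicants per day; the rest wait in line."""

    def __init__(self, daily_cap: int):
        self.daily_cap = daily_cap
        self.pending: deque[str] = deque()

    def apply(self, handle: str) -> None:
        """New applicant joins the back of the queue."""
        self.pending.append(handle)

    def admit_today(self) -> list[str]:
        """Pop today's batch, to be handed to volunteer vetters."""
        n = min(self.daily_cap, len(self.pending))
        return [self.pending.popleft() for _ in range(n)]

q = SignupQueue(daily_cap=2)
for user in ["ana", "ben", "cho"]:
    q.apply(user)
assert q.admit_today() == ["ana", "ben"]  # today's vetting batch
assert q.admit_today() == ["cho"]         # the overflow waits a day
```

The cap keeps the volunteers' workload bounded even during a signup surge, at the cost of making newcomers wait.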
If you could vet members in any meaningful way, they’d be doing it already.
Most instances are open wide to the public.
A few have registration requirements, but it’s usually something banal like “say ‘I agree’ in Spanish to prove you’re Spanish enough for this instance”, etc.
This is a choice any instance can make if they want. None are making it, but that doesn’t mean they can’t, or that it doesn’t work.
I was referring to some of the larger players in the space, ie Meta, Twitter, etc.
Right, but they’re shit and don’t do good things out of principle.
We, the Fediverse, are the alternative to them.
Doesn’t matter if they’re shit or not, they don’t want bots crawling their sites, straining their resources, or constantly shit posting, but they do anyway. And if the billion dollar corporations can’t stop them, it’s probably a good bet that you can’t either.
Because they want user data over anything.
We want quality communities over anything.
We can be selective, they go bankrupt without consistent growth.
It could be cool to get a blue check mark for hosting your own domain (excluding the free domains)
It would be more expensive than bot armies are willing to deal with.
Well, what doesn’t work, it seems, is giving (your) access to “anyone”.
Maybe a system where people (I know this will be hard) have to look up outlets themselves, instead of being fed a “stream” dictated by commercial incentives (directly or indirectly).
I’m working on a secure decentralised FOSS network where you can share whatever you want, like websites. Maybe that could be a start.
I think you replied to the wrong comment.
Well no?
What did I miss?
I’m speaking broadly in general terms in the post, about sharing online.
This conversation was about bots. Yours is about “outlets” and “streams”, whatever that is.
If you have some algorithm or few central points distributing information, any information, you’ll get bot problems. If you instead yourself hook up with specific outlets, you won’t have that problem, or if one is bot infested you can switch away from it. That’s hard when everyone is in the same outlet or there are only few big outlets.
Sorry if it’s not clear.
How is it going to be as big as reddit if EVERYONE is vetted?
Why do you want it to be as big as Reddit?
Isn’t that basically the same result though…
Problem with tech oligarchy is it just takes one person to get corrupted and then he blocks out all opinion that attacks his goals.
So the solution is federation, free speech instances that everyone can say whatever they want no matter how unpopular.
How do we counteract the bots…
Well we need the instances to verify who gets in, and make sure the members aren’t bots or saying unpopular things. These instances will need to be big, and well funded.
How do we counter these instance owners getting bought out, corrupted (repeat loop).
No? The problem of tech oligarchy is that they control the systems. Here anyone can start up a new instance at the press of a button. That is the solution, not allowing unfiltered freeze peach garbage.
Small, “local”, human-sized groups are the only way we ensure the humanity of a group. These groups can vouch for each other, just as we do with Fediseer.
One big gatekeeper is not the answer and is exactly the problem we want to get away from.
You counter them by moving to a different instance.
The concept, however, is that if a new instance is detached from the old one, it’s basically the same story as leaving MySpace for Facebook etc.: we go through the long vetting process over and over again, the userbase fragments, and reaching critical mass is a challenge every time. I mean, yeah, if we start with a circle of 10 trusted networks and one goes wrong, it gets defederated, people migrate to one of the 9, or a new one gets brought into the circle. But actual vetting is a difficult process, and it makes growing very difficult.
Vetted members could still bot, though, or have their accounts compromised. Not a realistic solution.
Can you have an instance that allows viewing other instances, but others can’t see in?
We have to use trust from real life. It’s the only thing that centralized entities can’t fake.
I feel like it’s only a matter of time before most people just have AI’s write their posts.
The rest of us with brains, that don’t post our status as if the entire world cares, will likely be here, or some place similar… Screaming into the wind.
I feel like it’s only a matter of time before most people just have AI’s write their posts.
That’s going right into /dev/null as soon as I detect it-- both user and content.
Instances that don’t vet users sufficiently get defederated for spam. Users then leave for instances that don’t get blocked. If instances are too heavy handed in their moderation then users leave those instances for more open ones and the market of the fediverse will balance itself out to what the users want.
I wish this was the case but the average user is uninformed and can’t be bothered leaving.
Otherwise the bigger service would be lemmy, not reddit.
the market of the fediverse will balance itself out to what the users want.
Just like classical macroeconomics, you make the deadly (false) assumption that users are rational and will make the choice that’s best for them.
The sad truth is that when Reddit blocked 3rd party apps, and the mods revolted, Reddit was able to drive away the most nerdy users and the disloyal moderators. And this made Reddit a more mainstream place that even my sister and her friends know about now.
We could ask for anonymous digital certificates. It would work this way.
Many countries already issue digital certificates to their citizens, only one certificate per ID. From those, anonymous certificates could be derived. The anonymous certificate contains enough information to be verifiable as valid, but not enough to identify the user. Websites could ask for an anonymous certificate at registration/login. With the certificate, they would validate that it’s a human being while keeping that human being anonymous. The only leaked data would probably be the country of origin, as these certificates tend to be authenticated by a national CA.
The only problem I see in this is international adoption outside fully developed countries: many countries not being able to provide this for their citizens, having lower security standards so fraudulent certificates could be made, or a big enough poor population that would gladly sell their certificate for bot farms.
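The "verifiable but anonymous" property being described can be built from blind signatures: the authority signs a blinded token without ever seeing it, so the unblinded signature it produces can't be linked back to the issuance event. A toy RSA blind-signature sketch, with deliberately tiny, insecure parameters for illustration only:

```python
# Toy RSA blind signature. Real deployments use large keys and padded
# hashes (e.g. the RSABSSA construction); this only shows the algebra.
from math import gcd
import secrets

p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # authority's private exponent

m = 42  # stand-in for the hash of the user's one-per-ID token

# 1. User blinds the token with a random factor r.
while True:
    r = secrets.randbelow(n - 2) + 2
    if gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n

# 2. Authority signs the blinded value without ever seeing m.
blind_sig = pow(blinded, d, n)

# 3. User unblinds; the result is a valid signature on m itself.
sig = (blind_sig * pow(r, -1, n)) % n

# 4. Any website can verify the signature against the national CA's
#    public key, but the CA cannot link sig back to the signing request.
assert pow(sig, e, n) == m
```

The country-of-origin leak mentioned above remains: verification still happens against a specific national CA key, even though the individual stays anonymous.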
Your last sentence highlights the problem. I can have a bot that posts for me. Also, if an authority is in charge of issuing the certificates then they have an incentive to create some fake ones.
Bots are vastly more useful as the ratio of bots to humans drops.
Also the problem of relying on a nation state to allow these certificates to be issued in the first place. A repressive regime could simply refuse to give its citizens a certificate, which would effectively block them from access to a platform that required them.
I mentioned this in another comment, but we need to somehow move away from free form text. So here’s a super flawed makes-you-think idea to start the conversation:
Suppose you had an alternative kind of Lemmy instance where every post has to include both the post like normal and a “Simple English” summary of your own post. (Like, using only the “ten hundred most common words” Simple English) If your summary doesn’t match your text, that’s bannable. (It’s a hypothetical, just go with me on this.)
Now you have simple text you can search against, use automated moderation tools on, and run scripts against. If there’s a debate, code can follow the conversation and intervene if someone is being dishonest. If lots of users are saying the same thing, their statements can be merged to avoid duplicate effort. If someone is breaking the rules, rule enforcement can be automated.
Ok so obviously this idea as written can never work. (Though I love the idea of brand new users only being allowed to post in Simple English until they are allow-listed, to avoid spam, but that’s a different thing.) But the essence and meaning of a post can be represented in some way. Analyze things automatically with an LLM, make people diagram their sentences like English class, I don’t know.
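The mechanical part of this hypothetical is at least easy to sketch: checking a summary against a fixed allowed-word list is a few lines. The word list below is a tiny stand-in for the real "ten hundred most common words" list.

```python
import re

# Tiny stand-in for the "ten hundred most common words" list.
ALLOWED = {"the", "a", "is", "are", "people", "talk", "to", "each",
           "other", "on", "this", "site", "and", "some", "not", "real"}

def disallowed_words(summary: str) -> list[str]:
    """Return every word in the summary outside the allowed list."""
    words = re.findall(r"[a-z']+", summary.lower())
    return [w for w in words if w not in ALLOWED]

assert disallowed_words("people talk to each other on this site") == []
assert disallowed_words("federated discourse paradigm") == [
    "federated", "discourse", "paradigm"]
```

The hard part, as the replies point out, isn't this check; it's deciding whether the summary honestly matches the full post.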
It sounds like you’re describing Newspeak from 1984.
Simplifying language removes nuance. If you make moderation decisions based on the simple English vs. what the person is actually saying, then you’re policing the simple English more than the nuanced take.
I’ve got a knee-jerk reaction against simplifying language past the point of clarity, and especially automated tools trying to understand it.
A bot can do that and do it at scale.
I think we are going to need to reconceptualize the Internet and why we are on here at all.
It already is practically impossible to stop bots, and in a very short time it’ll be completely impossible.
I think I communicated part of this badly. My intent was to address “what is this speech?” classification, to make moderation scale better. I might have misunderstood you but I think you’re talking about a “who is speaking?” problem. That would be solved by something different.
What? I post a lot, but the majority?
…oh, you said LLM. I thought you said LMM.
A simple thing that may help a lot is for all new accounts to be flagged as bots, requiring opt out of the status for normal users. It’s a small thing, but any barrier is one more step a bot farm has to overcome.
I signed up for the Arch GitLab last week and there was a 12-step identification process that was completely ridiculous. It’s clear 99.99% of users will just give up.
Also is data scraping as much of an issue?
Data scraping is a logical consequence of being an open protocol, and as such I don’t think it’s worth investing much time in resisting it so long as it’s not impacting instance health. At least while the user experience and basic federation issues are still extant.
Reputation systems. There is tech that solves this but Lemmy won’t like it (blockchain)
You don’t need blockchain for reputation systems, lol. Stuff like Gnutella and the PGP web of trust has been around forever. Admittedly, blockchain can add barriers to some attacks, mainly Sybil attacks, but a friend-of-a-friend/WoT network structure can mitigate that somewhat too.
Slashdot had this 20 years ago. So you’re right, this is not new, nor does it need some new technology.
The space is much more developed now. It would need ever-improving, dynamic proof-of-personhood tests.
I think a web-of-trust-like network could still work pretty well where everyone keeps their own view of the network and their own view of reputation scores. I.e. don’t friend people you don’t know; unfriend people who you think are bots, or people who friend bots, or just people you don’t like. Just looked it up, and wikipedia calls these kinds of mitigation techniques “Social Trust Graphs” https://en.wikipedia.org/wiki/Sybil_attack#Social_trust_graphs . Retroshare kinda uses this model (but I think reputation is just a hard binary, and not reputation scores).
I don’t see how that stops bots, really. We’re post-Turing test. In fact, they could even scan previous reputation-point allocations there and devise a winning strategy pretty easily.
I mean, “don’t friend or put high trust on people you don’t know” is pretty strong. Due to the “six degrees of separation” phenomenon, it scales pretty easily as well. If you have stupid friends who friend bots, you can cut them all off, or just lower your trust in them.
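One way this personal trust view could be scored: trust along a path multiplies, so a bot reachable only through a low-trust friend can't launder its way to a high score. A sketch, with made-up names and edge weights:

```python
# Each user keeps their own edges: trust[a][b] is a's direct trust in b, in [0, 1].
trust = {
    "me":    {"alice": 0.9, "bob": 0.4},
    "alice": {"carol": 0.8},
    "bob":   {"spambot": 0.9},
}

def inferred_trust(source: str, target: str, depth: int = 3) -> float:
    """Best trust path from source to target, multiplying edge weights."""
    if depth == 0:
        return 0.0
    direct = trust.get(source, {}).get(target, 0.0)
    via = max((w * inferred_trust(mid, target, depth - 1)
               for mid, w in trust.get(source, {}).items()),
              default=0.0)
    return max(direct, via)

# carol is reachable through well-trusted alice, so she inherits decent trust
assert round(inferred_trust("me", "carol"), 2) == 0.72
# the spambot only gets in through low-trust bob, so its score stays low
assert round(inferred_trust("me", "spambot"), 2) == 0.36
```

Lowering your trust in bob automatically deflates everything he vouches for, which is the "cut them off or lower your trust in them" behavior described above.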
“Post-turing” is pretty strong. People who’ve spent much time interacting with LLMs can easily spot them. For whatever reason, they all seem to have similar styles of writing.
I mean, “don’t friend or put high trust on people you don’t know” is pretty strong. Due to the “six degrees of separation” phenomenon, it scales pretty easily as well. If you have stupid friends who friend bots, you can cut them all off, or just lower your trust in them.
Know IRL? It seems that would inherently limit discoverability and openness. New users or those outside the immediate social graph would face significant barriers to entry, and it’s still vulnerable to manipulation, such as bots infiltrating through unsuspecting friends or malicious actors leveraging connections to gain credibility.
“Post-turing” is pretty strong. People who’ve spent much time interacting with LLMs can easily spot them. For whatever reason, they all seem to have similar styles of writing.
Not the good ones; many conversations online are fleeting. Those tell-tale signs can be removed with the right prompt and context. We’re post-Turing in the sense that in most interactions online, people wouldn’t be able to tell they were speaking to a bot, especially if they weren’t looking, which most aren’t.
Do you have a proof of concept that works?
Are they just putting everything on layer 1, and committing to low fees? If so, then it won’t remain decentralized once the blocks are so big that only businesses can download them.
It has adjustable block size and computational cost limits through miner voting, NiPoPoWs enable efficient light clients. Storage Rent cleans up old boxes every four years. Pruned (full) node using a UTXO Set Snapshot is already possible.
Plus you don’t need to bloat the L1, can be done off-chain and authenticated on-chain using highly efficient authenticated data structures.
There are simple tests to out LLMs, mostly things that will trip up the tokenizers or sampling algorithms (with character counting being the most famous example). I know people hate captchas, but it’s a small price to pay.
Also, while no one really wants to hear this, locally hosted “automod” LLMs could help seek out spam too. Or maybe even a Kobold Hoard type “swarm.”
Captchas don’t do shit and have actually been training for computer vision for probably over a decade at this point.
Also: any “simple test” is fixed in the next version. It is similar to how people still insist “AI can’t do feet” (much like Rob Liefeld). That was fixed pretty quickly; it’s just that much of the freeware out there is using very outdated models.
I’m talking text only, and there are some fundamental limitations in the way current and near future LLMs handle certain questions. They don’t “see” characters in inputs, they see words which get tokenized to their own internal vocabulary, hence any questions along the lines of “How many Ms are in Lemmy” is challenging even for advanced, fine tuned models. It’s honestly way better than image captchas.
They can also be tripped up if you simulate a repetition loop. They will either give an incorrect answer to try to continue the loop, or, if their sampling is overtuned, give incorrect answers avoiding cases where the loop is the correct answer.
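A character-counting challenge of the kind described is trivial to generate and grade; the point is that a human counts letters directly, while a tokenizer-based model never sees individual characters. The word list and phrasing below are made up for illustration.

```python
import secrets

WORDS = ["Lemmy", "federation", "instance", "moderation"]

def make_challenge() -> tuple[str, int]:
    """Pick a word and one of its letters; the answer is the letter count."""
    word = secrets.choice(WORDS)
    letter = secrets.choice(word.lower())  # guaranteed to appear at least once
    question = f"How many letter '{letter}'s are in the word '{word}'?"
    return question, word.lower().count(letter)

question, answer = make_challenge()
assert answer >= 1  # the letter was drawn from the word itself
```

As the reply below notes, any such fixed test can be patched around once it becomes worth the model vendor's time, so this buys a window, not a wall.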
They don’t “see” characters in inputs, they see words which get tokenized to their own internal vocabulary, hence any questions along the lines of “How many Ms are in Lemmy” is challenging even for advanced, fine tuned models.
And that is solved just by keeping a non-processed version of the query (or one passed through a different grammar to preserve character counts and typos). It is not a priority because there are no meaningful queries where that matters other than a “gotcha” but you can be sure that will be bolted on if it becomes a problem.
Again, anything this trivial is just a case of a poor training set or an easily bolted on “fix” for something that didn’t have any commercial value outside of getting past simple filters.
Sort of like how we saw captchas go from “type the third letter in the word ‘poop’” to nigh unreadable color blindness tests to just processing computer vision for “self driving” cars.
They can also be tripped up if you simulate a repetition loop.
If you make someone answer multiple questions just to shitpost they are going to go elsewhere. People are terrified of lemmy because there are different instances for crying out loud.
You are also giving people WAY more credit than they deserve.
Well, that’s kind of intuitively true in perpetuity
An effective gate for AI becomes a focus of optimisation
Any effective gate with a motivation to pass will become ineffective after a time, on some level it’s ultimately the classic “gotta be right every time Vs gotta be right once” dichotomy—certainty doesn’t exist.
@NuXCOM_90Percent @brucethemoose would some kind of proof of work help solve this? Afaik it’s working on Tor
Somehow I didn’t get pinged for this?
Anyway, proof of work scales horrendously, and spammers will always beat out legitimate users at it, if it even holds. I think Tor is a different situation, where the financial incentives are aligned differently.
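For reference, the proof-of-work idea being discussed is hashcash-style: find a nonce whose hash of (message + nonce) meets a difficulty target. The scaling problem is visible right in the sketch: the expected cost per post is identical for a phone and a GPU farm, so raising the difficulty prices out legitimate users before it prices out spammers.

```python
import hashlib
from itertools import count

def solve_pow(message: str, difficulty: int) -> int:
    """Find a nonce so sha256(message+nonce) starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    for nonce in count():
        digest = hashlib.sha256(f"{message}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce

def verify_pow(message: str, nonce: int, difficulty: int) -> bool:
    """Verification is a single hash, so servers check it cheaply."""
    digest = hashlib.sha256(f"{message}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = solve_pow("hello fediverse", 4)  # ~16^4 hashes on average
assert verify_pow("hello fediverse", nonce, 4)
```

Each extra zero digit multiplies the average solving work by 16 while verification stays a single hash, which is why it deters casual floods but not a funded spammer.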
But this is not my area of expertise.
We also need a solution to fucking despot mods and admins deleting comments and posts left-and-right because it doesn’t align with their personal views.
I’ve seen it happen to me personally across multiple Lemmy domains (I’m a moron and don’t care much to have empathy in my writing, and it sets off these limp-wrist, morbidly obese mods/admins, who delete my shit and ban me), and it happens to many people as well.
Don’t go blaming your inability to have empathy on adhd. That is in absolutely no way connected. You’re just a rude person.
I’m also rude in real life too! 😄
Yeah you can go fuck yourself for pinning your flavor of bullshit on ADHD. Take some accountability for your actions.
So much irony in this one
Good job chief 🤡
Freedom of expression does not mean freedom from consequences. As someone who loves to engage on trolling for a laugh online I can tell you that if you get banned for being an asshole you deserve it. I know I have.
- Dude says he is regarded because of reasons, in a civil manner
- Another dude proceeds to aggressively insult him… I would say not civil.
Who is the asshole here?
limp-wrist morbidly obese
That tells me all I need to know
Yes
I do indeed fuck myself, every day, thanks.
You have that tool, it’s called finding or hosting your own instance.
Just create your own comm.
lemm.ee and lemmy.dbzer0.com both seem like very level-headed instances. You can say stuff even if the admins disagree with it, and it’s not a crisis.
Some of the other big ones seem some other way, yes.
Lemm.ee hasn’t booted me yet? Much like OP, I’m not the most empathetic person, and if I’m annoyed then what little filter that I have disappears.
Shockingly, I might offend folks sometimes!
Communities should be self moderated. Once we have that we can really push things forward.
Self Moderated is just fine. Why do I need to doxx myself to be online? I’m not giving away my birth certificate or SSN just to post on social media that idea is crazy lmao.
Google hasn’t done much with YouTube yet