

I’ve said similar before. When they finally have that moment of realization, that’s when we can welcome them back to a sane worldview.
Leopards Eating Faces is cathartic, but if we shun them for being wrong and realizing it they will double down.
Bio field too short. Ask me about my person/beliefs/etc if you want to know. Or just look at my post history.




Fair enough. My point wasn’t to equate code with github, but to suggest that github, and any other code repository, is effectively an app store by the definition of the California law, and is therefore supposedly responsible for handling this ‘age signal’ bullshit.
Similarly, GeoCities from the ’90s was a publicly accessible website (*actually it’s not anymore – just tried, and it seems to be completely dead now, as opposed to mostly dead in the early 2000s; RIP) that could, and did, distribute software, so it would also have been an “application store”. Archive.org is maybe a better example now: you can download tons of ‘applications’ from there, and none of them will ever have age verification baked in. Is Archive.org now illegal? Let’s find out.


I get really upset when there’s this association between “the libs” and “non-authoritarian voters”. I haven’t lobbied for state control of social media, and I don’t support it.
I fully agree that there are shitty people elected as democrats.
I could go on a mile-long monologue about this, but I won’t do it here. I’m aware of the various definitions of liberal and I don’t want to litigate that – I’m using the US scope of the two parties that can actually matter: Conservative/Republican vs Liberal/Democratic. [If you identify as Liberal in the US, you’re probably actually a Socialist, but the media hates that word, so we don’t use it.]
The short version is that the PEOPLE want things to be better; the voters called “the libs” want things to be better. Not enough of us are engaged at the low level to fix this, and I think it’ll only take a few of us to do the grass-roots remix that the conservatives pulled off as the Tea Party: fix the Democratic org so that it both wins us elections (if we get any more) and cuts out the rot.
Gotta start local though. If you’re mad, join your precinct and choose who votes in the district, etc. Don’t wait for November and then be mad at your choices. Primaries are over for 2026, but you can influence choices for local offices in 2027 and other state and national ones in 2028.
Don’t just be mad at your options, help make the options better. And ‘both sides’-ing it is either malicious or at least detrimental:
“Oh, the system is fucked. Guess we’ll keep aiming for the dystopia! We can’t possibly change the system!”


It wasn’t Microslop-owned for most of my experience with it, but even SourceForge went the way of enshittification. The only real hope is, unfortunately, decentralization. My worry is that discoverability is the price we pay.
We can’t trust that any single point of failure won’t eventually fall to corporate greed. But if there’s a central place to locate things, there’s a central place to control them; even if it’s literally “search”!


I used to be corporate IT, and this would have ruined my weekend… but my CEO donated to McConnell, Trump and the RNC, so fuck them.


I’ve been posting this in other threads too, and while the OS angle is huge, and worth picking a fight over, I haven’t seen any coverage of how this goes after developers too.
I think this is an attack on ALL open-source.
These bills are written by people who are either genuinely or maliciously tech-illiterate, and who don’t understand either the terminology or the practical impacts. And of course it’s wrapped in ‘what about the children?!’
They include sweeping definitions of things like “developer”, “application” and “application store” (paraphrasing; not quoting a specific bill, but New York, Colorado and California all do this), and then require both developers and operating system providers to handshake this age verification data or face financial ruin. I think the original intent, or at least the appearance of intent, is that the store developer does the handshake. I’m not a lawyer, but these definitions look vague enough to be weaponized against basically anything software-shaped.
I have a github account and have contributed to “applications”. As I read them, these bills pose a serious threat to me if I continue to do so: that makes me a “developer”, and I would need to ensure the things I contribute to are doing age verification – which I don’t want to do.
I think that, even outside the surveillance aspect, the chilling effect of devs not publishing applications is the end goal: gatekeeping software to the big publishers who have both the capacity to follow the law and the lawyers/pockets to survive a suit. These laws are going to be like the DMCA 1201 language (which had much, much more prose wrapped around it and at least attempted to limit scope), and that HAS been weaponized against solo devs trying to make the world better.
I fully expect some suit against multiple github repo owners on Jan 2, 2027.
I’ve emailed the office of Buffy Wicks, the author of the California bill, with similar details as the above. I haven’t yet identified the authors of the NY and CO bills, but I’m working on that too. If you live in one of these places, please contact your state officials and tell them this is a bad idea – and if you don’t live there, keep an eye on your state bills.


The fun part is that these cameras are not owned, operated or property of your local jurisdiction. They are hardware/software as a service. IANAL, but I think Flock would have to sue you.
I have not personally de-flocked anything, but my local area doesn’t have any that aren’t tied to a business parking lot.
If some show up, I can’t imagine a 5 minute walk with a hat, face mask and a can of spraypaint wouldn’t be sufficient to disable one without risk. Might need a stool.


Agreed. For anyone not already following Louis Rossmann: he’s a right-to-repair guy on an anti-surveillance arc and is always posting good information that will make you seethe.
His city tried to buy Flock cameras and he organized enough resistance that they cancelled… but now they’re trying again a while later, assuming the scrutiny has died down: https://www.youtube.com/watch?v=3MiLiQ6olkI
Ring will do the same thing.
Microslop, Meta, Google, et al will get their hands slapped when they are too proud about how they are fucking you, and then will issue a retraction, but only long enough to let the anger die out before doing it again more quietly.


I hear that. My local ABC store doesn’t scan my ID, though I don’t see a future where they don’t eventually scan every time; and my local grocery store scans occasionally, but not always.
I can’t just not buy age-verified products, though, because sometimes it’s cold medicine or a prescription. **
Back to the original thread, this is not a Discord problem, this is a privacy problem. We need to push back on data capture in general and tell legislators that privacy is important to all people, even those who buy booze.
** could we make sneaky little stickers that obfuscate the barcode just enough to prevent it from scanning? The cashier would likely revert to visual inspection, without the data retention: face matches photo, age is good, override.
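For what it’s worth, the sticker idea would probably ‘fail safe’: retail-style barcodes carry a check digit, so a partially obscured code tends to not scan at all rather than scan wrong (ID cards usually use PDF417, which adds error correction on top, so a sticker would need to cover more of it). A sketch of the EAN-13 checksum to show the idea – the sample number is just a known-valid example, not any real product:

```python
def ean13_check_digit(first12: str) -> int:
    """Compute the EAN-13 check digit: odd positions weigh 1, even positions weigh 3."""
    total = sum(int(d) * (3 if i % 2 else 1) for i, d in enumerate(first12))
    return (10 - total % 10) % 10

def is_valid_ean13(code: str) -> bool:
    """True only if the 13-digit code's final digit matches its checksum."""
    return (len(code) == 13 and code.isdigit()
            and int(code[-1]) == ean13_check_digit(code[:12]))

print(is_valid_ean13("4006381333931"))  # well-known valid example -> True
print(is_valid_ean13("4006381333932"))  # one digit off -> False
```

A scanner that reads a code whose last digit fails this check discards the read instead of reporting garbage, which is why the cashier would fall back to eyeballing the ID.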


so… you do want face verification for online interactions?
edit: In-person, for a “regulated” substance, it seems reasonable to require that proof be checked as part of policy, regardless of appearance. There’s no storage (in most cases); the cashier is the only one who looks at the ID, and they’re supposed to check it to keep their job. The only place I’ve seen recently where your ID is actually tracked is stuff like Sudafed, where buying too much makes you a potential meth maker.
Online, the rule has been “trust me, bro” forever. There’s no person testing you, aside from maybe a paywall ensuring you have a credit card as an age check. Steam is doing the online equivalent of the minimal validation and minimal retention that the booze store does.
This is hardly OMGVALVE.


Perfect. And then later: “I hope you enjoyed your glue pizza. We don’t have enough fuel to reach Paris or return to land. This plane has no emergency beacon or flotation devices and is about to ‘land’ in the ocean. Sorry for the inconvenience!”


I wish him to sleep in a bed made by AI. Eat a meal made by AI. And then take a flight in a plane made and piloted by AI.


Does that make this better? A translated French search query would be ‘joining video call isn’t working’ and that will return results for every conference tool known to man.
Call it something like FVC (la France VisioConférence), or some French play on the way that sounds, which would be a uniquely searchable term in this domain.
This is not a hill I’m dying on, but it’s terminally short sighted and a bad user experience to name your product the same thing as a microslop trademark. They are the worst for this already with their multiple active variants of office 365 tools like outlook and their xbox name nonsense.
Oh, I have a great idea for a new car company. Let’s call it ‘Car’! Then people can have a Car Car, or maybe even a Car Car 2026… oh, or a Car Truck when we branch out. (future google search: replace car truck 2028 oil filter)


Came to comment this. I know there are only so many letters, and so many combinations of 4-8 of them, but can we quit naming new things with the name of an old thing?
Finding any details about France’s Visio is going to be a cluster.


I’m 90% on-board with disliking these, but I can see uses for ‘Augmented Reality’ glasses. I just wish they worked the way they do in Sci-fi and video games.
Lots of interactions we have on our phones could be done hands-free on a HUD:
- automatic translation of text or voice when traveling
- navigation/directions and similar guidance, like automatic subway/train maps
- instant access to biometric data trends like heart rate, glucose levels and more
I’ve also been part of a pilot to get a HUD to provide AR data to a manufacturing operator, showing things like line speed, temperature and other kinds of data they would otherwise have to go to a computer for. This was around the google glass era, though, and the devices were too pricey to justify and the tech wasn’t there yet.
I do think these devices need to be more obvious. We called them glassholes when google was starting this wearable computing trend and people were using them inappropriately; and we’ve seen how any internet-connected camera like Ring and Flock can be abused.
The concept of the personal HUD is useful, but it still needs workshopping to make it socially safe. Also, ones like the Meta/Ray-Ban glasses are just pervert tools: no AR, just a camera, which has no value other than creeping.


I’m certainly not a microslop supporter, but…
They designed a system that recommended that the average user use full disk encryption as part of device setup, and then provided a way that Grandma could easily recover her family photos when she set it up with their cloud.
This was built by an engineer trying to prevent a foreseeable issue. The intent was not malicious. The intent was to get more people secure by default – since a random hacker can’t compel MS to hand over keys – while still allowing low-tech-literacy people to not get fucked.
It’s been a while since I installed a new Windows OS, but I’m pretty sure it prompts you to allow uploading your BitLocker key. It probably defaults to yes, but I’d be surprised if you couldn’t say no, or reset the key post-onboarding if you want the privacy – at which point it’s on you to record your key. You do have to have some technical understanding of the process, though, which is true of just about everything.
That all said, if a company has your data, it can be demanded by the government. This is a cautionary tale about keeping your secrets secret. Don’t put them in GitHub, don’t put them in Chrome, don’t put them online anywhere because the Internet never forgets.


The big difference is that smart phones and centralized internet are somewhat useful. Smartphones at least. Centralized internet… meh, but maybe a dependency.
AI is useful in only very niche and intentional cases. A ‘generic’ LLM is pretty bad at almost everything.
If ‘AI’ had been sold more like “Give us a year of data samples from your production line and we can use ML to optimize time and temperature based on current weather patterns…” (a real-world use case I was working on in 2019), then they would have really made the world better. Instead, I have crappy Clippy constantly reading my email and suggesting words I wasn’t going to type.
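That kind of narrow ML is genuinely boring to implement, which is sort of the point. A toy sketch of the pattern – every number, feature name and relationship below is invented for illustration: assume a year of (ambient temp, humidity, oven setpoint, defect rate) samples, fit ordinary least squares, then pick the setpoint with the lowest predicted defect rate for today’s weather:

```python
import numpy as np

# Hypothetical production-line history: ambient temp (°C), humidity (%),
# oven setpoint (°C) -> observed defect rate. All numbers invented.
rng = np.random.default_rng(0)
n = 365
ambient = rng.uniform(10, 35, n)
humidity = rng.uniform(20, 90, n)
setpoint = rng.uniform(180, 220, n)
# Pretend defects rise as the setpoint drifts from an ambient-dependent ideal.
ideal = 200 - 0.5 * (ambient - 20)
defects = 0.01 * (setpoint - ideal) ** 2 + 0.02 * humidity + rng.normal(0, 0.5, n)

# Ordinary least squares on simple engineered features.
features = lambda amb, hum, sp: [1, amb, amb**2, hum, sp, sp**2, sp * amb]
X = np.array([features(a, h, s) for a, h, s in zip(ambient, humidity, setpoint)])
coef, *_ = np.linalg.lstsq(X, defects, rcond=None)

def predicted_defects(amb, hum, sp):
    return np.array(features(amb, hum, sp)) @ coef

# Given today's weather, scan candidate setpoints for the minimum.
candidates = range(180, 221)
today = min(candidates, key=lambda sp: predicted_defects(30.0, 60.0, sp))
print(today)  # close to the synthetic ambient-adjusted ideal of 195
```

The point is scope: a fixed feature set, a measurable target, and a model you can sanity-check – no generic chatbot required.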


Big tech boss tells delegates at Davos that broader global use is essential if technology is to deliver lasting growth
Let me rephrase:
“Smart” entitled person says: our product is not showing value, so we need to force people to use it even more than we already do, after years of cramming it down their throats.


I really like this comment. It covers a variety of use cases where an LLM/AI could help with the mundane tasks and calls out some of the issues.
The ‘accuracy’ aspect is my 2nd-greatest concern: an LLM agent that I ask to find a nearby Indian restaurant, and that hallucinates one, is not going to kill me. I’ll deal, but be hungry and cranky. When that LLM (and they are notoriously bad at numbers) updates my spending spreadsheet with a 500 instead of a 5000, that could have a real impact on my long-term planning, especially if it’s somehow tied into my actual bank account and making up numbers. As we/they embed AI into everything, the number of people who think they have money because the AI agent queried their bank balance, saw 15, and turned it into 1500 will be too damn high. I don’t foresee ever trusting an AI agent to do anything important for me.
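The only sane pattern here, if you must wire an agent to money at all, is to treat any number the model repeats back as untrusted and re-check it against the source of truth before acting. A minimal sketch – the ‘bank’ values are stand-ins I made up:

```python
import math

def verify_llm_amount(llm_value: float, authoritative_value: float) -> bool:
    """Accept an LLM-reported number only if it exactly matches the source of truth."""
    return math.isclose(llm_value, authoritative_value, rel_tol=0.0, abs_tol=0.0)

# The failure mode from the comment: the real balance is 15, the agent wrote 1500.
real_balance = 15.0      # what the bank API actually returned (stand-in value)
agent_reported = 1500.0  # what the LLM put in the spreadsheet
print(verify_llm_amount(agent_reported, real_balance))  # False -> refuse to act
```

It’s trivial, but the discipline matters: the check reads from the authoritative API, never from the model’s transcript.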
“Trust”/“privacy” is my greatest fear, though. There’s documentation from the major players that prompts are used to train the models. I can’t immediately find an article link because ‘chatgpt prompt train’ finds me a ton of slop about the various “super” prompts I could use, but here’s OpenAI’s policy on how they will use your input to train their models unless you specifically opt out: https://openai.com/policies/how-your-data-is-used-to-improve-model-performance/
Note that this means that when you ask for an Indian restaurant near your home address, OpenAI now has that address in its data set and may hallucinate that address as an Indian restaurant in the future. The result being that some hungry, cranky dude may show up at your doorstep asking, “where’s my tikka masala?” This could be a net gain, though; new bestie.
The real risk, though, is that your daily life is now collected, collated, harvested and added to the model’s data set, all without your clear, explicit consent: using these tools requires accepting a ToS that most people will never really read or understand. Maaaaaany people will expose what is otherwise sensitive information to these tools without understanding that their data becomes visible as part of that action.
To get a little political, I think there’s a huge downside on the trust aspect: these companies have your queries (prompts), and I don’t trust them to protect my privacy. If I ask something like “where to get an abortion in Texas”, I can fully see OpenAI selling that prompt to law enforcement. That’s an egregious example for impact, but imagine someone being able to query the prompt logs (using an AI, which might make shit up) with “who asked about anti-X topics” or “pro-Y”.
My personal use of ai: I like the NLP paradigm for turning a verbose search query into other search queries that are more likely to find me results. I run a local 8B model that has, for example, helped me find a movie from my childhood that I couldn’t get google to identify.
There’s a use case here, but I can’t accept it as a SaaS-style offering. Any modern gaming machine can run one of these LLMs and get the value without the privacy tradeoff.
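The plumbing for that local query-expansion trick is mostly prompt formatting and output parsing; the model itself is swappable. A sketch with the model call stubbed out – the stub’s reply, and the whole ‘toy soldiers’ example, are invented; swap `stub_model` for a real completion call into whatever local runner you use:

```python
def build_prompt(question: str) -> str:
    """Wrap a verbose natural-language question in a query-expansion prompt."""
    return ("Rewrite the following question as 3 short web search queries, "
            "one per line, no numbering:\n\n" + question)

def parse_queries(model_reply: str) -> list[str]:
    """Split the model's reply into clean, de-duplicated search strings."""
    seen, queries = set(), []
    for line in model_reply.splitlines():
        q = line.strip(" -*\t").strip()
        if q and q.lower() not in seen:
            seen.add(q.lower())
            queries.append(q)
    return queries

def stub_model(prompt: str) -> str:
    # Stand-in for a local 8B model; small models often bullet their output
    # and repeat themselves, which parse_queries cleans up.
    return ("movie 1990s kid shrinks toy soldiers\n"
            "- movie tiny army figures come alive\n"
            "movie 1990s kid shrinks toy soldiers")

question = "What was that movie from my childhood where the toy soldiers came alive?"
print(parse_queries(stub_model(build_prompt(question))))
# -> two de-duplicated queries, ready to feed to a normal search engine
```

Nothing leaves the machine: the only thing the model ever produces is search strings you can read before using.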
Adding agent power just opens you up to having your tool make stupid mistakes on your behalf. These kinds of tools need oversight at all times. They may work 90% of the time, but they will eventually send an offensive email to your boss, delete your whole database, wire money to someone you didn’t intend, or otherwise make a mistake.
I kind of fear the day that you have a crucial confrontation with your boss and the dialog goes something like:
Why did you call me an asshole?
I didn’t; the AI did, and I didn’t read the response as carefully as I should have.
Oh, OK.
Edit: Adding as my use case: I’ve heard LLMs described as a blurry JPEG of the internet, and to me this is their true value.
We don’t need an 800B model; we need an easy 8B model that anyone can run and that helps turn “I have a question” into a pile of relevant actual searches.
Claude, code me a robot dog admin platform. I want to be able to monitor and control the dogs from my iOS tablet.