Artificial intelligence was not developed to usher in a dystopia; in fact, it had a rather utopian mission. By further automating mundane tasks, AI has the potential to ease the workload of millions of workers worldwide in every job and field, potentially giving them back precious hours of their day without sacrificing overall productivity.
Yeah, good luck with that.
In reality, everyone gets fired, the rich get richer, the poor get poorer, and 99.9% of humans will live in a dystopia, if AI doesn’t kill us all.
Yet the AI bros go like “that won’t happen to ME though!”
Every business that fires their employees and tries to replace them with AI ends up hilariously screwed over by it. It’s not going to get to take over the world levels in 2 years.
Anyway I’m safe, the company I work for still uses software written in 1995, so I reckon I have until at least 2045 before they introduce any AI.
If we had a proper social safety net for all the displaced people, I’d be more open to it. But as things are now with rampant greed and a government for the corporations, fuck AI.
I can’t shake the feeling that all this talk of UBI and other social safety nets that are meant to support the majority of the populace in some notional post-work future society ignores a really big elephant in the room:
If most people are solely reliant on the good graces of a single entity, the government, for their whole means of survival, then their entire existence is at the pleasure of that government. The populace becomes completely beholden to it, not the other way around.
The whole idea feels suspiciously like a trap set by bad actors with a long-term plan to steal the government from the governed.
I could see your argument if UBI were a recent concept (made by bad actors aimed at fixing today’s problem in order to steal the government from the governed tomorrow), but the concept has been around for a while: wiki article.