Everybody wants money, that’s why they call it “money.”
Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.
Spent many years on Reddit and then some time on kbin.social.
You won’t be taken away for “study”, you’ll be taken away for pension fraud. Probably much earlier than 150.
Why would participating in studies be bad, though? Major pharmaceutical companies would pay you an absolute fortune in exchange for participation and you could advance medical science tremendously. You’d be a hero and get incredibly rich in the process.
If I could get glasses that told me “that guy enthusiastically greeting you by name right now is Marty, you last met him in university in such-and-such class eight years ago” I would pay any amount of money for that.
“Doxing people” and “recognizing people” have a pretty blurry border.
And yet it’s accomplishing those tasks. I guess that means “understanding” wasn’t necessary for them after all.
Alright, since you find this such an important issue, consider the first bullet point cropped off of my humorous list of milestones.
Doesn’t change the underlying point.
Phone operators weren’t call center staff, they were literally routers in human form. Secretaries were your email program, your calendar, and your folders full of Word documents.
Too good for a human to have written so it must have been AI? I guess I’ll take it as a compliment that I’m writing at that level.
The total market cap across all cryptocurrencies is currently about 2.5 trillion dollars, which isn’t far below its all-time high of 3 trillion. If that’s what you’d call “hasn’t fully died yet,” then by that standard AI isn’t going away any time soon either.
Whereas I have been finding uses for it to produce things that I simply could not have produced myself without it, making it far more than a mere “productivity boost.”
I think people are mainly seeing what they want to see.
Words often have multiple meanings in different contexts. “Intelligence” is one of those words.
Another meaning of “Intelligence” is “the collection of information of military or political value.” Would you go up to CIA headquarters and try to argue with them that “the collection of information of military or political value” lacks understanding, and therefore they’re using the wrong word and should take the “I” out of their name?
Did you check the link I posted? The term “Artificial Intelligence” is literally used for the sorts of topics in computer science that LLMs fall under, and has been for almost 70 years now.
You are the one who is insisting that the meaning of the words should now be changed to something else.
The term AI was coined in 1956 at a computer science conference and was used to refer to a broad range of topics that certainly would include machine learning and neural networks as used in large language models.
I don’t get the “it’s not really AI” point that keeps being brought up in discussions like this. Are you thinking of AGI, perhaps? That’s the sci-fi “artificial person” variety, which LLMs aren’t able to manage. But that’s just a subset of AI.
Yeah, I’ve got my own anecdote to chip in with on that. My dad was in the hospital for a month fighting a plethora of potentially fatal problems; there were ups and downs, but many of them were being addressed. Then the diagnosis finally came in that the root cause was advanced lymphoma with no realistic chance of “beating” it, and he died later that very day.
I don’t think that it’s necessarily a question of “willing yourself to die” or “willing yourself to live,” but I do think that one can decide how much effort is worth putting into the fight versus deciding to relax and let it go. Whether consciously or subconsciously.
Yeah. Scientific papers may teach an AI about science, but Reddit posts teach AI how to interact with people and “talk” to them. Both are valuable.
Good point! OP, are you eating on the restaurant’s open-air patio? Perhaps chemtrail chemicals are filtering down into your food from the sky.
Did you sign up for any life insurance policies with her recently? Add her to your will? Is she currently borrowing something and has mentioned “jokingly” about how she’d really like to keep it?
Not a high probability, mind you, but since the subject was raised…
Fun fact! When the effect on your health is negative instead of positive it’s known as the nocebo effect.
…
Well, I thought that fact was fun…
They’re not talking about the same thing.
That’s in reference to the size of the model itself.
That’s in reference to the size of the training data that was used to train the model.
Minimizing both of those things is useful, but for different reasons. Smaller training sets make the model cheaper to train, and a smaller model makes the model cheaper to run.
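To make that distinction concrete, here’s a rough back-of-the-envelope sketch in Python (my own illustration, not from the posts above), using the common scaling approximations that training a model with N parameters on D tokens costs roughly 6·N·D FLOPs, while generating one token at inference costs roughly 2·N FLOPs:

```python
# Back-of-the-envelope compute estimates. The 6*N*D and 2*N rules of
# thumb come from the LLM scaling-law literature; actual costs vary
# with architecture and hardware, so treat these as illustrative only.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total compute to train N parameters on D tokens."""
    return 6 * n_params * n_tokens

def inference_flops_per_token(n_params: float) -> float:
    """Approximate compute to generate a single token."""
    return 2 * n_params

# Shrinking the training set cuts the one-time training bill:
print(f"{training_flops(7e9, 2e12):.2e}")  # 7B params, 2T tokens
print(f"{training_flops(7e9, 1e12):.2e}")  # half the data, half the cost

# Shrinking the model cuts the recurring cost of every query served:
print(f"{inference_flops_per_token(7e9):.2e}")  # 7B-param model
print(f"{inference_flops_per_token(3e9):.2e}")  # 3B-param model
```

Halving the training data halves the training bill but does nothing for serving costs; shrinking the model itself cuts both, which is why the two kinds of “smaller” get optimized for different reasons.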