Looks so real!
Thank you for calling it an LLM.
Although, if a person who knows the context still acts confused when people complain about AI, it’s about as honest as somebody trying to solve for circumference with an apple pie.
We don’t know how consciousness arises, and digital neural networks seem like decent enough approximations of their biological counterparts to warrant caution. There are huge economic and ethical incentives to deny consciousness in non-humans. We do the same with animals to justify murdering them for our personal benefit.
We cannot know who or what possesses consciousness. We struggle to even define it.
digital neural networks seem like decent enough approximations of their biological counterparts to warrant caution
No they don’t. Digital networks don’t act in any way like an electro-chemical meat wad programmed by DNA.
Might as well call a helicopter a hummingbird and insist they could both lay eggs.
We cannot know who or what possesses consciousness.
That’s sophistry. You’re functionally asserting that we can’t tell the difference between someone who is alive and someone who is dead.
I don’t think we can currently prove that anyone other than ourselves is even conscious. As far as I know I’m the only one. The people around me look and act and appear conscious, but I’ll never know.
Really? I know. So either you’re using that word wrong or your first principles are lacking.
Can you prove it to anyone?
As long as they’re not missing a critical faculty, sure.
I don’t think we can currently prove that anyone other than ourselves is even conscious.
You have to define consciousness before you can prove it. I might argue that our definition of consciousness is fuzzy. But not so fuzzy that “a human is conscious and a rock is not” is up for serious debate.
The people around me look and act and appear conscious, but I’ll never know.
You’re describing Philosophical Zombies. And the broad answer to the question of “How do I know I’m not just talking to a zombie?” boils down to “You have to treat others as you would expect to be treated and give them the benefit of the doubt.”
Mere ignorance is not evidence of a thing. And when you have an abundance of evidence to the contrary (these other individuals who behave and interact with me as I do, thus signaling all the indications of the consciousness I know I possess), defaulting to the negative assertion because you don’t feel convinced isn’t skeptical inquiry, it’s cynical denialism.
The catch with AI is that we have ample evidence to refute the claims of consciousness. So a teletype machine that replicates human interactions can be refuted as “conscious” on the grounds that it’s a big box full of wires and digital instructions which you know in advance was designed to create the illusion of humanity.
My point was more “if we can’t even prove one another’s sentience, how can we possibly prove that a computer can’t be sentient?”.
If you can’t find ample evidence of human sentience then you either aren’t looking or are deliberately misreading the definition of the term.
If you can’t find ample evidence that computers aren’t sentient, same goes.
You can definitely put blinders on and set yourself up to be fooled, one way or another. But there’s a huge difference between “unassailable proof” and “ample convincing data”.
Ah but have you tried burning a few trillion dollars in front of the painting? That might make a difference!
true
also expecting models to have reasoning instead of the nightmare hallucinations is another fantasy
Can’t burn something that doesn’t exist. /s
Painting?
“LLMs are a blurry JPEG of the web” - unknown (I’ve heard it as an unattributed quote)
I think it originated in this piece by Ted Chiang a couple years ago.
I had a poster in ‘86 that I wanted to come alive.
People used to talk about the idea of uploading your consciousness to a computer to achieve immortality. But nowadays I don’t think anyone would trust it. You could tell me my consciousness was uploaded and show me a version of me that was indistinguishable from myself in every way, but I still wouldn’t believe it experiences or feels anything as I do, even though it claims to do so. Especially if it’s based on an LLM, since they are superficial imitations by design.
Also even if it does experience and feel and has awareness and all that jazz, why do I want that? The I that is me is still going to face The Reaper, which is the only real reason to want immortality.
Well, that’s why we need clones with mind transfer, and to be unconscious during the process. When you wake up you won’t know whether you’re the original or the copy so why worry
But then again, what’s the point?
So I can outnumber my enemies
You could tell me my consciousness was uploaded and show me a version of me that was indistinguishable from myself in every way
I just don’t think this is a problem in the current stage of technological development. Modern AI is a cute little magic act, but humans (collectively) are very good at piercing the veil and then spreading around the discrepancies they’ve discovered.
You might be fooled for a little while, but eventually your curious monkey brain would start poking around the edges and exposing the flaws. At this point, it would not be a question of whether you can continue to be fooled, but whether you strategically ignore the flaws to preserve the illusion or tear the machine apart in disgust.
I still wouldn’t believe it experiences or feels anything as I do, even though it claims to do so
People have submitted to less. They’ve worshipped statues and paintings and trees and even big rocks, attributing consciousness to all of them.
But animism is a real esoteric faith. You believe it despite the evidence in front of you, not because of it.
I’m putting my money down on a future where large groups of people believe AIs are more than just human, they’re magical angels and demons.
I just don’t think this is a problem in the current stage of technological development. Modern AI is a cute little magic act, but humans (collectively) are very good at piercing the veil and then spreading around the discrepancies they’ve discovered.
In its current stage, no. But it’s come a long way in a short time, and I don’t think we’re so far from having machines that pass the Turing test 100%. But rather than being a proof of consciousness, all this really shows is that you can’t judge consciousness from the outside looking in. We know it’s a big illusion just because its entire development has been focused on building that illusion. When it says it feels something, or cares deeply about something, it’s saying that because that’s the kind of thing a human would say.
Because all the development has been focused on fakery rather than understanding and replicating consciousness, we’re close to the point where we can have a fake consciousness that would fool anyone. It’s a worrying prospect, and not just because I won’t become immortal by having a machine imitate my behaviour. There are bad actors working to exploit this situation. Elon Musk’s attempts to turn Grok into his own personally controlled overseer of truth and narrative seem to backfire in the most comical ways, but those are teething troubles, and in time this will turn into a very subtle and pervasive problem for humankind. The intrinsic fakeness of it is a concerning aspect. It’s like we’re getting a puppet show version of what AI could have been.
I don’t think we’re so far from having machines that pass the Turing test 100%.
The Turing test isn’t solved with technology; it’s solved with participants who are easier to fool or more inclined to read computer output as humanly legible. In the end, it can boil down to social conventions far more than actual computing capacity.
Per the old Inglourious Basterds gag

You can fail the Turing Test not because you’re a computer but because you’re a British computer.
Because all the development has been focused on fakery rather than understanding and replicating consciousness, we’re close to the point where we can have a fake consciousness that would fool anyone.
We’ve ingested a bunch of early 21st century digital markers for English language Western oriented human speech and replicated those patterns. But human behavior isn’t limited to Americans shitposting on Reddit. Neither is American culture a static construct. As the spread between the median user and the median simulated user in the computer dataset diverges, the differences become more obvious.
Do we think the designers at OpenAI did a good enough job to keep catching up to the current zeitgeist?
And not even a good painting but an inconsistent one, whose eyes follow you around the room, and occasionally tries to harm you.
That kind of painting seems more likely to come alive
New fear unlocked!
… What the hell, man?!
ಥ_ಥ
Bro have you never seen a Scooby Doo episode? This can’t be a new concept for you…
Ok. Put an LLM into a Scooby Doo episode. Then get back to me…
…new SCP?
I tried to submit an SCP once, but there’s a “review process” and it boils down to only getting in by knowing somebody who is in.
Agents have debated whether the new phenomenon constitutes a new designation. While some have reported the painting following them, the same agents will later report that nothing seems to occur. The agents who report a higher frequency of the painting following them also report a higher frequency of unexplained injury. The injuries can be attributed to cases of self-harm, leading scientists to believe these SCP agents were predisposed to mental illness that was not caught during new agent screening.
And that has between seleven and 14+e^(πi) fingers
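For anyone counting along, that upper bound works out to a whole number, courtesy of Euler’s identity:

$$14 + e^{\pi i} = 14 + (-1) = 13$$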
Well, human intelligence isn’t much better to be honest.
It clearly, demonstrably is. That’s the problem: people are estimating AI to be approximately human, but it’s so, so, so much worse in every way.
You are comparing AI to a person who wrote a dictionary, i.e. a domain expert. Take an average person off the street and they’ll write the same slop as current AIs.
But you wouldn’t hire a random person off the street to write the dictionary. You wouldn’t hire a nonspecialist to do anything. If you did, you could at least expect a human to learn and grow, or to have a bare minimum standard for ethics, morals, or responsibility for their actions. You cannot expect that from an AI.
AI has no use case.
If you did, you could at least expect a human to learn and grow, or have a bare minimum standard for ethics, morals, or responsibility for their actions.
Some do, but you are somehow ignoring the currently most talked-about person in the world, the president of the United States. And the party in power. And all the richest men in the world. And literally all the large corporations.
The problem is you are not looking for AI to be an average human. You are looking for a domain expert in literally everything, with the behavior of the best of us, but trained on the behaviour of the average of all of us.
Lmao, this tech bro is convinced only a minority of people have any learning capacity.
The Republicans were all trained with carrots and sticks, too.
All do, but a lot of people don’t care.
It sorts data and identifies patterns and trends. You may be referring only to AI-enabled LLMs tasked with giving unique and creative output, which isn’t going to give you any valuable results.
The fuck are you talking about? The entire discussion has been about worthless LLM chatbots from the start.
AI has no use case.
I mean, if AI is right 80% of the time, it’s still more than my idiot brother (roughly 30% by my count).
The AI was trained on you and your idiot brother.
As long as we can’t even define sapience in biological life, where it resides and how it works, it’s pointless to try and apply those terms to AI. We don’t know how natural intelligence works, so using what little we know about it to define something completely different is counterintuitive.
100 billion glial cells and DNA for instructions. When you get to replicating that lmk but it sure af ain’t the algorithm made to guess the next word.
Pointless and maybe a little reckless.
We don’t know what causes gravity, or how it works, either. But you can measure it, define it, and even create a law with a very precise approximation of what would happen when gravity is involved.
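Newton’s law of universal gravitation is the classic example of that: a precise, testable formula that says nothing about what gravity fundamentally is.

$$F = G\,\frac{m_1 m_2}{r^2}$$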
I don’t think LLMs will create intelligence, but I don’t think we need to solve everything about human intelligence before having machine intelligence.
Though in the case of consciousness - the fact of there being something it’s like to be - not only don’t we know what causes it or how it works, but we have no way of measuring it either. There’s zero evidence for it in the entire universe outside of our own subjective experience of it.
To be fair there’s zero evidence for anything outside our own subjective experience of it, we’re just kind of running with the assumption that our subjective experience is an accurate representation of reality.
We’ve actually got a pretty good understanding of the human brain, we just don’t have the tech that could replicate it with any sort of budget nor a use case for it. Spoiler, there is no soul.
The example I gave my wife was “expecting General AI from the current LLM models is like teaching a dog to roll over and expecting that, with a year of intense training, the dog will graduate from law school”.
Remember when passing the Turing Test was like a big deal? And then it happened. And now we have things like this:
Stanford researchers reported that ChatGPT passes the test; they found that ChatGPT-4 “passes a rigorous Turing test, diverging from average human behavior chiefly to be more cooperative”
The best way to differentiate computers from people is that we haven’t taught AI to be an asshole all the time. Maybe it’s a good thing they aren’t like us.
Alternative way to phrase it, we don’t train humans to be ego-satiating brown nosers, we train them to be (often poor) judges of character. AI would be just as nice to David Duke as it is to you. Also, “they” is anthropomorphizing LLM AI much more than it deserves, it’s not even a single identity, let alone a set of multiple identities. It is a bundle of hallucinations, loosely tied together by suggestions and patterns taken from stolen data
Sometimes. I feel like LLM technology and its relationship with humans is a symptom of how poorly we treat each other.
The best way to differentiate computers from people is that we haven’t taught AI to be an asshole all the time
Elon is really trying with Grok, tho.
I can define “LLM”, “a painting”, and “alive”. Those definitions don’t require assumptions or gut feelings. We could easily come up with a set of questions and an answer key that will tell you if a particular thing is an LLM or a painting and whether or not it’s alive.
I’m not aware of any such definition of conscious, nor am I aware of any universal tests of consciousness. Without that definition, it’s like Ebert claiming that, “Video games can never be art”.
Absolutely everything requires assumptions. Even our most objective, “laws of the universe” type observations rely on sets of axioms or first principles that must simply be accepted as true though unprovable if we are going to get anyplace at all, even in math and the hard sciences, let alone philosophy or the social sciences.
Defining “consciousness” requires much more handwaving and many more assumptions than any of the other three. It requires so much that I claim it’s essentially an undefined term.
With such a vague definition of what “consciousness” is, there’s no logical way to argue that an AI does or does not have it.
Your logic is critically flawed. By your logic you could argue that there is no “logical way to argue a human has consciousness”, because we don’t have a precise enough definition of consciousness. What you wrote is just “I’m 14 and this is deep” territory, not real logic.
In reality, you CAN very easily decide whether AI is conscious or not, even if the exact limit of what you would call “consciousness” can be debated. You wanna know why? Because if you have a basic understanding of how AI/LLMs work, then you know that in every possible, conceivable aspect relevant to consciousness they sit somewhere between your home PC and a plankton. Neither of which anybody would call conscious, by any definition. Therefore, no matter what vague definition you’d use, current AI/LLMs definitely do NOT have it. Not by a long shot. Maybe in a few decades it could get there. But current models are basically over-hyped thermostat control electronics.
I’m not talking about a precise definition of consciousness, I’m talking about a consistent one. Without a definition, you can’t argue that an AI, a human, a dog, or a squid has consciousness. You can proclaim it, but you can’t back it up.
The problem is that I have more than a basic understanding of how an LLM works. I’ve written NNs from scratch and I know that we model perceptrons after neurons.
Researchers know that there are differences between the two. We can generally eliminate any of those differences (and many researchers do exactly that). No researcher, scientist, or philosopher can tell you what critical property neurons may have that enables consciousness. Nobody actually knows, and people who claim to know are just making stuff up.
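For anyone who hasn’t looked under the hood, the perceptron abstraction in question really is this small. A toy sketch in plain Python (the weights and bias are hand-picked illustrative numbers, not anything trained):

```python
# A single perceptron: a weighted sum of inputs plus a bias, pushed through
# a step activation. This is the textbook unit loosely modeled on a neuron.

def perceptron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # "fire" (1) if the weighted sum crosses zero, otherwise stay silent (0)
    return 1 if total > 0 else 0

# Example: weights chosen by hand so the unit behaves like a logical AND gate
print(perceptron([1, 1], weights=[0.6, 0.6], bias=-1.0))  # -> 1
print(perceptron([1, 0], weights=[0.6, 0.6], bias=-1.0))  # -> 0
```

Whether that level of abstraction captures whatever property of real neurons matters for consciousness is exactly the open question.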
I’m not talking about a precise definition of consciousness, I’m talking about a consistent one.
It does not matter. Any which way you try to spin it, with any imprecise or “inconsistent” definition anybody would want to use, literally EVERYBODY with half a brain will agree that humans DO have consciousness and a rock does not. A squid could be arguable. But LLMs are just a mm above rocks, and light-years below squids, on the ladder towards consciousness.
The problem is that I have more than a basic understanding of how an LLM works. I’ve written NNs from scratch and I know that we model perceptrons after neurons.
Yea. The same way Bburago models real cars. They look somewhat similar, if you close one eye, squint the other, and don’t know how far away each of them is. But apart from looks, they have NOTHING in common and in NO way offer the same functionality. We don’t even know how many different types of neurons there are, let alone come close to replicating each of their functions and operations.
So no, AI/LLMs are absolutely and categorically nowhere near where we could be lamenting about whether they would be conscious or not. Anyone questioning this is a victim of the Dunning-Kruger effect, by having zero clue about how complex brains and neurons are, and how basic, simple and function-lacking current NN technology is in comparison.
I think the reason we can’t define consciousness beyond intuitive or vague descriptions is because it exists outside the realm of physics and science altogether. This in itself makes some people very uncomfortable, because they don’t like thinking about or believing in things they cannot measure or control, but that doesn’t make it any less real.
But yeah, given that an LLM is very much measurable and exists within the physical realm, it’s relatively easy to argue that such technology cannot achieve conscious capability.
This definition of consciousness essentially says that humans have souls and machines don’t. It’s unsatisfying because it just kicks the definition question down the road.
Saying that consciousness exists outside the realm of physics and science is a very strong statement. It claims that none of our normal analysis and measurement tools apply to it. That may be true, but if it is, how can anyone defend the claim that an AI does or does not have it?
This definition of consciousness essentially says that humans have souls and machines don’t.
It does, yes. Fwiw, I don’t think it’s necessarily exclusive to humans though, animals and nature may play a role too.
It’s unsatisfying because it just kicks the definition question down the road.
Sure, but I have an entire philosophy set up to answer the other questions further down the road too 😂 That may still sound unsatisfying, but feel free to follow along: https://philosophyofbalance.com/
It claims that none of our normal analysis and measurement tools apply to it.
I believe that to be true, yes.
That may be true, but if it is, how can anyone defend the claim that an AI does or does not have it?
In my view, machines and AI can never create consciousness, although it’s not ruled out they can become vessels for it. But the consciousness comes from outside the perspective of the machines.
I think this is likely an insurmountable point of difference.
The problem is that once we eliminate measurability we can’t differentiate between reality and fantasy. We can imagine anything we want and believe in it.
The Philosophy of Balance has “believe in the universal God” as its first core tenet. That makes it more like a religion than a philosophy.
Yeah, I think I see where you’re coming from. It’s a fair point, and we need to be very careful not to lose sight of reality indeed.
The idea of the Universal God is very tolerant towards “fantasy” so far as it exists in the minds of people, yet it also prescribes aligning such belief with a scientific understanding. So the thing I’m trying to say is: believe what you want to believe, and so long as it’s a rational and tolerant belief, it’s fine. But it does explicitly recognise there are limits to what science can do for us, so it provides the idea of the Universal God as a kind of North Star for those in search, but then it doesn’t really prescribe what this Universal God must look like. I don’t see it as a religious god, but more a path towards a belief in something beyond ourselves.
In the book I also take effort to describe how this relates to Buddhism, Taoism, and Abrahamic religions, and attempt to show how they are all efforts to describe similar concepts, and whether we call this Nature, Tao, or God, doesn’t really matter in the end. So long as we don’t fall into nihilism and believe in something, I believe we can find common ground as a people.
I can understand a desire to find something beyond ourselves but I’m not driven by it.
That’s exactly where Descartes lost me. I was with him on the whole “cogito ergo sum” thing but his insistence that his feelings of a higher being meant that it must exist in real form somewhere made no sense to me.
That’s fair too. I mean, feelings are real, but they are part of a subjective reality that’s not measurable from an objective perspective. But that alone is sufficient to say that science cannot answer all questions, because scientific measurements are inherently limited to objective reality.
Of course there are those that say there must be a single objective reality from which all subjective experiences can be explained, but that’s a huge assumption.
Personally, I think it’s also a dimensional thing. Reality extends beyond the dimensions of time and space; this much has already been scientifically proven. Unless you somehow believe there is a finite limit on the number of dimensions, there will always be dimensions beyond our grasp that we cannot measure or understand (yet).
And bringing it back to the discussion of LLMs, they are inherently limited to a 4-dimensional reality. If those dimensions are sufficient to create consciousness, my position would be that it’s a very limited form of consciousness.
I think the reason we can’t define consciousness beyond intuitive or vague descriptions is because it exists outside the realm of physics and science altogether. This in itself makes some people very uncomfortable, because they don’t like thinking about or believing in things they cannot measure or control, but that doesn’t make it any less real.
I’ve always had the opposite take. I think that we’ll eventually discover that consciousness is so explainable within the realm of physics that our understanding of how it works will make people very uncomfortable… because it will completely invalidate all of the things we’ve always thought made us “special”, like a notion of free will.
If you haven’t watched it yet you’d probably enjoy Westworld - it plays a lot with that space and approaches some very interesting philosophy when it comes to human consciousness and what it means to even be a person.
I don’t know if we’ll ever define consciousness or if we’ll ever discover what it is.
My central claim is that if we don’t do that we can’t convincingly claim that an AI is or is not conscious. We can conjecture about it either way and either guess may be right, but we won’t be able to move past guesses.
I’m sorry, but that article just isn’t very compelling. They seem to be framing the question of “is there free will” as a sort of Pascal’s Wager, which is, umm… certainly a strange choice, and one that doesn’t really justify itself in the end.
The author also makes a few false assertions and just generally seems to misunderstand what the debate over free will is even about.
Clair Obscur: Expedition 33
Clair Obscur: Expedition to meet the Dessandre Family
It reminds me of the reaction of the public to the 1896 documentary The Arrival of a Train at La Ciotat Station. https://en.wikipedia.org/wiki/L'Arrivée_d'un_train_en_gare_de_La_Ciotat
It’s achievable if enough alcohol is added to the subject looking at said painting. And with some exotic chemistry they may even start to taste or hear the colors.
Or boredom and starvation
Except … being alive is well defined. But consciousness is not. And we do not even know where it comes from.
Viruses and prions: “Allow us to introduce ourselves”
I meant alive in the context of the post. Everyone knows what painting becoming alive means.
Two words: “contagious cancer”
Cancer is at least made out of cells. Viruses are just proteins dipped in evil
Not fully, but we know it requires a minimum amount of activity in the brains of vertebrates, and it is at least observable in some large invertebrates.
I’m vastly oversimplifying and I’m not an expert, but essentially all consciousness is, is an automatic processing state of all present stimulation in a creature’s environment that allows it to react to new information in a probably survivable way, and allows it to react to it in the future given minor changes in the environment. Hence why you can scare an animal away from food while a threat is present, but you can’t scare away an insect.
It appears that the frequency of activity is related to the amount of information processed and held in memory. At a certain threshold of activity, most unfiltered stimulus is retained to form what we would call consciousness, in the form of maintaining sensory awareness and, at least in humans, thought awareness. Below that threshold both short-term and long-term memory are impaired, and no response to stimulation occurs. Basic autonomic function is maintained, but severely impacted.
Okay, so by my understanding of what you’ve said, LLMs could be considered conscious, since studies have pointed to their resilience to changes and attempts to preserve themselves?
IMO language is a layer above consciousness, a way to express sensory experiences. LLMs are “just” language, they don’t have sensory experiences, they don’t process the world, especially not continuously.
Do they want to preserve themselves? Or do they regurgitate sci-fi novels about “real” AIs not wanting to be shut down?
I saw several papers about LLM safety (for example Alignment faking in large language models) that show some “hidden” self-preserving behaviour in LLMs. But as far as I know, no one understands whether this behaviour is just trained in and means nothing, or whether it emerged from the model’s complexity.
Also, I do not use the ChatGPT app, but doesn’t it have a live chat feature where it continuously listens to the user and reacts to it? It can even take pictures. So the continuity isn’t a huge problem. And LLMs are able to interact with tools, so creating a tool that moves a robot hand shouldn’t be that complicated.
I responded to your other comment, but yes, I think you could set up an LLM agent with a camera and microphone and then continuously provide sensory input for it to respond to. (In the same way I’m continuously receiving input from my “camera” and “microphones” as long as I’m awake.)
Yeah, it seems like the major obstacles to saying an LLM is conscious, at least in an animal sense, are 1) setting it up to continuously evaluate/generate responses even without a user prompt and 2) allowing that continuous analysis/response to be incorporated into the LLM’s training.
The first one seems like it would be comparatively easy: get sufficient processing power and memory, then program it to evaluate and respond to all previous input once a second or whatever (roughly like the sketch below).
The second one seems more challenging; as I understand it, training an LLM is very resource intensive. Right now, when it “remembers” a conversation, it’s just because we prime it by feeding in every previous interaction before the most recent query when we hit submit.
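To make point 1 concrete, the loop could be as dumb as the sketch below. The `get_sensor_frame` and `call_model` functions are made-up placeholders for whatever camera/microphone feed and LLM backend you’d actually wire in, not any real API:

```python
import time

def get_sensor_frame():
    # Placeholder: grab whatever the "eyes and ears" see at this instant.
    return {"timestamp": time.time(), "observation": "..."}

def call_model(history):
    # Placeholder: send the accumulated history to some LLM backend and
    # return whatever it "says" or decides to do in response.
    return {"role": "agent", "response": "..."}

def run_agent(tick_seconds=1.0, max_history=1000):
    history = []
    while True:
        # 1) take in the current input, whether or not anyone prompted anything
        history.append(get_sensor_frame())
        # 2) let the model react to everything it has seen so far
        history.append(call_model(history))
        # keep the rolling "memory" from growing without bound
        history = history[-max_history:]
        time.sleep(tick_seconds)
```

The second obstacle is the hard one: nothing in that loop ever changes the model’s weights, so nothing it experiences is actually retained beyond the rolling history.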
As I said in another comment, doesn’t the ChatGPT app allow a live conversation with a user? I do not use it, but I saw that it can continuously listen to the user and react live, even using a camera. There is a problem with the growing context, since it is limited. But I saw in some places that the context can be replaced with an LLM-generated chat summary. So I do not think continuity is an obstacle. Unless you want unlimited history with all details preserved.
I’m just a person interested in / reading about the subject so I could be mistaken about details, but:
When we train an LLM we’re trying to mimic the way neurons work. Training is the really resource intensive part. Right now companies will train a model, then use it for 6-12 months or whatever before releasing a new version.
When you and I have a “conversation” with ChatGPT, it’s always with that base model; it’s not actively learning from the conversation in the sense that new neural pathways are being created. What’s actually happening is that a prompt that looks like this is submitted: {{openai crafted preliminary prompt}} + “Abe: Hello I’m Abe”.
Then it replies, and the next thing I type gets submitted like this: {{openai crafted preliminary prompt}} + “Abe: Hello I’m Abe” + {{agent response}} + “Abe: Good to meet you computer friend!”
And so on. Each time, you’re only talking to that base-level LLM, but feeding it the history of the conversation at the same time as your new prompt (roughly like the sketch below).
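In code, that looks something like the sketch below, where `complete()` is a stand-in for whatever stateless completion endpoint is being called (a placeholder, not a real API). The point is just that the model keeps no state between calls:

```python
# Stand-in for the provider-crafted preamble described above.
SYSTEM_PROMPT = "{{openai crafted preliminary prompt}}"

def complete(full_prompt):
    # Placeholder for a call to a stateless text-completion backend.
    return "Nice to meet you, Abe."

def chat_turn(transcript, user_message, user_name="Abe"):
    # The model never "remembers" anything on its own: every turn we rebuild
    # the entire prompt from the preamble plus the whole conversation so far.
    transcript = transcript + [f"{user_name}: {user_message}"]
    full_prompt = SYSTEM_PROMPT + "\n" + "\n".join(transcript)
    reply = complete(full_prompt)
    return transcript + [f"Agent: {reply}"]

transcript = []
transcript = chat_turn(transcript, "Hello I'm Abe")
transcript = chat_turn(transcript, "Good to meet you computer friend!")
print("\n".join(transcript))
```

Same base model every turn; the “conversation” lives entirely in the prompt.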
You’re right to point out that now they’ve got the agents self-creating summaries of the conversation to allow them to “remember” more. But if we’re trying to argue for consciousness in the way we think of it with animals, not even arguing for humans yet, then I think the ability to actively synthesize experiences into the self is a requirement.
A dog remembers when it found food in a certain place on its walk or if it got stabbed by a porcupine and will change its future behavior in response.
Again, I’m not an expert, but I expect there’s a way to incorporate this type of learning in near-real time; besides the technical work of figuring it out, though, doing so wouldn’t be very cost-effective compared to the way they’re doing it now.
I would say that artificial neural nets try to mimic real neurons; they were inspired by them. But there are a lot of differences between them. I studied artificial intelligence, so my experience is mainly with artificial neurons. But from my limited knowledge, real neural nets have no fixed structure (like layers), have binary inputs and outputs (when the activity on the inputs is large enough, the neuron emits a signal), and every day a bunch of neurons die, which leads to a restructuring of the network. Also, from what I remember, short-term memory is “saved” as cycling neural activity, and during sleep the information is stored in the neurons’ proteins and becomes long-term memory.
However, modern artificial networks (modern meaning the last 40 years) are usually organized into layers whose structure is fixed, and have real numbers as inputs and outputs. It’s true that context is needed for modern LLMs that use a decoder-only architecture (which is most of them). But the context can be viewed as a memory in itself during generation, since for each new token new neurons are effectively added to the net. There are also techniques like Low-Rank Adaptation (LoRA) that are used for quick and effective fine-tuning of neural networks. I think these techniques are used to train specialized agents or to specialize a chatbot for a user. I even used this technique to train my own LLM from an existing one that I wouldn’t otherwise be able to train due to GPU memory constraints.
TLDR: I think the difference between real and artificial neural nets is too huge for memory to have the same meaning in both.
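Since LoRA got a mention: the core trick is freezing the pretrained weight matrix and learning only a small low-rank correction on top of it. Here’s a from-scratch PyTorch sketch of that idea (my own illustration of the concept, not the peft library’s actual implementation):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen pretrained linear layer plus a trainable low-rank update."""

    def __init__(self, base_layer: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base_layer
        self.base.weight.requires_grad_(False)       # pretrained weights stay frozen
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        in_f, out_f = base_layer.in_features, base_layer.out_features
        # Only these two small matrices are trained; together they form a
        # rank-`rank` correction (B @ A) to the full weight matrix.
        self.lora_A = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_f, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # frozen path + scaled low-rank correction
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scale

# Example: adapt one 4096x4096 projection layer.
layer = nn.Linear(4096, 4096)
adapted = LoRALinear(layer, rank=8)
out = adapted(torch.randn(2, 4096))   # -> shape (2, 4096)
```

Because only the two small matrices get gradients, the trainable parameter count and optimizer state are a fraction of what full fine-tuning needs, which is how it can fit within tight GPU memory.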
Why are there so many nearly identical comments claiming we don’t know how brains work?
I guess because it is easy to see that living painting and conscious LLMs are incomparable. One is physically impossible, the other is more philosophical and speculative, maybe even undecidable.
That doesn’t answer my question.
Okay, it is easy to see -> a lot of people point it out
That’s better, but that could describe any frequent answer while ignoring the cause of WHY.
Maybe you just didn’t understand the answer. It seems pretty clear to me.
I guess that tracks, I had you pegged for an idiot pretty early on in this discussion.