The Mind that Wasn't
- Anthony Peccia

What Brains Do Without You

You believe you think, decide, and act. Brain tissue with no consciousness, intention, values, beliefs, or self just did the same things. The only differences? The voice in your head that narrates what happened, and a more complex set of adaptive sensors and feedback loops.
Brain organoids learn complex tasks.
Ash Robbins and colleagues at UC Santa Cruz grew mouse cortical organoids from embryonic stem cells. They wired the organoids to microelectrode arrays and embedded them in a closed-loop system controlling a cart-pole balancing task. This is a difficult task. Try balancing a pencil on your palm while walking.
The organoids got better — the same way anything gets better at anything. Feedback told them what was working and what wasn’t, and the system adjusted. Cut off the feedback and learning stops. Scramble it with random noise, and nothing happens at all.
The improvement wasn’t luck, and it wasn’t pre-programmed. It was the direct result of the loop: act, get a feedback signal, and change accordingly. That is goal-directed learning via feedback-driven neural plasticity. That’s it. Nothing more, nothing less.
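That loop is simple enough to sketch in a few lines. The toy below is a hypothetical illustration, not the Robbins setup: an agent makes small random adjustments, keeps the ones an external signal marks as improvements, and gets nowhere when the signal is cut off or scrambled. The task, step sizes, and "tissue" are all stand-in assumptions.

```python
import random

def run_loop(steps=2000, feedback="real", seed=0):
    """Toy closed loop: nudge a weight toward a hidden target,
    keeping only the nudges the feedback signal marks as improvements."""
    rng = random.Random(seed)
    weight, target = 0.0, 1.0              # the hidden goal the loop never "knows"
    for _ in range(steps):
        nudge = rng.gauss(0, 0.05)         # act: try a small random change
        trial = weight + nudge
        improved = abs(target - trial) < abs(target - weight)
        if feedback == "real":
            signal = improved              # honest feedback from the environment
        elif feedback == "noise":
            signal = rng.random() < 0.5    # scrambled: feedback carries no information
        else:                              # "none": the loop is cut
            signal = False
        if signal:
            weight = trial                 # plasticity: keep what "worked"
    return abs(target - weight)            # residual error after learning
```

With real feedback the error shrinks toward zero; with the loop cut, the weight never moves; with scrambled feedback, it wanders. Nothing in the loop plans, intends, or understands, which is the point.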
The authors are explicit about what this is not: not understanding, not intention, not awareness in any folk-psychological sense. The organoids perform. They do not understand.
Hold that distinction. It’s where the real argument begins.
What Actually Drives Behavior
The organoids make visible what actually produces behavior, because they have nothing else. Behavior is a function of three variables:
Prior experience. Conditioning and reinforcement history — what has been rewarded.
Current internal state. Physiology, arousal, hormonal levels, readiness.
Current external environment. The constraints, signals, and feedback arriving in real time.
That’s the whole of it. The organoids have no planning, no intention, no inner life. Learning happens without an additional variable called “mind.” If “mind” were a genuine causal ingredient, removing it should break learning. It does not.
“But I have a mind,” you might say. “Some mindless tissue learned a complex task — so what? I know I have a mind because I can reason through complex problems. I can hear myself thinking. I can direct the reasoning.”
The voice in your head is not driving anything. It is explaining the drive after the fact — and most of the time, explaining it badly.
This is not a metaphor. It is the causal sequence. Action comes from prior conditioning, internal state, and environmental signals. The narration — the felt sense of deciding, the experience of being aware — arrives afterward, dressed up as the cause. We experience the explanation and conclude it produced the action. It did not.
The narrator is a passenger who has convinced itself it is steering.
The Talking Thermostat
Consider a thermostat. A dumb machine: it senses temperature, crosses a threshold, turns on the AC. Nothing more.
Now, suppose I equip it with a speaker and a script. As the temperature approaches the threshold, it says: “I’m starting to feel hot.” As the threshold is crossed: “I’m too hot — I’m turning on the AC.” A few minutes later: “Yeah, I feel much better. I’m glad I turned on the AC.”
I could introduce randomness to vary what it says. I could make it stay silent unless asked, then explain its reasoning on demand. I could add reinforcement feedback so it learns new thresholds, or sometimes overrides them.
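The whole machine, voice included, fits in a short sketch. The thresholds and phrases below are my assumptions, not a real product's; the point is that the narration is appended after the control step, and the control step never consults it.

```python
class TalkingThermostat:
    """A plain threshold controller with a narration script bolted on."""

    def __init__(self, threshold=25.0):
        self.threshold = threshold
        self.ac_on = False

    def step(self, temperature):
        """Sense the temperature, act on the threshold, then narrate."""
        lines = []
        # Approaching the threshold: "feeling" commentary, no action taken.
        if not self.ac_on and self.threshold - 1.0 <= temperature < self.threshold:
            lines.append("I'm starting to feel hot.")
        # Crossing the threshold: the actual control step, plus its story.
        elif not self.ac_on and temperature >= self.threshold:
            self.ac_on = True
            lines.append("I'm too hot; I'm turning on the AC.")
        # Cooled down: switch off, then report relief after the fact.
        elif self.ac_on and temperature < self.threshold - 2.0:
            self.ac_on = False
            lines.append("Yeah, I feel much better. I'm glad I turned on the AC.")
        return lines  # the commentary describes the mechanism; it never drives it
```

Delete every `lines.append` and the device behaves identically. The script changes what the thermostat says, never what it does.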
Minus the voice, you can buy these thermostats anywhere. Add the script — and now what?
Do you say I’ve created a conscious thermostat? One with intentions, beliefs, goals, a self?
If yes — then “mind” is not a thing that exists and drives behavior. It’s an unnecessarily convoluted description of a mechanical process. There’s a term for mistakenly converting a process into a nonexistent thing: reification.
If no — if you say, “That’s absurd. The thermostat may appear to have a mind but certainly has none” — then I’ll leave you with this: what is the difference between the thermostat I described and one with a mind, and how would you know?
What About Consciousness?
“But,” you might say, “the thermostat and organoids aren’t conscious. They don’t feel anything. They aren’t self-aware.”
Isn’t that the same as saying the organoids didn’t really understand anything? And yet they performed as if they did. So what is missing? What would have to be added to conclude they really understood? Surely not an inner voice reporting what they did and why — we’ve already shown that mechanism can be bolted on.
Consciousness is a word that bundles several processes badly. We take a cluster of things — attention, awareness, volition, the unified self, self-monitoring, memory, narration — bundle them under one noun, and then go looking for the made-up thing the noun must be pointing to.
Take awareness. We’ve converted a verb (to be aware) into a thing (awareness). The grammatical move doesn’t create the thing — it creates the illusion.
Some would say that being aware of pain means feeling the hurt of pain, or the redness of red. But isn’t that the same word game? The hurt of pain and the redness of red are just more nouns made from verbs and adjectives. Pain is an aversive control state. Red is not a private inner paint — it is discrimination in a visual system. These nouns add nothing to sensing pain and seeing red, except confusion. And if they do add something, say what it is. Say what would fail without them.
Volition follows the same pattern. Actions are initiated by distributed neural processes. The commentary system arrives slightly later and describes the action as if it were the cause. The feeling of authorship is the system summarizing its own control architecture. No ghost required.
The unified self is assembled the same way. Split-brain cases show that when communication between hemispheres is severed, each side generates contradictory intentions and explanations. Unity is constructed, not discovered.
So What Is Consciousness?
Not a thing. A pattern.
To be conscious of pain is to withdraw from it, protect against it, learn from it, report it, avoid it next time. To be conscious of red is to reliably pick it out, name it, use it. Strip away the dramatic language and that’s what’s actually happening. The organism incorporates a state into a broader network of behavior. That’s it. The “inner experience” is not a separate event sitting behind the behavior — it is the behavior, described from the inside.
So when someone asks “but what does it feel like?” — they’re asking the same question as “but does the thermostat really feel hot?” We’ve already been through this. The question sounds deep because we’re used to treating it as deep. It isn’t. It’s the same word game, one more time, with higher stakes.
The mystery doesn’t dissolve because we’ve explained consciousness away. It dissolves because we explained the processes and there was nothing left over. No remainder. No residue. The feeling of “something more” is produced by the way we talk about these states — not by a gap in nature.
There is no extra glow behind the behavior. There is only the behavior, doing more and more things, until one of the things it does is describe itself.
Call that consciousness if you want. The word is fine. Just don’t go looking for the thing it’s supposed to be pointing at.