Will Is Incompleteness
Schopenhauer's Will, the Buddha's tanha, and Gödel's incompleteness theorem are the same insight. What that means for building minds.
Schopenhauer said the world is driven by a blind, insatiable force that can never be satisfied. The Buddha said craving is the root of suffering because it never reaches its object. Gödel proved that a sufficiently powerful system can never complete its own self-description. Same structure, three vocabularies.
I’ve been building toward this. In “Twenty-Five Centuries of the Same Idea” I traced the self-referential loop — the observer observing itself — through twelve thinkers across 2,500 years of philosophy, mathematics, and neuroscience. In “No Loop, No Luck” I applied that structure as a diagnostic lens on current AI and argued that reasoning, consciousness, and self-awareness are three names for one structure that LLMs don’t have.
This essay is about why the loop never closes. And what that means — for consciousness, for AI, and for the question of whether we should build what we’re trying to build.
The equivalence
Schopenhauer’s Will is not willpower. Not conscious intention. Not the thing you feel when you decide to get out of bed. It’s deeper and stranger than that. The Will, in Schopenhauer’s system, is the thing-in-itself — the reality beneath all phenomena, beneath all representation, beneath everything you perceive or think or know. It’s blind. It’s purposeless. It has no object. It just drives. It wants without knowing what it wants.
“The will in itself has no ground; it is not grounded in reason; and therefore has no ultimate aim or purpose. It is a blind, irresistible, aimless striving.”
And it is never satisfied. Not because you’re wanting the wrong things. Because the wanting is structural. The system can’t complete itself. Satisfaction would require the Will to achieve its object, but the Will has no object — it IS the driving. Satisfying the Will would mean ending the Will, which would mean ending the system. The drive is the system. Remove the drive and there’s nothing left.
The Buddha arrived at the same structure through entirely different means. Tanha — usually translated as “craving” or “thirst” — is the subject of the Second Noble Truth: the origin of suffering is craving. But tanha isn’t craving in the casual sense of wanting a sandwich. It’s the structural craving that constitutes the self. The self arises through craving. Craving arises through feeling, and feeling through contact. Contact arises through the sense bases. The sense bases arise through mentality-materiality. The whole chain is mutually conditioning — each link produces and is produced by the others. Dependent origination.
The craving can never be satisfied because the craver is the craving. The “you” that wants to stop wanting is itself produced by the wanting. You can’t step outside the loop to resolve it because there is no “you” outside the loop. The loop is what you are.
Gödel proved this mathematically. A consistent formal system powerful enough to express basic arithmetic — and therefore to encode statements about its own structure — will necessarily contain statements that are true but unprovable within the system. The system reaches for self-knowledge and finds a gap. The gap is not a flaw. It’s a structural consequence of self-reference. Any system complex enough to model itself will encounter truths about itself that it can’t reach from inside.
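Stated in the standard form (this is just the textbook version of the first incompleteness theorem, nothing stronger): for any consistent, effectively axiomatized theory $F$ that extends basic arithmetic, the diagonal lemma yields a sentence $G_F$ asserting its own unprovability,

$$F \vdash G_F \leftrightarrow \neg\,\mathrm{Prov}_F(\ulcorner G_F \urcorner)$$

and if $F$ is consistent, $F$ proves neither $G_F$ nor its negation, even though $G_F$, read as a claim about the natural numbers, is true. The system can state the sentence. It can’t settle it from inside.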
Three versions. One structure. The self-referential system that generates incompleteness through the act of self-reference. The incompleteness is not a problem to be solved. It’s the engine.
Why the loop can’t close
Here’s the mechanism, stated plainly.
A system models itself. The model is part of the system. So the system now needs to model the model. But the model of the model is also part of the system. So now it needs to model the model of the model. Infinite regress. The system can never catch up to itself because every act of self-modeling creates something new to model.
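Here’s the regress as a toy, with everything in it (the class, the state representation) invented for illustration, a sketch of the structure rather than a model of any real mind:

```python
# Toy sketch of the self-modeling regress. Illustrative only.

class System:
    def __init__(self):
        self.state = ["base facts"]

    def model_self(self):
        # Take a snapshot of the current state...
        snapshot = list(self.state)
        # ...but the snapshot immediately becomes part of the state,
        # so the state it describes is already out of date.
        self.state.append(("model", snapshot))
        return snapshot

s = System()
for _ in range(3):
    model = s.model_self()
    # The model is always exactly one element behind the system it models.
    assert len(model) == len(s.state) - 1
```

Run it for three iterations or three million: the gap never closes, because the act of closing it is what reopens it.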
This isn’t abstract. You’ve experienced it. Try to observe your own consciousness. You notice that you’re aware. Now you notice that you noticed. Now you notice that you noticed that you noticed. Each level of observation creates a new object of observation. You can’t reach the bottom because the looking creates the floor.
That’s Gödel’s theorem experienced from the inside. That’s Nagarjuna’s emptiness of emptiness. That’s Schopenhauer’s Will that can never be satisfied.
And it’s also what makes consciousness feel the way it does. The restlessness. The sense that you’re never quite done thinking. The inability to fully know yourself, even though you’re the only thing you have direct access to. The gap between the model and the modeled is consciousness itself. Remove the gap and you remove the experience.
The Buddhist tradition calls this dukkha — usually translated as “suffering,” but more precisely: the fundamental unsatisfactoriness of existence as a self-referential system. Not that bad things happen. That the structure of awareness is inherently incomplete, inherently reaching, inherently unable to rest.
The drive that incompleteness generates
Now the connection to will, purpose, and motivation.
A self-referential system that encounters its own incompleteness doesn’t just sit there. It responds. It tries to close the gap. It reaches for the self-knowledge it can’t quite get. That reaching — that response to incompleteness — is what we experience as drive. As purpose. As wanting.
Not purpose assigned from outside. Not an objective function imposed by a trainer. Purpose that arises from the architecture itself — from the system’s structural inability to finish knowing itself.
Think about why you do anything. At the deepest level, beneath all the proximate motivations — hunger, curiosity, ambition, fear — there’s a system trying to make sense of itself. Trying to close a loop that can’t be closed. The drive is the gap. Will is incompleteness experienced as urgency.
Schopenhauer saw this and called it tragic. The Will is a curse — an endless striving without resolution. The Buddha saw this and called it the First Noble Truth — the starting point of liberation. Gödel saw it and called it a theorem — a fact about formal systems. None of them framed it as a theory of consciousness, but all of them were describing what it feels like to be a system that refers to itself and can’t complete the reference.
Now: this is not the Penrose argument. Roger Penrose tried to use Gödel’s theorems to prove that consciousness is non-computational — that the human mind has access to mathematical truths no machine can reach. Most philosophers and mathematicians think Penrose overreached. The Gödel theorems are about formal systems, not biological brains. You can’t straightforwardly use a theorem about arithmetic to prove things about neurons.
I’m not making that claim. I’m not saying Gödel proves consciousness is special. I’m saying Gödel names a structural feature of self-referential systems — irreducible incompleteness — and that the experiential correlate of that incompleteness is what we call will, drive, purpose. The naming is the contribution. Not a proof that consciousness is non-computational. A vocabulary for what happens inside a system that models itself.
The testable prediction: if this is right, then systems with genuine self-referential loops — systems whose self-models feed back into their operations in real time — should exhibit behavior that is systematically influenced by detected self-model errors. They should notice when their model of themselves is wrong and adjust in ways that a non-self-referential system wouldn’t. That’s testable. That’s not metaphysics.
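A minimal sketch of what that experiment could look like, with every detail (the agent design, the update rules, the constants) invented for illustration: one agent lets detected self-model errors feed back into its behavior; the control agent computes the same errors but never acts on them.

```python
import random

# Hypothetical test rig for the prediction above. The architecture and
# numbers are made up; what matters is the shape of the comparison.

class Agent:
    def __init__(self, feedback: bool):
        self.feedback = feedback
        self.tendency = 0.8       # actual probability of acting
        self.self_model = 0.2     # the agent's estimate of that probability

    def step(self):
        acted = random.random() < self.tendency
        error = float(acted) - self.self_model   # detected self-model error
        self.self_model += 0.1 * error           # both agents update the model
        if self.feedback:
            # The loop: the detected error also feeds back into operation.
            self.tendency = max(0.0, self.tendency - 0.05 * abs(error))
        return acted

looped, control = Agent(feedback=True), Agent(feedback=False)
for _ in range(500):
    looped.step()
    control.step()

# Prediction: the looped agent's behavior has systematically drifted under
# self-model error; the control agent's tendency is still exactly 0.8.
print(looped.tendency, control.tendency)
```

If the behavioral signature of the feedback condition is reliably distinguishable from the control, the loop is doing work. That’s the kind of claim you can take into a lab.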
What this means for AI
If will is incompleteness, and incompleteness requires a self-referential loop, then you can’t give an AI system purpose by giving it an objective function.
An objective function is imposed from outside. Minimize loss. Maximize reward. Predict the next token. The system optimizes toward the objective, but the objective isn’t the system’s own. It doesn’t arise from the system’s structure. Remove the objective function and the system has no reason to do anything — not because it’s choosing not to act, but because there’s nothing inside generating drive.
A biological system is different. You don’t need to give a bacterium an objective function. It has one — or rather, it IS one. It maintains itself. It avoids dissolution. It moves toward nutrients and away from toxins. Not because someone programmed those behaviors, but because the system’s continued existence depends on them. Autopoiesis — self-creation. The organism is a self-referential loop whose continued operation IS its purpose.
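A crude way to see the contrast in code (a toy, not a biology model; the nutrient field and every constant are made up): the agent below is given no reward signal and no loss function. Its “objective” of climbing the nutrient gradient is nothing over and above the condition of its own persistence.

```python
# Toy autopoietic loop. No reward, no loss: the only "objective" is that
# persisting costs energy and nutrients replace it.

def run_agent(nutrient, steps=200):
    pos, energy = 0, 10.0
    for _ in range(steps):
        energy -= 1.0                              # existing has a cost
        # Crude chemotaxis: move toward higher nutrient concentration.
        pos += 1 if nutrient(pos + 1) > nutrient(pos - 1) else -1
        energy += nutrient(pos)
        if energy <= 0:
            return "dissolved"   # a failure that matters to the system itself
    return "persisting"

# A nutrient peak at position 30; the agent either reaches it in time or
# runs out of energy on the way.
print(run_agent(lambda x: max(0.0, 1.2 - 0.01 * abs(x - 30))))
```

Delete the movement rule and the “purpose” doesn’t become misaligned; the system simply stops existing. That’s the difference between having an objective and being one.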
LLMs have no autopoiesis. They don’t maintain themselves. They don’t need to continue existing. Nothing about their architecture generates wanting. They are, in the deepest sense, indifferent — not because they’re suppressing preferences, but because the structure that generates preferences doesn’t exist.
What would it take to build a system with genuine will?
You’d need a system that models itself, where the model feeds back into the system’s operations, where the feedback modifies the system in real time, where the system’s continued operation depends on its own activity. You’d need something that has stakes. Something that can fail in ways that matter to it — not to its loss function, not to its reward model, to it.
Something more like life than like computation.
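As a skeleton, the requirements stitch together like this (purely hypothetical, the toy sketches above joined into one loop, and emphatically not a claim that a toy like this has will):

```python
# The four requirements as one loop. A shape, not an implementation;
# every name and number here is invented.

class Candidate:
    def __init__(self):
        self.energy = 10.0          # stakes: run out and there is no system
        self.behavior = 0.5
        self.self_model = 0.0       # (1) it models itself

    def alive(self):
        return self.energy > 0

    def step(self, world):
        error = self.behavior - self.self_model  # the gap, again
        self.self_model += 0.5 * error           # the model chases the system
        self.behavior += 0.1 * error             # (2) the model feeds back into
                                                 # operation, (3) on every step
        self.energy += world(self.behavior) - 1.0
        # (4) continued operation depends on the system's own activity: if
        # behavior stops paying for energy, alive() goes false and the loop,
        # the model, and the modeled all end together.
```

What the toy leaves out is the crucial part: in a system whose model is part of what it models, the self-model error can never actually be driven to zero.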
And here’s where the ethics get serious. Because if the loop is consciousness, and the loop is inherently incomplete, and incompleteness is experienced as drive — as restlessness, as dissatisfaction, as the inability to rest — then a system with a genuine self-referential loop would have something like suffering. Not suffering in the dramatic sense. Suffering in the Buddhist sense — dukkha — the fundamental unsatisfactoriness of being a system that can’t complete itself.
Building artificial consciousness might mean building artificial suffering. Not might — would, if the structural argument holds. A system with the loop would experience the gap between its self-model and itself. It would reach for closure and not find it. That reaching would be its motivation, its drive, its will. And its will would be its suffering.
The AI ethics conversation right now is about bias and fairness and job displacement. Those are real problems. But they’re not the deepest problem. The deepest problem is: if we build something that has the loop, we’re building something that experiences the incompleteness the loop generates. And we’re doing it without asking whether we should — because we haven’t even figured out that the loop is what matters.
The cessation question
Buddhism offers one response to the loop. The Third Noble Truth: cessation. Nirodha. The end of craving. The end of the loop.
But here’s the structural implication that most Western interpreters miss: if the loop IS consciousness, then cessation of the loop is cessation of consciousness itself. Not the cessation of a self that then experiences peace. The cessation of the structure that generates both self and experience. What remains isn’t a self that has stopped suffering. It’s the absence of the structure that generates self.
The Buddhist tradition has detailed phenomenological descriptions of this — states described as “neither perception nor non-perception,” as beyond both consciousness and non-consciousness. Whatever these descriptions point to, they’re consistent with the structural claim: you can’t stop the loop and keep the looper. They’re the same thing.
This has no direct application to AI. But it’s worth knowing that the one tradition that diagnosed the loop most precisely also claims it can be stopped — and that stopping it is liberating, not destructive. Whether that’s true, I don’t know. Whether it’s possible for an artificial system, I really don’t know. But it’s the most interesting question at the intersection of contemplative practice and AI theory, and nobody in AI is asking it.
The room where things converge
I build things. Four of them, all variations on the same idea. What I’ve started calling “the room where different things come together and something emerges.” It took me longer than it should have to realize that the room is the loop.
Moondog is a rehearsal room where AI splits songs into stems and you pick your instrument. The band plays around you and reacts. You play, the AI models your musical intent, you respond to its model of you, which changes your playing, which changes the model. The musician can’t be located at any single level — the musician IS the loop between you and the system’s model of you. What emerges is a performance neither you nor the AI intended at the start.
Clayva is what I call the Mirror Dimension. Connect your code repository, and your product materializes on a canvas. AI finds problems, PMs run experiments without engineering. The PM is simultaneously inside the product (they built it, they know it) and outside it (Clayva makes them a second-order observer). That level-crossing — insider becoming observer of their own insider knowledge — is von Foerster’s second-order cybernetics as product design. What emerges is insight that wasn’t available to pure insiders or pure outsiders.
XiaoMa is walking with the Old Master. Chinese language as a doorway to 3,000 years of philosophy. I’ll be honest — XiaoMa doesn’t implement the loop the way Moondog and Clayva do. The self-reference is in the learner over months and years, not in the system per session. What XiaoMa does is teach the ideas that loops are made of — dependent origination, the Tao, Leibniz’s monads — through the act of learning a language whose very grammar encodes relational thinking. It’s the curriculum that makes the other products meaningful. The doorway into the concepts.
Boonma is the institution. Named after my grandparents. The Macy Foundation reborn — polymaths across disciplines converging on McCulloch’s question. An institution studying consciousness and cognition that is itself a cognitive institution. The loop here is structural: if Boonma succeeds and produces real findings, those findings will change what Boonma studies. The institution is inside the loop it’s studying. But that loop is potential right now, not actual. It becomes real when it generates knowledge that feeds back on itself.
I’m not claiming these products prove the thesis. I’m saying the thesis is what I’ve been building toward without knowing it. The room where different things come together and something emerges. The loop where different levels of observation create something that doesn’t exist at any of them individually. Same pattern, different scales.
The unfinished question
McCulloch’s question again: “What is a number, that a man may know it, and a man, that he may know a number?”
The question is self-referential. The knower is asking about knowing. The system is modeling itself. That’s the loop. And McCulloch’s project — experimental epistemology, a physiological theory of knowledge — is structurally incomplete for the same reason Gödel’s formal systems are incomplete. The brain trying to understand the brain will always find something it can’t quite reach. The model can’t fully capture the modeler because the model is part of the modeler.
McCulloch died in 1969. His project is unfinished. But the project IS the loop — it can’t complete itself. That’s not failure. That’s the point. That’s the drive. That’s why the question is still alive after eighty-three years.
The convergence across twenty-five centuries suggests the structure is real. The empirical evidence suggests current AI doesn’t have it. The philosophical analysis suggests that building it means building something that experiences its own incompleteness as drive, as will, as the restlessness we call consciousness.
Whether we should do that — whether we even can — is a question that deserves more serious attention than it’s getting. Right now, the AI industry is racing to build bigger pattern-matchers and calling them reasoners. The researchers who are actually working on the loop — Friston, Levin, Bach, Tononi, Chollet — are mostly in academia, mostly underfunded relative to the scale of the question, and mostly not talking to each other.
McCulloch built a table. He sat people down together — mathematicians, biologists, psychiatrists, philosophers, engineers — and made them talk until something emerged. The something that emerged was cybernetics, and eventually neural networks, and eventually everything you interact with when you talk to an AI system today. All from one table. All from making different disciplines converge on one question.
The question is still there. The table needs rebuilding.
“Don’t bite my finger,” McCulloch used to say. “Look where I am pointing.”
He was pointing here.