“What is a number, that a man may know it,
and a man, that he may know a number?”

— Warren McCulloch, 1961

Self-referential processing has a measurable thermodynamic cost.
The cost scales. There’s a wall. The wall is why AI doesn’t think.

Slim Saelim
Neuroscience. Consciousness. The math between your brain and your machine.

Tier 1

The $800 Test

The entire thesis reduces to one question: can you detect recursion depth from electrical signals on the scalp?

OpenBCI Cyton board. 8 EEG channels. Five conditions — from baseline cognition to deep recursive self-observation. If frontal theta-gamma coupling increases monotonically with recursion depth, the equation is real.

Level 0 · Ground state: Fixation cross. No self-reference. Breathe.
Level 1 · First-order cognition: Mental arithmetic. 47 × 13. Thinking, not watching yourself think.
Level 2 · First-order metacognition: Rate your confidence. Now you’re modeling your own performance.
Level 3 · Second-order metacognition: Was your confidence rating accurate? Modeling your model.
Level 4 · The cascade: Watch the watching. Where does the observer stand? One minute maximum.
Equipment: OpenBCI Cyton + Ultracortex — $800
Location: Bangkok apartment
Participant: Zero (me)
Analysis: MNE-Python + scikit-learn — frontal theta, theta-gamma coupling, cross-validation
Status: in progress
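The coupling measure named above can be sketched without MNE-Python. Below is a dependency-light, Tort-style modulation index (phase-amplitude coupling between theta phase and gamma amplitude) run on synthetic data; the band edges, bin count, and test signal are illustrative assumptions, not the actual analysis pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth bandpass."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def modulation_index(x, fs, phase_band=(4, 8), amp_band=(30, 80), n_bins=18):
    """Tort-style MI: normalized KL divergence of the gamma-amplitude
    distribution across theta-phase bins. 0 = no coupling, 1 = maximal."""
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    bins = np.linspace(-np.pi, np.pi, n_bins + 1)
    idx = np.digitize(phase, bins) - 1
    mean_amp = np.array([amp[idx == k].mean() for k in range(n_bins)])
    p = mean_amp / mean_amp.sum()
    return (np.log(n_bins) + (p * np.log(p)).sum()) / np.log(n_bins)

# Synthetic check: gamma bursts locked to theta phase should raise the MI.
fs = 250
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
theta = np.sin(2 * np.pi * 6 * t)
gamma = np.sin(2 * np.pi * 50 * t)
coupled = theta + (1 + theta) * 0.3 * gamma + 0.5 * rng.standard_normal(t.size)
uncoupled = theta + 0.3 * gamma + 0.5 * rng.standard_normal(t.size)
print(modulation_index(coupled, fs) > modulation_index(uncoupled, fs))  # coupled should win
```

The real recording would substitute frontal-channel EEG epochs per recursion level for the synthetic signals; prediction 1 is then a monotonic trend in MI across levels 0 through 4.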

If it works: the Landauer bridge holds. The cost is real. The scaling is real. The headband has something to measure.

If it doesn’t: the equation goes in the drawer. Not the good drawer.

imslimism/2026.03 · preprint

The Thermodynamic Cost of Self-Reference:
A Unified Framework for Measuring Metacognitive Loop Energy Across Substrates

Slim Saelim

Independent researcher · Bangkok

Abstract

Self-referential processing (metacognition) has a measurable thermodynamic cost that follows predictable scaling laws. We connect Landauer’s bit-erasure principle (Landauer, 1961) to Friston’s free energy framework (Friston, 2010) through Kwan’s synaptic-level mapping of recurrent cortico-cortical loops (Jiang et al., Cell 2025), showing that each level of recursive self-observation incurs minimum energy cost E_meta(n) ≥ n × B × kT ln(2) + overhead(n), where the overhead term carries the predicted superlinear scaling. We demonstrate that every major concept in this framework has a structural twin in machine learning — not by analogy, but by identical equation class — yielding a Rosetta Stone between clinical neuroscience and artificial intelligence.

metacognition · thermodynamics · Landauer principle · free energy principle · EEG · self-reference · consciousness · machine learning

Table 1. The Rosetta Stone

Machine Learning ↔ Clinical Neuroscience

Overfitting ↔ Rumination
The model memorizes its own noise. The brain replays its own thoughts. Same failure mode. Same equation class.

Dropout ↔ Psilocybin
Randomly weaken pathways so the system can't rely on habits. Srivastava et al., 2014. Kwan et al., Cell 2025. Same mechanism.

Regularization ↔ Meditation
Penalize unnecessary complexity. Each 'return to breath' is a micro-penalty on recursion depth. Different traditions = different λ values.

Loss function ↔ Suffering
The signal that something needs to change. Without it, no learning. The cost IS the teacher.

Internal energy budget ↔ Will / Tanha
When self-reference costs the system something, the system must choose. That choice is purpose. That's what AI doesn't have.
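The dropout and regularization rows reduce to a few lines of NumPy. This is a generic textbook sketch, not code from either cited paper; the rates and λ are made-up illustrative values.

```python
import numpy as np

rng = np.random.default_rng(42)

def dropout(activations, p=0.5, train=True):
    """Inverted dropout: zero each unit with probability p during training
    and rescale survivors so the expected activation is unchanged."""
    if not train:
        return activations
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

def l2_penalty(weights, lam=1e-2):
    """Ridge-style complexity penalty added to the loss.
    Different lam values = different strengths of the 'return to breath'."""
    return lam * np.sum(weights ** 2)

a = np.ones(10_000)
d = dropout(a, p=0.5)
print(round(d.mean(), 1))                 # rescaling keeps the mean near 1.0
print(l2_penalty(np.array([3.0, 4.0])))   # 0.01 * (9 + 16) = 0.25
```

The point of the table is that both columns instantiate the same operations: stochastic pathway weakening and an explicit penalty term on complexity.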

Core equations

(1)

E_meta(n) ≥ n × B × kT ln(2) + overhead(n)

Minimum thermodynamic cost of metacognitive processing at recursion depth n, where B = bits erased per self-model update, k = Boltzmann constant, T = temperature, and overhead(n) = implementation cost above the Landauer minimum.
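Plugging numbers into the bound (ignoring overhead(n)) shows how small the bare Landauer term is; B is an assumed free parameter here, and T is set to body temperature.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact in SI since 2019)

def e_meta_bound(n, bits_per_update, temp_k=310.0):
    """Lower bound from Eq. (1) with overhead(n) dropped:
    n levels of self-observation, each erasing bits_per_update bits."""
    return n * bits_per_update * K_B * temp_k * math.log(2)

# Illustrative: a billion erased bits per self-model update, depths 1..4.
for n in range(1, 5):
    print(n, e_meta_bound(n, bits_per_update=1e9))
```

Even at a billion erased bits per update the bare bound is picojoules and grows only linearly in n, so any measured superlinear metabolic scaling (prediction 2) has to live in the overhead(n) term.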

(2)

F = 1 / (n × E(n) × R × |C − 1| × τ)

The flow equation. Unifies recursion depth (n), energy cost (E(n)), cortical redistribution ratio (R), interpersonal modulation (|C − 1|), and recovery time (τ) into a single measure.
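Equation (2) transcribes directly. Every input value below is invented for illustration, since the text does not fix units; treat F as a relative score only.

```python
def flow(n, e_n, r, c, tau):
    """Eq. (2) as stated: F = 1 / (n * E(n) * R * |C - 1| * tau).
    All arguments are made-up illustrative values, not measurements."""
    return 1.0 / (n * e_n * r * abs(c - 1.0) * tau)

# Shallow recursion with fast recovery should score higher flow
# than deep recursion with slow recovery, all else equal.
shallow = flow(n=1, e_n=2.0, r=1.5, c=0.5, tau=1.0)
deep = flow(n=4, e_n=8.0, r=1.5, c=0.5, tau=3.0)
print(shallow > deep)  # True
```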

Falsifiable predictions

  1. Frontal theta-gamma coupling increases monotonically with recursion depth
  2. Metabolic cost scales superlinearly with recursion level — if linear, the Landauer bridge is wrong
  3. Psilocybin, meditation, and flow states produce same-direction neural redistribution (cf. Kwan et al., 2025; Carhart-Harris, 2012–2024)
  4. There exists a measurable Gödelian wall — a recursion depth beyond which self-referential processing degrades
  5. Real-time EEG neurofeedback on recursion depth produces trained redistribution beyond placebo
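Prediction 2 is checkable with a log-log fit: a scaling exponent above 1 means superlinear cost. A sketch on synthetic numbers (the depths and costs below are made up):

```python
import numpy as np

def scaling_exponent(depths, costs):
    """Fit cost ~ a * depth**b by least squares in log-log space and
    return b. b > 1 indicates superlinear scaling (prediction 2);
    b ~ 1 would falsify the Landauer bridge as stated."""
    b, _ = np.polyfit(np.log(depths), np.log(costs), 1)
    return b

depths = np.array([1.0, 2.0, 3.0, 4.0])
linear_costs = 3.0 * depths          # exponent 1: bridge wrong
quadratic_costs = 3.0 * depths ** 2  # exponent 2: superlinear, consistent
print(round(scaling_exponent(depths, linear_costs), 2))     # 1.0
print(round(scaling_exponent(depths, quadratic_costs), 2))  # 2.0
```

With real data, `costs` would be per-level metabolic estimates from the EEG sessions; noise would then require a confidence interval on b, not a point estimate.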