Slim Saelim

I spent a year reading a dead neuropsychiatrist and now I can't stop seeing the same mistake everywhere in AI.

Builder. Philosopher. Thai.

I'm from Thailand. Did data engineering for five years — the kind of work where you build pipelines nobody thinks about until they break. Good work. But it wasn't the thing.

The thing found me at 2am, reading Warren McCulloch. Neuropsychiatrist. Died in 1969. In 1943, he and Walter Pitts wrote the paper that started neural networks — proved that networks of idealized neurons could compute anything a Turing machine could. Then the field forgot the question he was actually asking: not "can machines think?" but "what is thinking, that a machine might do it?"

Eighty years later, we have GPT and DALL-E and systems that can pass the bar exam. We still don't have an answer to McCulloch's question. We just have increasingly powerful wrong ones.

I named my company after my grandparents — Boon and Ma. The idea is to build the next cybernetics movement. Polymaths across disciplines, like the Macy Conferences that McCulloch chaired starting in 1946. Neurologists and mathematicians and philosophers arguing until 3am. That's the room I keep trying to build.

Updated March 2026

Three products. Same idea at different scales — put different things together, see what emerges.

Moondog

The rehearsal room.

AI splits a song into stems. You pick your instrument. The band plays around you — and reacts. Not a game. Not a tutorial. A session, waiting for you to find the hook.

XiaoMa

Walking with the Old Master.

Every Chinese character is a tiny philosophy. 心 (heart) lives inside 想 (think) and 忘 (forget) and 忙 (busy). You don't study Chinese with Ma. You walk with Ma the way Alexander walked with Aristotle — and the language is the doorway into 3,000 years of seeing.

Clayva

The Mirror Dimension.

Connect your repo. Your product materializes on a canvas — every screen, every flow, every path. See what’s invisible: where users drop off, rage-tap, get lost. Reshape reality without breaking anything in the real world.

Different rooms, same pattern.

The headband in Journey to the West doesn't remove the Monkey King's power. It only activates when his behavior diverges from the mission. RLHF is the headband. The reward model is Xuanzang — less capable than the system it constrains, applying pressure based on surface-level judgment. The difference: Wukong knows he's wearing one.

Five problems

  1. Games were the wrong proxy for intelligence.
  2. Symbolic AI was a political detour — Minsky and Papert's *Perceptrons* (1969) killed neural nets for a decade.
  3. LLMs are stochastic parrots wearing RLHF costumes.
  4. World models (JEPA) are necessary but insufficient.
  5. McCulloch’s original question remains unanswered.

Four missing ingredients

  • Embodiment. No body, no stakes, no survival pressure.
  • Continuous learning. Brains have no training/inference split.
  • Emergence. When does a network produce cognition?
  • Will. Maybe purpose requires mortality.

Essays on AI, philosophy, and the history of computing. The thesis in progress.

All writing →