Book Review: Mathematica by David Bessis


Mathematica: A Secret World of Intuition and Curiosity by David Bessis

Strong coffee

My book life runs on Libby. Anytime I hear about a book on a podcast, I’ll stick it on the waitlist and forget about it until it arrives weeks later, out of context.

I’m juggling several at once, a smorgasbord of small plates, with no expectation of finishing anything. After 21 days, the ebook gets ‘returned’ to the LA Public Library, whether I’ve read it or not.

David Bessis’s Mathematica is the exception. I’ve renewed it (and waited weeks for it to come back around) three times, and now that I finally finished it, I ordered a copy so I can read it again on paper.

I’ve been reading it the way I drink a strong cup of black coffee — a few sips, then I have to set it down. Not because it’s slow going. Because it’s too good to rush. By the time I pick it up again it’s gone cold, and I have to warm it up from where I left off. Part of that is the density of powerful ideas, and part of it is that I don’t want it to end. I’ve been savoring it. Every chapter closes and I sit there thinking: okay, how do I actually live with this idea?


A mental model mental model

This is a self-help book about mathematical intuition, how important it is for mathematical breakthroughs, and how different it is from the rigorous language of proofs and formal logic.

The biggest thing Bessis gave me was a clearer working model for how learning actually works — and specifically, how my brain can work if I’m deliberate about it.

The core argument is that neuroplasticity isn’t some mysterious biological process reserved for the young or the gifted. It’s the product of a specific kind of practice: exercises in holding structures in your imagination vividly and concretely. The way a child learns to navigate a room before they have words for left and right. Spending enough time with a mental model that it starts to feel like a real physical object or capability.

Two examples stuck with me. The first is Ben Underwood, a young man whose eyes were removed at age two due to retinal cancer, who taught himself to navigate the world entirely through echolocation — clicking his tongue to map space well enough to ride a bike, play basketball, and skateboard. The second is William Thurston, a Fields Medal-winning mathematician who had no stereoscopic depth perception due to a childhood condition. His mother worked with him to reconstruct 3D images from 2D ones. That same patient discipline — building spatial models from incomplete input — is exactly what allowed him, later in life, to visualize four, five, and higher dimensions with something approaching ease. He later said all of his ability came from a decision he made as a first-grader: to practice visualizing things every day.

The training is the thing. The domain almost doesn’t matter.


Metaphorex catalyst

Bessis’s argument plugs directly into something Lakoff and Johnson mapped decades earlier in Metaphors We Live By — one of the first serious attempts to disassemble how metaphors actually work. Not as rhetorical flourishes, but as cognitive mappings: a source frame projected onto a target frame, letting embodied experience make sense of abstraction.

“Forward is progress” only works as a concept if you’re a creature with sensory-motor feedback loops. You understand forward because you’ve propelled yourself through space. That felt, physical knowledge gets borrowed to structure something otherwise wordless.

Mathematica convinced me that building a larger catalog of these mappings (my side project Metaphorex) is worthwhile, even if it doesn’t yet boost performance on any eval. The practice of dissecting metaphors, finding their structural bones, and looking for patterns across domains: that’s not math, but it’s the same exercise Bessis is prescribing, and it’s training the same underlying machinery.
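To make the idea concrete, here is a sketch of what one catalog entry might look like. The actual Metaphorex schema isn't spelled out here, so every field name below is an assumption: a mapping is modeled as a source frame, a target frame, and the element-level correspondences between them.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: the real Metaphorex schema isn't described in
# detail, so every field name here is an assumption.
@dataclass
class MetaphorMapping:
    name: str
    source_frame: str          # embodied domain the structure is borrowed from
    target_frame: str          # abstraction it gets projected onto
    correspondences: dict = field(default_factory=dict)  # element-level links

forward_is_progress = MetaphorMapping(
    name="Forward is progress",
    source_frame="moving through space",
    target_frame="pursuing a goal",
    correspondences={
        "moving forward": "making progress",
        "obstacle in the path": "difficulty",
        "distance covered": "amount accomplished",
    },
)

def shared_structure(a: MetaphorMapping, b: MetaphorMapping) -> set:
    """Source-frame elements two mappings borrow in common, a crude way
    to spot parallel metaphors across different target domains."""
    return set(a.correspondences) & set(b.correspondences)
```

Comparing the correspondence keys of two mappings is one blunt way to surface structural patterns across domains, which is exactly the dissection exercise described above.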


Dream journaling

The book’s simple advice for where to start: write down your dreams. When you do, you practice shaping vague and disjointed internal experiences into linear narratives. The side effects are powerful. He describes how the practice led him to have more vivid dreams and more success transcribing them, which led to even more vivid dreams and clearer recall still. I experienced the same thing.

The more interesting thing I noticed was the content. Almost every dream I wrote down had a traceable connection to something I’d been working on or mulling over the day before. Not literally, but structurally. One night I dreamt of working in a basement cubicle, then navigating a labyrinth of corridors to an elevator, to a landing, across a skybridge, to another elevator, down to a sub-sub-basement cafeteria. The previous evening I’d been adding structure to the Metaphorex schema — specifically trying to build bridges between parallel metaphors that connect different pairs of source and target frames. My sleeping brain had taken the concept and walked around inside it.

This is, as best I can tell, what dreams are for: taking raw short-term memories and integrating them into the larger knowledge graph of long-term experience. Sometimes by finding narrative. Sometimes by finding archetype. The mechanism isn’t so different from training a neural network to reduce perplexity: compacting noisy inputs into something coherent, durable, and reusable.

Which makes it worth noting that Anthropic has built exactly this into Claude Code. Buried in the /memory menu is a feature called Auto Dream — a background consolidation process designed to do for AI memory what REM sleep does for the human brain: consolidate, clean, and reorganize what’s been learned before the next session begins. It runs a structured cycle: surveying existing memory, gathering high-value signals from past sessions, resolving contradictions, pruning stale entries, and rebuilding a clean index. The feature is built and visible in the menu but is currently behind a server-side feature flag — not yet live for most users. You can trigger the equivalent manually today by telling Claude to “consolidate my memory files.”
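As a toy illustration only (the actual implementation isn't public, so every function name, field, and rule below is an assumption), the five-step cycle could look something like this:

```python
from datetime import datetime, timedelta

# Toy sketch of a memory-consolidation cycle like the one described above.
# The real feature's internals aren't public; all names here are invented.
def consolidate(memories: list[dict], now: datetime, max_age_days: int = 90) -> dict:
    # 1. Survey existing memory: keep only well-formed entries.
    surveyed = [m for m in memories if "key" in m and "value" in m]

    # 2. Gather high-value signals: rank entries by (assumed) importance.
    surveyed.sort(key=lambda m: m.get("importance", 0), reverse=True)

    # 3. Resolve contradictions: for duplicate keys, keep the newest entry.
    resolved: dict[str, dict] = {}
    for m in surveyed:
        prev = resolved.get(m["key"])
        if prev is None or m["updated"] > prev["updated"]:
            resolved[m["key"]] = m

    # 4. Prune stale entries older than the cutoff.
    cutoff = now - timedelta(days=max_age_days)
    fresh = {k: m for k, m in resolved.items() if m["updated"] >= cutoff}

    # 5. Rebuild a clean, sorted index.
    return {k: m["value"] for k, m in sorted(fresh.items())}
```

The point of the sketch is the shape of the cycle, not the rules themselves: noisy, contradictory session traces go in, and a small, coherent index comes out, which is roughly the claim being made about REM sleep.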


Neural nets and neuroplasticity

Late in the book, Bessis talks about his own pivot from mathematics to machine learning and what the last decade of deep learning has revealed about cognition itself.

Neural networks don’t start with structure. They start with almost nothing — a basic architecture — and they develop capability through repeated exposure, through the refinement that comes from encountering things that break their existing patterns and having to build new ones. That process of responding to anomaly, of updating priors, of assembling increasingly abstract representations across layers: that might just be neuroplasticity. Not a metaphor for it. The actual mechanism.
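A single artificial neuron makes the point in miniature: it starts with small random weights (almost no structure), and its weights change only when a prediction breaks, i.e. on an anomaly. This is a minimal sketch of the classic perceptron rule, not any particular library's training loop:

```python
import random

def train_perceptron(examples, epochs=50, lr=0.1, seed=0):
    """Train one neuron with the perceptron rule: weights update
    only when a prediction turns out to be wrong."""
    rng = random.Random(seed)
    # Start with almost nothing: small random weights, no structure.
    w = [rng.uniform(-0.5, 0.5), rng.uniform(-0.5, 0.5)]
    b = rng.uniform(-0.5, 0.5)
    for _ in range(epochs):
        for (x1, x2), target in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - pred  # nonzero only when a pattern breaks
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Repeated exposure to examples of logical OR is the entire curriculum.
or_gate = train_perceptron([((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)])
```

Nothing in the code encodes OR; the capability is entirely the residue of encountering examples that broke the existing weights.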

This reframed something important for me. The people who are most adept at abstract thought — in math, in philosophy, in any domain that lives predominantly in the mind — were probably not born with special brains. They may just have, deliberately or by circumstance, developed more varied and novel internal structures for processing. More layers. More unusual training data. More exposure to the kinds of perplexity that force new pattern formation.

Reading this, I genuinely felt like I could reprogram myself, and a few months in, I’m starting to feel the difference.


Installing the upgrade

Most of my career has been about facilitating other people’s work — translating what engineers are stuck on up the chain, holding clarity on the mission while they’re heads down in the details, keeping the effort feeling meaningful. The bottleneck was rarely ideas. It was bandwidth, alignment, morale, momentum.

Working with agents is way different. The facilitation mostly takes care of itself: you point them, they move. The bottleneck shifts to strategy, to figuring out what’s actually worth building. Not just what’s feasible, but what matters. That’s a different mental model from director mode, not just a different set of responsibilities but a different way of orienting. Less about marshaling forces to deliver on a plan, more about forming the plan. You’re not thinking about the executive function anymore; you’re trying to think as one.

This is where Bessis’s book keeps paying off in ways I didn’t expect. The whole argument is that you can install new mental models — not just learn about them, but deliberately develop the internal structures to run them. The same way the blind echolocator wasn’t just told how to navigate space, he built the apparatus to do it. I’m not there yet, but I know what the practice looks like now: combining research threads with prototyping threads, synthesizing fast enough to know which direction to push before the moment passes. Publishing things like this post. Each one a small bet that the idea is worth stress-testing beyond my own head.


Yes, ideas come unexpectedly, “as if summoned from the void,” but that’s normal. Yes, plasticity is a slow and silent mechanism that occurs without any real effort on our part, provided we’re exposed to the right mental images. Yes, we learn precisely when we force ourselves to imagine things that we don’t yet understand, which unfortunately is the same exact thing that most people run away from. Yes, paying attention to the small details that trouble us is of the utmost importance, and the fastest way to learn is to follow the path of maximum perplexity. Cartesian doubt, within this framework, can be interpreted as an “adversarial” hack to accelerate our learning.

David Bessis, Mathematica

David Bessis is on LinkedIn — worth a follow. The book is published by Yale University Press and available at yalebooks.yale.edu or your local indie via Bookshop.org. I first encountered him on EconTalk.
