Intelligence Locked
On Friction, Priors, and Stagnation
For those few of you who follow me on Substack: First, thank you for intersecting your threads of life with mine. Second, if you do read what I write in its entirety, you will hopefully have noticed a trend: holding contradictory thoughts together. Or, as my students hear me say these days: two things can be true.
I wanted to thread some of these clashing thoughts further.
There is a sense that having the knowledge of humanity at our fingertips will unleash or unlock intelligence. (Sidebar: have you ChatGPT-ers noticed how much GPT-5 loves these two words of the day, unleash and unlock?)
There is this pervasive and growing belief that, suddenly, having digital companions that can reach into the thousands of years of human experiences, thoughts, and knowledge documented and stored in meshes of neurons, will be what we need to get to the next discoveries fast, faster, fastest, to the next phase in our evolution.
This thought is everywhere. Anthropic CEO Dario Amodei famously divines our very near future as a country of geniuses in a data center. Elon Musk echoes similar predictions: that AI will probably be smarter than any single human next year, maybe by 2026. OpenAI CEO Sam Altman keeps pushing abundant intelligence and predicts 2026 as the arrival of systems that can figure out novel insights. You can pick your favorite tech or thought leader. They are betting big on AI unleashing/unlocking intelligence.
This is one of those thoughts that is worth sitting with for a while, as a human, turning it and mulling it over. I find myself having two thoughts at the same time, of the form: Yes, but…
The Yes first. I am excited about the potential of AI for biology, biotechnology, bioengineering, drug discovery, and more. And yes, you read that right. Potential. I see great potential. Certainly, in my own research, the pace at which we are now moving in my lab is faster. But we still need those first real hard discoveries, diagnostics, therapeutics. Where are they? I am anticipating them, but they are not quite here yet.
And now the No, the bigger one, a provocative thought. What if, instead of unlocking intelligence, having access to the treasure trove of all human knowledge locks us in?
I keep thinking about intelligence. Comes with the territory. When you have undergraduates in your class who are asking you very simple but oh-so-difficult questions, the kind that you cannot anticipate when you hang around adults of a certain age, you tend to think deeply about AI-enabled intelligence.
Someone I respect a lot (and whose Substack posts I read in their entirety), Michael Jabbour, echoes what many of us experience: that this new modern AI (pick ChatGPT if that is what you are using these days) is lowering the friction, the activation barrier, whatever your preferred term. If you want to read Michael’s latest post on it, The Phantom Limb of Intelligence, it is a great read.
I want to offer a counterpoint. Here it is upfront:
When friction falls to near-zero, minds tend to exploit priors and converge; they do not explore.
I will first reach out to one of my favorite contemporary sci-fi writers, Adrian Tchaikovsky, to build this further. His “Children of Time” series is a gem, and the inspiration for my counterpoint.
In Children of Time, Tchaikovsky tracks a spider civilization forming on a distant planet that humans wanted to terraform. An experiment, as often happens in sci-fi, gone wrong in many ways, breeding unintended consequences. The spiders are seeded with a (nano)virus that expedites intelligence but are ultimately left on their own. They experience their own friction. They must bootstrap everything. They fight against other insects and species that benefit from the same virus. Over hundreds of years, they develop a strange, different kind of intelligence. A different language, culture, technology. Inherently different. Distinctly non-human.
Left to their own devices, the spiders rise to meet the humans, their creators, only to find them limited and weak in both intelligence and technology. The first book in the series has a great one-liner that I often come back to: “We are coming.” It is the spiders’ message to the humans, as they leave their planet and become a spacefaring civilization.
Now move forward to the next book in the series, “Children of Ruin.” Tchaikovsky tracks the rise of another civilization, octopuses. While brilliant and in many ways better equipped to leap forward than the spiders, the octopuses are held back. Humans try to uplift octopuses but cannot resist teaching them. The result is ingenious but bounded by human frames. In yet another accident, the octopuses escape to a water planet not fully terraformed by the humans but already speckled with old human technology. They build their civilization and intelligence around this technology and the memories of their human trainer.
Of course these three civilizations will meet: the humans, the spiders, the octopuses. The spiders have developed a very different technology, where the digital and the biological blend deeply and cannot be distinguished: biological/digital hybrid silks, distributed cognition, superior to human technology, beyond human frames. The octopuses have done their best to improve human technology. Their ships are bulky. They are locked in their emotive language. The humans have lost so much to time and conflict.
So, no, I have not been thinking of geniuses and Einsteins in data centers. I have been thinking of octopuses.
Don’t get me wrong. I love having access to knowledge. I love (and sometimes fear this feeling) asking my digital companion, at the tap of my fingers, questions that would be hard for most humans to answer for me in a heartbeat. I love stitching lines of thought together from fragments, even co-critiquing and co-ideating my own research with guided prompts.
But sometimes, being left alone with your thoughts leads to great things. Sometimes, having nothing to start with removes that strong prior that pushes you down easy paths and locks you into pre-determined but unfinished steps: making incremental movements, trying to fix what does not work, as opposed to charting completely new terrains, discovering new worlds and new forms of intelligence.
I fear no Einsteins in data centers. I fear incrementality. Churning and souping, milling about, easy wins, at scale, endless adrenaline-shot autocompletes.
Other brilliant humans have written about this before, in different forms, and with different words.
Borges, whom I have quoted in an earlier post here on Substack, makes the point in “The Library of Babel” that too much information paralyzes, that abundance without navigation yields despair or cults of trivial heuristics.
Kuhn, in his The Structure of Scientific Revolutions, argues that normal science exploits priors, and that anomalies (in fact, sufficiently many accumulating anomalies) are necessary for paradigm shifts.
Martin Heidegger, with his concept of “Gestell” (Enframing), warns that technology tends to make the world available and orderable, with the risk that we see only what our frames allow. In our specular room, walled with ChatGPT mirrors of all humanity, all that knowledge narrows our experiences and our perceptions, locking us in.
[Image created with ChatGPT 5, September 28, 2025.]
Perhaps McLuhan and Postman make this more explicit. “The medium is the message” is a heavy thought. Their provocation: a medium that provides answers on demand biases toward shallow breadth over deep invention.
Back to another sci-fi writer, Ted Chiang, who makes the interesting point that forgetting is beneficial. In his “The Truth of Fact, the Truth of Feeling,” the “Remem” technology offers perfect, searchable audiovisual recall of every moment of one’s life. Chiang makes the case that perfect recall tech locks narrative, that forgetting and reconstruction are essential to new meaning-making.
We have our own version of this paradox in computer science. Optimization researchers know the importance of balancing exploration with exploitation. In the language of optimization, I fear that all we are doing with our digital companions is exploitation, and that exploration may require intentionally abandoning or silencing them for a while. Taleb, in his “Antifragile: Things That Gain from Disorder,” says as much: systems need stressors (friction) to become creative and robust; removing stress and friction leads to fragility and local optima.
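The exploration/exploitation tradeoff invoked here has a standard textbook illustration, the multi-armed bandit with an epsilon-greedy policy. This is a generic sketch, not anything specific to this essay; the arm payoffs and parameters are made up for illustration:

```python
import random

def epsilon_greedy(arm_means, epsilon=0.1, steps=5000, seed=1):
    """Toy bandit: with probability epsilon pull a random arm (explore);
    otherwise pull the arm with the best estimated mean (exploit)."""
    rng = random.Random(seed)
    n = len(arm_means)
    counts = [0] * n        # pulls per arm
    estimates = [0.0] * n   # running mean reward per arm
    total = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n)                           # explore
        else:
            arm = max(range(n), key=lambda i: estimates[i])  # exploit
        reward = rng.gauss(arm_means[arm], 1.0)              # noisy payoff
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total / steps, counts
```

With epsilon = 0, the agent can lock onto whichever arm happened to look good early, a small-scale version of exploiting priors; a little forced exploration is what lets it discover the genuinely better arm.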
So, I worry, not about superintelligence, not about unlocking or unleashing, or any of that sort.
I worry that we will find no Einsteins in those data centers after all. No spiders. Only octopuses.



Basically agreeing with what you said, the way I would put it is that AIs are very good at repeating The Conventional Wisdom. They can do it quickly, clearly, on the broadest imaginable range of subjects. But they don't go beyond that.
I'd give the example of the Wright brothers. When they decided to work on flying, they first spent hundreds of hours studying birds. This let them see how birds twist themselves in flight, the way you can twist an empty cardboard box with some give in it by holding opposite diagonal corners. They then produced a machine with the right kind of materials and controls to allow such twisting in flight.
If you took an AI today, fed it the entire corpus of human knowledge as of 1900, and asked it "How can I build a flying machine?" would it say "First, spend a few hundred hours in a tree watching birds. Then grasp an empty cardboard box by opposite diagonal corners and twist it a little"? I don't think so.
Putting The Conventional Wisdom instantly at everyone's fingertips is a great thing that can do humanity a lot of good. But at least from my limited experience, there's something about human creativity that I don't think machines have quite mastered yet.