Beyond the AGI Spectacle: What We Are Actually Living Through
Epistemic Drift in Latent Space
This piece draws on a recent vision article I authored that has now appeared in ACM Transactions on Intelligent Systems and Technology: Shehu, A. (2025). “Beyond the singularity myth: Artificial general intelligence as cumulative infrastructural transformation.” ACM Transactions on Intelligent Systems and Technology. https://doi.org/10.1145/3779133.
I occupy an unusual set of roles. I am an AI researcher, running a lab that spends its time in the messy intersections of foundational AI, biology, health, engineering, and policy. I am an educator, designing courses that introduce AI to students across disciplines, including many who never saw themselves as “technical.” And I am an executive, serving as the inaugural Vice President and Chief AI Officer at a large public university, responsible for deploying AI at institutional scale and aligning it with the mission of a public, access-driven R1.
From this vantage point, I see the same technology through three different lenses: as an object of research, as a subject of teaching, and as infrastructure that quietly reshapes how an institution thinks and acts. That combination is what informs this piece. It is also why I find much of the public conversation about artificial general intelligence frustratingly narrow.
The question people ask is when AGI will arrive: next year, in ten years, or never. Whether it will be aligned or misaligned. Whether it is an existential threat or an overhyped marketing term. The question we should be asking is different. Debates over AGI, or even "artificial superintelligence," are largely distractions.
The Retrospective Arrival
Here’s what I suspect: we will not recognize AGI as it arrives. We will recognize it only in retrospect.
There will be no headline announcing “AGI Achieved.” There will be no moment when everyone agrees the machines have crossed some threshold. Instead, months or years from now, we will look back and realize the boundary was already crossed. Not through a single system achieving general intelligence, but through the cumulative handover of decision-making authority to systems whose reasoning we can no longer fully trace, verify, or override.
The transition will have been infrastructural, not spectacular. And that infrastructural quality is precisely what makes it so difficult to perceive in real time.
The real story is not a single breakthrough. It is a slow, cumulative shift in who holds verification power inside the infrastructure of decision making.
Absorption Capacity: The Missing Variable
So, when I hear the word “AGI,” I do not immediately think about consciousness or takeover. I think about absorption capacity.
By absorption capacity I mean the rate at which human systems can integrate new AI capabilities, adapt their practices, update their governance, and still maintain meaningful oversight. That rate is not fixed. It varies across institutions, sectors, and countries. It is also, in many places, completely out of sync with the pace at which AI is evolving.
At a large public university, I see this misalignment every day. AI models iterate on timescales of weeks. Vendor platforms push new “smart” features overnight. Meanwhile, curriculum revision takes years. Policy changes require committees, consultation, and approvals. Budget cycles are annual. Regulations move even more slowly.
The result is a growing gap between the systems we are actually using and the governance that is supposed to make sense of them. That gap is what I call the absorption gap. It is not a theoretical construct. It is a widening space where AI is already part of the decision infrastructure, while our ability to verify and steer its behavior falls behind.
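To make the absorption gap a bit more tangible, here is a deliberately toy sketch (mine, not from the underlying article) that treats AI capability as compounding while institutional absorption moves in slow, roughly linear steps. The function name, the rates, and the growth assumptions are illustrative placeholders, not measurements.

```python
# Toy illustration (not from the article): the "absorption gap" as the
# difference between how fast AI capability compounds and how fast an
# institution absorbs it. All rates and units below are made up.

def absorption_gap(years, capability_growth=0.8, absorption_per_year=0.15):
    """Return (year, capability, absorbed, gap) tuples under toy assumptions.

    capability_growth: assumed compounding rate of AI capability per year.
    absorption_per_year: assumed linear rate at which governance, curricula,
    and training catch up each year.
    """
    capability, absorbed = 1.0, 1.0
    history = []
    for year in range(years + 1):
        history.append((year, capability, absorbed, capability - absorbed))
        capability *= (1 + capability_growth)   # capability compounds
        absorbed += absorption_per_year         # absorption inches forward
    return history

for year, cap, absorbed, gap in absorption_gap(6):
    print(f"year {year}: capability={cap:6.2f}  absorbed={absorbed:4.2f}  gap={gap:6.2f}")
```

Even under generous assumptions about how quickly institutions adapt, the space between the two curves, the gap named above, widens every year the mismatch persists.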
From Augmentation to Autonomy: How the Handover Actually Looks
Let me ground this in concrete practice. We did not wake up one morning and decide to give AI agency over institutional processes. The shift began with something that felt safe: augmentation.
We started with pilots meant to support human work. AI tools to help students digest readings. AI tutors that were always available. AI assistants that helped faculty design syllabi compliant with institutional, state, and federal regulations. Systems that helped staff draft and summarize memos, freeing time for work that requires human judgment.
These were framed as aids. They fit a familiar story: humans in charge, machines supporting.
Then something subtle began to change. The systems started to do more than support. They started to recommend, route, and prioritize. Delegation crept in. Scheduling assistants that not only propose but confirm times. Systems that flag which tasks deserve attention and which can wait. Recommendation engines that tell students which resources to focus on. In some institutions, systems that weigh in on admissions decisions, a trend I view with real concern.
The line between suggestion and decision does not move all at once. It shifts in small steps. A human accepts one automated recommendation, then another, then hundreds of them. The cumulative effect is a slow handover of micro-decisions to systems whose inner workings we cannot fully audit.
This is the augmentation-to-autonomy continuum. It’s not a binary switch but a gradient we’re moving along, often without explicit decisions about when to take the next step. This is what the early stages of AGI look like in the wild. Not a single breathtaking demonstration but a thousand quiet transfers of agency.
Infrastructural Change Is Invisible Until It Fails
We like to imagine that AGI will announce itself with spectacle. A breakthrough paper. A dramatic demo. A moment when everyone agrees that the machines have “woken up.”
In practice, infrastructural change does the opposite. It becomes invisible as it succeeds.
Think about the last time a major model update broke people’s workflows. The frustration was not abstract. People were not upset that some research model was offline. They were upset because a piece of their daily infrastructure stopped working. Their writing pipeline broke. Their customer support routines failed. Their research assistant went missing.
That is the true signal. When model failure feels like infrastructure failure, the transition from toy to substrate has already happened.
The same is true for mundane tools such as email. We started with spam filters that did simple classification. We added priority inboxes that decided which messages mattered. Then came suggested replies. Now entire email threads, scheduling negotiations, and follow-ups can be handled with minimal human intervention.
At what point did email stop being just a tool and start acting as an agent? There is no single moment. There is only gradual accretion of capability and gradual ceding of control.
This is why I worry less about a “day of arrival” and more about the day we realize we can no longer reconstruct how key decisions are made.
Asymmetry I: Governance Lag and the Velocity Mismatch
Our governance mechanisms were not designed for this velocity. Universities, government agencies, and many companies operate on slow cycles. Budgets are set yearly. Curricula are revised every few years. Policies take months or more from drafting to implementation.
Meanwhile, model providers iterate quickly. New APIs appear. New features roll out to millions of users in a single update.
By the time a policy is written for a particular model, that model has often already been patched, extended, or quietly replaced. Institutions write guidelines for “GPT-5” while the ecosystem has already shifted to a mix of successor models and specialized variants. They train staff on one interface and wake up to find that the interface has changed overnight.
This is governance lag. It produces a specific pattern: we end up documenting deployments rather than steering them. We review what has already been embedded rather than shaping what should be embedded. We govern the past, live in the present, and hope the future cooperates.
Here’s the uncomfortable truth: we are not governing AI. We are documenting it. We are creating a historical record of what was deployed, not a deliberative framework for what should be deployed.
The velocity mismatch also concentrates power. Actors who move fast, with access to compute, talent, and data, set de facto standards long before any collective deliberation can catch up. Public institutions, which ought to provide a counterweight and alternative models, struggle to compete, not for lack of insight, but because of constraints on resources and agility.
Who Shapes AI, Who Inherits It
When I drive across rural Virginia, West Virginia, or western Maryland, I do not see regions in a “race to AGI.” I see school systems still trying to stabilize basic staffing. I see teachers who have been told alternately to ban AI, ignore AI, or use AI without clear guidance or support.
My daughter’s public school system recently signed contracts with Google to provide AI tools, but no meaningful teacher training accompanied the rollout. The result is confusion at the classroom level. Some teachers conflate all AI, collapsing generative AI and basic machine learning (ML) into a single threatening category. I’ve watched teachers refuse to allow students to use simple ML techniques to enhance science projects, threatening disciplinary action for what they perceive as “cheating,” when the students are actually engaging in legitimate computational work.
Thirty minutes away, students at well-resourced private schools are taking courses on prompt engineering and AI-assisted research.
This is capability inequality in practice. Not everyone moving at different speeds, but different populations solving completely different problems while the same technology reshapes the landscape under all of them. In one setting, students are learning to work with AI as an instrument. In another, they're learning that AI is something to fear and avoid, or encountering it through top-down contracts without the pedagogical infrastructure to make sense of it.
I also see my own children in middle and high school, moving through curricula that look almost unchanged, while the technology around them evolves rapidly. The gap between what they are learning and the world they are entering grows wider every semester.
Asymmetry II: Institutional Misalignment as an Ecosystem Problem
In the research world we talk constantly about "alignment." Reward functions. Human feedback. Constitutional rules. All of this is useful work. But it can obscure a basic fact.
Alignment is not a property of the model in isolation. It is a property of the ecosystem in which the model is deployed.
We can in principle train a model that follows its deployer’s instructions with high fidelity. But what happens when two such systems, deployed by different institutions with different incentives, collide?
Imagine a healthcare system aligned with minimizing costs, an insurance system aligned with minimizing payouts, and a patient advocacy system aligned with maximizing access. Each may be well aligned with its local mission. Taken together, they can produce outcomes that satisfy no one and undermine trust in the entire system.
This is institutional misalignment. It is the risk of a thousand locally rational systems that, in combination, generate globally irrational outcomes. The nightmare scenario is not necessarily one superintelligence deciding to turn against humanity. It’s not Skynet. It’s thousands of Skylets, semi-autonomous systems, each optimized within its silo, producing a world that no one intended and no one can easily steer.
We see hints of this already. Recommendation algorithms optimized for engagement produce filter bubbles and polarization. Pricing algorithms optimized for profit produce unexpected inflation. Hiring algorithms optimized for efficiency produce demographic biases. Each system works as designed. The problem is the emergent behavior, the unintended consequence of many optimized systems interacting in an uncoordinated ecosystem.
There is no stable alignment target. Human values are not a fixed point in conceptual space. They are contextual, contested, and constantly evolving. An AI aligned with one set of values is, by definition, misaligned with others. As AI systems proliferate, we are not moving toward alignment. We are moving toward fragmentation.
Asymmetry III: Capability Inequality as an Amplifier
Another asymmetry that deserves more attention is capability inequality. AI is not a natural equalizer. It amplifies existing advantage.
Individuals with strong skills and clear goals use AI to multiply their productivity. Organizations with good data infrastructures and agile management integrate AI and pull away from competitors. Regions with strong educational systems and compute access accelerate. Others stall.
Every gain in AI capability risks widening the distance between those who can absorb it and those who cannot. Students with access to AI literacy, guided practice, and safe sandboxes learn to use these tools as instruments. Students without such access encounter AI primarily as hype, fear, or outright prohibition. Workers in AI-ready organizations learn to work alongside these systems. Workers elsewhere watch opportunities consolidate away from them.
The cascade is uneven by sector and geography. Finance will adopt faster than K–12 education. Urban centers will adapt faster than rural communities. Well-funded institutions will outpace under-resourced ones. This is not a future scenario. It is happening now.
The most urgent investment today is not in ever-larger models, but in AI-ready institutions and communities. I dare say we do not need another leap in capability. We need a leap in absorption capacity. As an AI researcher, I can appreciate that compute is a constraint, but it may well be a distraction from the more immediate one. The real bottleneck is institutional readiness: the capacity of schools to integrate AI into curricula, of governments to regulate AI effectively, and of communities to understand AI's implications and advocate for their interests.
Epistemic Drift: The Core Crisis
Despite efforts to build institutional capacity, there is a core concern that I find hard to shake. It is the problem of epistemic drift: the gradual loss of human verification power.
Modern models reason in high-dimensional latent spaces that are not designed for human interpretability. We can measure accuracy, bias rates, and other aggregate statistics. But we increasingly cannot trace how the model gets from input to output in a way that is satisfying for scientific, legal, or democratic scrutiny.
Let me be concrete about what we’re losing.
In my lab, we increasingly rely on AI systems to identify patterns in biological data, patterns no human could feasibly spot in the dimensionality and scale involved. The systems are allowing us to do great things. But when they’re wrong, we struggle to understand why in a way that would prevent the same error in a different context. We’re trading interpretability for capability, and that trade feels increasingly irreversible. In truth, even when the models are right, we struggle to understand why, and that bothers me.
In administrative contexts, it’s worse. When a student appeals a decision partially shaped by algorithmic assessment, what exactly are we explaining? The statistical properties of the model? The aggregate accuracy rate? These don’t answer the question: “Why me?”
We’re shifting from a world where decisions could be justified through accessible reasoning to one where they’re validated through opaque optimization. This isn’t a future risk. It’s happening now, in systems already deployed.
This loss deepens when AI systems are chained. One system's outputs feed another. That output is then ingested by a third. The reasoning chain becomes effectively opaque. Now add models that operate across diverse data modalities. Can we truly claim to understand what is happening? At some point, we risk shifting from "We can inspect the reasons for this decision" to "We can only say that the system was statistically correct most of the time in the past."
Oversight that rests only on outcomes, without access to reasons, is fragile. It breaks under stress. It undermines trust. It erodes our capacity to correct failures before they scale.
Intelligibility to Ourselves
The nightmare scenario isn’t a superintelligent system that deceives us. It’s a world where we can no longer reconstruct the reasoning behind critical decisions, a world where “the AI said so” becomes an explanation we accept because we have no viable alternative.
This is the heart of epistemic drift. It is not about dramatic misalignment stories. It is about a slow slide into incomprehension, where critical social decisions are made by systems whose internal logic we cannot interrogate in a meaningful way.
But there’s something deeper here. As more of our social processes become mediated by AI, as algorithms shape what we see, what we believe, who we meet, what opportunities we access, the question becomes whether we still understand how our society operates. Can we still trace cause and effect? Can we still explain why things happen the way they do?
Epistemic drift threatens not just our ability to verify AI, but our ability to understand ourselves. When the infrastructure of decision-making becomes opaque, society itself becomes unintelligible. We live in a world of emergent outcomes that no one intended and no one can explain. We have lost not just control, but comprehension.
That loss, more than any misaligned superintelligence, is the existential risk we face. Not extinction, but incomprehension. Not destruction, but drift. A slow slide into a world we no longer understand, governed by systems we can no longer verify, producing outcomes we can no longer steer.
Adaptive Governance: The Response We Need but Don’t Have
If the core problem is absorption capacity and the central risk is epistemic drift, then what is our answer?
I hear "adaptive governance" proposed as the answer: regulation that learns and evolves with the technology. I want to believe in this. But I don't know how to make it real at the speed required.
Who builds the public compute infrastructure that would prevent permanent capability concentration in private hands? How do regulatory sandboxes avoid becoming institutionalized loopholes? What happens when adaptive governance itself can’t adapt fast enough, when the technology is updating weekly and the governance is updating quarterly?
I don’t have good answers. What I do know is that governance can’t be separated from literacy. People can’t contest what they don’t understand. They can’t exercise agency over systems that are illegible to them. So any governance that works has to include serious, scaled investment in public understanding. Think less coding bootcamps and more deep engagement with what these systems can and cannot do, where their authority should and shouldn’t extend.
This costs money and time that most institutions don’t have. Which means we’re building governance on a foundation that doesn’t exist yet.
This also requires real investment in AI literacy at scale: K–12, higher education, workforce training, and public-facing programs. Literacy is not about turning everyone into a coder. It is about enabling people to understand enough to have informed preferences, to contest decisions, and to exercise agency.
Beyond governance and literacy, we need something else: epistemic infrastructure for an AI-mediated world. How do we maintain scientific authority when AI can generate persuasive but false research? How do we preserve journalistic credibility when AI can produce convincing but fabricated content? How do we sustain educational legitimacy when AI can provide personalized instruction at scale?
As AI becomes infrastructure, our epistemic institutions (science, journalism, education) must evolve to maintain their role as arbiters of truth and gatekeepers of knowledge. Without that evolution, epistemic drift will accelerate beyond any hope of correction.
The ethical frontier is not to make machines superintelligent. It is to make societies super-responsible. The measure of our success will not be how intelligent our machines become, but whether we can absorb, steer, and co-evolve with what we have built.
AGI as Mirror, Not Endpoint
In that sense, AGI is less an endpoint and more a mirror. It reflects the strengths and weaknesses of our institutions.
Every difficulty we encounter in governing AI reveals an older difficulty that we never fully resolved: coordination across agencies and sectors; persistent inequities in who gets access to powerful tools; slow and fragmented policy processes; and underinvestment in public capacity and public infrastructure.
AI does not invent these problems. It accelerates and amplifies them until we can no longer ignore them. Every limitation we encounter in responding to AI’s rise, every governance lag, every institutional misalignment, every capability gap, is not a new problem created by AI. It is an old problem made visible by AI.
The mirror shows us not what AI is doing to us, but what we have always been: a collection of institutions designed for a slower world, now confronting a faster one.
The Real Question, Reframed
So I will end where I began. The interesting question is not whether AGI (or ASI) arrives in 2030, 2040, or never. The question we ought to ask is whether, as AI systems grow more capable and more deeply embedded, we maintain our collective ability to understand and steer them.
Can we? The gap is widening faster than we’re building capacity to close it. The asymmetries in resources, in access, and in the ability to shape versus merely inherit these systems are hardening into structural divides.
What keeps me working on this is not confidence but obligation. I am privileged to hold positions that let me see these dynamics clearly. I have students who deserve more than inherited systems they can’t interrogate. I have my own kids who are going to live in whatever world we build or fail to build now.
The work isn’t about making machines superintelligent. It’s about whether we can maintain collective intelligibility, the precious ability to explain our decisions to ourselves and to each other, to contest them when they’re wrong, to understand the systems we’re depending on.
Some days I think we’re losing that faster than most people realize. But I also think it’s not lost yet. The question is how much time we have, and whether we’re willing to use it for something other than the next capability demonstration.
Let me propose that AGI will take care of itself. Our ability to steer it is what requires work we are not yet doing at the necessary scale.
The frontier is not artificial general intelligence. It is the collective capacity to remain intelligible to ourselves while embedded in AI-mediated decision ecosystems.