Computing after Code
What We Said We Wanted
I survived the boredom of my first two years in computer science. I tell people about this experience, sometimes, when they ask how I came to the field. I survived it. The wording is precise. It had nothing to do with my performance.
The mathematics-oriented courses had a different quality. There was something to follow, something to feel one’s way into, a rhythm that rewarded attention. I would walk out of those classrooms thinking. The other courses were not like that. There were lists to grow. There were numbers to sort. There were exercises whose internal logic was clear and whose larger purpose was not. I did them. I waited for the larger purpose to arrive.
The question I kept asking, mostly silently to myself, was the Why. Why this exercise. Why now. Why before the questions I came to the field for. The answer was always some version of the same answer: this is the foundation, the real things come later, trust the sequence.
I did trust it. Trust is what got me through. I stayed, and I came to love the field. I came to love what the foundation was a foundation for.
Many did not stay. Retention was an issue across introductory courses, and it remained an issue across the decades that followed. Students who came in with curiosity and the kind of mind the field needs left at higher rates than the field could afford. The field knew this. And so it responded with grants, bridge programs, summer immersions, scaffolding. The on-ramp itself stayed largely where it was.
The faculty teaching those exercises were among the best teachers I ever had. The discipline knew what it was doing. The mathematics of computation is genuinely difficult. There is no royal road into it. The substrate of the courses was not arbitrary. The encounter with the literal-minded machine, the discipline of stating a thing precisely enough that a machine can do it, the irreducible labor of writing code that runs and is elegant and concise: these things teach something real, and they cannot be skipped.
And yet, the discipline kept struggling with the gap between the foundation and the questions. The struggle was conducted in good faith, with significant resources, by people whose intelligence about the problem was real. Jeannette Wing’s seminal 2006 essay in Communications of the ACM named computational thinking as a curricular goal in its own right, distinct from the teaching of programming syntax [1]. Margolis and Fisher had already documented, in Unlocking the Clubhouse, how a leading department was losing students at the on-ramp and what the cost of that loss was [2]. The NSF launched its Broadening Participation in Computing program in the same period, with a portfolio designed precisely around this concern [3]. Mark Guzdial sustained two decades of writing on computing education research, with the retention question at its center [4].
The students who stayed, like me, were the ones who could hold the foundation in mind long enough for the questions to arrive. The students who left came in with the same questions, and the same minds, and the same hunger. They had less reason to wait. The asking of “why” was met, again and again, by the silence of “later.”
That wall, the one between what the courses were and what they were for, did not break.
What Held the Wall in Place
The structural reason the field could not close the gap lived in the shape of the curriculum itself.
A computer science degree is a sequence. The third course requires the second. The second requires the first. The first requires, typically, fluency with a particular language and a particular kind of problem. Each course is built on the assumption that students arrive with certain competences, because the next course up the chain requires that those competences exist. An instructor who wished to teach more thinking and less syntax in an introductory course faced a constraint that was not local. The students leaving that course had to enter the next one, where the second instructor had built the syllabus around the assumption that students could do certain things. The cost of changing the substrate of the introductory course was not borne by the introductory instructor. It was borne by everyone downstream.
This is what curricular momentum looks like. The constraint belongs to the system, beyond what any individual could choose or refuse. Every new program designed to bridge students into the on-ramp had to deliver them in a state ready for the on-ramp as it was. Every grant that funded a more thoughtful introductory experience had to send those students forward into courses that had not changed. The reform projects worked, locally, but they could not move the structural fact downstream of them.
This is also why the field’s own awareness of the problem did not produce a different outcome. Wing’s essay was read widely. The BPC program funded substantial work for nearly two decades. The retention research literature is quite large. None of it could change the curriculum in a coordinated way, because no single actor had the authority to redirect a curriculum that lived in the relationships between courses. No coalition of actors moved at the pace the curriculum’s momentum required.
The wall was made of the way the bricks had been laid down across decades. To remove a brick was to risk what the wall was holding up. The silence on “why” came from an institution that knew the answer but could not coordinate the change.
What the Field Built to Move the Wall
The field tried, repeatedly. The history is older than the cheating discourse and older than the AI moment.
In 1967, Seymour Papert, Cynthia Solomon, and Wallace Feurzeig built Logo at BBN, the first programming language designed for children, with the explicit goal of letting them think computationally through a vocabulary they could hold in mind. Papert’s Mindstorms in 1980 made the philosophical case: the computer as an object to think with, programming as a route into mathematics and reasoning rather than the destination [5]. In the decades that followed, the Logo lineage produced LEGO Mindstorms, Alice, Greenfoot, and most influentially Scratch, developed at the MIT Media Lab by Resnick and colleagues under NSF funding beginning in 2003 [6]. Parallel efforts approached the same problem by different means, including Mark Guzdial’s Media Computation at Georgia Tech, which used Python within multimedia contexts to give intro students reasons to persist through the substrate. The premise across these tools was consistent: lower the syntax barrier, change the encounter with the substrate, return the cognitive budget to the questions that drew students to the field.
These tools did real work in the contexts they were built for. Scratch alone has more than a hundred million users across after-school programs, K-12 classrooms, and outreach settings worldwide. The pedagogical research literature documents real gains in computational concepts at the K-12 level.
But what they did not do was move the university curriculum. The transfer problem, well documented in the empirical literature, describes the limit. Students who learn programming concepts in Scratch or other block-based environments do not arrive in a Java or C++ classroom with those concepts intact and immediately operational. The early conceptual advantages of block-based intro environments fade within ten weeks of text-based instruction [7]. The wall the block languages were trying to dismantle reasserted itself at the point where the curriculum still required the substrate.
The reason is structural. The block-language interventions worked locally and could not move what was downstream of them. A student who arrived at a CS1 course with Scratch fluency still entered a syllabus built around the assumption of text-based programming competence, taught by an instructor whose own teaching had been built around the next course up the chain, which had not changed.
What this history exposes is the consistency of what the field said it wanted. Logo, Scratch, the BPC alliances, Wing’s essay, Margolis and Fisher’s study, and Guzdial’s two decades of writing all reach toward the same thing. For sixty years, the field could not move the wall from inside.
Constraint Relaxation
Then, in late 2022, the constraint that organized the curriculum was relaxed. We did not see it coming, but the tools have now done what no grant program could fund. They generate working code from natural language descriptions. They explain code line by line at any level of abstraction. They debug. They translate between languages. They produce examples on demand, calibrated to whatever vocabulary the user already has. How well any of this works in a given moment is a moving target. The months since late 2022 have done the work of years. The encounter with the literal-minded machine is no longer something a student must traverse alone. The substrate is now mass-accessible.
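What this looks like in practice is mundane. As a minimal sketch, assuming the OpenAI Python SDK (the v1 chat-completions interface) and an API key in the environment, and with the model name and prompts purely illustrative, a student can ask for a line-by-line explanation of code she did not write:

```python
# Minimal sketch: asking a chat model to explain unfamiliar code line by line.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable;
# the model name and prompts are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()

snippet = '''
def count_words(path):
    counts = {}
    with open(path) as f:
        for line in f:
            for word in line.split():
                counts[word] = counts.get(word, 0) + 1
    return counts
'''

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any capable chat model would do
    messages=[
        {"role": "system",
         "content": "Explain this code line by line for a first-semester student."},
        {"role": "user", "content": snippet},
    ],
)

print(response.choices[0].message.content)
```

The same few lines, with a different prompt, debug, translate, or produce variations on demand. That is the whole of the relaxation, seen from the student’s chair.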
This is what “Learn to Code” was never able to mean. The cultural sentence had carried the curriculum on the promise that mastering the substrate would lead to economic security. Bootcamps and degree programs and high school AP courses and policy commitments and parental hopes were built on the assumption that the substrate was the door, and the door, once passed, opened into the field. The door has been moved. The substrate is now something a student can enter alongside, with help, with iteration, with conversation. The on-ramp the field could not redesign for thirty years was redesigned in eighteen months.
The constraint relaxation is the structural fact the field has not absorbed. The cheating discourse that began appearing in faculty meetings in 2023 is one manifestation. The bootcamp closures are another. The hiring pullback in entry-level software roles, where AI tools have begun to do work that interns and junior engineers used to do, is a third. From inside the field, these look like three separate problems. From outside, they are one structural change in the relationship between students and the substrate, refracted through three institutional surfaces.
The People in the Pivot
The relaxation does not arrive without cost. The cost falls on specific people who built lives and programs around a world that is now ending.
Bootcamp graduates trained in the years 2018 through 2024 invested significant time and tuition in a model that worked when the substrate was the bottleneck. They learned to write code that compiled and ran. They learned the shape of a typical entry-level software engineering interview. They earned credentials and entered a labor market that absorbed them. That market has now changed in ways their training did not anticipate. Their skills carry value. The work they were trained for has moved up the stack faster than the credential could update. They are inside a transition they did not choose and did not see coming.
Mid-career faculty whose syllabi have been built around the substrate are inside a different version of the same change. Their teaching craft is not invalidated. What they teach has moved under them. The lecture they gave last year is not the lecture they can give this year. The homework set that worked the year before AI tools became ubiquitous now produces submissions whose authorship is opaque. The accumulated craft of teaching the substrate, which represented decades of careful work, has to be re-aimed at a different target. This takes time, and time is exactly what the academic calendar does not give.
Students currently inside computer science programs are absorbing the dislocation in real time. They came in with a catalog description of a degree that is no longer the degree they will leave with. Some of them are oriented toward the labor market that is contracting. Some are oriented toward graduate study and research, where the constraint relaxation is differently present but no less real. They cannot pause their progress through the program while the field figures out what its degree now means.
The Wrong Question, Asked Loudly
The most audible response inside the field is the cheating discourse. Faculty meetings are filled with it. Conference panels return to it. Op-eds and Substack pieces and chair memos and provost reports rehearse some version of the same question: what do we do about the students who use AI to generate their homework?
The discourse comes from a real place. The faculty member who has just discovered that the homework set she has assigned for fifteen years is now producible in thirty seconds by a tool the student opened in another browser tab is contending with the collapse of an instrument she has used her whole career to assess whether her teaching has worked. The homework was a measurement device, but the device has stopped measuring what it measured. The faculty member does not yet have the replacement instrument.
The cheating frame, however, asks the wrong question. The wrong question produces the wrong remedies. AI-detection tools that perform poorly. Honor code revisions that cannot fully capture what is changing in real time. Restrictions on tool use in courses where the labor market the students are entering uses the tools daily. Each is a response to the symptom. None addresses the underlying change.
The right question is what assessment looks like when the substrate is shared with a tool. Computational thinking has assessment instruments that do not depend on the student writing every line of syntax herself. Walk a student through her reasoning about a problem before she touches a keyboard. Give her code she did not write and ask her to predict its behavior. Give her code she did not write and ask her to identify why it is wrong. Ask her to specify what a solution should do, in precise language, before the tool produces anything. Ask her to evaluate three solutions the tool produces and explain which is closest to what she wanted and why. These are harder to grade than a homework set that runs or does not run. They are also closer to what the field said it wanted to teach. They are also what graduates have always been expected to do in industry, only more so now: specification under uncertainty, evaluation, judgment.
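To make the shape of such an item concrete, here is a hedged sketch in Python: a short function the student did not write, carrying a deliberate off-by-one flaw she could be asked to find and explain. The function and its flaw are illustrative, not drawn from any particular course or assessment instrument.

```python
# Illustrative assessment item (hypothetical): the student did not write this
# function and is asked to predict its output, then identify the flaw.

def moving_average(values, window):
    """Intended: return the average of each run of `window` consecutive values."""
    averages = []
    for i in range(len(values) - window):  # flaw: the final window is dropped
        averages.append(sum(values[i:i + window]) / window)
    return averages

# Student prompt: what does moving_average([1, 2, 3, 4], 2) return, what should
# it return, and which single line accounts for the difference?
# (It returns [1.5, 2.5]; the intended result is [1.5, 2.5, 3.5].)
```

Grading an answer to that prompt takes judgment rather than a test harness, which is exactly the trade described above.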
The faculty panicking about generated code are good teachers encountering a moment without the tools they need. Building the new assessment instruments is the work of the next several years, and it is the work that will determine what a computer science degree means after the substrate.
What Was Asked For
What the moment makes possible is what the field said it wanted in the years when the substrate was in the way.
Computational thinking. Habit of mind. The discipline of stating a problem precisely. The judgment that recognizes a good solution. The capacity to spot when the easy answer is wrong, when the model’s confidence is misplaced, when a generated solution looks correct and is wrong. The kinds of analytical reasoning the field has claimed it was teaching, the kinds for which the substrate ate the time-budget. The time has been returned.
Two responses are emerging across institutions, and they are sorting the field into two camps.
The first treats the moment as a crisis to be managed. AI-detection software, tighter restrictions on student tool use, course redesigns oriented toward making the substrate harder for the tool to produce. There is intelligence in this response. It honors the substantial investment the field has made in its current curriculum. It refuses to abandon the assessment instruments that have served, until recently, to measure what students have learned.
The second treats the moment as a founding one. New courses oriented toward computational thinking from the first week. Assessment instruments built around reasoning. Curricula in which the substrate is taught as a tool a student learns to direct and check, alongside the higher-order work that determines what is worth directing the tool to do. Programs designed for the kinds of students the on-ramp lost for three decades, students whose minds the field has wanted and could not retain. This is the response the field’s own history asks for. It treats the constraint relaxation as the structural opportunity it is. It is willing to absorb the cost of change in order to take the gift the moment has offered. It requires an institution to see the moment as a liberation rather than a threat.
Some institutions will choose the first response. Some will choose the second. Most will be inside the choice for some time before they realize they have made it.
The discipline made a promise about what it wanted to teach. For decades the promise was held aloft against a curriculum that could not deliver it. The conditions for delivery have arrived.
The promise is now collectible. Time to find out who we said we wanted to be.
References
[1] Wing, J. M. (2006). Computational thinking. Communications of the ACM, 49(3), 33–35. https://doi.org/10.1145/1118178.1118215
[2] Margolis, J., & Fisher, A. (2003). Unlocking the clubhouse: Women in computing. MIT Press. https://mitpress.mit.edu/9780262632690/unlocking-the-clubhouse/
[3] National Science Foundation. (2005). Broadening participation in computing (BPC) (NSF 05-562). https://www.nsf.gov/funding/opportunities/bpc-broadening-participation-computing/13510/nsf05-562/solicitation
[4] Guzdial, M. (2009–present). Computing Ed Research – Guzdial’s take [Blog]. https://computinged.wordpress.com/
[5] Papert, S. (1980). Mindstorms: Children, computers, and powerful ideas. Basic Books. https://worrydream.com/refs/Papert_1980_-_Mindstorms,_1st_ed.pdf
[6] Resnick, M., Maloney, J., Monroy-Hernández, A., Rusk, N., Eastmond, E., Brennan, K., Millner, A., Rosenbaum, E., Silver, J., Silverman, B., & Kafai, Y. (2009). Scratch: Programming for all. Communications of the ACM, 52(11), 60–67. https://doi.org/10.1145/1592761.1592779
[7] Weintrop, D., & Wilensky, U. (2019). Transitioning from introductory block-based and text-based environments to professional programming languages in high school computer science classrooms. Computers & Education, 142, Article 103646. https://doi.org/10.1016/j.compedu.2019.103646

