Journey through AI: Weekly Lessons from the Undergraduate Classroom
Drawing the Line: Why Defining AI Matters
The first day of class always feels like a threshold. The walkways of George Mason’s Fairfax campus are full of students, a river that swells at the beginning and end of each class. The air hums with student chatter, full of possibility: a new semester, new faces, new voices, each carrying questions that might one day redirect entire lives.
This week, I stood in front of my students for the first lecture of AI4All: Understanding and Building Artificial Intelligence, a Mason Core course I designed for undergraduates of all majors. It is a course not for computer scientists alone but for everyone, because AI has already become everyone’s business.
When I announced this class on LinkedIn and shared my draft syllabus, the responses were generous, but one note in particular stayed with me. A dear colleague wrote:
“Good for you, Amarda. So needed. Glad for ethics in the forefront of your course. Good luck with defining AI! Maybe it doesn’t matter (so I’m told) but I think we will see more cases where it does, like in law and policy.”
“Good luck with defining AI!” echoed in my mind all weekend as I put the finishing touches on my opening lecture. It was still with me when I walked into the classroom on Monday.
Why Are We Here?
This is not a typical class. The diversity of ages is striking. The diversity of reasons for taking this course is even more so.
I always like to get to know the students by asking why they are taking the course. Their answers were telling. Some said they just needed Mason Core credits. Fair enough, honest. Others said that no matter how they feel about AI, it is here to stay, and they had better understand it. Still others said they wanted to “get in on it.” One said their parents really insisted they take this class. That made me chuckle. The non-typical students, older adults from the surrounding community auditing the class, said they needed something genuinely stimulating. They felt the world is moving too fast, in ways they do not understand.
All genuine answers, reflecting the spectrum of feelings around AI, the colorful, nuanced context for this class. And so we begin.
What is Intelligence?
I asked the students what AI was to them. They said ChatGPT, agents, tools, things that help with everyday life. Some of these things excited them; others concerned them. That was good grounding. We decided that we would spend the first week trying to wrap our arms around AI: what it was, what men said it should be, what those men (yes, predominantly men) decided it would become. So we started drawing a sweeping arc of the history of trying to understand, emulate, and mechanize intelligence.
We started with Aristotle (384–322 BCE), who gave us the first formal system of logic: syllogisms, reasoning as valid inference from premises. To be intelligent was to think rationally. The famous chain, “All men are mortal. Socrates is a man. Therefore, Socrates is mortal,” was not just philosophy but a declaration: intelligence is rule-governed, demonstrative, and calculable.
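That claim, that inference is rule-governed and calculable, can even be checked by a machine today. Here is a minimal sketch in the Lean proof assistant (the names `Person`, `Socrates`, `Man`, and `Mortal` are illustrative placeholders of my own, not anything from Aristotle):

```lean
-- Aristotle's syllogism as machine-checkable inference (illustrative sketch).
variable (Person : Type) (Socrates : Person)
variable (Man Mortal : Person → Prop)

-- "All men are mortal. Socrates is a man. Therefore, Socrates is mortal."
example (allMortal : ∀ p, Man p → Mortal p) (socMan : Man Socrates) :
    Mortal Socrates :=
  allMortal Socrates socMan
```

The proof is nothing more than applying the universal rule to the particular case, which is exactly what the syllogism asserts: valid inference as calculation.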
We then skipped many centuries ahead to René Descartes (1596–1650), who reframed nature itself as a mechanism. Animals, he argued, could be seen as automata, following rules step by step. He made an exception for humans. For Descartes, humans retained their rational soul, but the idea that complex behavior could arise from simple, mechanical rules planted the seeds of computation.
With Leibniz (1646–1716) and Boole (1815–1864), intelligence became mathematics. Leibniz dreamed of a universal symbolic language, a characteristica universalis, that could settle disputes by calculation. The famous line: Let us calculate! Boole turned this into algebraic logic, laying the binary foundations of digital computation.
We then turned to the 19th century, which made these visions tangible. Charles Babbage designed the Analytical Engine, with memory, processor, and control flow: the skeleton of the modern computer. Ada Lovelace (remarkably, at a time when women were largely excluded from scientific circles) went even further in her famous notes: machines might manipulate not only numbers but also symbols, perhaps even music. She glimpsed what we now call general-purpose computation and programming languages.
The Countess of Lovelace by Antoine Claudet (c. 1843). Source: Wikipedia.
And then, Turing! The 20th century gave us Alan Turing, who formalized the very limits of what machines can compute and offered the famous imitation game, the “Turing Test,” as a way to probe machine intelligence. His vision laid the theoretical bedrock of computer science and cast intelligence as behavior indistinguishable from our own. We had some fun here. We asked whether ChatGPT has passed the Turing test, and we determined that the test seems a low bar for intelligence: it is based on fooling humans. I could not resist sharing with the class a favorite movie of mine, Ex Machina (2014).
But who, who coined this term, Artificial Intelligence, and why? When did the “artificial” enter the conversation? So we turned to that famous 1956 Dartmouth Workshop, where John McCarthy, Marvin Minsky, Claude Shannon, and colleagues gave this project its name: Artificial Intelligence. Their goal was ambitious, aspirational for the times: machines that could learn, reason, use language, and even improve themselves. AI was born not just as a technical field but as a bold claim on the future. We sidetracked and realized that at every turn, AI has carried with it an aspirational soul, defined less by what has been achieved than by what remains just beyond reach. Each generation has asked: what cannot yet be done, and where can we journey next? Where next?
We made a very quick journey over definitions shifting with every breakthrough. Expert systems of the 1970s–80s cast AI as encoded human knowledge, brittle but powerful in narrow domains. In the 1990s, Russell and Norvig unified the field around the concept of intelligent agents: systems that perceive their environment and act to maximize success. In the 2010s, machine learning reframed AI as systems that learn from data, while deep learning scaled this vision with neural networks, massive datasets, and compute.
A student asked: what about today? What powers ChatGPT? While we have lectures devoted to it, it was important to introduce them to today, the age of foundation models. Large Language Models like GPT process staggering amounts of text, generate fluent language, and increasingly operate across modalities. Some see them as “narrow” tools, statistical engines of prediction; others see them as early steps toward Artificial General Intelligence.
And so the question that Aristotle posed in logic, Descartes in mechanism, Leibniz in symbolic calculation, Lovelace in programs, Turing in computation, and McCarthy in definition (what is intelligence, and how do we make it artificial?) remains with us, and our answers shift with every decade, now seemingly with every year.
We ran a thought experiment. To ground ourselves, we turned to Russell & Norvig’s definition of AI: an intelligent agent receives goals and percepts from the world, and converts them into actions that change the world. Then we asked: how does ChatGPT fit this model? What is its world? What are its percepts? Who feeds it goals?
Slowly, the realization dawned: we are the world. Our prompts are its percepts. Its goals are the ones we set. And its actions are text exerted back upon us, changing how we think, learn, and decide. In that sense, ChatGPT is not acting upon the physical world but upon the human one. I could see the students’ eyes lighting up, bodies perking up just a bit more. They were gaining understanding, making sense of a rapidly moving world.
They concluded, rightly, that this makes ChatGPT a weak agent, a narrow form of AI, powerful within its bounds but far from Artificial General Intelligence; a clear reminder that even definitions must bend to reality, and that reality often reveals both power and limitation at once.
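The percept-goal-action loop we borrowed from Russell and Norvig can be sketched in a few lines. This is a toy illustration only, with the `respond` function standing in for whatever the model actually computes; it is not how ChatGPT is implemented:

```python
# Toy sketch of Russell & Norvig's agent loop: percepts in, actions out.
# In our classroom reading, the "world" is the human user: prompts are
# percepts, and generated text is the action exerted back on that world.

def respond(percept: str, goal: str) -> str:
    """Stand-in for the model: maps a percept to an action, given a goal."""
    return f"[{goal}] reply to: {percept}"

def agent_loop(percepts: list[str], goal: str) -> list[str]:
    """An agent converts goals and percepts into actions upon its world."""
    actions = []
    for percept in percepts:             # the agent perceives its environment
        action = respond(percept, goal)  # ...chooses an action...
        actions.append(action)           # ...which acts back on the world
    return actions

print(agent_loop(["What is AI?"], goal="be helpful"))
```

Even this caricature makes the class's point visible: the agent has no goals of its own, and no world beyond the percepts we hand it, which is what makes it a weak, narrow agent.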
To Think or Act: Like Humans or Rational?
To make sense of these shifting definitions of AI across history, we turned to Russell and Norvig’s classic taxonomy. It offers two dimensions: systems that think versus act, and those that do so like humans versus rationally. A neat little grid betraying big questions.
We asked: can you act without thinking? Reflexes came up, like touching a hot stove, ducking a ball, automatic, encoded behaviors. But then we pressed further. For complex actions, can they really be performed without thought? Which led, inevitably, to the deeper question: what even counts as thinking?
When I asked if ChatGPT “thinks,” the class split. Some said yes, you can see traces of thought in its outputs. Others hesitated; maybe it’s doing something, but not the kind of thinking we know. I told them they weren’t alone. AI researchers are divided too. As for me, I admitted that I lean toward “not thinking,” at least not in the way humans understand thinking.
From there, we shifted to the second axis: like humans, or rationally? The students quickly saw that humans do not hold a monopoly on intelligence. That realization lit up the room. I introduced them to one of my favorite modern sci-fi authors, Adrian Tchaikovsky, and his Children of Time series, which imagines intelligences very different from our own. We then asked: if rationality is what we prize instead, do we even understand our own? Humans often act irrationally, for reasons of emotion, culture, or morality. Should we want AI to copy that, or to rise above it?
The students went right to the heart of it: whatever we build, it must carry our morality and our values. Intelligence without them is not enough. And there we were, reproducing in one classroom debates that have been alive for centuries and that we are still having today.
What’s in a Definition?
We returned to our anchor question: if this thing is so hard, why bother defining it at all? Why wrestle with boundaries when AI seems to shift beneath us and defy definition?
The answers were rich. Some students saw definitions as compass points, guiding decisions about where to go next. Others stressed restraint: careful definitions can also tell us what not to do. One example they raised was the hidden cost of cheap human labor, a reminder that technology often shifts burdens rather than erasing them. I know we will return to this theme, and to other costs, again and again throughout the semester. And then some pressed even further: perhaps the deeper question is whether to do some things at all.
It was a moment of clarity. In defining, we are not merely describing what is. We are deciding what will be, and perhaps what should not be.
So, we brought it all together. To define is to draw a boundary. To say this counts, that does not. In AI, those lines matter profoundly. A definition is not just a dictionary entry. It is:
Meaning-making: shaping what society understands as intelligence, agency, and humanity itself.
Permission and restriction: opening doors for some systems, closing them for others. What we define as AI will be funded, taught, regulated, and adopted; what we exclude risks being marginalized.
Ethical direction: foregrounding values, whether AI is framed as mere pattern recognition at scale or as decision-making that affects human lives.
Future-shaping: a blueprint for aspiration, a line in the sand that says: this is what we aim to build, and this is what we refuse to create.
By Humans for Humans
I told students that my thinking had changed on this. As a computer scientist and AI researcher, I had always wanted definitions to be dry, void of lyricism and aspiration. But I am also a writer, a reader, a documenter of humanity. Over time, I had started seeing the power of ambitions and aspirations interweaving themselves in definitions. To ignore that is to miss their power. To engage with it is to recognize that definitions are never neutral.
In a field that defies definition and is increasingly looking to the next horizon, one thing remains true: definitions are made by humans. Humans write them, and with what they write, they control. We need to ask the deep questions: by whom, and for whom? These students, born into a world already woven with algorithms and recommendation engines, already see the act of defining as an act of agency. They are right. If they do not define AI intentionally, others will (policymakers, corporations, international bodies), shaping their future in ways they may not want.
The Journey Ahead
So, we begin. Each week we will revisit the boundaries, sometimes shifting, sometimes sharpening. We will learn by doing, by building, by critiquing. Students will not just absorb lectures; they will co-create the dialogue. We are learning together. We are challenging one another. The line is drawn.
We cover a lot in a week. I will not be able to convey it all, but I will share highlights from our journey here on Substack as best I can (grading will occasionally intrude). My hope is that readers beyond the classroom will walk with us, because one thing was clear to all of us in the classroom this week: AI is not just a technical field. It is a collective, continual negotiation over meaning, boundaries, and direction. By humans. Hopefully, for humans.


