Journey through AI: Weekly Lessons from the Undergraduate Classroom
AI Literacy as Agency
This fall I launched something new at George Mason University: UNIV 182 – AI4All: Understanding & Building Artificial Intelligence, the first campus-wide course in AI literacy, open to every undergraduate, regardless of major. It satisfies the Mason Core requirement in Information Technology & Computing, and, more importantly, it’s meant to lower the barrier to entry into AI for every student on campus. This is not an appreciation course. We understand, we apply, we critique, we build. This course has a rhythm. Join us!
We’re now in the final weeks, and something worth noting has emerged.
The Shift from Consumer to Builder
When I designed this course, I was deliberate about its structure. My goal: move students from understanding AI systems to building them. Not because building is inherently superior, but because the constraints of building (data quality, method selection, ethical trade-offs, feasibility) are what reveal what understanding actually requires.
I designed the final project carefully, breaking it into three structured checkpoints plus a debate. The checkpoints mirror early-stage product development. My goal: not to train entrepreneurs, but to force precision through a framework.
Checkpoint 1 required teams to establish: a precisely scoped societal problem (4-6 sentences), identified users and pain points, a data inventory with trustworthiness assessments, justified AI method selection, and initial ethical analysis. Teams assigned temporary roles: Product Lead, Data Lead, AI/Tech Lead, Ethics Lead, Pitch Lead.
This structure matters. Students learn that rigorous AI work begins with problem formulation, not tools.
We also held two debates, in which teams defended their problem statements, data practices, method choices, and feasibility claims. As I emphasized, the goal was not combat but surfacing assumptions before they calcified. When students must articulate why their approach is sound under scrutiny, ideas sharpen and solutions become more realistic.
Checkpoint 2 shifted to evidence. Teams provided prototype demonstrations, updated data plans accounting for bias and privacy, finalized method selection with technical justification, and expanded their ethics framework with severity ratings and mitigation strategies.
The pedagogical principle here: industry never starts with tools. The sequence is always Problem → Data → Constraints → Model. Students grasped that method selection is alignment work, not guesswork. They learned to move from “hey, let’s use this cool thing” to “here is why it makes sense to use this.”
Translation as Core Competency
The final deliverable, checkpoint 3, is a pitch deck. Again, not because I’m training founders, but because I believe that translation is fundamental to AI literacy. If you cannot articulate the problem you’re addressing, the data you’re using, the risks your system poses, why your method is appropriate, and the impact you expect, then you do not fully understand what you are building.
The deck structure draws from venture, policy, and product contexts: Team & Mission, Societal Problem, Market Opportunity, Data & Trustworthiness, AI Approach, System Architecture, Prototype Demo, Ethics/Privacy/Security, Impact & Feasibility, Next Steps.
This isn’t entrepreneurship education. It’s requiring students to make their work legible to stakeholders beyond the classroom, to think of users, communities, employers, society, and, ultimately, to see their course as much more than a grade.
What Students Discover
Through this process, students encounter what every competent AI practitioner learns:
Building with AI is easy. Building responsibly is hard. Building something people actually need is harder still.
Final presentations will be evaluated by invited leaders from industry, education, and nonprofits. The format mimics investor pitches deliberately, not to valorize that context, but because high-stakes presentation to external evaluators forces students to take their work seriously.
Observations
Somewhere in this process, learning by doing became learning by designing. Students don’t experience project work as separate from conceptual understanding. Instead, they experience it as the mechanism through which understanding deepens.
This isn’t novel pedagogy. It’s applying what we know works: structured inquiry, peer critique, external accountability, and the irreplaceable insight that comes from confronting real constraints.
AI literacy that produces agency looks like this: students who can move an idea from conception to responsible implementation, who understand that technical capability is necessary but insufficient, and who can defend their choices in technical, ethical, and practical terms.
The Space Between Lectures
There’s something else worth noting about how this course has operated, particularly given ongoing anxiety about AI’s impact on education.
A significant portion of the student work happens in class. Not as homework submitted into the void, but as active construction during dedicated class time. I structure the sessions, set the constraints, then move through the room, observing, questioning, redirecting when teams hit conceptual walls or drift toward solutions that sound good but won’t work.
This is studio pedagogy applied to AI. Students work at their tables in real time. They debate within their teams, test ideas against each other’s skepticism, iterate on the spot. The deliverables they submit at the end of each class session reflect the thinking that happened in that room, with me present to catch misunderstandings before they compound.
It’s an old model, the kind master craftspeople used in workshops, the kind studio arts professors still use today. You learn by making, under the eye of someone who knows what competent work looks like and can intervene when you’re about to spend three hours going the wrong direction.
This matters more now, not less. When AI tools can generate plausible-sounding work instantly, the pedagogy that survives is the pedagogy that happens in shared space, where the instructor can watch students think, can see where understanding breaks down, can distinguish between work that reflects genuine grappling and work that reflects skilled prompting.
The classroom becomes what it always should have been: a place where thinking is visible, where confusion can be addressed in real time, where the messiness of learning isn’t hidden in individual homework struggles but becomes part of the collective experience.
I’m using the oldest educational technology we have, direct instruction within collaborative work time, applied to the newest domain. It turns out that when machines get better at producing outputs but remain incapable of producing understanding, the workshop model isn’t obsolete. It’s essential.
A Note on Endings
Courses that ask students to build something real, rather than demonstrate recall, tend to produce a particular kind of closing dynamic. Students arrive expecting a conventional class. They leave having created systems they’ll need to defend, having made choices they’ll need to justify, having confronted constraints that don’t yield to effort alone.
I doubt most knew what they were signing up for. The gap between expectation and experience was, I think, productive.
This is what comprehensive AI education can look like: not producing founders, but producing thinkers capable of moving an idea responsibly from conception to implementation. People who understand that technical capability must answer to human needs.
We’re at the end now. The final presentations will mark both culmination and conclusion, that powerful moment when students make their work public, and when this particular cohort’s journey closes. There’s something bittersweet about semester endings like this. I will miss this cohort. That’s the cost of doing this work well; you build something real with students, and then they leave to build other things.
Next week’s post, the final one in this series, will document what happens when these teams face external evaluators. It’s a celebration, but also a proof point: this model works.
To my knowledge, UNIV 182 is the first campus-wide AI literacy course of its kind in Virginia, possibly nationally: designed not for CS majors but for every undergraduate, structured around building rather than consumption, and requiring students to grapple with the full stack: technical, ethical, societal. These students proved that the gap between who gets to understand AI and who gets shaped by it can close, if we’re willing to redesign what AI education looks like.
See you then.
Missed our other posts tracking the course? You can find them here:
Drawing the Line: Why Defining AI Matters
From Perceptrons to Patterns: When Students Start to Feel the Code
Building the Tower: From Tokens to Transformers in the Classroom

