Journey through AI: Weekly Lessons from the Undergraduate Classroom
DIY: Make It Personal
This fall I launched something new at George Mason University: UNIV 182 – AI4All: Understanding & Building Artificial Intelligence, the first campus-wide course in AI literacy, open to every undergraduate, regardless of major. It satisfies the Mason Core requirement in Information Technology & Computing, and, more importantly, it’s meant to lower the barrier to entry into AI for every student on campus. This is not an appreciation course. We understand, we apply, we critique, we build. This course has a rhythm. Join us!
You Have to Build to Truly Understand
After spending weeks building the tower of tokens and the transformer, we entered the DIY module with a simple, ambitious goal: have students synthesize their technical understanding into something personal they can build themselves, then use that as a lens to examine data security, privacy, copyright, authorship, and the rest of the ethical landscape they’d been studying in theory.
To prepare them, we brought in practitioners from Google and Microsoft, whose visits bracketed a “create with AI, then critique it” homework assignment.
Preparing to Build
Prototyping with Words, Not Code (Google)
Mike Snodgrass, a Google AI Specialist, showed students how no-code tools can turn intent into working apps. This was rare territory for an intro AI course: students saw that governance, privacy, and copyright aren’t afterthoughts but design constraints from the first prompt.
The questions and reactions that emerged told me everything:
“An image-to-poem generator could be a gateway for propaganda. How do you protect against that?”
“I want to build something that brings education to places in the world that are difficult to reach.”
“Opening up innovation like this allows ideas that would never get past boardrooms to surface.”
“The moat is you, your team, your passion, your deep understanding, and your network.”
Mike called the classroom “a learning digital sandbox.” That’s exactly what UNIV 182 is designed to be: pair creation with critique, from ethics to evaluation, so students leave with both literacy and judgment.
Prompt-a-thon with Copilot (Microsoft)
Chris Ingeholm and Nicholas Connon from Microsoft led a hands-on session on prompting and agent building with the Copilot suite. Students learned how grounding, privacy, and Responsible AI principles connect to real workflows, and how Copilot’s agent builder can turn a well-structured prompt into a working teammate.
The energy came through in their questions:
“How does grounding shape accuracy?”
“What does ‘responsible by design’ actually mean when real data, security, and decisions are on the line?”
That’s the course in miniature: creation and critique working side by side, so students leave with both skills and judgment.
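Since grounding came up in both guest sessions, here is a toy sketch of the idea, not Copilot’s actual mechanism: the model is handed retrieved source text and instructed to answer only from it. Every document, name, and function below is invented for illustration.

```python
# Toy illustration of "grounding": constrain answers to retrieved source
# text instead of the model's free recall. Not Copilot's real mechanism;
# all documents and names here are invented.

DOCS = {
    "syllabus": "UNIV 182 meets weekly and Homework 3 is a critical creation.",
    "policy": "Student uploads are not used to train the model.",
}

def retrieve(question: str) -> str:
    """Toy retriever: return the doc sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(DOCS.values(), key=lambda d: len(q_words & set(d.lower().split())))

def grounded_prompt(question: str) -> str:
    """Wrap the question so the model must answer from the source or abstain."""
    return (
        "Answer using ONLY the source below; say 'not found' otherwise.\n"
        f"Source: {retrieve(question)}\n"
        f"Question: {question}"
    )

print(grounded_prompt("What is Homework 3?"))
```

Grounding shapes accuracy precisely because the model’s claims can now be checked against a visible source, which was the heart of the students’ question.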
Meanwhile, They Were Building
While our guests were visiting, students were working on Homework 3: “Create with AI: A Critical Creation and Reflection.” The assignment asked each student to:
Use a generative AI tool to create a new work (art, story, poem, audio, video, hybrid media)
Document their creative process (prompts, iterations, successes and failures)
Critically evaluate their output for quality, originality, bias, representation, plagiarism risks, ownership, privacy, and transparency
Reflect on how the experience reshaped their understanding of creativity, authorship, and AI
Sample prompts included: a sonnet in Dostoevsky’s voice about OpenAI Sora; a debate between Cleopatra and Ada Lovelace about AI art; a “lost” Shakespeare monologue on digital immortality; an AI-generated music track paired with a DALL·E visual.
By the end, students would have both created with AI and critiqued it, turning abstract lectures on data, models, and ethics into something lived and examined.
What They Built
I had anticipated this would be a seminal assignment. Even so, the range startled me: stage monologues about digital immortality, poems about unfamiliar emotions, trivia apps, tactics games, pet portraits, nature photography, superhero selfies, and more.
On Authorship and Ownership
One student co-wrote a stage monologue about uploading your mind to live online forever. The final piece centered on a “mirror made of wires” and insisted that safety isn’t just encryption but “the right to say no, and the right to end.” From there, the student worked through what consent, watermark-stripping, and “assembled from data” labeling would need to look like if we built memorial bots that speak in our voices after we die.
Another tried to get ChatGPT to write a poem about sonder, the feeling that everyone else is living a life as vivid as your own. They loved the result but didn’t trust it. They reverse-searched lines, worried about hidden training texts, and decided they’ll only use AI for synonyms going forward, never pasting entire original poems into a model they now suspect might reuse them for someone else.
A third asked a chatbot to write a rant poem in the voice of the architect of modern electricity about electric vehicles and corporate greed. The poem rails against “batteries fat with earth’s stolen blood” and “corporate kings” who turned visionary ideas into luxury toys. The student saw the tone as “robotically majestic” and refused to claim authorship: “That would be like telling a human to write a poem and then saying I wrote it.” They’ll use AI only for brainstorming now, not finished work.
On Capability and Limits
One of the most moving projects came from a student trying to create a family pet portrait: thirteen animals, some long gone, reunited in a group scene. They cycled through DALL·E, Copilot, Leonardo, and more, fighting broken anatomy, wrong species, missing tongues, and uncooperative “reference image” tools. In the end, they produced a single lovely (but still not quite right) portrait of one dog and wrote a long post-mortem on failure.
The confession struck me: based on what companies claimed, they’d thought this would be easy. They came away disappointed but clear-eyed about current image generators’ capabilities. The student also noted that IP worries feel different when a generated personal pet might secretly match a Westminster show dog, and that uploading personal photos is a privacy tradeoff they’re only comfortable making for pets, not people.
Another student used Google AI Studio to have a model code a dungeon-crawler from scratch, then iteratively added UI, combat rules, item durability, and shop logic through plain-language commands. The result worked, but the student kept circling the same questions: where did this code really come from? Is it stitching together someone else’s patterns? How do you trust a system that could, in principle, be programmed not to tell you the whole truth about its training?
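The student’s unease is easy to reproduce. A feature like “item durability,” requested in one plain-language sentence, tends to come back as code like the hypothetical sketch below (invented for illustration, not the student’s actual output), and the same pattern appears in countless published games, which is exactly their question: is the model authoring this, or retrieving it?

```python
# Hypothetical sketch of the kind of item-durability logic a model might
# generate from a one-line request; not the student's actual output.

from dataclasses import dataclass

@dataclass
class Item:
    name: str
    durability: int  # remaining uses before the item breaks

    def use(self) -> bool:
        """Spend one durability point; return True while the item survives."""
        if self.durability <= 0:
            return False
        self.durability -= 1
        return self.durability > 0

sword = Item("rusty sword", durability=3)
while sword.use():
    print(f"{sword.name} holds ({sword.durability} uses left)")
print(f"{sword.name} breaks!")
```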
On Bias and Representation
Several students discovered bias not through lecture but through their own creations resisting them.
One used Google AI Studio to create a sliding timeline portrait of themselves from the 1900s through the 1950s. Each decade got a stylized photograph. They admired the multimodal blending, but their critique cut deeper: every version came out as a well-off young white man in a crisp suit, reinforcing a narrow, classed, racialized default of who “belongs” in those decades. The student also asked why stronger privacy requires paying for a higher tier, and what it means that coding “is about 50% AI and 50% human” even inside companies.
Another student, who loves hiking, used ChatGPT’s image tools to generate realistic scenery and critiqued it with a hiker’s eye: the shadows are eerily right; the fall colors are beautiful but too uniform; the whole scene quietly defaults to a European/North American alpine aesthetic. The project ended in a nuanced place: AI as “synthesizer, not originator,” useful for exploration but entangled with opaque data sources and privacy risks.
A student who generated a painted scene loved the hand-painted aesthetic but noticed the default character was a white, Western-looking woman. The system quietly reinforced one cultural norm unless they worked hard to override it. They also worried about uploading personal photos and about AI remixing copyrighted material without consent.
On Joy and Unease
One student, hungry and in a good mood, asked Copilot to make a gummy bear with a closed smile hugging another gummy bear. What came back was a glossy green bear cradling a red one. No colors were specified, but the image was instantly recognizable as the iconic green gummy from an old show they remembered. They were both delighted and unsettled: had the model quietly pulled from that cultural reference? Even the smile needed careful steering: open-mouth versions looked “scary.”
Another turned a stadium selfie into a superhero collage, flying over a city in Mason green with a golden retriever sidekick at sunset. The reflections were light and delightful: “I made myself a superhero,” not by drawing but by steering.
On Complex Productions
One student orchestrated a mini Hong Kong–style martial arts film: ChatGPT wrote the rooftop fight screenplay; Mootion turned it into 3D animation; Suno generated the soundtrack; CapCut stitched it together. The visuals were stunning for a student project, but everything frayed beneath: characters changed appearance, props vanished, continuity broke, voices were weak, and the system failed to hold a consistent world. The student worried aloud about what happens when these tools do get coherent, what that means for stunt performers, martial artists, and musicians, and called for regulation and transparency around training data and labor.
Another used ChatGPT to fill in a missing scene in a beloved book, imagining conversations between two main characters. The AI did something thrilling at first: it turned vague, half-felt images into vivid dialogue and atmosphere. But the student could still feel the gap. ChatGPT could imitate tone and structure but never quite reach the original author’s depth. They came away thinking of authorship as “60% ChatGPT, 40% me,” and concluded AI literature is a vivid extension of imagination that still can’t create true masterpieces and probably shouldn’t be published without careful crediting.
On Practical Tools and Ethics
Several students built functional tools, which surfaced different tensions.
One used Gemini to design an AI trivia app from their own lecture notes, complete with multiple-choice questions and reward stickers. The model stubbornly refused to draw a proposed “bad mushroom” penalty sticker. The student went deep into Google’s privacy policy, noticing that unpaid users’ inputs can be read by human reviewers and reused to “improve services.” They came away determined to treat AI as a prototype engine, not a neutral black box.
Another built a turn-based tactics game inspired by a popular title, then systematically probed the bot’s weaknesses: slow turns, bad tactical choices, terrain mechanics that quietly copied genre staples like forests reducing movement and mountains blocking paths. The project turned into a sharp critique of originality and IP in AI-assisted game design.
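Those terrain staples are trivially reproducible, which is part of what made the critique sharp. A minimal sketch of the convention the bot recreated might look like this (all names and numbers are hypothetical, not the student’s code):

```python
# Minimal sketch of genre-staple terrain rules: forests slow movement,
# mountains block it. Hypothetical values, not the student's actual game.

TERRAIN_COST = {
    "plains": 1,       # baseline cost per tile
    "forest": 2,       # forests reduce effective movement
    "mountain": None,  # mountains are impassable
}

def can_traverse(path: list[str], move_points: int) -> bool:
    """Check whether a unit's movement budget covers a sequence of tiles."""
    for tile in path:
        cost = TERRAIN_COST[tile]
        if cost is None:          # impassable terrain ends the move
            return False
        move_points -= cost
        if move_points < 0:       # budget exhausted mid-path
            return False
    return True

print(can_traverse(["plains", "forest", "plains"], move_points=4))  # True
print(can_traverse(["plains", "mountain"], move_points=10))         # False
```

When conventions this generic fall out of a few lines of code, the line between homage and copying blurs, which is exactly the originality-and-IP question the student raised.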
A student athlete built a training tracker that could juggle practices, games, positions, and goals. Three prompts later, they had a working prototype. Their critique cut straight to information security and ethics: the app quietly assumed a Western male body, ignored gender and age differences, and would collect sensitive health data with little transparency about where it goes. They described themselves as “director” rather than author (“the AI wrote most of the code and UI”) and ended convinced that AI can recombine existing ideas creatively but not yet originate something truly new.
On Privacy and Power
One student had ChatGPT write a dialogue between a device and its half-asleep user. The device calmly lists what it knows: deleted searches, GPS traces, hesitation before unsent messages, backup copies that survive “delete.” It ends by asking, “Do you still trust me?”
The student’s analysis pointed out how Western the framing was (clouds, apps, recommendation systems), leaving out communities for whom surveillance means police, borders, and state power. They also underlined the parallel between the device and AI itself: both operate behind layers we can’t see, and both feel confident and “neutral” while reflecting biases and data sources we’ll never fully audit. They’d use AI again, but only if they could be transparent about how much of the work came from the model.
What They Learned
These projects did exactly what the assignment was designed to do. Students didn’t just use generative AI; they pressed on its edges. They questioned process and outcomes with sophistication.
Who trained this system? Whose voices are inside it? What am I giving up when I lean on it? How can I trust it when I don’t see the system prompts or know what paths it took? And where, in all of this, does human creativity still stand alone?
One student’s reflection captured the assignment’s goal perfectly. At the start of the course, they had been open to the idea that creativity could come from anywhere, including from AI. By the end of this experience, they were less sure. The firsthand encounter had been unsettling and eye-opening.
That was exactly the point: to give students their own experiences to guide them.
What They’re Doing Now
Through a guest lecture from Jesse Kirkpatrick, a Mason faculty member and Co-Director of the Mason Autonomy and Robotics Center, students are digging deeper into what Responsible AI might look like in practice.
The class has entered its final phase. Students are organizing as startups or nonprofits. Their charge: build something both impactful and responsible.
First, they’ll debate their ideas and stress-test the problem, stakeholders, solution, and risks. Next, they’ll present prototypes and probe what works, what breaks, and what harms. Finally, they’ll showcase their agents in front of industry leaders, who will also coach them on translating this work into portfolios, résumés, and next steps.
I’ll write about that end-of-semester sprint next. It’s a mad dash to the finish line, but at least I’m caught up on my grading.
What This Course Actually Achieves
(In Case You Missed It)
I took on this challenge deliberately. As an AI researcher and educator, I wanted to design something that would introduce AI to all undergraduates, not just computer science majors, in a way that builds genuine literacy and critical judgment. I wanted students who could participate meaningfully in the conversations we’re having about AI, not just consume headlines about it.
What emerged from this homework assignment proved the model works. The sophistication and depth of understanding these students demonstrated rival what I hear from panelists at top AI/ML conferences. And I should know; I just returned from several panels this past week.
These are not computer science students cherry-picked for technical aptitude. These are undergraduates from every major on campus: business, nursing, humanities, social sciences, arts. They came in with curiosity and are leaving with the ability to interrogate systems, recognize bias in their own outputs, trace implications from data collection through deployment, and articulate the tensions between innovation and responsibility.
The pedagogical innovation here is straightforward but powerful: you cannot teach AI literacy through lectures alone, and you cannot teach it through building alone. You need both, in conversation with each other. Build something that matters to you, then interrogate every assumption it rests on. Make it personal, then examine what “personal” even means when the tool you’re using was trained on millions of uncredited works, optimized on patterns you can’t see, and deployed with privacy policies you didn’t write.
This is what AI education for everyone can look like. Not a survey course that asks students to appreciate AI from a distance. Not a technical course that gates participation behind prerequisites. A course where students build working AI applications while simultaneously developing the critical frameworks to evaluate them, where “understanding” means both how it works and what it does in the world.
That’s the achievement I’m most proud of. These students are ready for the conversations we need to be having.
I will leave you with one paraphrased statement from a student (there are many I could share): “This [chatbot] thing does not have an original thought of its own.”
See you in the next post.
Missed our other posts tracking the course? You can find them here:
Journey through AI: Weekly Lessons from the Undergraduate Classroom
From Perceptrons to Patterns: When Students Start to Feel the Code
Building the Tower: From Tokens to Transformers in the Classroom