Journey through AI: Weekly Lessons from the Undergraduate Classroom
From Lectures to Live Debates: Two Things can be True
This fall I launched something new at George Mason University: UNIV 182 – AI4All: Understanding & Building Artificial Intelligence, the first campus-wide course in AI literacy, open to every undergraduate, regardless of major. It satisfies the Mason Core requirement in Information Technology & Computing, and, more importantly, it is meant to lower the barrier to entry into AI for every student on campus. This is not an appreciation course. We understand, we apply, we critique, we build. This course has a rhythm. Join us!
With midterm grading finally behind me, I now have a chance to reflect and share with you the class debates that followed the “Through Student Eyes: The Promises and Perils of AI” homework. The theme for the debates was “Two Things Can Be True.”
From Conversation to Collaboration
The debates were never an afterthought. They were part of the rhythm from the start, woven into the architecture of the course. Students knew that the first homework would not just be read; it would become the seed of a larger exchange that would ask them to listen, reason, and build with one another.
So, we began with conversation. On that first day devoted to exploration, the room organized itself into small groups huddled around shared curiosity. Students compared notes on what they had read and noticed in the world of AI: the breathtaking and the troubling, the futuristic, the scary, the hyped, and the familiar. They worked through readings and media I had shared, teasing out the patterns that connect innovation with consequence. They carried the class. I was simply a resource, walking around, answering questions, occasionally prodding.
Out of those early discussions came Homework 1, “Through Student Eyes: The Promises and Perils of AI.” Each student wrote about AI’s promises and perils as they saw them. It was part reflection, part position paper, and fully their own. They were not asked to choose a side but to hold the tension and to see how, in AI as in life, two things can be true at once.
When the students returned, they were ready to deepen the conversations in preparation for live debates. A second in-class session invited them to reorganize, to find new partners or stay with their groups, to settle on a focus and a framing question that mattered. Preparing for the debates transformed the work. Students moved from reflection to resolve, translating their individual ideas into shared inquiry.
Each team named itself (they had fun with this one) and set its structure, one we all agreed on: a (silver) moderator, a blue sub-team (in favor), and a red sub-team (against). Moderators refined the framing questions and kept the team focused and working; sub-teams built arguments grounded in evidence and citation.
Over a week, ideas turned into outlines, and outlines into conviction.
By the time debate day arrived, the classroom had transformed. I had rerun the structure and timing in my head so many times to make sure all went smoothly, but I have to confess I had never done this before, and I was anxious to make sure it would be a good experience for the students. I told them as much. I felt a great responsibility, and I could see that they held it with me. They had prepared so hard.
Generated with GPT-5 on October 17, 2025.
One thing was clear to me: debate day would feel less like a class and more like a symposium. It would be spirited, intentional, and alive. I motivated the students by telling them we would vote on the top team (QR code ready), and while everyone would receive something, the winning team might get something extra. Think notepads versus notebooks. Yes, the notepads were super popular, and I wish I had brought more.
But back to that debate class. The mic went around. Students pitched, questioned, and then voted. Preserving that sacred space, I will only share with you aggregates of the conversation. You just had to be there.
Inside the Debates
The Bots Plots: AI and Facial Recognition
Framing question: Is it better to adopt a wider use of AI-directed facial recognition in society? The blue sub-team supported adoption, citing improved public security, convenience, and economic benefits. They referenced examples such as TSA biometrics identifying threats and missing persons, smartphone Face ID for easier authentication, and a projected $12 billion global facial-recognition industry by 2028. The red sub-team opposed wider use, pointing to privacy violations, misidentification (especially of women of color), and the danger of irreversible biometric data breaches that could lead to identity theft. This debate made clear how quickly the language of safety and convenience gives way to questions of surveillance, fairness, and the permanence of digital identity.
Arty Fishal Intelligence: AI in Artistic Creation
Framing question: Should AI be permitted in competitive artistic creation against humans? The red sub-team argued that AI art “steals” from human artists because models are trained on copyrighted works without consent. They cited the case of cartoonist Sarah Andersen’s style appearing in Midjourney and explained that AI-generated works cannot be copyrighted under U.S. law. They also emphasized job loss among illustrators and translators due to generative AI and noted that scraped datasets from artist portfolios reproduce human creativity without credit or compensation. The blue sub-team contended that AI can efficiently generate art, offering low-cost tools and new forms of human–AI collaboration for creative industries such as gaming and advertising. What struck me was how deeply students engaged with the ethics of ownership, seeing AI not just as a tool, but as a mirror for what we value in human creation.
To be honest, going into this debate, I thought the blue sub-team would have a harder time. I was pleasantly surprised to see them not only hold their own but make very strong arguments that surfaced foundational questions on both the nature and process of art. If there was one team that exemplified the maturity and richness I wanted to see out of this experiment, it was this one. And no, they did not win the popular vote.
Team Teletubbies: AI in Telehealth
Framing questions: How will AI affect doctors in telehealth, and how will it affect society? The red sub-team warned that incorrect symptom descriptions from patients could mislead AI systems, risking wrong diagnoses. They raised cybersecurity and data-breach concerns, citing the rise of hacking incidents since 2019. They also noted bias toward wealthier populations whose data dominate training sets, leading to unequal quality of care, and difficulties for rural or low-income regions lacking internet access. The blue sub-team highlighted how AI assists doctors in detecting cancers and other conditions, reducing diagnostic error rates from 11.3 to 6.8 percent. They also emphasized personalized medicine and examples of telehealth expanding rural care in India while supporting privacy through anonymization and encryption techniques. This team reminded me how quickly conversations about AI in healthcare turn from technology to trust, and how access remains a key measure for us humans.
Cutting Edge: AI in Surgical Training
Framing question: Should AI be allowed to assess a student’s surgical performance and grant competency certifications, or should human educators always be the final judge? The blue sub-team argued for AI’s precision and consistency. It can analyze incision angles and response times, reduce human bias and fatigue, and provide standardized grading. The red sub-team cautioned that anonymized data can still be re-identified and that AI lacks contextual understanding and emotional intelligence. They warned that technical errors could wrongly certify unqualified students. The moderator concluded that AI should serve as a supportive tool while human educators remain the final decision-makers. The moderator’s balanced conclusion, advocating for a hybrid approach, captured the tone of the entire course: precision matters, but judgment still belongs to us.
Debates of Defense: AI in Autonomous Weapons
Framing question: What should be the use of AI in autonomous weapons? The blue sub-team argued that autonomous weapons could reduce casualties, minimize human error, and improve efficiency through faster decisions and predictive maintenance, such as helicopter-engine failure prediction. The red sub-team emphasized the danger of misidentifying civilians, AI’s brittleness in unfamiliar conditions, and lack of accountability. They also discussed the risk of authoritarian misuse and human-rights violations when such systems are deployed without regulation. This was the team that had the most memorable quotes for me, and, no, I cannot tell you. But this team brought forward some of the most compelling language of the day, invoking civilian risk, accountability, and the moral weight of human decision-making, a really powerful reminder that technical capacity never absolves ethical responsibility.
Team 10: AI in Education
Framing question: Should AI be allowed in K-12 classrooms? The blue sub-team promoted AI as a study aid offering personalized learning, automated grading, deadline reminders, and accessibility for students with disabilities. The red sub-team warned of developmental risks from reduced human interaction, potential privacy violations through surveillance tools, and reinforcement of existing biases in educational data. They stressed that AI should supplement rather than replace teachers. I was impressed by how students could see both the promise and the peril so clearly, recognizing that education is shaped as much by relationships as by tools. This was the team that won, mainly due to a savvy moderator who learned in real time how to present arguments; the benefit of going last.
Reflections
The students loved this experiment. They told me as much. The word “amazing” was thrown around more than once. They loved being the protagonists of their own learning. They also loved the process. The discussions had weight but also play. The seriousness was balanced by laughter and quick thinking.
And me? I wish I could bottle it. I hope I can still replay it in my mind when I am old. You do not get these moments often in your life.
I watched the debates unfold with equal parts pride and wonder. Students who had entered the semester unsure of how AI even works were now dissecting studies, weighing evidence, and debating ethics with poise and conviction. The performances were stellar, yes, but what moved me most was the tone. They argued not to win, but to understand together. They gave life to the course motto: in this course, we learn together.
What are we doing now? Now we are getting more technical and entering the world of neural networks. We are going through the foundations and the architectures, and will cap this technical foray with transformers. And after that, the students will enter a creative phase, turning their understanding and insights about AI into AI-assisted Build-it projects of their own.
The rhythm of UNIV 182 continues: conversation, reflection, collaboration, creation.
And always, curiosity leading the way.
Join us.
Missed our other posts tracking the course? You can find them here: