Journey through AI: Weekly Lessons from the Undergraduate Classroom
Through Student Eyes: The Promises and Perils of AI
This fall I launched something new at George Mason University: UNIV 182 – AI4All: Understanding & Building Artificial Intelligence, the first campus-wide course in AI literacy, open to every undergraduate, regardless of major. It satisfies the Mason Core requirement in Information Technology & Computing, and, more importantly, it’s meant to lower the barrier to entry into AI for every student on campus. This is not an appreciation course. We understand, we apply, we critique, we build. This course has a rhythm. Join us!
This past week was one of reflection, assessments, and debates! I thought I would share what this young generation perceives about AI. First, a ten-thousand-foot summary of the topics and dimensions students explored in an individual assessment: a window into how the next generation is thinking critically about the technologies shaping their world.
As AI literacy becomes essential, these reflections offer a valuable pulse check on what informed engagement looks like in 2025.
Topics Explored by Students
Facial Recognition in Law Enforcement and Surveillance
Examined Clearview AI and other police uses of biometrics.
Concerns: Mass surveillance, racial bias, lack of consent.
Proposals: Bias audits, federal oversight, stricter limits on biometric collection.
AI Chatbots and NLP in Education & Healthcare
Explored Khanmigo tutors and medical chatbots.
Concerns: Privacy of minors, misinformation, consent gaps.
Proposals: Keep humans in the loop, develop low-resource classroom AI, restrict medical advice.
AI-Assisted Diagnosis and Robotics in Healthcare
From SOS (Second Opinion Systems) to Da Vinci surgical robots.
Concerns: Patient data security, training bias, legal ambiguity.
Proposals: Transparency protocols, opt-in consent, HIPAA updates for AI.
Generative AI in Art and Creativity
Investigated diffusion models scraping artist work.
Concerns: Copyright theft, economic harm, lack of attribution.
Proposals: Royalty systems, dataset transparency, opt-in licensing.
AI in Finance and Housing
Looked at fraud detection, rental pricing algorithms, and financial AI.
Concerns: Price fixing, tenant discrimination, data leaks.
Proposals: Stronger data governance, rent control, regular bias audits.
AI in Defense and Surveillance
Explored military applications and conflict monitoring.
Concerns: Civilian privacy, misuse in war zones.
Proposals: Human-in-the-loop safeguards, international treaties, privacy-by-design.
Retail Surveillance and Workplace Monitoring
Analyzed mokSa.ai in retail contexts.
Concerns: Employee profiling, lack of transparency, excessive monitoring.
Proposals: Ethical use policies, data minimization, transparency for workers.
What Stood Out to Me
Students aren’t anti-AI. They are pro-accountability. They see the potential and the promise but want clear ethical boundaries.
Privacy dominates: concerns about data collection, consent, and surveillance run through every conversation.
Bias and inequality surface again and again: whether in facial recognition or education, students notice how AI can deepen divides.
Their recommendations are pragmatic: policy fixes, bias audits, consent protocols. Students are thinking like both technologists and ethicists.
Innovation + ethics = the balance they demand.
And now a personal one. I invited one of my students, Jagan Yetukuri, to share his homework submission in its entirety. I had asked the students to make it personal. Jagan did. What follows are his words, with his permission.
AI-Powered Surveillance in Retail: An Ethical and Technical Analysis of mokSa.ai
Introduction
At a recent entrepreneurship event, I was introduced to an innovative platform that caught my attention due to its real-world application of AI for security and analytics. The platform, mokSa.ai, is a surveillance-audit solution that uses artificial intelligence to help retail businesses reduce losses from shoplifting and employee fraud. It offers features such as AI-powered customer count tracking, employee time management, and aisle heat mapping, giving store owners valuable insights into both security and operations. While mokSa.ai’s capabilities are compelling from a business perspective, they also raise important questions around information storage, exchange, security, privacy, and the ethics of AI surveillance. This report critically analyzes mokSa.ai’s approach to managing information in the retail sector and reflects on its societal implications.
AI’s Role in Information Management
Information Storage
mokSa.ai stores multiple data types: camera footage, customer movement patterns, employee check-in/check-out data, and AI-generated heat maps. This data may be stored locally or in the cloud, depending on client implementation. The technical challenge lies in handling large volumes of video data efficiently while also ensuring appropriate retention policies are in place. Storing identifying data such as facial features or employee metadata introduces further privacy and compliance burdens, especially under laws like GDPR or CCPA.
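The retention policies mentioned above can be sketched in a few lines of Python. This is purely illustrative (mokSa.ai's actual storage implementation is not public, and the file layout, extension, and 30-day window below are assumptions): a job that deletes footage older than a configured window and reports what it removed so the deletion itself can be logged.

```python
import time
from pathlib import Path

RETENTION_DAYS = 30  # hypothetical window; real values depend on law and contract

def purge_expired_footage(footage_dir: str, retention_days: int = RETENTION_DAYS) -> list:
    """Delete video clips whose modification time falls outside the retention window.

    Returns the paths that were removed, so the purge can be audit-logged.
    """
    cutoff = time.time() - retention_days * 24 * 3600
    removed = []
    for clip in Path(footage_dir).glob("*.mp4"):
        if clip.stat().st_mtime < cutoff:
            clip.unlink()
            removed.append(str(clip))
    return removed
```

In practice a job like this would run on a schedule, and the returned list would feed the same audit trail used for access events.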
Information Exchange
The platform transmits data:
Internally to store managers and head offices for operational review.
Externally, to third-party AI services or cloud storage providers, where those are used.
This creates risks of data interception, leaks via APIs, or misuse by external partners. Real-time alerts, such as when customer volume spikes in an aisle, require secure and fast communication channels, which must be protected from tampering.
Security
Given its sensitive nature, the data processed by mokSa.ai demands strong security:
Encryption for data in transit and at rest.
Access controls with role-based permissions to prevent internal misuse.
Audit trails to track who accessed what data.
However, vulnerabilities remain. An insider threat could misuse access privileges, while external hackers might target surveillance feeds. Without rigorous security audits and patching protocols, the system could expose sensitive customer and employee information.
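The role-based permissions and audit trail described above can be combined in a minimal sketch. The roles and permissions here are hypothetical, not mokSa.ai's real access model; the key idea is that every access attempt, allowed or denied, is recorded:

```python
from datetime import datetime, timezone

# Hypothetical role-to-permission map; a real system would load this from policy.
ROLE_PERMISSIONS = {
    "store_manager": {"view_heatmaps", "view_footage"},
    "head_office":   {"view_heatmaps"},
    "auditor":       {"view_audit_log"},
}

audit_trail = []  # in production: append-only, tamper-evident storage

def request_access(user: str, role: str, permission: str) -> bool:
    """Grant access only if the role holds the permission; log every attempt."""
    allowed = permission in ROLE_PERMISSIONS.get(role, set())
    audit_trail.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "role": role,
        "permission": permission,
        "allowed": allowed,
    })
    return allowed
```

Logging denials as well as grants is the point: an insider probing for data they shouldn't see leaves a trace even when every request fails.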
Privacy
mokSa.ai’s operation raises critical privacy concerns:
Customers may be unaware they’re being tracked or analyzed via AI.
Employees could feel over-monitored, especially if time tracking is overly detailed or used punitively.
Privacy is further threatened if data is stored beyond its useful window or shared for secondary purposes. The lack of clear, informed consent mechanisms can make the system feel intrusive, even if technically legal.
Ethical and Societal Implications
Ethical Concerns
The core ethical dilemma is surveillance without consent. mokSa.ai collects continuous behavioral data from people who may not have actively agreed to be recorded or analyzed. Transparency is often low: customers may only see a small sign indicating video surveillance, with no mention of AI analysis.
Another concern is algorithmic bias. If mokSa.ai’s AI is trained on unbalanced datasets, it may undercount or misidentify people of certain races, clothing types, or movement patterns. This can lead to unfair treatment or incorrect alerts, disproportionately affecting marginalized groups.
Additionally, there’s a power imbalance: companies use AI to track workers’ every move, while employees may not have a say in the matter or access to how the data is interpreted.
Societal Impact
Positive Effects:
Retailers benefit from lower losses due to theft and fraud.
Data-driven decisions improve store layouts, staffing, and customer service.
More efficient operations can lead to cost savings and potentially better prices.
Negative Effects:
Employees may feel distrusted and dehumanized under constant surveillance.
Customers may begin to avoid stores that feel overly monitored.
Surveillance normalization may erode public expectations of anonymity in public spaces.
This technology reflects a growing tension between efficiency and ethics. While legal frameworks exist in some regions, the lack of standardized AI surveillance laws in countries like the U.S. allows wide variation in practice.
Recommendations
1. Implement Transparency & Consent Protocols
mokSa.ai should integrate tools that:
Inform users (employees and customers) about data collection and use.
Provide employees access to their own data and insights.
Use clear signage that specifies AI analytics, not just camera use.
This improves ethical alignment by giving individuals a sense of control, restoring trust, and reducing legal risk.
2. Design for Privacy and Data Minimization
Where possible, mokSa.ai should:
Use aggregate data (e.g., customer counts, heat maps) rather than identifying individuals.
Mask or blur faces in footage unless specific use cases demand identity tracking.
Automatically delete non-essential footage within a short retention window.
These steps reduce the system’s privacy footprint while maintaining its effectiveness. They also limit the damage from any breach, since anonymized data carries lower legal and ethical sensitivity.
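The aggregation step can be sketched as follows (an illustrative example, not mokSa.ai's actual pipeline, and the event fields are assumptions): raw per-person detection events are collapsed into per-aisle hourly counts, so the stored record contains no identities at all.

```python
from collections import Counter

def aggregate_visits(detections):
    """Collapse raw detection events into (aisle, hour) counts.

    Each detection is a dict like {"person_id": ..., "aisle": ..., "hour": ...}.
    The person_id is consulted only to avoid double-counting within the batch;
    it never appears in the output that gets stored.
    """
    seen = set()
    counts = Counter()
    for d in detections:
        key = (d["person_id"], d["aisle"], d["hour"])
        if key not in seen:  # count each person once per aisle per hour
            seen.add(key)
            counts[(d["aisle"], d["hour"])] += 1
    return dict(counts)
```

Only the aggregated counts would be persisted; the raw detections, and with them the identities, can be discarded as soon as the batch is processed.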
Conclusion
mokSa.ai exemplifies the power and complexity of AI in retail surveillance. Its tools help businesses reduce losses and operate more efficiently—but they also challenge ethical boundaries around privacy, consent, and fairness. To fulfill its potential responsibly, mokSa.ai and similar platforms must invest in transparent communication, secure data practices, and privacy-focused design. With these safeguards, society can benefit from innovation without sacrificing fundamental rights.
References (APA Format)
Maurer, R. (2024). AI Surveillance in the Workplace Linked to Employee Resistance, Turnover. SHRM. https://www.shrm.org/topics-tools/news/employee-relations/ai-surveillance-in-the-workplace-linked-to-employee-resistance
Scylla AI. (2023). Combating Shoplifting with AI-Powered Video Analytics. https://www.scylla.ai/combating-shoplifting-with-ai-powered-video-analytics
NICE Actimize. (2024). The Ethics of AI in Monitoring and Surveillance. https://www.niceactimize.com/blog/fmc-the-ethics-of-ai-in-monitoring-and-surveillance/
Missed our other posts tracking the course? You can find them here:

