๐ŸŽ‰ Now Enrolling for Spring 2026  ยท  Saturdays in Markham  ยท  Reserve Your Spot โ†’
Back to Blog AI & Future

What Kids Actually Learn in an AI Class (It's Not Just ChatGPT)

The Misconception About AI Education for Kids

When most people hear "AI class for kids," they picture children learning to write prompts for ChatGPT, or generating images with DALL-E, or using AI tools to complete school assignments faster. That's not what we do โ€” and it's not what will actually matter in ten years.

Using AI tools is easy and getting easier every year. Understanding how AI systems are built, trained, and where they fail โ€” that's the skill with lasting value. Our AI curriculum is about building and understanding, not just using. The students who leave our program aren't just users of AI. They're people who understand it.

Week 1โ€“2: What Is Intelligence?

We start every AI course with a philosophical question that sounds deceptively simple: what does it mean for something to be "intelligent"? Can a calculator think? Can a chess program understand chess? What's the difference between following rules and genuinely reasoning?

These questions are not academic throwaways. They shape how students think about AI capabilities and limitations for the rest of the term and beyond. A student who has genuinely wrestled with "is this system understanding, or pattern-matching?" will never again uncritically accept an AI output as authoritative.

We also introduce the history: from Turing's imitation game to today's large language models. Students learn that "AI" has meant very different things at different times, and that the current moment is genuinely unprecedented โ€” but also genuinely limited in specific ways.

Week 3โ€“4: Training Your First Model

Students use Google's Teachable Machine โ€” a free, browser-based tool โ€” to train their own image classifier from scratch. No code required in the first iteration. They take photos with their webcam: thumbs up vs. thumbs down, scissors vs. paper vs. rock, their face wearing glasses vs. not wearing glasses.

They train the model, test it with new images, and observe carefully where it fails. This is where the most important learning happens. Why does the model misclassify this image? Oh โ€” the training photos all had the same background. Why does it work for some people but not others? Oh โ€” most training images were of the same person.

The hands-on discovery of "garbage in, garbage out" is more powerful than any lecture could be. Students genuinely understand, at an experiential level, why training data quality matters.
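To see why this happens, it helps to picture a classifier at its simplest. The sketch below is not Teachable Machine itself — it's a toy 1-nearest-neighbour classifier over two made-up image features, with the classroom flaw baked in: every "thumbs up" training photo happened to have a bright background.

```python
import math

# Each "photo" is reduced to two invented features:
# (background_brightness, finger_angle) -> label.
# The flaw: all thumbs-up examples share a bright background.
training = [
    ((0.90, 0.8), "thumbs_up"),
    ((0.95, 0.7), "thumbs_up"),
    ((0.20, 0.1), "thumbs_down"),
    ((0.30, 0.2), "thumbs_down"),
]

def classify(features):
    """Return the label of the closest training example (1-nearest-neighbour)."""
    return min(training, key=lambda ex: math.dist(ex[0], features))[1]

# A genuine thumbs up — but photographed on a dark background:
print(classify((0.25, 0.8)))  # -> thumbs_down: the model learned the background
```

The model never saw a dark-background thumbs up, so the background ends up carrying more weight than the hand. That's "garbage in, garbage out" in four training examples.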

Week 5โ€“6: Exploring Bias

This is the unit that consistently produces the most powerful moments in our AI curriculum. Students deliberately train models with biased data and observe the biased outputs โ€” then we discuss what this means in the real world.

We look at documented cases: facial recognition systems with higher error rates for darker-skinned faces, because training datasets over-represented lighter-skinned images. Hiring algorithms that learned to penalize resumes containing the word "women's" (as in "women's chess club") because historical hiring data reflected historical discrimination.

"When a 10-year-old realizes that an AI system can be unfair not because it's evil but because of the data it was trained on, something clicks that they never forget."

Students leave this unit with something that's genuinely rare and valuable: a mental model of how AI systems can cause harm without any bad intent from their creators. They understand unfairness as a design and data problem, not a moral failing โ€” which means they understand how to address it.
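A toy version of that hiring case can be built in a few lines. This sketch uses invented data, not the real system: a word-scoring model learns weights purely from biased historical hiring decisions, with no ill intent anywhere in the code.

```python
from collections import Counter

# Invented history: past outcomes reflect past discrimination,
# not candidate quality.
history = [
    ("chess club captain", True),          # hired
    ("robotics team captain", True),       # hired
    ("women's chess club captain", False), # rejected
    ("women's robotics team", False),      # rejected
]

hired, rejected = Counter(), Counter()
for resume, was_hired in history:
    (hired if was_hired else rejected).update(resume.split())

def score(resume):
    """+1 for each word seen more often in hired resumes, -1 for rejected."""
    return sum(
        (hired[w] > rejected[w]) - (rejected[w] > hired[w])
        for w in resume.split()
    )

print(score("chess club captain"))          # -> 1
print(score("women's chess club captain"))  # -> 0, one point lower for one word
```

Identical qualifications, one word of difference — and the model penalizes it, because the data told it to. That's the lesson of the unit in miniature.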

Week 7โ€“8: Building a Recommendation System

Students build a simplified recommendation engine โ€” the kind of system that powers Netflix, Spotify, and YouTube. They implement both major approaches: collaborative filtering ("people who liked X also liked Y") and content-based filtering ("this item has similar features to that one").
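Both approaches can be sketched in a few lines each. This is a hedged illustration with invented viewing data, not our classroom code:

```python
# --- Collaborative filtering: "people who liked X also liked Y" ---
likes = {
    "ava":  {"space_doc", "robot_movie"},
    "ben":  {"space_doc", "robot_movie", "nature_doc"},
    "cara": {"nature_doc", "cooking_show"},
}

def also_liked(item):
    """Rank other items by how often they co-occur with `item` across users."""
    counts = {}
    for user_items in likes.values():
        if item in user_items:
            for other in user_items - {item}:
                counts[other] = counts.get(other, 0) + 1
    return sorted(counts, key=counts.get, reverse=True)

print(also_liked("space_doc"))  # -> ['robot_movie', 'nature_doc']

# --- Content-based filtering: "similar features to that one" ---
features = {
    "space_doc":    {"science", "documentary"},
    "nature_doc":   {"nature", "documentary"},
    "robot_movie":  {"science", "fiction"},
    "cooking_show": {"food"},
}

def similar_to(item):
    """Rank other items by how many tags they share with `item`."""
    target = features[item]
    others = [i for i in features if i != item]
    return sorted(others, key=lambda i: len(features[i] & target), reverse=True)

print(similar_to("space_doc"))  # nature_doc and robot_movie each share one tag
```

Notice that neither function knows anything about quality — only about co-occurrence and feature overlap. That gap is exactly what the discussion below digs into.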

The hands-on build is followed by a critical discussion: what happens when recommendation systems optimize for engagement rather than quality? How do filter bubbles form? Why might a recommendation system that "works" still cause harm? These questions don't have simple answers, and we don't pretend they do โ€” but students leave knowing how to ask them.

Week 9โ€“10: Python + ML Project

The final weeks bring everything together with a Python-based machine learning project. Students use accessible libraries to build classification systems around something they actually care about: sentiment analysis of song lyrics, image classification of everyday objects, pattern recognition in simple datasets.
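A starter version of the sentiment project fits in a short script. This sketch (made-up training lines, not real lyrics) counts word overlap with labelled examples — and, helpfully for the discussion, it fails in an instructive way:

```python
from collections import Counter

# Invented labelled lines standing in for song lyrics.
labelled = [
    ("love this sunny happy day", "positive"),
    ("dancing singing joy", "positive"),
    ("feel so sad alone", "negative"),
    ("tears rain broken hearts", "negative"),
]

counts = {"positive": Counter(), "negative": Counter()}
for text, label in labelled:
    counts[label].update(text.split())

def sentiment(text):
    """Pick the label whose training words overlap most with the text."""
    def overlap(label):
        return sum(counts[label][w] for w in text.split())
    return max(("positive", "negative"), key=overlap)

print(sentiment("so happy singing"))  # -> positive
print(sentiment("sad rain"))          # -> negative
print(sentiment("not happy"))         # -> positive: bag-of-words misses negation
```

That last line is the point: a bag-of-words model cannot see "not," which hands the student a ready-made answer to "where does it fail, and why?"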

Every student presents their project at the end of the term and answers three questions: What data did you use? What did your model learn? Where does it fail, and why? That final question โ€” where does it fail and why โ€” is the one we care most about. It's the question responsible AI developers ask every day.

Projects Our AI Students Have Built

  • Rock-Paper-Scissors detector using webcam + Teachable Machine
  • Movie genre predictor trained on plot descriptions
  • Handwriting recognition model for custom symbols
  • Sentiment analyzer for social media-style text
  • Simple chatbot with decision-tree logic
  • Face expression classifier (happy/surprised/neutral)
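To give a flavour of one list item, here is a minimal decision-tree chatbot sketch — invented questions, not a particular student's project:

```python
# A hand-built decision tree: dict nodes ask questions, strings are replies.
tree = {
    "question": "Do you want help with homework? (yes/no)",
    "yes": {
        "question": "Is it math? (yes/no)",
        "yes": "Try breaking the problem into smaller steps!",
        "no": "Tell me the subject and I'll find a tip.",
    },
    "no": "Okay! Ask me anytime.",
}

def chat(node, answers):
    """Walk the tree using a scripted list of yes/no answers."""
    while isinstance(node, dict):
        print(node["question"])
        node = node[answers.pop(0)]
    return node  # a leaf is the bot's final reply

print(chat(tree, ["yes", "yes"]))  # -> Try breaking the problem into smaller steps!
```

No machine learning here at all — which is itself a useful lesson: students see the difference between a rule-following system and a trained one.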

Why We Take Ethics Seriously From Day One

Every unit in our AI curriculum has an ethics component — not as an add-on, but as a central thread. We believe children who build AI should grapple with questions like: Who decides what the AI optimizes for? What happens when the AI is wrong? Who is accountable? How do you build systems that are fair to people with different backgrounds?

These aren't abstract philosophical extras. They are the questions that the most important AI researchers and developers are working on right now, and that the next generation of technologists will need to answer. Starting that conversation at age 10 or 12 is not too early. It may, in fact, be exactly the right time.

Sharareh Keshavarzi

Lead Instructor & Founder

Sharareh is the founder of Tiny Byte Academy. She designed our AI curriculum to go far beyond tool use โ€” to genuine understanding of how AI systems work and what that means for the world.

More Than ChatGPT โ€” Real AI Education

Our AI & Machine Learning program builds genuine understanding, not just tool fluency. Saturday mornings in Markham, for ages 10 and up.

Enroll in AI & ML โ€” Spring 2026