Coco Crunchers

A multi-year experimental game exploring learning, immersion, and AI-assisted gameplay. 🐶 Inspired by Coco, an English Bulldog.

A 2D Python educational game that evolved over three years as a comprehensive research platform, demonstrating the intersection of game-based learning, data collection, and artificial intelligence.

3 Years of Research & Development
32 Participants Tested
AI-Powered Educational Assistant
CHAPTER 01

The Beginning

Where a simple question sparked a three-year journey


What Is Coco Crunchers?

More than just a game—a comprehensive research platform exploring the intersection of education, technology, and user experience.

The Game

🐶 Named after Coco, a lovable English Bulldog who inspired the main character

🎮 Game title inspired by the classic educational game Number Munchers by MECC

Coco Crunchers is a 2D Python-based educational game built with pygame and pygame-ce. Players navigate through carefully designed levels using a combination of movement mechanics and problem-solving skills.

The game features three distinct skill-based levels created with the Tiled level editor:

  • Level 1: Wall jumping mechanics
  • Level 2: Enemy combat and timing
  • Level 3: Platform navigation techniques

Each level presents mathematical challenges that players must solve to progress, integrating educational content seamlessly into the gameplay experience.
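To make that integration concrete, here is a minimal, hypothetical sketch of how a math "gate" could be wired into a pygame event loop. None of the names or problem logic below come from the game's actual source; it only illustrates the idea of blocking progress until a correct answer is typed.

```python
# Minimal sketch: a math "gate" that blocks progress until the player answers.
# All names here (make_problem, the key handling, etc.) are illustrative.
import random
import pygame

def make_problem():
    a, b = random.randint(2, 12), random.randint(2, 12)
    return f"{a} x {b} = ?", a * b

pygame.init()
screen = pygame.display.set_mode((640, 360))
font = pygame.font.Font(None, 36)
clock = pygame.time.Clock()

question, answer = make_problem()
typed, solved, running = "", False, True

while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.KEYDOWN and not solved:
            if event.key == pygame.K_RETURN:
                solved = typed.strip() == str(answer)  # open the gate on a correct answer
                typed = ""
            elif event.key == pygame.K_BACKSPACE:
                typed = typed[:-1]
            elif event.unicode.isdigit():
                typed += event.unicode

    screen.fill((30, 30, 30))
    prompt = "Gate open!" if solved else f"{question}  {typed}"
    screen.blit(font.render(prompt, True, (255, 255, 255)), (40, 160))
    pygame.display.flip()
    clock.tick(60)

pygame.quit()
```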

The Research Platform

Beyond entertainment, Coco Crunchers serves as a sophisticated research tool for studying game-based learning effectiveness, user engagement, and the impact of AI assistance on gameplay.

The project evolved through multiple technical iterations, each addressing specific research questions:

  • Does gameplay improve mathematical performance?
  • How can games collect data while maintaining immersion?
  • Can AI assistants enhance the learning experience?

The deployment journey alone—from tkinter to pygame to web deployment via Pygbag—represents a masterclass in adaptive problem-solving.

Educational Game

2D platformer designed to teach math and problem-solving through engaging gameplay mechanics

Python Powered

Built with pygame and pygame-ce, leveraging Python for rapid development and iteration

Research Platform

Systematic data collection from participants to study learning patterns and effectiveness

Web Deployed

Successfully deployed online using Pygbag after overcoming multiple technical challenges

The Evolution Journey

Three years of iterative development, research, and learning.

Year 1 (2023)

Foundations

8 participants (9th grade students)
Technology Stack
Python, tkinter, pygame, Google Sheets
Year 2 (2024)

Immersion & Data Collection

12 participants across age groups (8-77 years)
Technology Stack
Flask, nginx, gunicorn, Django, pygame, Pygbag, Python SimpleHTTPServer
Year 3 (2024)

AI Integration

12 participants: Control group (n=5) vs Experimental group (n=7)
Technology Stack
pygame-ce, Tiled, OpenAI API (GPT-4o), streamlit

Visual Progression

Watch the transformation unfold across three years of development

tkinter Interface

The Foundation

Simple desktop GUI with math problems and test workflow. The journey begins with a basic but functional educational tool.

[Year 1 gameplay footage]
Key Features
  • Desktop GUI
  • Pre/Post Testing
  • Math Challenges
  • Google Sheets Data
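A minimal sketch of what a tkinter question screen of this kind could look like. The question text, layout, and widget names are illustrative assumptions, not the Year 1 source.

```python
# Minimal tkinter sketch of a desktop math-quiz screen (illustrative only).
import tkinter as tk

QUESTION, ANSWER = "What is 7 x 8?", "56"   # hypothetical test item

def check():
    result.config(text="Correct!" if entry.get().strip() == ANSWER else "Try again.")

root = tk.Tk()
root.title("Pre-test")
tk.Label(root, text=QUESTION, font=("Arial", 14)).pack(padx=20, pady=10)
entry = tk.Entry(root)
entry.pack(pady=5)
tk.Button(root, text="Submit", command=check).pack(pady=5)
result = tk.Label(root, text="")
result.pack(pady=10)
root.mainloop()
```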
CHAPTER 02

The Research

Data, experiments, and discoveries


Research Highlights

Data-driven insights from three years of experimental research with 32 total participants.

Year 1 Impact
9.5%
Improvement in test scores (83% → 92.5%)
Statistically significant (p = 0.02459)

Year 2 Performance
86.9%
Average accuracy across all ages
12 participants aged 8-77 years

Year 3 AI Boost
35.8%
Faster completion with AI assistance
684s vs 1066s average time

Year 1: Pre-test vs Post-test Performance

Measuring the impact of gameplay on mathematical skills among 9th grade students

n = 8 participants | p = 0.02459 (significant)

Key Finding: Statistical analysis revealed a significant improvement in mathematical performance after playing Coco Crunchers. The t-test (t = 2.376, p = 0.025) confirmed that the null hypothesis could be rejected, demonstrating that game-based learning effectively enhanced foundational math skills.
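For context, this kind of analysis can be reproduced in a few lines with SciPy, assuming a paired pre/post t-test on each student's scores. The arrays below are placeholders for illustration, not the study data.

```python
# Sketch of a paired (pre/post) t-test with SciPy; the arrays are placeholders,
# not the actual Year 1 data.
from scipy import stats

pre_scores  = [80, 85, 90, 75, 85, 80, 85, 84]   # hypothetical pre-test scores
post_scores = [90, 92, 95, 88, 93, 90, 94, 98]   # hypothetical post-test scores

t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
print(f"t = {t_stat:.3f}, p = {p_value:.5f}")

if p_value < 0.05:
    print("Reject the null hypothesis: pre- and post-test scores differ significantly.")
```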

Year 2: Accuracy by Age Group

Analysis of performance patterns across different age demographics (8-77 years)

n = 12 participants | Overall mean: 86.92%

Key Finding: Teenagers demonstrated perfect accuracy (100%) with zero standard deviation, suggesting optimal cognitive alignment with the game mechanics. Younger participants (8-12) showed lower accuracy, and completion times varied widely: an 8-year-old finished in under 5 minutes, while some adults took more than 10 minutes.

Year 3: Control vs Experimental Groups

Comparing level completion times with and without AI-powered Coco Hintbot assistance

Control: n = 5, Experimental: n = 7

35.8% faster with AI assistance

Key Finding: The experimental group with access to the GPT-4o-powered hintbot completed levels 35.8% faster on average. However, high standard deviation and the limited sample size (n = 12) meant the difference was not statistically significant (p > 0.05), suggesting a promising trend that requires a larger dataset for validation.

While the results did not reach statistical significance, the association between higher hintbot helpfulness ratings and faster completion times suggests that educational LLMs can enhance the gaming experience when properly integrated.
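A sketch of how such a between-group comparison might be run with SciPy. Welch's t-test is one reasonable choice given the unequal group sizes, and the timing arrays are placeholders rather than the Year 3 measurements.

```python
# Sketch: comparing control vs experimental completion times with Welch's t-test.
# The arrays are placeholders, not the actual Year 3 data.
from scipy import stats

control_times      = [1200, 950, 1100, 870, 1210]          # seconds, hypothetical
experimental_times = [700, 650, 720, 600, 710, 680, 730]   # seconds, hypothetical

t_stat, p_value = stats.ttest_ind(experimental_times, control_times, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```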

Year 3 Research Paper

Year 3: Helpfulness vs Completion Time Correlation

Exploring the relationship between perceived AI helpfulness and gameplay efficiency

Correlation between hintbot helpfulness and level completion time

Key Finding: Participants who rated the hintbot as more helpful generally completed levels faster, though some dependency was observed, with users relying too heavily on hints rather than attempting problems independently. The average helpfulness rating was 8/10, indicating strong user satisfaction with the AI assistance.
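The helpfulness-versus-time relationship can be checked with a simple Pearson correlation; a brief sketch with placeholder values (not the actual survey responses):

```python
# Sketch: correlating perceived hintbot helpfulness with completion time.
# Ratings and times are placeholders, not the actual Year 3 responses.
from scipy import stats

helpfulness = [9, 8, 7, 10, 8, 6, 9]                # 1-10 ratings, hypothetical
times       = [600, 650, 800, 580, 700, 900, 620]   # seconds, hypothetical

r, p_value = stats.pearsonr(helpfulness, times)
print(f"r = {r:.2f}, p = {p_value:.3f}")   # a negative r means "more helpful, faster"
```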

Technical Architecture

A journey of iterative problem-solving, technology pivots, and adaptive engineering.

Deployment Evolution

  • Initial Attempt (Python + tkinter): not web-compatible; the GUI does not render in browsers
  • Flask/nginx/gunicorn (Ubuntu Server + Flask): framework setup proved too time-consuming for the project timeline
  • Django Attempt (Ubuntu Server + Django): the tkinter module is incompatible with web deployment
  • Final Solution (pygame + Pygbag): successfully deployed to the web, completing a 5-month journey
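Pygbag runs pygame in the browser via WebAssembly and expects the game loop to be written as an asyncio coroutine that yields control every frame. A minimal sketch of that structure (window size and colors are arbitrary):

```python
# Minimal Pygbag-compatible structure: an async main loop that yields each frame
# so the browser stays responsive when running under WebAssembly.
import asyncio
import pygame

async def main():
    pygame.init()
    screen = pygame.display.set_mode((640, 360))
    clock = pygame.time.Clock()
    running = True

    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
        screen.fill((20, 20, 40))
        pygame.display.flip()
        clock.tick(60)
        await asyncio.sleep(0)  # hand control back to the browser's event loop

    pygame.quit()

asyncio.run(main())
```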

Key Technical Decisions

Game Engine Evolution

  • tkinter → pygame → pygame-ce
  • Chose pygame for web compatibility and active development
  • pygame-ce provided better performance and modern features

Level Design

  • Tiled editor for visual level creation (see the loading sketch after this list)
  • 3 skill-focused levels: wall jumping, combat, platform navigation
  • Each level teaches specific mechanics progressively
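The project notes do not name the map loader, but pytmx is a common way to bring Tiled maps into pygame; a minimal sketch under that assumption (the file name level1.tmx is hypothetical):

```python
# Sketch: loading a Tiled (.tmx) map with pytmx and drawing its tiles in pygame.
# pytmx and the file name "level1.tmx" are assumptions, not confirmed by the project.
import pygame
from pytmx.util_pygame import load_pygame

pygame.init()
screen = pygame.display.set_mode((640, 360))
tmx_data = load_pygame("level1.tmx")  # hypothetical level file exported from Tiled

def draw_map(surface):
    for layer in tmx_data.visible_layers:
        if hasattr(layer, "tiles"):  # tile layers yield (x, y, image) triples
            for x, y, image in layer.tiles():
                surface.blit(image, (x * tmx_data.tilewidth, y * tmx_data.tileheight))

clock = pygame.time.Clock()
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    screen.fill((0, 0, 0))
    draw_map(screen)
    pygame.display.flip()
    clock.tick(60)
pygame.quit()
```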

LLM Integration (Year 3)

  • Separate deployment: streamlit for hintbot, game runs independently (see the sketch after this list)
  • OpenAI GPT-4o model for educational assistance
  • Hintbot informed by level objectives, not real-time game state
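A rough sketch of how a standalone streamlit hintbot can be wired to the OpenAI API. The system prompt standing in for the level objectives is illustrative, not the project's actual prompt.

```python
# Sketch: a standalone streamlit hintbot backed by GPT-4o. The system prompt
# below is an illustrative stand-in for the real level-objective descriptions.
import streamlit as st
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are Coco Hintbot. The player is on a wall-jumping level. "
    "Give short hints; never reveal full answers to the math problems."
)

st.title("Coco Hintbot")

if "messages" not in st.session_state:
    st.session_state.messages = []

for msg in st.session_state.messages:
    st.chat_message(msg["role"]).write(msg["content"])

if question := st.chat_input("Ask for a hint..."):
    st.session_state.messages.append({"role": "user", "content": question})
    st.chat_message("user").write(question)

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "system", "content": SYSTEM_PROMPT}, *st.session_state.messages],
    )
    hint = response.choices[0].message.content
    st.session_state.messages.append({"role": "assistant", "content": hint})
    st.chat_message("assistant").write(hint)
```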

Data Collection

  • Year 1: Manual collection via Google Sheets
  • Year 2: Manual collection, immersion limitations identified
  • Year 3: Google Forms for user feedback and completion times

Future Directions

  • Database integration for automated data storage
  • Built-in LLM directly in the game (no separate application)
  • Real-time game state awareness for context-aware hints
  • User input system for in-game data collection
  • Enemy AI to enhance immersion and challenge
  • Both game and hintbot in the same web browser instance
CHAPTER 03

The Impact

Lessons learned and paths forward


Key Takeaways

What three years of research and development revealed about game-based learning, AI integration, and technical perseverance.

Game-Based Learning

Statistically significant improvement demonstrated in Year 1, with test scores increasing from 83% to 92.5% (p = 0.02459). Validates that gameplay can effectively enhance mathematical performance when designed with educational intent.

9.5% improvement | 8 participants | p < 0.05

Research Through Games

Successfully collected data from 30+ participants across three years, spanning ages 8-77. Demonstrated that video games can serve as effective research platforms while engaging users, though manual collection revealed the need for automated systems.

32 total participants | 3 age groups studied | 86.9% avg accuracy

AI-Assisted Education

Experimental group with GPT-4o-powered hintbot completed levels 35.8% faster on average. While not statistically significant due to sample size, the promising trend and positive user feedback suggest AI assistance enhances learning experiences.

35.8% faster | Positive correlation | Larger sample needed

Iterative Development

Successfully overcame multiple deployment challenges through persistent problem-solving. The 5-month journey from tkinter to web deployment via Pygbag exemplifies adaptive engineering and the importance of flexibility in technical research.

5-month deployment | 4 technology pivots | Web deployment achieved

Skills Demonstrated

Technical Skills

  • Python programming (pygame, API integration)
  • Game development and level design (Tiled)
  • Web deployment (Flask, Django, Pygbag)
  • LLM integration (OpenAI GPT-4o)
  • Server administration (Ubuntu, nginx, gunicorn)

Research Skills

  • Experimental design (control vs experimental groups)
  • Statistical analysis (t-tests, significance testing)
  • Data collection and interpretation
  • IRB compliance and ethical research practices
  • Academic writing and documentation

Soft Skills

  • Problem-solving through multiple failed attempts
  • Persistence over 3-year development cycle
  • Adapting to technical constraints
  • UX thinking and user feedback integration
  • Project management and timeline navigation