ML-Heavy TinyML Study Plan (6 Weeks)
Week 1: ML Foundations & Edge Awareness
- Refresh core ML concepts: supervised learning, overfitting, regularization
- Learn constraints of TinyML (latency, memory, power)
Do:
- Watch: Google ML Crash Course
- Read: TinyML book (O'Reilly)
- Train a small digit classifier on MNIST
Project: MNIST classifier -> convert to TFLite -> test inference locally
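Before training anything, it helps to see why TinyML cares about model size at all. The back-of-the-envelope sketch below counts parameters for a small dense MNIST classifier (layer sizes are illustrative assumptions, not a prescribed architecture) and compares the float32 footprint of a Keras model against a roughly int8-quantized TFLite version.

```python
# Illustrative memory-footprint arithmetic for a small MNIST dense net.
# The 784 -> 64 -> 10 shape is an example, not a required architecture.

def dense_params(n_in, n_out):
    """Weights plus biases for one fully connected layer."""
    return n_in * n_out + n_out

params = dense_params(784, 64) + dense_params(64, 10)
print(params)  # 50890

float32_bytes = params * 4   # standard Keras float32 weights
int8_bytes = params * 1      # fully int8-quantized weights (approximate)

print(float32_bytes / 1024)  # ~198.8 KiB
print(int8_bytes / 1024)     # ~49.7 KiB
```

Even this toy model would not fit in the SRAM of many microcontrollers at float32, which is why the conversion-and-quantization steps later in the plan matter.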
Week 2: Deep Learning with Efficiency in Mind
- Understand CNNs, quantization-aware training, and pruning
- Learn TensorFlow Model Optimization Toolkit
Do:
- Try pruning & quantization on a small CNN
- Use TFLiteConverter with optimizations=[tf.lite.Optimize.DEFAULT] for post-training quantization (the old post_training_quantize flag is deprecated)
Project: Keyword spotting using 1D CNN + quantization
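The arithmetic behind post-training quantization is worth seeing once in the open. This is a minimal sketch of symmetric per-tensor int8 quantization (one of the schemes TFLite uses for weights), done in plain numpy so you can inspect the round-trip error yourself.

```python
import numpy as np

# Symmetric int8 quantization of a weight tensor, per-tensor for simplicity.
# TFLite often quantizes weights per-channel; the idea is the same.

def quantize_int8(w):
    scale = np.abs(w).max() / 127.0          # map the max magnitude to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, size=(64, 10)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print(q.dtype)                         # int8
print(float(np.abs(w - w_hat).max()))  # rounding error, bounded by scale/2
```

The maximum reconstruction error is half a quantization step, which is why well-scaled int8 models usually lose little accuracy.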
Week 3: Train for Deployment
- Learn to prepare models for embedded inference
- Train on custom audio/sensor data
Do:
- Use datasets like Google Speech Commands
- Build classifier with TensorFlow/Keras
Project: Wake word detector ("Hello Micro") deployed via TFLite Micro
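Keyword-spotting models are almost never fed raw waveforms; the audio is first framed and turned into a spectrogram. The numpy sketch below is a simplified magnitude-spectrogram front end, assuming 16 kHz mono audio; the 25 ms frame / 10 ms hop values are common choices, not mandated ones.

```python
import numpy as np

# Minimal magnitude-spectrogram front end for keyword spotting.
# Assumes 16 kHz mono audio; frame_len=400 (~25 ms), hop=160 (~10 ms).

def spectrogram(audio, frame_len=400, hop=160):
    """Split audio into overlapping windowed frames, take FFT magnitude."""
    n_frames = 1 + (len(audio) - frame_len) // hop
    window = np.hanning(frame_len)
    frames = np.stack([
        audio[i * hop : i * hop + frame_len] * window
        for i in range(n_frames)
    ])
    return np.abs(np.fft.rfft(frames, axis=1))  # (n_frames, frame_len//2 + 1)

one_second = np.random.default_rng(0).normal(size=16000)
spec = spectrogram(one_second)
print(spec.shape)  # (98, 201)
```

The resulting (time, frequency) image is what the 1D/2D CNN actually classifies; production pipelines usually add a mel filterbank and log compression on top of this.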
Week 4: Toolchains & Deployment (ML Focused)
- Explore ML-centric deployment tools
- Focus on inference, not firmware
Do:
- Use Edge Impulse for data collection and deployment
- Run inference on Raspberry Pi using tflite_runtime
Project: Sensor anomaly detector with TCN or LSTM
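Before reaching for a TCN or LSTM, it's worth having a dumb-but-honest baseline for the anomaly detector. This sketch flags sensor samples whose rolling z-score exceeds a threshold; the window size and threshold are illustrative defaults, and the injected fault is synthetic.

```python
import numpy as np

# Baseline sensor anomaly detector: flag samples whose rolling z-score
# exceeds a threshold. A learned model should beat this to earn its keep.

def rolling_zscore_anomalies(x, window=50, threshold=4.0):
    flags = np.zeros(len(x), dtype=bool)
    for i in range(window, len(x)):
        hist = x[i - window : i]
        mu, sigma = hist.mean(), hist.std()
        if sigma > 0 and abs(x[i] - mu) / sigma > threshold:
            flags[i] = True
    return flags

rng = np.random.default_rng(1)
signal = rng.normal(0, 1, 1000)
signal[700] += 12.0  # inject a synthetic fault

flags = rolling_zscore_anomalies(signal)
print(np.flatnonzero(flags))  # should include index 700
```

The same loop structure (window of history in, decision out) is how you'd feed a trained TFLite model on-device, so the baseline doubles as scaffolding.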
Week 5: Vision on Edge
- Use MobileNet/SqueezeNet for low-power inference
- Try image classification and person detection
Do:
- Use pre-trained MobileNetV2 -> quantize -> test TFLite
- Try transfer learning with TFLite Model Maker
Project: Real-time person detection (Pi Camera or ESP32-CAM)
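MobileNet is cheap enough for edge vision because it replaces standard convolutions with depthwise-separable ones. The arithmetic below makes that concrete by counting parameters and multiplies for one layer; the 56x56x64 -> 128 shape is an illustrative assumption, not a specific MobileNet layer.

```python
# Cost of a standard 3x3 conv vs. a depthwise-separable conv (MobileNet's
# core trick). Feature-map shape and channel counts are example values.

def standard_conv_cost(h, w, c_in, c_out, k=3):
    params = k * k * c_in * c_out
    mults = params * h * w           # one k*k*c_in dot product per output pixel
    return params, mults

def separable_conv_cost(h, w, c_in, c_out, k=3):
    depthwise = k * k * c_in         # one k x k filter per input channel
    pointwise = c_in * c_out         # 1x1 conv mixes channels
    params = depthwise + pointwise
    mults = (depthwise + pointwise) * h * w
    return params, mults

std_p, std_m = standard_conv_cost(56, 56, 64, 128)
sep_p, sep_m = separable_conv_cost(56, 56, 64, 128)
print(std_p, sep_p)   # 73728 vs 8768
print(std_p / sep_p)  # ~8.4x fewer parameters and multiplies
```

That ~8x saving per layer, compounded across the network, is what makes real-time inference feasible on a Pi or ESP32-CAM.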
Week 6: Advanced Concepts & Final Project
- Try distillation, NAS, federated learning
- Build a full end-to-end pipeline
Do:
- Read about federated learning
- Try model distillation
- Explore NAS-derived architectures (e.g., the EfficientNet-Lite backbones available in TFLite Model Maker)
Final Project: Choose an audio, vision, or time-series task -> train -> quantize -> convert to TFLite -> run on-device inference