Hello TinyML enthusiasts! We have publicly released all the industry talks from the internal Harvard offering of the Intro to TinyML course. You can find the entire playlist here.
Hope you find them valuable!
Below are links to the individual videos, along with short descriptions.
“Recent Progress on TinyML Technologies and Opportunities” by Evgeni Gousev (Qualcomm): Overview of the TinyML ecosystem + roadmap and Qualcomm’s efforts in the space.
“Common Voice” by Joshua Meyer & Jane Polak Scowcroft (Mozilla): Voice data is essential for training tiny edge devices to do voice recognition. Unfortunately, there is a lack of such massive voice datasets in the public domain. This talk outlines Mozilla’s efforts in creating Common Voice, a crowd-sourced, multilingual speech corpus.
“Data Collection Design for Real World TinyML” by Sacha Krstulović (Audio Analytic): Sound recognition is a key part of inferring context and environmental features in edge devices; this talk covers how to collect reliable data for differentiating between sounds.
“Hotword Detection” by Alex Gruenstein (Google): Algorithms for hotword detection (e.g., “Ok Google”).
“EdgeML: Algorithms for TinyML” by Prateek Jain (Microsoft): Algorithms and tools for building tiny models for ML inference on the edge.
“TVM: An End to End Deep Learning Compiler Stack” by Thierry Moreau (OctoML): Compiler infrastructure for seamlessly deploying deep neural networks across a wide range of hardware platforms.
“Artificial Neural Networks and Tools for Microcontrollers” by Danilo Pau (STMicroelectronics): Challenges in interoperability across DNN frameworks and microcontroller targets + benchmarking tools.
“TensorFlow Lite Micro” by Pete Warden (Google): The journey behind building TensorFlow Lite Micro.
“MLOps for TinyML” by Daniel Situnayake (Edge Impulse): DevOps is a software development methodology for streamlining building, testing, and releasing software. This talk covers tools and best practices for applying the same approach to complex ML pipelines.
“CMSIS-NN and Library Optimizations” by Felix Thomasmathibalan (ARM): Optimizations for neural network kernels targeting ARM microcontrollers for edge devices.
“Endpoint AI and the Advent of the microNPU” by Tomas Edsö (ARM): Creating a configurable Neural Processing Unit for ML inference on edge devices.
“Efficient Dot Products (Or, Scaling ML Workloads)” by Erich Plondke (Qualcomm): Efficient matrix operations for low power Digital Signal Processors (DSPs), now ubiquitous in mobile phones.
“TFLite Micro Benchmarks” by Nat Jeffries (Google): Standardizing benchmarking for TinyML hardware targets.
“Privacy in Context” by Susan Kennedy (Harvard): Identifying and mitigating challenges to privacy in TinyML applications.
“Responsible AI with TensorFlow” by Tulsee Doshi (Google): Identifying and mitigating fairness concerns in ML algorithms and models.