What Is TinyML?
Learn about a subfield of machine learning (ML) called Tiny Machine Learning (TinyML): what it is, its applications, its hardware and software requirements, and its benefits.
Machine learning (ML) is a dynamic and powerful field of computer science that has permeated nearly every digital thing that we interact with, be it social media, our cell phones, our cars, or even household appliances.
Still, there are many places ML has yet to reach. That’s because many state-of-the-art machine learning models require significant computing resources and power to perform inference, the act of running an ML model to make predictions about its input data.
The need for high-performance computing resources has confined many ML applications to the cloud, where compute at the data center level is readily available.
To allow ML to broaden its reach, and unlock a new era of applications in the process, we must find ways to facilitate ML inference on smaller, more resource-constrained devices. This pursuit has led to the field known as Tiny Machine Learning or TinyML (a trademarked term from the TinyML Foundation which has become synonymous with the technology).
What is Tiny Machine Learning or TinyML?
Machine learning itself is a technology that utilizes algorithms called neural networks (an example is shown in Figure 1) to teach a computer to recognize patterns. This capability extends to a variety of applications, including object recognition and natural language processing.
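To make the idea concrete, here is a minimal sketch of a single perceptron, the building block of networks like the one in Figure 1. The weights, bias, and the AND-gate example are arbitrary illustrative values, not taken from any real model:

```python
# Minimal perceptron: a weighted sum of inputs plus a bias,
# passed through a step activation function.
def perceptron(inputs, weights, bias):
    # Weighted sum (dot product) of inputs and learned weights
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Step activation: output 1 if the sum crosses the threshold, else 0
    return 1 if total > 0 else 0

# Illustrative example: weights hand-picked so the perceptron acts as an AND gate
def and_gate(a, b):
    return perceptron([a, b], [1.0, 1.0], -1.5)

print(and_gate(1, 1))  # 1
print(and_gate(1, 0))  # 0
```

In a real network, many such units are stacked into layers, and the weights are learned from data rather than hand-picked.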
Figure 1. A visualization of an example Perceptron neural network. Image by Robert Keim.
TinyML, on the other hand, can be defined as a subfield of ML focused on enabling ML applications on devices that are cheap, resource-constrained, and power-constrained.
The objective of TinyML is to bring machine learning to the edge in an extreme way, where battery-powered, microcontroller-based embedded devices can perform ML tasks with real-time responsivity. This effort is extraordinarily multidisciplinary, requiring optimization across hardware, software, data science, and machine learning.
The field has largely been gaining popularity in recent years due to the maturation of the hardware and software ecosystems that underlie it.
Whether or not you realize it, TinyML is probably a part of your daily life in some capacity.
Applications for TinyML include:
- keyword spotting
- object recognition and classification
- gesture recognition
- audio detection
- machine monitoring
An example of a TinyML application in daily life is the audio wake-word detection model used in Google and Android devices. An example of wake-word detection components is shown in Figure 2.
In order to “turn on” when they hear the words “OK Google,” Android devices use a 14 kB speech detection ML model that runs on a DSP. The same can be said for many other virtual assistants.
Figure 2. Components for a wake-word application. Image used courtesy of Zhitong Yan and Zhuowei Han
Other example TinyML applications from students at Harvard include highway deer detection for cars (an example of object detection), audio-based mosquito detection (an example of audio recognition), and many more.
Hardware Used in TinyML Applications
When it comes to the hardware side of things, TinyML is impressive in that it aims to work on some pretty unimpressive hardware. From a certain perspective, the real goal of TinyML is to perform ML inference at the lowest power possible.
Pete Warden, widely considered the father of TinyML, states in his seminal book on the subject that TinyML should aim to operate at a power consumption below 1 mW. The reason for this seemingly arbitrary number is that 1 mW consumption makes a device capable of running on a standard coin battery with a reasonable lifetime of months to a year. So when you think about power sources for TinyML, think of coin batteries, small Li-Po batteries, and energy harvesting devices.
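A back-of-the-envelope calculation shows why the 1 mW figure lines up with coin-cell power sources. The battery figures below are nominal CR2032 values used for illustration:

```python
# Rough battery-life estimate for a CR2032 coin cell.
# Assumed nominal figures: ~225 mAh capacity at 3.0 V.
CAPACITY_MAH = 225
VOLTAGE_V = 3.0
energy_mwh = CAPACITY_MAH * VOLTAGE_V  # ~675 mWh of stored energy

def lifetime_days(avg_power_mw):
    # Runtime in hours = stored energy / average power draw, converted to days
    return energy_mwh / avg_power_mw / 24

print(round(lifetime_days(1.0)))  # ~28 days at a constant 1 mW draw
print(round(lifetime_days(0.1)))  # ~281 days at a 0.1 mW average draw
```

Note that a constant 1 mW draw yields roughly a month of runtime; the months-to-a-year lifetimes come from keeping the average draw well below that peak, which is why TinyML devices lean heavily on sleep modes and duty cycling.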
From a compute perspective, TinyML doesn’t rely on the graphics processing units (GPUs), application-specific integrated circuits (ASICs), and powerful microprocessors that most ML applications do. An example learning kit using an Arduino can be seen in Figure 3.
Figure 3. An example of a TinyML Learning Kit. Image used courtesy of Arduino
To meet the lofty 1 mW goal, we are almost exclusively confined to less capable computing hardware like microcontrollers (MCUs) and digital signal processors (DSPs). These devices are often Cortex-M based and can be expected to have no more than a few hundred kB of RAM, similar amounts of flash, and clock speeds in the tens of MHz.
Beyond this, other hardware you might expect to find on a TinyML device includes sensors (e.g., camera, microphone) and possibly some BLE (Bluetooth Low Energy) connectivity.
TinyML Software: TensorFlow
In a lot of ways, the software tools and concepts behind TinyML are its most important feature.
Generally speaking, the most popular and built-out ecosystem for TinyML development is TensorFlow Lite for Microcontrollers (TF Lite Micro). A generalized workflow for TinyML on TF Lite Micro is shown below in Figure 4.
TF Lite Micro was designed specifically for the task of ML on devices with constrained resources, with MCUs being the focus.
While the TF Lite Micro runtime itself runs on-device, the surrounding TensorFlow workflow is Python-based and full of built-in libraries and toolkits for:
- Data acquisition
- Model architecture
Figure 4. The TensorFlow Lite Micro workflow. Image used courtesy of Saumitra Jagdale
As we’ll touch on in later articles, quantization is really the secret sauce that makes TinyML possible. But briefly and minimally, quantization is a process by which you reduce the precision (bit size) of a model’s weights and biases such that the model takes up less memory, runs faster, and requires less power—all with a minimal hit to accuracy!
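The idea can be sketched in a few lines of plain Python. This is a simplified per-tensor affine quantization scheme, not TF Lite’s actual implementation (which also supports per-channel scales and calibration); the weight values are arbitrary illustrative numbers:

```python
# Sketch of post-training affine quantization: float32 -> int8.
# Each weight shrinks from 4 bytes to 1 byte (a 4x memory reduction),
# at the cost of a small rounding error per weight.

def quantize(weights):
    # Map the observed float range [lo, hi] onto the int8 range [-128, 127]
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0
    zero_point = round(-128 - lo / scale)
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    # Recover approximate float values for inference math
    return [(qi - zero_point) * scale for qi in q]

weights = [-0.42, 0.0, 0.87, 1.3, -1.1]
q, scale, zp = quantize(weights)
approx = dequantize(q, scale, zp)
# The worst-case error per weight is bounded by the quantization step size
print(max(abs(w - a) for w, a in zip(weights, approx)) < scale)  # True
```

The same trade-off, applied across millions of weights, is what lets a model fit in the few hundred kB of flash that an MCU offers.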
With a quantized model in hand, most TinyML applications are written in C/C++ for minimal overhead.
Benefits of TinyML
The main benefit of TinyML is its portability. Running on cheap microcontrollers with tiny batteries and low power consumption means that one can easily and inexpensively integrate ML into virtually anything.
On top of this, TinyML also has the benefit of increased privacy and security, since computing happens locally and data doesn’t need to be sent to the cloud. This can be significant when working with personal data in applications like IoT.
With a solid introduction to the field of TinyML behind us, we can dive deeper into its more technical aspects in the next article.
Featured image used courtesy of the TinyML Foundation