Technical Article

What is Machine Learning? An Intro to ML Basics

June 05, 2022 by Brandon Satrom

This article aims to contextualize machine learning (ML) for hardware and embedded engineers: what it is, how it works, why it matters, and how TinyML fits in.

Machine learning is an ever-present and often misunderstood technological concept. The practice, the science of using complex processing and mathematical techniques to enable computers to find correlations between large swaths of input and output data, has been part of our collective technological consciousness for decades. In recent years, the field has exploded, enabled by improvements in:

  • Computing power
  • Parallel processing enabled by graphics processing unit (GPU) architectures
  • Cloud computing for large-scale workloads

In fact, the area has been so focused on desktop and cloud-based use that many embedded engineers don’t give much thought to how ML affects them. And for the most part, it hasn’t.

However, with the advent of TinyML, or tiny machine learning (machine learning on constrained devices like microcontrollers and single-board computers), ML has become relevant to engineers of all types, including those working on embedded applications. And even if you’re already familiar with TinyML, it’s important to have a concrete understanding of machine learning in general.

In this article, I’ll provide an overview of machine learning, how it works, and why it matters to embedded engineers.

 

What is Machine Learning?

A subset of the field of artificial intelligence (AI), machine learning is a discipline focused on using mathematical techniques and large-scale data processing to build programs that can find relationships between input and output data. As an umbrella term, AI covers a broad domain in computer science focused on enabling machines to “think” and act without human intervention. It covers everything from “general intelligence,” the ability of a machine to think and act in the same way a human would, to specialized, task-oriented intelligence, which is where ML falls on the spectrum.

One of the most powerful definitions of ML I’ve heard compares it to the traditional, algorithmic approach used in classical computer programming. In classical computing, an engineer presents a computer with input data (for example, the numbers 2 and 4) as well as an algorithm for converting them into a needed output (for example, multiply x and y to make z). As a program runs, inputs are provided, and the algorithm is applied to produce outputs. This can be seen in Figure 1.

 

Figure 1. In a classic approach, we supply a computer with input data and the algorithm and ask for an answer.
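
To make the contrast concrete, here is a minimal sketch in Python (my own illustration, not code from any particular system). In the classical approach, the human writes the algorithm, and the computer merely executes it:

    # Classical programming: a human supplies the algorithm.
    def compute(x, y):
        # The relationship between inputs and output is hand-written.
        return x * y

    # Inputs go in, the hand-written algorithm produces the output.
    print(compute(2, 4))  # 8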

 

ML, on the other hand, is the process of presenting a computer with a set of inputs and outputs and asking the computer to identify the “algorithm” (or model, in ML parlance) that turns those inputs into outputs. Often, this requires a lot of examples to ensure the model will reliably identify the correct output every time.

For example, in Figure 2, if I feed an ML system the numbers 2 and 2 and an expected output of 4, it might decide that the algorithm is to always add the two numbers together. But if I then provide the numbers 2 and 4 and the expected output of 8, the model will learn from two examples that the correct approach is to multiply the two provided numbers.

 

Figure 2. With ML, we have the data (inputs) and answers (output) and need the computer to derive an algorithm of sorts by determining how the inputs and outputs relate in a way that is true for the entire data set.
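
As a hedged sketch of the same idea in code (using scikit-learn, with hypothetical example data), we hand the computer input/output pairs and let it search a space of candidate relationships for one that fits them all:

    # ML: supply inputs and outputs; let the computer derive the relationship.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    # Pairs of inputs with their expected outputs (each output is x * y).
    X = np.array([[2, 2], [2, 4], [3, 5], [4, 4], [1, 9], [6, 7], [5, 3], [8, 2]])
    outputs = np.array([4, 8, 15, 16, 9, 42, 15, 16])

    # The model searches over polynomial terms (x, y, x*y, and so on) for a
    # combination that reproduces every example.
    model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
    model.fit(X, outputs)

    # Inference: ask about inputs the model has never seen before.
    print(model.predict([[5, 8]]))  # approximately 40, i.e., 5 * 8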

 

Given that I am using a simple example to define a complex field, you might at this point ask: why would one bother to complicate the uncomplicated? Why not stick to our classical, algorithmic computing approaches?

The answer is that the class of problems that tend toward machine learning often cannot be expressed via a purely algorithmic approach. There’s no simple algorithm for giving a computer a picture and asking it to determine if it contains a cat or a human face. Instead, we leverage ML and give it thousands of pictures (as collections of pixels), some with cats, some with human faces, and some with neither, and a model develops by learning how to correlate those pixels and groups of pixels with the expected output. When the machine then sees new data, it infers an output based on all of the examples it has seen before. This part of the process, often called prediction or inference, is the magic of ML.

It sounds complex because it is. In the world of embedded and Internet of Things (IoT) systems, ML is increasingly being leveraged to aid in areas like machine vision, anomaly detection, and predictive maintenance. In each of these areas, we collect mountains of data—images and video, accelerometer readings, sound, heat, and temperature—for the purpose of monitoring facilities, environments, or machines. However, we often struggle to turn that data into insight we can act on. A bar chart is nice, but when what we really want is the ability to anticipate that a machine needs service before it breaks and goes offline, simple algorithmic approaches won’t do.

 

The Machine Learning Development Loop

Enter machine learning. Under the guidance of capable data scientists and ML engineers, the process starts with data, namely the mountains of data created by our embedded systems. The first step in the ML development process is to collect data and label it before it is fed into a model. Labeling is a critical classification step and is how we associate a set of inputs with its expected output.

 

Labeling and Data Collection in ML

For example, one set of accelerometer x, y, and z values might correspond to the machine being idle, another may mean the machine is running fine, and a third might correspond to a problem. A high-level depiction can be seen in Figure 3.

 

Figure 3. ML engineers use labels to classify data sets during the data-gathering process.
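
In code, a labeled data set can be as simple as readings paired with the state they were captured under. The values below are hypothetical, purely to illustrate the shape of the data:

    # Hypothetical labeled accelerometer samples: each set of x, y, z readings
    # is tagged with the machine state observed when it was captured.
    samples = [
        {"x": 0.02, "y": 0.01, "z": 0.98, "label": "idle"},
        {"x": 0.31, "y": 0.27, "z": 1.05, "label": "running"},
        {"x": 1.85, "y": 1.42, "z": 0.40, "label": "failing"},
    ]

    # During training, the readings become the inputs and the labels become
    # the expected outputs the model must learn to reproduce.
    inputs = [(s["x"], s["y"], s["z"]) for s in samples]
    labels = [s["label"] for s in samples]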

 

Data collection and labeling is a time-consuming process but is critical to get right. While there are several innovations in the ML space, like pre-trained models that offset some of the work and emerging tools that streamline data collection from real systems, this is a step that cannot be skipped. No ML model in the world can reliably tell you if your machine or device is running well or is about to break without seeing actual data from that machine or others like it.

 

Machine Learning Model Development, Training, Testing, Refining

After data collection, the next steps are model development, training, testing, and refinement. This phase is where a data scientist or engineer creates a program that ingests the mass of collected input data and transforms it into the expected outputs using one or more approaches. Explaining those approaches could fill volumes, but suffice it to say that most models perform a set of transformations (for example, vector and matrix multiplication) on their inputs. During training, they also adjust the weight given to each input relative to the others in order to find a set of weights and functions that reliably correlates to the expected outputs.
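
As a toy illustration of what “adjusting weights” means (my own sketch, far simpler than any real model), gradient descent nudges a single weight until the model’s predictions match the expected outputs:

    import numpy as np

    # Training data: the true relationship (unknown to the model) is output = 3 * x.
    x = np.array([1.0, 2.0, 3.0, 4.0])
    expected = np.array([3.0, 6.0, 9.0, 12.0])

    w = 0.0               # initial guess for the model's single weight
    learning_rate = 0.01
    for _ in range(500):
        predictions = w * x
        error = predictions - expected
        gradient = 2 * np.mean(error * x)  # derivative of mean squared error
        w -= learning_rate * gradient      # nudge the weight downhill

    print(w)  # converges toward 3.0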

This phase of the process is often iterative. The engineer will adjust the model, the tools and methods used, the number of iterations to run during training, and other parameters to build something that can reliably correlate the input data to the correct outputs (aka, the labels). Once the engineer is happy with this correlation, they test the model using inputs not used in training to see how the model performs on unknown data. If the model underperforms on this new data, the engineer repeats the loop, shown in Figure 4, and refines the model further.

 

Figure 4. Model development is an iterative process with many steps, but it starts with data collection.
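
A hedged sketch of this loop in scikit-learn, using synthetic stand-in data rather than real machine telemetry, might look like this:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for collected accelerometer windows: 600 samples of
    # (x, y, z) features labeled 0 = idle, 1 = running, 2 = failing.
    rng = np.random.default_rng(seed=42)
    centers = [[0.0, 0.0, 1.0]] * 200 + [[0.3, 0.3, 1.0]] * 200 + [[1.8, 1.4, 0.4]] * 200
    X = rng.normal(loc=centers, scale=0.1)
    y = np.repeat([0, 1, 2], 200)

    # Hold back data the model never sees during training.
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)

    model = RandomForestClassifier()
    model.fit(X_train, y_train)

    # Test on the held-back data; if accuracy disappoints, adjust and repeat.
    print(accuracy_score(y_test, model.predict(X_test)))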

 

Once the model is ready, it is deployed and available for real-time prediction against new data. In traditional ML, the model is deployed to a cloud service so that it can be called by a running application that provides the needed inputs and receives an output from the model. The application might provide a picture and ask if a person is present, or provide a set of accelerometer readings and ask the model whether those readings correspond to an idle, running, or broken machine.
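
From the application’s perspective, calling a cloud-hosted model is often just an HTTP request. The endpoint and response format below are hypothetical, purely to show the shape of the exchange:

    import requests

    # Send new readings to the deployed model and receive its prediction.
    reading = {"x": 0.29, "y": 0.33, "z": 1.02}
    response = requests.post("https://ml.example.com/v1/predict", json=reading)
    print(response.json())  # e.g., {"label": "running", "confidence": 0.94}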

It’s at this part of the process that TinyML is so important and so groundbreaking.

 

So Where Does TinyML Fit?

If it’s not clear already, machine learning is a data-intensive process. When you are attempting to derive a model through correlation, you need a lot of data to feed that model: hundreds of images or thousands of sensor readings. In fact, the process of model training is so intensive, and so specialized, that it’s a resource hog for almost any central processing unit (CPU), no matter how high-powered. Fortunately, the vector and matrix math operations so common in ML are not dissimilar from those in graphics processing applications, which is why GPUs have become such a popular choice for model development.

Given the need for powerful compute, the cloud has become the de facto place to offload the work of training models and hosting them for real-time prediction. But while model training is, and will remain, the domain of the cloud, the closer we can move real-time prediction to the place where data is captured, the better our systems will be, especially for embedded and IoT applications. We get the benefit of built-in security and low latency when running models on microcontrollers, as well as the ability to make decisions and take action in a local environment, without relying on an internet connection to do so.

This is the domain of TinyML, where platform companies like Edge Impulse are building cloud-based tools for sensor data collection and ML architectures that output compact, efficient models purpose-built for microcontroller units (MCUs). It’s also where an increasing number of silicon vendors, from STMicroelectronics to Alif Semiconductor, are building chips with GPU-like compute capabilities that make them perfect for running ML workloads alongside your sensors, right where data is collected.
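
As one hedged illustration of what “compact, purpose-built” can mean in practice (one common path; Edge Impulse and others have their own tooling), TensorFlow’s Lite converter can shrink a trained Keras model into an optimized flat buffer small enough to embed in MCU firmware. The tiny model here is a stand-in for a real trained one:

    import tensorflow as tf

    # Stand-in for a real trained model: 3 accelerometer inputs, 3 state outputs.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(8, activation="relu", input_shape=(3,)),
        tf.keras.layers.Dense(3, activation="softmax"),
    ])

    # Convert to a TensorFlow Lite flat buffer, applying optimizations (such
    # as quantization) that shrink the model for constrained devices.
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    tflite_model = converter.convert()

    # The resulting bytes can be compiled into firmware and executed on-device
    # by an interpreter like TensorFlow Lite for Microcontrollers.
    with open("model.tflite", "wb") as f:
        f.write(tflite_model)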

For embedded and IoT engineers, there’s never been a better time to explore the world of machine learning, from the cloud to the tiniest of devices. Our systems are only growing more complex and processing more data than ever. Bringing ML to the edge means we can deal with that data and make decisions even faster.