Machine learning is a rapidly growing field, especially as deep learning moves to the edge and more engineers build applications that incorporate vision- or voice-based machine learning. The number of deep learning frameworks, tools, and related capabilities available for building and deploying neural network models continues to expand. TensorFlow Lite, an inference engine, is one such tool that has gained tremendous popularity in recent years. A relative newcomer to the field is Glow, an open source neural network compiler. This session explores the features and trade-offs of TensorFlow Lite and the Glow NN compiler, with a focus on targeting these technologies to MCUs and working within their inherent resource constraints, such as memory and power.
In Partnership with Power Integrations
In Partnership with Infineon
In Partnership with STMicroelectronics