OKdo Brings AI to Makers and Embedded Designers Alike With New Developer Kit
The new kit offers out-of-the-box support for full desktop Linux and is compatible with many popular peripherals through its Ethernet, USB, and HDMI interfaces, including 4K displays.
OKdo recently announced the latest addition to their range of products, the NVIDIA Jetson Nano Developer Kit. The new developer kit offers out-of-the-box support for full desktop Linux and is compatible with many popular peripherals through its Ethernet, USB, and HDMI interfaces, including 4K displays.
The NVIDIA Jetson Nano Developer Kit is built to allow users to meet their AI needs with 472 GFLOPS of compute performance. Image from NVIDIA
It also features a 128-core NVIDIA Maxwell GPU that allows users to develop AI applications in video analytics and robotics.
Not Only for Makers
While the unit is primarily geared toward makers and learners, this kit can be just as valuable for engineers and embedded developers. The Jetson Nano Developer Kit allows developers to run multiple neural networks in parallel for applications such as image classification, object detection, segmentation, and speech processing.
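As a rough illustration of what running multiple networks on the same input can look like in practice, the sketch below (not taken from NVIDIA's materials) feeds one frame to both a pretrained classifier and a pretrained object detector using PyTorch and torchvision. The model choices and the input file name are placeholders; on a Jetson Nano these models would typically be optimized with TensorRT for better throughput.

```python
import torch
from torchvision import models, transforms
from torchvision.transforms import functional as F
from PIL import Image

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Two independent pretrained networks that can process the same input frame.
classifier = models.mobilenet_v2(weights="DEFAULT").eval().to(device)
detector = models.detection.ssdlite320_mobilenet_v3_large(weights="DEFAULT").eval().to(device)

# Standard ImageNet preprocessing for the classifier.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = Image.open("frame.jpg").convert("RGB")  # placeholder input frame

with torch.no_grad():
    # Image classification on the frame.
    logits = classifier(preprocess(img).unsqueeze(0).to(device))
    # Object detection on the same frame (detection models take a list of tensors).
    detections = detector([F.to_tensor(img).to(device)])[0]

print("top class index:", int(logits.argmax()))
print("objects detected:", len(detections["boxes"]))
```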
We’ve previously looked at how neural networks are changing the way engineers work in various fields, and this new kit helps provide the necessary AI hardware and software for engineers to do just that. The platform is built for easy use and runs on as little as 5 watts.
The Jetson Nano kit is designed to run multiple neural networks in parallel for applications that would benefit from AI. Image from NVIDIA
The kit uses NVIDIA’s DeepStream SDK, which delivers a complete streaming analytics toolkit for AI-based video and image understanding with multi-sensor processing.
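To give a sense of what a DeepStream pipeline looks like, the sketch below launches a minimal file-to-display inference pipeline from Python via GStreamer. The element chain is adapted from DeepStream's sample pipelines; the input file and the nvinfer configuration file are placeholders, and exact element names can vary between DeepStream releases.

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# Decode an H.264 file, batch it, run the detector configured in
# detector_config.txt (placeholder), draw the results, and display them.
pipeline = Gst.parse_launch(
    "filesrc location=sample_720p.h264 ! h264parse ! nvv4l2decoder ! "
    "m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! "
    "nvinfer config-file-path=detector_config.txt ! "
    "nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink"
)

pipeline.set_state(Gst.State.PLAYING)
loop = GLib.MainLoop()
try:
    loop.run()  # run until the window is closed or the process is interrupted
except KeyboardInterrupt:
    pass
finally:
    pipeline.set_state(Gst.State.NULL)
```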
Using the Jetson Nano as a Professional
The Jetson developer kits are not meant for production use; instead, they should be used to develop and test software in a pre-production environment. For production deployment, NVIDIA offers Jetson modules rated for a 5- or 10-year operating life.
Demonstration of the Jetson Nano Developer Kit. Image from NVIDIA
However, the Jetson Nano can still be useful for professional embedded engineers and researchers. With 472 GFLOPS of computing performance, the Jetson Nano may help engineers working on projects involving object detection, video search, video enhancement, pose estimation, face recognition, and heat mapping.
How Jetson Nano Stacks Up Against Other Machine Learning Edge Devices
NVIDIA offers a number of community resources illustrating how to use the Jetson Nano, including a series of benchmark tests on machine learning edge devices performed by machine learning engineer Juan Pablo Gonzalez. Gonzalez’s report benchmarks the Jetson Nano against four other devices: the Google Coral Dev Board, Intel Neural Compute Stick, Raspberry Pi, and an NVIDIA RTX 2080 Ti GPU.
The tests measure real-time inference throughput on a one-image-at-a-time classification task, reporting top-1 accuracy on a subset of ImageNetV2 across several convolutional network (ConvNet) models.
The results showed that the Jetson Nano (when used with ResNet-50, TensorRT, and PyTorch) had the best inference time, finishing in 2.67 ms, or about 375 frames per second. In terms of relative accuracy, the Jetson Nano also had the best results, reaching 85% accuracy when used with TF-TRT and EfficientNet-B3.
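For readers who want to reproduce this kind of measurement on their own hardware, a minimal latency and throughput loop in PyTorch might look like the sketch below. This is not Gonzalez's benchmark code; the model and input size are placeholders.

```python
import time
import torch
from torchvision import models

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = models.resnet50(weights="DEFAULT").eval().to(device)
dummy = torch.randn(1, 3, 224, 224, device=device)  # batch size 1

with torch.no_grad():
    # Warm up so one-time initialization does not skew the timing.
    for _ in range(10):
        model(dummy)
    if device.type == "cuda":
        torch.cuda.synchronize()

    runs = 100
    start = time.perf_counter()
    for _ in range(runs):
        model(dummy)
    if device.type == "cuda":
        torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

latency_ms = elapsed / runs * 1000
print(f"latency: {latency_ms:.2f} ms/image ({1000 / latency_ms:.0f} FPS)")
```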
Deep learning inference performance using Jetson Nano and TensorRT with FP16 precision and batch size 1. Image from NVIDIA
NVIDIA also provides a series of benchmarks on the Jetson Nano using TensorFlow, PyTorch, Darknet, Caffe, and MXNet on various models, including ResNet-50, MobileNet-v2, and U-Net. These benchmarks cover some of the most popular networks available to users, and the same frameworks can also be applied to custom architectures.
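To show how the "TensorRT with FP16 precision, batch size 1" configuration in those benchmarks is typically set up, the sketch below builds an FP16 TensorRT engine from an ONNX export. The file names are placeholders, and the calls shown are from the TensorRT 8.x Python bindings.

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

# Parse a batch-size-1 ONNX export of the model (placeholder file name).
with open("resnet50.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # run the engine in half precision

# Serialize the optimized engine so it can be loaded for inference later.
engine_bytes = builder.build_serialized_network(network, config)
with open("resnet50_fp16.engine", "wb") as f:
    f.write(engine_bytes)
```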
NVIDIA also states that the Jetson Nano is not limited to deep neural network inference, but can be leveraged for computer vision and digital signal processing (DSP).