CubeSAT Project Launches Deep Learning Into Space
An onboard image processor from Teledyne e2v will bring deep learning capabilities to a smart nanosatellite.
Robots programmed with artificial intelligence (AI) are essential to modern space exploration. Deep learning (DL) is a subset of machine learning (ML), itself a branch of AI, in which layered neural networks sift through vast amounts of data and learn from their own mistakes, sharpening decision-making without human control.
Back in February of this year, NASA's Mars rover Perseverance survived seven minutes without any real-time control from Earth and landed safely on the Martian surface with the help of AI. The rover used an ML algorithm to scan the landing site, maneuver appropriately, and touch down safely. The rover is also equipped with tools that use AI models to perform various guidance maneuvers and evaluate rock and sediment.
NASA’s Mars Perseverance rover uses AI-based tools to help NASA search for signs of past life on the red planet. Image used courtesy of NASA
Now, Teledyne e2v Semiconductors has announced a processing module that will bring DL into space.
Teledyne’s QlevEr Sat System
With assistance from the Centre Spatial Universitaire de Grenoble (CSUG), Teledyne e2v has developed a high-performance image analysis system in which AI algorithms help a satellite camera survey large observed areas. The system, integrated onboard the QlevEr Sat, builds a binary map of those areas before transmitting the results down to Earth.
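The binary-map idea can be sketched in a few lines: a classifier scores each tile of a captured frame, and tiles whose score clears a threshold are marked 1. The tile size, threshold, and mean-brightness "classifier" below are illustrative stand-ins, not Teledyne's actual onboard model.

```python
# Sketch: build a binary map by classifying fixed-size tiles of a frame.
# The mean-brightness "classifier" is a placeholder for a real DL model.

def binary_map(frame, tile=4, threshold=128):
    """frame: 2-D list of pixel values; returns one 0/1 entry per tile."""
    rows, cols = len(frame), len(frame[0])
    out = []
    for r in range(0, rows, tile):
        row = []
        for c in range(0, cols, tile):
            pixels = [frame[i][j]
                      for i in range(r, min(r + tile, rows))
                      for j in range(c, min(c + tile, cols))]
            score = sum(pixels) / len(pixels)      # stand-in inference
            row.append(1 if score >= threshold else 0)
        out.append(row)
    return out

# Tiny 8x8 frame: bright upper-left quadrant, dark elsewhere.
frame = [[255 if (i < 4 and j < 4) else 0 for j in range(8)] for i in range(8)]
print(binary_map(frame))  # [[1, 0], [0, 0]]
```

Downlinking one bit per tile instead of full pixel data is what makes this approach attractive for bandwidth-limited nanosatellites.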
A radiation-tolerant IC integrated in a 6U CubeSat (10 cm x 20 cm x 30 cm) powers this ML processing. The system is the result of a collaboration between Teledyne e2v and CSUG that began in 2017.
Chips for onboard imaging processing. Image used courtesy of Teledyne e2v
The QlevEr Sat features Teledyne e2v’s quad-core, 1.8 GHz Qormino QLS1046-Space processing module, which pairs 64-bit Arm Cortex-A72 processor cores with 4 GB of co-packaged DDR4 memory. This AI-ready processing module also allows smart satellites to capture high-resolution images using Teledyne e2v’s 16 Mpixel Emerald CMOS image sensor.
With all the stars, debris, gasses, and darkness surrounding the satellite, it is crucial that the onboard processing unit can ingest large amounts of unlabeled data and learn from it. The QLS1046-Space can run these AI algorithms to analyze the high-resolution images produced by the Emerald sensor.
Breaking Down Benchmarks
Housed in a 0.8 mm package, the Qormino module is built with size, weight, and power constraints in mind, and it requires an input voltage of less than 2 V.
Teledyne ran several benchmark tests on the module, which showed the QLS1046-Space delivering computational performance comparable to a quad-core Intel Core i7 processor. That is solid performance for AI models on Earth, let alone in space.
Block diagram of the QLS1046A. Image used courtesy of Teledyne e2v
The deep learning benchmark demonstrated that, when running neural network inference, the system could classify a 512 x 512 onboard image in 13,562 milliseconds. This is a relatively long processing time, owing to the limitations of the module's DDR4 memory.
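To put that figure in perspective, a back-of-the-envelope calculation helps. Assuming (hypothetically; this is not a published workflow) that a full 16 Mpixel Emerald frame is processed as independent 512 x 512 tiles at the benchmarked rate:

```python
# Rough throughput estimate derived from the published benchmark figure.
tile_ms = 13_562                 # ms to classify one 512 x 512 image
tile_px = 512 * 512              # pixels per benchmark image
frame_px = 16 * 1024 * 1024     # 16 Mpixel sensor, assumed to be 2**24 pixels

tiles_per_frame = frame_px // tile_px
frame_s = tiles_per_frame * tile_ms / 1000

print(tiles_per_frame)           # 64 tiles per full frame
print(round(frame_s, 1))         # 868.0 seconds per frame at benchmark speed
```

Roughly fourteen minutes per full frame under these assumptions illustrates why memory bandwidth, not raw compute, becomes the bottleneck the article discusses next.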
Thomas Porchez, an application engineer at Teledyne e2v, discussed the importance of bringing AI to space. “By combining our next generation processing, memory and optoelectronic devices with the cutting-edge AI technology developed, QlevEr Sat overcomes [many] challenges,” he explains. “It enables image capture and subsequent processing to be carried out in even the smallest of satellite designs.”
Memory Drawbacks to Deep Learning
Because the system offers only 4 GB of memory, it cannot process images at the speed that many space systems require. The promising news is that a bump to 8 GB of DDR4 memory (not to mention upcoming DDR5 memory) could raise AI performance and improve memory management for image processing.
With that addressed, it won’t be long before deep learning algorithms analyze high-quality images in space on a broad scale.