CEVA-XM4: AI Algorithms on a Chip
CEVA markets its CEVA-XM4 as “bringing human-like intelligent vision processing to low-power embedded systems”. The processor is a licensable IP core that can be used for real-time mapping, 3D depth mapping, point cloud generation, computational photography, object detection, and context awareness using deep learning and convolutional neural networks. It specifically targets embedded applications, owing to its energy efficiency and compact size relative to more traditional GPUs.
The company envisions the processor being used in small, portable devices such as smartphones, tablets, wearables, and security cameras, as well as in automotive systems.
The CEVA-XM4 can be programmed in high-level programming languages and comes with an ADK that includes libraries, modules, and drivers to speed up development. The processor also has a full memory subsystem to simplify integration into SoCs.
- 1.2 GHz
- 8-way VLIW
- 128 MACs/cycle
- 4096-bit processing per cycle
- Fixed- and floating-point math (8/16/32/64-bit)
- Scalar and Vector DSP processing
Image courtesy of CEVA.
AR9X01: AI for Drones
Artosyn Microelectronics, based in Shanghai, China, is known for its work in computer vision, deep learning, and wireless communication. In particular, the company has been successful with controller chips.
Artosyn envisions the AR9X01 AI SoC in applications such as robotics, drones, and smart surveillance systems. Such systems are constrained in energy, communication bandwidth, and size, and the AR9X01 aims to make integrating AI capabilities into them simpler.
In the AR9X01, several CEVA-XM4 processors will be used for real-time analysis of drone flight environments, as well as for detecting, classifying, and tracking objects. Processing on board saves both energy and bandwidth, allowing longer flight times and less dependence on the operator.
Further details and specifications for the AR9X01 are yet to be released.
A Trend Towards Compact AI
Artosyn isn’t the first company to pursue AI SoCs. ARM announced its Machine Learning processor and Object Detection processor earlier this year, focusing on the mobile market. Both have been designed and developed from the ground up without relying on previously existing GPU/CPU architectures.
Both processors aim to make things like facial recognition, translation, and visual data processing even faster and more comprehensive.
Image courtesy of ARM.
Google also has AI chips in the works, some of which have been tested by Lyft and can be used for recognizing pedestrians and street signs. One particularly innovative feature of these chips is that they can be trained in a few hours instead of a few days. Google's quickly evolving Cloud TPUs (tensor processing units) are also pushing the boundaries of AI, not just in what's possible now but in what's theoretically scalable for the future. The development of these chips can also be seen as Google's attempt to gain independence from companies like NVIDIA and Intel for processing.
Beyond drones, embedded AI can be paired with virtual/augmented reality headsets, medical and wearable devices, IoT edge computing, and intelligent home automation for even more interesting applications.
Featured image courtesy of RoboHub.