News

Robot Navigation Roundup: Tracking/Depth Cameras, SLAM SDKs, Accelerators, and Cloud Navigation

June 03, 2019 by Chantelle Dubois

Here is a roundup of some of the latest trends in the robotic navigation domain, as of June 2019.

Robotic navigation is a highly specialized field in its own right, and years of significant investment in research and development have steadily improved the available technology.

The importance of robotic navigation is underscored by a recent MarketResearchReports.biz publication, titled "Robotic Mapping and Odometry," which notes that the increasing use of autonomous robots across several industries worldwide is driving rapid growth in the market.

There are many different solutions out there, each improving a different aspect of robotic navigation. Here is a roundup of a few technologies worth reading about.

Combining Tracking and Depth Perception for Reactive Visual Simultaneous Localization and Mapping (SLAM)

Intel takes a visual approach with its RealSense line of hardware, which features several depth, coded-light, and tracking cameras. In particular, the line encourages mixing and matching camera arrays to achieve better robotic navigation. One such combination pairs the T265 tracking camera with the D435 depth camera.

As the name suggests, the tracking camera tracks where the robot is by determining its pose from both visual data and inertial measurement unit (IMU) data. The depth camera provides a 3D point cloud of the objects the robot "sees." Together, this information builds an accurate map of the robot's surroundings as it explores, until its entire space is mapped.

With full awareness of its environment, the robot can react to a sudden new obstacle without completely remapping or rescanning its surroundings to decide on a path forward; it has enough information to know it can simply move around the obstacle, and it can do so quickly.

Intel also includes what it calls V-SLAM technology as part of the RealSense line; essentially, SLAM performed primarily from camera imagery. To power V-SLAM, two wide field-of-view fisheye lenses (163±5 degrees) handle visual tracking, backed by low-power, always-on, specialized vision processing units (VPUs).
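To make the tracking-plus-depth pairing concrete, here is a minimal sketch of reading both cameras together through Intel's pyrealsense2 Python wrapper. The serial numbers are placeholders, and the map-building and extrinsic-calibration steps a real pipeline would need are only noted in comments.

import numpy as np
import pyrealsense2 as rs

T265_SERIAL = "0000000000"  # placeholder: serial of the tracking camera
D435_SERIAL = "1111111111"  # placeholder: serial of the depth camera

# One pipeline per camera; enable_device() binds each config to a device.
pose_pipe, pose_cfg = rs.pipeline(), rs.config()
pose_cfg.enable_device(T265_SERIAL)
pose_cfg.enable_stream(rs.stream.pose)
pose_pipe.start(pose_cfg)

depth_pipe, depth_cfg = rs.pipeline(), rs.config()
depth_cfg.enable_device(D435_SERIAL)
depth_cfg.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
depth_pipe.start(depth_cfg)

pc = rs.pointcloud()
try:
    while True:
        # Robot pose (position + orientation) in the T265's world frame.
        pose = pose_pipe.wait_for_frames().get_pose_frame().get_pose_data()
        t = np.array([pose.translation.x, pose.translation.y, pose.translation.z])
        q = pose.rotation  # quaternion (x, y, z, w)

        # 3D point cloud of whatever the D435 currently sees (camera frame).
        depth = depth_pipe.wait_for_frames().get_depth_frame()
        verts = np.asanyarray(pc.calculate(depth).get_vertices())
        verts = verts.view(np.float32).reshape(-1, 3)

        # Rotating verts by q and translating by t places them in the world
        # frame for map building (omitted here, as is the fixed T265-to-D435
        # extrinsic calibration a real setup also needs).
finally:
    pose_pipe.stop()
    depth_pipe.stop()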

Lowering the Barrier to Entry with a Simplified SDK

The barrier to entry for many robotic navigation solutions can be quite high, especially when trying to combine advanced hardware and software.

CEVA aims to make robotic navigation more accessible by combining the CEVA SLAM-SDK with its existing lineup of specialized processors: in particular, the CEVA-XM6, a specialized computer vision processor, and the NeuPro family of specialized deep-learning AI processors.

The CEVA SLAM-SDK provides interfaces that allow processing to be offloaded from the CPU to these specialized processors. Image-processing building blocks are also included for capabilities such as feature detection, accelerated linear algebra, and other fast numerical routines important to computer vision. It also features the CEVA-CV library for OpenCV functionality, plus RTOS scheduling, all out of the box.

Image courtesy of CEVA.

Projects and products can be hampered by long, complex software development cycles, so when new hardware promising better performance is released, there is always the question of how complicated it is to use; if it is too complicated, it will never be adopted. CEVA envisions its SLAM-SDK being used in a variety of computer vision applications, including robotic navigation, AR/VR, and drones.
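The SDK itself is proprietary, but the kind of front-end building block it accelerates, detecting and matching image features between consecutive camera frames, can be illustrated with the equivalent open-source OpenCV calls. The sketch below shows the generic technique, not CEVA's actual API.

import cv2

# ORB: a fast binary feature detector/descriptor common in embedded SLAM.
orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def match_frames(prev_img, curr_img):
    """Detect and match features between two consecutive grayscale frames."""
    kp1, des1 = orb.detectAndCompute(prev_img, None)
    kp2, des2 = orb.detectAndCompute(curr_img, None)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    # The matched keypoints feed the linear-algebra stages of SLAM:
    # pose estimation, triangulation, and eventually bundle adjustment.
    return kp1, kp2, matches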

Specialized System-on-Chip Accelerators for Autonomy

One way advanced hardware is becoming more energy- and space-efficient is by combining everything an application needs into a single, highly specialized chip. SoC accelerators are not a new concept, but smaller, more powerful, and more interesting ones become available with every passing year.

Intel presented a project using one such experimental accelerator at the 2019 International Solid-State Circuits Conference (ISSCC). The team demonstrated a fleet of small robots performing coordinated tasks without any centralized processing server or human involvement. The backbone of this robotic collaboration is a customized 22 nm CMOS SoC, 16 mm² in size and drawing 37 mW of power. The SoC handles all of the sensor fusion, mapping, localization, object detection, collision detection, motion control, and path planning.

Image courtesy of Intel.

Having all of this managed from one chip significantly lowers the barrier to entry, especially for systems of multiple robots working in tandem under tight energy and space constraints. Intel suggests search-and-rescue and precision agriculture as example applications.
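For a concrete sense of one of the tasks the chip absorbs, here is a minimal A* path planner over a 2D occupancy grid in Python. It is purely an illustration of what path planning means, not Intel's on-chip implementation.

import heapq

def astar(grid, start, goal):
    """Find a shortest 4-connected path; grid cells with 1 are occupied."""
    def h(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])  # Manhattan heuristic

    rows, cols = len(grid), len(grid[0])
    frontier = [(h(start, goal), 0, start, None)]  # (f, g, cell, parent)
    came_from, cost = {}, {start: 0}
    while frontier:
        _, g, cell, parent = heapq.heappop(frontier)
        if cell in came_from:       # already finalized via a shorter route
            continue
        came_from[cell] = parent
        if cell == goal:            # walk parents back to recover the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < cost.get(nxt, float("inf")):
                    cost[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt, goal), ng, nxt, cell))
    return None  # goal unreachable

if __name__ == "__main__":
    grid = [[0, 0, 0],
            [1, 1, 0],
            [0, 0, 0]]
    # Detours right and around the occupied middle row to reach (2, 0).
    print(astar(grid, (0, 0), (2, 0)))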

Cloud-Based Processing and Navigation

Of course, on the flip side, there are situations where SLAM processing is simply not possible on the robot itself. In these cases, cloud solutions are viable.

Cloud-based processing and navigation typically involves sending a robot's sensor data to a remote server, handling all of the processing there, and sending the results back. Cloud servers are far less constrained by power and space, so much more complex, computationally heavy hardware and algorithms can be used.

Further, in a multi-agent system, information from multiple robots can be pooled in the cloud and then shared across the entire fleet. This can help map an area faster, provide more frequent updates, and give each robot broader spatial awareness.
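A rough sketch of that round trip, using Python's requests library, might look like the following; the endpoint URL, JSON fields, and offload_scan helper are invented placeholders rather than any real cloud service's API.

import requests

CLOUD_SLAM_URL = "https://cloud.example.com/api/slam/update"  # placeholder

def offload_scan(robot_id, lidar_ranges, odometry):
    """Upload one raw sensor reading; receive the corrected pose and map."""
    payload = {
        "robot_id": robot_id,   # lets the server fuse data from a whole fleet
        "scan": lidar_ranges,   # raw ranges: the heavy math runs remotely
        "odometry": odometry,   # coarse on-board motion estimate
    }
    resp = requests.post(CLOUD_SLAM_URL, json=payload, timeout=0.5)
    resp.raise_for_status()
    result = resp.json()
    # The returned map patch can include updates contributed by other
    # robots, giving each one awareness beyond its own sensors.
    return result["pose"], result["map_patch"]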


Whether the approach leans more on hardware or software, there are plenty of ways to achieve increasingly precise robotic navigation. Each approach has its own pros and cons, of course, but for many scenarios, the solutions just keep getting better.

Featured image courtesy of Intel.