“Wi-Fi Sensors” Help Robots Map Their Indoor Environment
Research from UCSD takes a new approach to simultaneous localization and mapping (SLAM).
A notoriously difficult engineering challenge in robotics is simultaneous localization and mapping (SLAM): a robot's ability to map an environment while locating itself within it.
To achieve SLAM, developers often employ a variety of hardware, software, and computer science techniques. This week, researchers from the University of California, San Diego (UCSD) published a paper in which they describe a new method for SLAM that uses Wi-Fi signals as a homing beacon.
The researchers posit that Wi-Fi sensing could replace LiDAR sensors and complement low-cost cameras for indoor positioning. Screenshot courtesy of UCSD
In this article, we’ll provide some background on SLAM and discuss how this new method proposed by the UCSD researchers takes an interesting twist on mapping technology.
How Does SLAM Work?
In order for robots to guide themselves through an environment, they must first understand both their environment and their place within it. With SLAM, a robot builds a map of its environment in real time as it traverses it, while also tracking its own location within that map.
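The build-the-map-while-localizing loop can be sketched as a toy 1D example. This is a hypothetical skeleton to show the structure, not a real estimator: each step, the robot dead-reckons from odometry, folds landmark observations into its map, and then corrects its pose against that map.

```python
# Toy 1D SLAM skeleton (hypothetical; illustrates structure only).
# pose: scalar position along a hallway
# controls: commanded displacements per step
# observations: per step, a list of (landmark_id, distance_from_robot)

def predict(pose, u):
    """Dead-reckon the pose forward from odometry."""
    return pose + u

def update_map(landmarks, pose, z):
    """Fold each observation into the landmark map by simple averaging."""
    for lid, rel in z:
        est = pose + rel
        landmarks[lid] = 0.5 * (landmarks[lid] + est) if lid in landmarks else est
    return landmarks

def correct(pose, landmarks, z):
    """Re-localize against landmarks already in the map."""
    fixes = [landmarks[lid] - rel for lid, rel in z if lid in landmarks]
    return sum(fixes) / len(fixes) if fixes else pose

def slam_loop(initial_pose, controls, observations):
    pose, landmarks = initial_pose, {}
    for u, z in zip(controls, observations):
        pose = predict(pose, u)
        landmarks = update_map(landmarks, pose, z)
        pose = correct(pose, landmarks, z)
    return pose, landmarks

# Robot starts at 0, moves 1 m twice, repeatedly sights landmark 'A':
pose, lm = slam_loop(0.0, [1.0, 1.0], [[('A', 2.0)], [('A', 1.0)]])
print(pose, lm)  # → 2.0 {'A': 3.0}
```

Real SLAM systems replace the averaging and correction steps with probabilistic estimators (e.g., factor graphs or Kalman filters), but the interleaved map-update/pose-correction structure is the same.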
The goal of SLAM is to map and understand the robot’s location within an environment. Image from Sifrobot
Mapping is often achieved through visual hardware solutions such as LiDAR, cameras, and radar. When using radar or LiDAR, the robot relies on time-of-flight data: it maps its environment by emitting pulses (radio waves for radar, laser light for LiDAR) and measuring the round-trip time of their reflections. Camera-based solutions use computer vision and statistical methods to analyze a frame and understand the depth and location of objects in the environment.
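The time-of-flight arithmetic is simple: the one-way range is half the round-trip time multiplied by the propagation speed, and each (bearing, range) pair becomes a map point. A minimal sketch (function and variable names are illustrative, not from the paper):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_s: float) -> float:
    """Convert a measured round-trip time (seconds) into range (meters)."""
    return C * round_trip_s / 2.0

def scan_to_points(pose_xy, pose_theta, scan):
    """Project (bearing_rad, round_trip_s) pulses into 2D map points,
    given the robot's pose (position and heading)."""
    points = []
    for bearing, rtt in scan:
        r = tof_distance(rtt)
        ang = pose_theta + bearing
        points.append((pose_xy[0] + r * math.cos(ang),
                       pose_xy[1] + r * math.sin(ang)))
    return points

# A wall roughly 3 m ahead reflects a pulse after ~20 ns:
print(round(tof_distance(20e-9), 3))  # → 2.998
```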
Localization, on the other hand, is achieved through statistical methods, generally by taking data points within the map and using algorithms such as the Five Point Algorithm to estimate location.
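The Five Point Algorithm itself (which recovers relative camera pose from image correspondences) is too involved to show here, but the statistical flavor of localization can be illustrated with a simpler, hypothetical stand-in: recovering a 2D position from ranges to three known map points by linearizing the circle equations.

```python
import math

def trilaterate(anchors, ranges):
    """Estimate (x, y) from three known anchors and measured ranges.
    Subtracting the first circle equation from the other two yields
    two linear equations, solved here by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = ranges
    a1, b1 = 2 * (x1 - x2), 2 * (y1 - y2)
    c1 = r2**2 - r1**2 - x2**2 + x1**2 - y2**2 + y1**2
    a2, b2 = 2 * (x1 - x3), 2 * (y1 - y3)
    c2 = r3**2 - r1**2 - x3**2 + x1**2 - y3**2 + y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
true_pos = (1.0, 1.0)
ranges = [math.dist(true_pos, a) for a in anchors]
x, y = trilaterate(anchors, ranges)
print(round(x, 6), round(y, 6))  # → 1.0 1.0
```

With noisy real-world ranges, more than three anchors would be used and the overdetermined system solved by least squares.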
SLAM Faces Many Obstacles—Literally
However, SLAM is difficult to implement in practice because of a variety of environmental and hardware limitations. As All About Circuits contributor Nicholas St. John writes, methods of visual SLAM are often limited by the dynamic environments that exist in the real world.
For example, while cameras are a low-cost hardware solution that provides context-rich maps and location estimates, they do not yield useful data in poorly-lit or homogenous settings. Similarly, while LiDAR offers immunity to glare and homogenous environments, it can be limited by range in environments like a long hallway.
These limitations can make it difficult or impossible for a robot to create a 3D map of its environment, making SLAM unachievable and autonomy infeasible. Since there is no way to control all environments in a real-world application, researchers must look for other methods of 3D mapping for SLAM.
UCSD Devises a Way to Use Wi-Fi to Map
This week, researchers from UCSD published a paper in which they describe a new method for SLAM called P2SLAM.
P2SLAM works by communicating over Wi-Fi with local access points. The robot is equipped with Wi-Fi sensors that allow it to both send and receive Wi-Fi signals. At initialization, the robot calls out to the local access points and waits for their replies, almost like a game of Marco Polo. From the physical properties of each signal, such as its angle of arrival and direct path length, the robot estimates where it is relative to the access point. As this process continues, the robot gathers more information about its environment and its location within it until it develops a full picture.
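A single such measurement already pins down a position. Here's a hedged sketch (not the researchers' implementation, and the function name is hypothetical): given one access point's known position and orientation, an angle of arrival, and a direct path length, the robot lies one path-length away from the AP along the measured bearing. P2SLAM fuses many such bearings over time to refine both the map and the trajectory.

```python
import math

def locate_from_ap(ap_xy, ap_heading_rad, aoa_rad, path_len_m):
    """Place the robot relative to one access point: step path_len_m
    from the AP along the direction (AP heading + angle of arrival)."""
    ang = ap_heading_rad + aoa_rad
    return (ap_xy[0] + path_len_m * math.cos(ang),
            ap_xy[1] + path_len_m * math.sin(ang))

# An AP at the origin facing +x hears the robot 30 degrees to its left,
# at a direct-path distance of 5 m:
x, y = locate_from_ap((0.0, 0.0), 0.0, math.radians(30), 5.0)
print(round(x, 3), round(y, 3))  # → 4.33 2.5
```

In practice, angle-of-arrival and path-length estimates from Wi-Fi channel state are noisy and multipath-corrupted, which is why the researchers fuse them with the robot's odometry rather than trusting any single fix.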
P2SLAM uses relative odometry with Wi-Fi signals to perform SLAM. Image from Arun et al.
According to the researchers, a significant benefit of this approach is that P2SLAM is not vision-based. As such, it does not suffer from environmental limitations such as poor lighting and homogeneity. In addition, Wi-Fi signals are present in most domestic and commercial environments, making this method essentially free to use and accessible in most locations.
A Comparable Result to LiDAR
The researchers tested their unique Wi-Fi-based SLAM technology in a commercial building that included multiple access points. From there, the team equipped a robot with Wi-Fi sensors, LiDAR, and a camera to compare how the three technologies mapped their environment.
After the robot made several trips around the floor, which included bright and dimly-lit spaces, long and narrow hallways, and several corners, the researchers found that mapping and localization data from the Wi-Fi sensors were just as accurate as data from the LiDAR sensor and commercial camera.
P2SLAM navigating an office. Screenshot courtesy of UCSD
While the research shows promise, it is the first of its kind, and many questions must be answered before P2SLAM, currently a proof of concept, can be implemented in production.