SLAM on Spiri


Simultaneous Localization and Mapping

SLAM is a set of algorithms that work together to simultaneously build a map of a robot's surroundings and estimate the robot's location within that map. Just as humans rely on a combination of our eyes and inner ears to orient ourselves in space, Spiris use a fusion of stereo cameras, magnetometers, gyroscopes, and GPS.

Oriented FAST and Rotated BRIEF

FAST and BRIEF are computer vision algorithms, and are components underlying the type of SLAM (ORB_SLAM2) running on Spiri. FAST (Features from Accelerated Segment Test) is a "feature detector," a way to find corners in an image. For each pixel in a frame, the algorithm examines a circle of surrounding pixels and flags the pixel as a corner if a contiguous arc of that circle is much brighter or darker than the pixel itself. BRIEF (Binary Robust Independent Elementary Features) is a "feature descriptor," an algorithm that describes patches of an image by comparing the intensities of pairs of pixels in the patch, yielding a compact binary string that can be matched quickly between frames.
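
As a rough illustration, the snippet below runs OpenCV's FAST detector and then computes ORB descriptors (ORB's descriptor is rotated BRIEF) on a single grayscale frame. This is a minimal sketch rather than the code running on Spiri: the filename, threshold, and parameter choices are placeholders.

    import cv2

    # Load one camera frame in grayscale (placeholder filename).
    img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

    # FAST: flag a pixel as a corner when a contiguous arc of the circle of
    # pixels around it is much brighter or darker than the pixel itself.
    fast = cv2.FastFeatureDetector_create(threshold=20)
    keypoints = fast.detect(img, None)

    # ORB: compute rotated-BRIEF binary descriptors for those keypoints by
    # comparing intensities of pairs of pixels in a patch around each one.
    orb = cv2.ORB_create()
    keypoints, descriptors = orb.compute(img, keypoints)

    print(len(keypoints), "corners;", descriptors.shape[1], "bytes per descriptor")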

ORB_SLAM2

The SLAM algorithm running on the Spiri Mu is ORB_SLAM2, enhanced with stereo vision and graphics acceleration. The algorithm works in three steps, repeated again and again while the robot is in motion. The first step, detection, recognizes features in the frame using FAST and describes them using BRIEF. At this point a tentative estimate is made of the Spiri’s pose (its position and angular orientation) relative to those features, and the frame is compared with any local maps already in Spiri’s memory. The second step is local mapping: the local map that best matches the tentative pose estimate is found and may be updated by adding new features or culling unreliable ones. The third step, loop closing, checks the tentative pose and map estimates for consistency, corrects accumulated errors, and performs a final refinement of the pose estimate.
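
The three steps can be pictured as the skeleton below. This is only a structural sketch with hypothetical names, not Spiri's code; the real ORB_SLAM2 runs tracking, local mapping, and loop closing in parallel threads rather than strictly in sequence.

    def detect_and_describe(frame):
        """Step 1a: find FAST keypoints and compute BRIEF descriptors (stub)."""
        return []

    def estimate_pose(features, local_map):
        """Step 1b: tentative position and orientation from feature matches (stub)."""
        return (0.0, 0.0, 0.0), (0.0, 0.0, 0.0, 1.0)   # position, orientation quaternion

    def update_local_map(local_map, features, pose):
        """Step 2: add new features to the best-matching map, cull unreliable ones (stub)."""
        return local_map

    def close_loop(pose, local_map):
        """Step 3: check consistency, correct accumulated drift, refine the pose (stub)."""
        return pose, local_map

    def slam_step(frame, local_map):
        features = detect_and_describe(frame)
        pose = estimate_pose(features, local_map)
        local_map = update_local_map(local_map, features, pose)
        pose, local_map = close_loop(pose, local_map)
        return pose, local_map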

Performance Boosts

We take full advantage of Spiri’s feature set to make ORB_SLAM2 run better. Three things stand out. First, we enhance every step of the operation with graphics acceleration. Spiri’s main computer is designed with graphics processing in mind, and we compile and set up our SLAM system with CUDA support, putting the GPU’s 256 Pascal-architecture cores to work on the many simultaneous calculations the algorithm demands. Second, we use stereo vision, which keeps the scale consistent from one local map to the next. Third, we publish SLAM results as a ROS topic, making them available to the navigation system.
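
For example, a navigation node written with rospy could consume the published pose roughly as follows. The topic name "/slam/pose" and the PoseStamped message type are assumptions for illustration, not the actual names used on Spiri.

    import rospy
    from geometry_msgs.msg import PoseStamped

    def on_pose(msg):
        # Use the SLAM pose estimate, e.g. feed it to a local planner.
        p = msg.pose.position
        rospy.loginfo("SLAM pose: x=%.2f y=%.2f z=%.2f", p.x, p.y, p.z)

    rospy.init_node("slam_pose_listener")
    rospy.Subscriber("/slam/pose", PoseStamped, on_pose)   # assumed topic name
    rospy.spin()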

Applications for SLAM

The primary application for SLAM is precise navigation indoors, or anywhere else that satellite navigation signals are unreachable. Even outdoors, SLAM makes navigation more precise than GPS or similar systems are on their own. Maps can be stored, which means they can also be shared, cooperatively built, and optimized. SLAM can also work in tandem with obstacle avoidance algorithms.