SLAM (Simultaneous Localization and Mapping)
What is SLAM (Simultaneous Localization and Mapping) in Humanoid Robotics?
A technique for building a map of an unknown environment while tracking the robot's location within it.
SLAM is essential for autonomous navigation in new spaces, allowing robots to explore while creating spatial maps they can use for future navigation.
How SLAM (Simultaneous Localization and Mapping) Works
SLAM addresses a chicken-and-egg problem: to localize, you need a map, but to build a map, you need to know your position. The algorithm therefore maintains both estimates simultaneously:
- As the robot moves, sensors (LiDAR, cameras) detect environmental features such as corners, edges, and distinctive objects.
- These observations are matched against previously seen features to estimate how far the robot has moved.
- The movement estimate updates the robot's position on the growing map, and new features are added to the map at their calculated positions.
- Loop closure detection recognizes when the robot returns to a previously mapped area, allowing the system to correct accumulated drift.
- Graph optimization techniques then refine both the map and the position history to keep the two consistent.
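The interplay between drifting odometry and loop closure can be sketched in a few lines. The snippet below is a deliberately minimal, hypothetical illustration (not a real SLAM library): a robot dead-reckons out and back with a systematic odometry drift, then a loop-closure correction spreads the end-pose error linearly over the trajectory, a crude stand-in for full graph optimization.

```python
# Minimal loop-closure sketch; all function names are illustrative.

def dead_reckon(poses, step, drift):
    """Advance the last (x, y) pose by `step` along x, plus systematic drift."""
    x, y = poses[-1]
    poses.append((x + step + drift, y))

def loop_closure_correct(poses, true_start):
    """On revisiting the start, distribute the end-pose error linearly
    over the pose history (a crude stand-in for graph optimization)."""
    ex = poses[-1][0] - true_start[0]
    ey = poses[-1][1] - true_start[1]
    n = len(poses) - 1
    return [(x - ex * i / n, y - ey * i / n) for i, (x, y) in enumerate(poses)]

poses = [(0.0, 0.0)]
for _ in range(4):              # drive 4 steps out...
    dead_reckon(poses, 1.0, 0.1)
for _ in range(4):              # ...and 4 steps back, drifting each step
    dead_reckon(poses, -1.0, 0.1)

corrected = loop_closure_correct(poses, (0.0, 0.0))
print(round(poses[-1][0], 2))      # drifted estimate: 0.8
print(round(corrected[-1][0], 2))  # corrected back to the start: 0.0
```

Real systems replace the linear correction with nonlinear least-squares over a pose graph, but the principle is the same: recognizing a revisited place anchors the trajectory and lets accumulated error be redistributed.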
Types of SLAM (Simultaneous Localization and Mapping)
- Visual SLAM: Uses cameras to detect and track features
- LiDAR SLAM: Creates 3D point cloud maps from laser scans
- RGB-D SLAM: Uses depth cameras for dense 3D mapping
- Feature-based SLAM: Tracks specific landmarks or features
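A core step shared by the feature-based variants above is data association: deciding whether a new observation corresponds to a landmark already in the map or to a previously unseen one. The sketch below is a hypothetical, simplified nearest-neighbor association with a distance gate; the names and threshold are illustrative assumptions, not a specific system's API.

```python
import math

def associate(observation, landmarks, gate=0.5):
    """Return the index of the closest landmark within `gate`, else None."""
    best_i, best_d = None, gate
    for i, lm in enumerate(landmarks):
        d = math.dist(observation, lm)
        if d < best_d:
            best_i, best_d = i, d
    return best_i

landmarks = [(0.0, 0.0), (5.0, 0.0)]
for obs in [(0.1, -0.1), (4.9, 0.2), (2.5, 2.5)]:
    i = associate(obs, landmarks)
    if i is None:
        landmarks.append(obs)   # unmatched observation: add a new landmark

print(len(landmarks))  # the third observation became a new landmark: 3
```

Production systems gate matches with descriptor similarity (e.g. visual feature descriptors) and statistical distance rather than raw Euclidean distance, but the match-or-extend-the-map decision is the same.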