The code for this section is located in the navigation_bot_03 folder of the code archive. The robot that we create in this section will be able to drive around under keyboard control, and as it drives, the lidar will build a map of the environment. We will then be able to save this map to disk for future use.
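Looking ahead, a common way to save the finished map is with the map saver tool from the nav2_map_server package (a quick sketch; the output name my_map is just a placeholder):

    ros2 run nav2_map_server map_saver_cli -f my_map

This writes my_map.pgm (the occupancy grid image) and my_map.yaml (its metadata) into the current directory.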
Now that we have a robot with its sensors set up, we can use the obtained sensor information to build a map of the environment and to localize the robot on it. The slam_toolbox package provides a set of tools for 2D Simultaneous Localization and Mapping (SLAM) in ROS2, capable of handling potentially massive maps. It is also one of the officially supported SLAM libraries in Nav2.
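To give a feel for how slam_toolbox is typically started, here is a sketch of launching its asynchronous online mapper with a custom parameter file (the YAML path is a placeholder and depends on your package layout):

    ros2 launch slam_toolbox online_async_launch.py \
        slam_params_file:=/path/to/mapper_params_online_async.yaml use_sim_time:=true

The asynchronous variant always processes the most recent scan rather than blocking on every measurement, which makes it a common choice for live mapping.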
Aside from slam_toolbox, localization can also be implemented through the nav2_amcl package. This package implements Adaptive Monte Carlo Localization (AMCL), which estimates the position and orientation of the robot in a map. Other techniques are also available; see the Nav2 documentation for more information.
Both slam_toolbox and nav2_amcl use data from the laser scan sensor to perceive the robot's environment. To ensure that they can access the laser scan readings, we must make sure they subscribe to the topic on which the sensor_msgs/LaserScan messages are published. This is configured through their scan_topic parameter. By convention, sensor_msgs/LaserScan messages are published on the /scan topic, so the scan_topic parameter defaults to /scan.
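For reference, the relevant entries in the parameter files look like this (a minimal sketch showing only the scan_topic lines; real parameter files contain many more settings):

    amcl:
      ros__parameters:
        scan_topic: scan

    slam_toolbox:
      ros__parameters:
        scan_topic: /scan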
Note that when we added the lidar sensor to our robot in the previous section, the lidar topic was different, so let's change it.
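Since the exact markup depends on how the lidar plugin was declared in the previous section, the snippet below is a sketch that assumes the standard gazebo_ros_ray_sensor plugin; the link and plugin names are illustrative, and the essential change is remapping the plugin's output topic to scan.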
lidar.xacro (part of):
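    <gazebo reference="lidar_link">
      <sensor name="lidar" type="ray">
        ...
        <plugin name="lidar_controller" filename="libgazebo_ros_ray_sensor.so">
          <ros>
            <!-- Remap the plugin's default output topic (~/out) to the conventional scan topic -->
            <remapping>~/out:=scan</remapping>
          </ros>
          <output_type>sensor_msgs/LaserScan</output_type>
        </plugin>
      </sensor>
    </gazebo>

With this remapping in place, the plugin publishes sensor_msgs/LaserScan messages on the /scan topic, which matches the default scan_topic of both slam_toolbox and nav2_amcl.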
For the complete list of configuration parameters of slam_toolbox, see the GitHub repository of slam_toolbox.
For the complete list of configuration parameters and an example configuration of nav2_amcl, see the AMCL Configuration Guide.
You can also refer to the (SLAM) Navigating While Mapping guide for a tutorial on how to use Nav2 with SLAM.