The code for this section is located in the nav25d_02 folder of the archive. Note that the archive also includes some shared folders, such as maps and worlds, that are used by all projects and are therefore located outside of them. Also note that we use ROS2 Galactic.
In a previous section, I introduced rather simple code for robot localization using a Kalman filter. Now it is time to port it to ROS2, so we can do robot localization in the Gazebo simulator.
Also, some errors will be fixed and explained: both my own mistakes and errors caused by Gazebo's imperfections.
Now, the code of this chapter is not final: there is one more aspect that we will have to explore in the future. What if sensor readings are wrong? Let's say we have a compass, and we use it to figure out the robot's heading. Then we pass by a metal structure or a power line that has a magnetic field of its own. Our compass provides wrong data, and the accuracy of our localization drops, as we have no way of knowing which sensor (compass or odometry) is wrong. I am going to address this in later sections.
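To see why a corrupted reading matters, here is a minimal sketch (not from this chapter's code; the function and variable names are mine) of a scalar Kalman update for the heading. The filter has no idea the compass is biased, so it dutifully pulls the estimate toward the bad measurement:

```python
import math

def kalman_update(est, est_var, meas, meas_var):
    """One scalar Kalman update: fuse a heading estimate with a compass reading."""
    k = est_var / (est_var + meas_var)   # Kalman gain: how much we trust the measurement
    new_est = est + k * (meas - est)     # corrected estimate
    new_var = (1.0 - k) * est_var        # uncertainty shrinks after the update
    return new_est, new_var

# True heading is 1.0 rad; the odometry-based estimate is close to it.
est, est_var = 1.02, 0.05
# A power line biases the compass by roughly 0.5 rad, but meas_var is
# unchanged, so the filter still trusts the reading and is pulled off course.
biased_meas, meas_var = 1.5, 0.02
est, est_var = kalman_update(est, est_var, biased_meas, meas_var)
print(est)  # the estimate moves most of the way toward the biased reading
```

The filter ends up confident and wrong at the same time, which is exactly the failure mode we will have to detect later.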
Also, sensor fusion is not done here; it is postponed until later sections.
Also also, landmarks (implemented in the non-ROS2 code) are not implemented here. The reason is that we do not have tools to detect landmarks yet. I will add landmarks - you guessed it! - in a later section.
In ROS2, there are many ways of creating the demo we need. What I chose is the quick and dirty one: I created a single ROS2 node that commands the robot where to move by publishing the desired linear and angular speeds for its differential drive. It also reads data from the robot's sensors and feeds it to the Kalman filter. This matters if we ever want some kind of a "brain" that controls all our robots: pairing that brain with localization is simply wrong, as each robot should do its own localization. Or maybe not: imagine that one robot can treat another robot as a landmark, so when robots see each other, they can adjust their positions.
So, this approach is not ideal: ideally, we need one entity to send commands and a completely separate one to do localization. Therefore, this node will be split into two nodes in the future.
Finally, this code is for a 2D world; I will re-implement it for a 3D world later, but it is much easier to explain without the Z axis and all the quaternions and matrix operations floating around.
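Even in 2D we cannot avoid quaternions entirely, because ROS2 messages such as nav_msgs/Odometry report orientation as one. All we need from it, though, is the yaw angle, via the standard conversion:

```python
import math

def yaw_from_quaternion(x, y, z, w):
    """Extract yaw (rotation about the Z axis) from a quaternion."""
    return math.atan2(2.0 * (w * z + x * y),
                      1.0 - 2.0 * (y * y + z * z))

# A pure 90-degree rotation about Z: q = (0, 0, sin(45 deg), cos(45 deg)).
yaw = yaw_from_quaternion(0.0, 0.0, math.sin(math.pi / 4), math.cos(math.pi / 4))
print(yaw)  # pi / 2
```

With yaw extracted, the rest of the 2D filter works with plain angles, which is exactly what makes it easier to explain.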
To test the code, I am going to use the same robot I used before. But I am going to move the IMU sensor to its center to improve accuracy. To tell the truth, it will not help because of Gazebo problems, but it is good practice anyway.
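In the robot's SDF this comes down to zeroing the sensor's pose offset. A sketch of the relevant fragment (the link and sensor names here are placeholders, not necessarily the ones used in this project's model):

```xml
<link name="base_link">
  <!-- IMU placed at the link origin, i.e. the robot's center -->
  <sensor name="imu_sensor" type="imu">
    <pose>0 0 0 0 0 0</pose>
    <always_on>true</always_on>
    <update_rate>100</update_rate>
  </sensor>
</link>
```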