In robotics, the term is used to describe the process of figuring out where the robot is. For navigation, we need to know the robot's current location, and the higher the accuracy, the better. For example, someone might think that 30 cm accuracy is very good for a robot (and it is good, and hard to achieve). However, for a 60 cm wide robot, a 30 cm error means it cannot move through a door! Well, not every time.
In the example below, you will see the robot trying to follow a path made of 10 cm segments and deviating from it wildly. And that is in a simulation, with a clean laser scan and almost non-slippery tires.
To perform accurate localization, we use the following techniques:
Odometry. The robot (its differential drive controller) knows how many rotations the wheels made; together with the tire size and the distance between the wheels, that is enough to estimate the coordinates.
Downside: all calculations are performed relative to the starting point. Usually, this starting point is called the odom frame, and it is attached to the world frame using a static transformation (see below). You can think of it as a rope between the robot in its current position and the robot's initial position.
And as we have slippery tires, the air pressure in the tires can be slightly different, and the motors can produce different torque... there will be an error, and this error will accumulate.
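The odometry idea above can be sketched in a few lines. This is a toy integration step, not the actual diff_drive_controller code; the wheel radius and wheel separation values are made-up parameters for illustration:

```python
import math

def update_pose(x, y, theta, d_left_rot, d_right_rot,
                wheel_radius=0.05, wheel_separation=0.30):
    """Integrate one odometry step from wheel rotations (in radians)."""
    # Distance traveled by each wheel
    d_left = d_left_rot * wheel_radius
    d_right = d_right_rot * wheel_radius
    # Linear and angular displacement of the robot body
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / wheel_separation
    # Integrate in the odom frame (midpoint heading approximation)
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return x, y, theta

# Driving straight: both wheels rotate by 2.0 rad
x, y, theta = update_pose(0.0, 0.0, 0.0, 2.0, 2.0)
print(x, y, theta)  # -> 0.1 0.0 0.0 (moved 10 cm along x, no rotation)
```

Any error in `wheel_radius` or `wheel_separation` (or slippage) feeds back into the next step, which is exactly why odometry error accumulates.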
Inertial sensors. An IMU has the same problem odometry has: accumulating errors. However, as it is a different source of data, we can use techniques like the Kalman filter to combine it with odometry and achieve better accuracy.
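The core idea behind Kalman-style fusion can be shown with a toy 1D example: two independent noisy estimates of the same quantity, weighted by their variances. All numbers here are made up for illustration:

```python
def fuse(est_a, var_a, est_b, var_b):
    """Minimum-variance fusion of two independent estimates."""
    k = var_a / (var_a + var_b)          # Kalman gain
    est = est_a + k * (est_b - est_a)    # variance-weighted average
    var = (1.0 - k) * var_a              # fused variance: smaller than either input
    return est, var

# Odometry says x = 1.00 m (variance 0.04);
# integrated IMU says x = 1.10 m (variance 0.09)
est, var = fuse(1.00, 0.04, 1.10, 0.09)
print(est, var)  # estimate lands between the two, variance below both inputs
```

The fused variance is always smaller than either input variance, which is why combining two imperfect sensors beats trusting either one alone.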
Laser scans, GPS, and other external sources. Unlike "internal" odometry and the IMU, a LIDAR or GPS can give us coordinates that do not have an accumulating error. In other words, if GPS provides ±10 m accuracy NOW, it will provide the same accuracy 24 hours later.
As you can guess, we can combine data from all sensors: odometry + IMU + GPS using a Kalman filter, for example, while LIDAR is usually used with particle filters... and accuracy improves. In ROS2, the Kalman filter is implemented in the robot_localization node, while the particle filter is in AMCL; in addition, there is a large number of 3rd-party modules providing different versions of these (and some other) algorithms.
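To make this concrete, here is a minimal sketch of an `ekf_node` configuration from the robot_localization package, fusing odometry velocities with IMU yaw. The topic names `/odom` and `/imu` are assumptions; adjust them (and the enabled fields) for your robot:

```yaml
ekf_filter_node:
  ros__parameters:
    frequency: 30.0
    two_d_mode: true          # planar robot: ignore z, roll, pitch
    odom_frame: odom
    base_link_frame: base_link
    world_frame: odom         # publish odom -> base_link transform

    odom0: /odom              # assumed topic name
    # Field order: x, y, z, roll, pitch, yaw,
    #              vx, vy, vz, vroll, vpitch, vyaw, ax, ay, az
    odom0_config: [false, false, false, false, false, false,
                   true,  true,  false, false, false, true,
                   false, false, false]

    imu0: /imu                # assumed topic name
    imu0_config: [false, false, false, false, false, true,
                  false, false, false, false, false, true,
                  true,  false, false]
```

Note the pattern: take velocities from odometry and orientation/angular velocity from the IMU, so each sensor contributes what it measures best.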
Why path following? Earlier, I used Nav2, a ROS2 navigation library that has some nice navigation utilities. As you may recall, I didn't need localization then. The reason is that Nav2 modules were subscribing to the corresponding topics without asking permission (and sometimes doing very strange things if something was wrong). So, to create the demos for this section, I needed a navigation example that I could control. This is the first reason.
The second reason is that Nav2 does what it believes is necessary, not what you need. Sometimes that is fine, sometimes not. For example, you can navigate to a waypoint, but can you navigate to it using a road with two lanes, one in "your" direction and one with oncoming traffic? Probably not. There are algorithms for this, but they are not in Nav2 yet (I believe so, and if I am wrong, it only means they have no tutorials worth mentioning. I am sorry, I am not supposed to say that. But how can you spend years writing code and minutes documenting it?).
Anyway, this section is a good starting point in case you want to create your own navigation algorithm. That is the second reason.