The code for this section is located in the navigation_bot_11 part of the archive (the same folder as the code for the previous section). The only difference: the previous section used 11_charger_docking.py, while this section uses 11a_charger_docking.py. Note that the archive includes some additional folders, like maps and worlds, that are shared by all projects and are therefore located outside of them. Note also that we now use ROS2 Galactic.
Before getting into details, let me give a quick review of all three "docking" sections.
The bot patrols between a couple of waypoints as long as its battery level is high enough. It then drives to a charger, restores its charge, resumes patrolling, and so on, forever.
The first section (10_charger_docking.py) used wheel odometry only. As a result, position error accumulated, which created some nasty side effects, especially when the bot ran for extended periods of time.
The way bot rotation and navigation were implemented in the first section was rather straightforward. Take rotation as an example: we want the bot to rotate 45 degrees at 10 degrees per second. We divide 45 by 10 to get 4.5 seconds, send the bot a command to start rotating, and stop it 4.5 seconds later.
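The timing arithmetic above can be sketched as a small helper. This is a minimal illustration, not the actual code from 10_charger_docking.py; the function name is hypothetical, and in the real bot the returned velocity would go into a `geometry_msgs/Twist` message:

```python
import math

def timed_rotation_command(target_deg, speed_deg_per_s):
    """Open-loop rotation: derive how long to publish a constant
    angular velocity so the bot (ideally) covers the target angle."""
    duration_s = target_deg / speed_deg_per_s  # 45 / 10 = 4.5 seconds
    angular_z = math.radians(speed_deg_per_s)  # ROS Twist uses rad/s
    return angular_z, duration_s

angular_z, duration_s = timed_rotation_command(45.0, 10.0)
# In the real bot: publish Twist(angular.z=angular_z), wait duration_s,
# then publish a zero Twist to stop.
```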
That might have worked for a 2-wheeled bot on perfect ground without slippage, but our bot has 4 wheels, and there is slippage. So the resulting angle is not 45 degrees. The same holds true, though to a lesser degree, for driving forward.
To solve the problem, I used bot coordinates from AMCL (11_charger_docking.py). This node consumes lidar / camera / IMU / odometry data, and it can correct errors by recognizing "landmarks". It is not precise, but, at least in theory, the error should not accumulate.
In the case of rotating the bot, we have a start angle alpha and a target angle alpha plus beta, and we rotate until AMCL reports the robot's orientation to be alpha plus beta. No timer is involved; the angle is measured directly.
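The stop condition for such a closed-loop rotation can be sketched as follows. This is an illustration with hypothetical function names, not the code from 11_charger_docking.py; the one subtlety worth showing is angle wrap-around, which must be handled when comparing the AMCL yaw against alpha plus beta:

```python
import math

def ang_diff(target, current):
    """Shortest signed difference between two angles, in radians.
    atan2 of sin/cos handles wrap-around at +/-pi correctly."""
    return math.atan2(math.sin(target - current), math.cos(target - current))

def rotation_done(alpha, beta, amcl_yaw, tol=math.radians(2.0)):
    """Keep turning until the AMCL-reported yaw reaches alpha + beta,
    within a small tolerance."""
    return abs(ang_diff(alpha + beta, amcl_yaw)) < tol
```

In the real node the loop would keep publishing an angular velocity each cycle and stop once `rotation_done(...)` returns True.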
According to my experiments (which you can repeat using the provided code), navigation accuracy improved to an acceptable level.
Finally, in this third section (11a_charger_docking.py) I addressed the last remaining problem. After the robot moves toward the "line on the floor that is perpendicular to the projection of the marker onto the floor" (see 10_charger_docking.py for images) and then turns toward the charger, it is aligned, but not necessarily accurately enough. We cannot simply say "ok, the distance between the bot and the charger is 2 meters, let's drive forward for 2 meters" - we would miss the charger by a small but significant margin: 10-15, maybe even 20-30 centimeters.
So we need a way to drive toward the charger while staying on the perpendicular line. This is the code I have added.
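Staying on the line while driving forward is essentially a cross-track correction problem. Below is a minimal sketch of the idea, assuming a simple proportional controller; the function names and the gain are my own illustration, not the actual implementation in 11a_charger_docking.py:

```python
def cross_track_error(bot_xy, line_point, line_dir):
    """Signed lateral distance from the bot to the approach line.
    line_dir must be a unit vector pointing along the line, toward the
    charger; the sign tells us which side of the line the bot is on."""
    dx = bot_xy[0] - line_point[0]
    dy = bot_xy[1] - line_point[1]
    # 2D cross product of line direction and offset vector
    return line_dir[0] * dy - line_dir[1] * dx

def steering(bot_xy, line_point, line_dir, k=1.5, max_turn=0.5):
    """P-controller: angular velocity that turns the bot back toward
    the line, proportionally to the lateral error, clamped for safety."""
    err = cross_track_error(bot_xy, line_point, line_dir)
    return max(-max_turn, min(max_turn, -k * err))
```

Each control cycle the bot would drive forward at a constant speed and apply `steering(...)` as its angular velocity, so lateral drift is corrected continuously instead of accumulating.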
Also, the camera the bot uses has a minimum range (see camera.xacro) of 0.8 meters. So in the code, when the bot approaches the charger to within 1 meter, it does exactly what I referred to above as a bad idea: it blindly drives forward for 90 cm, stopping 10 cm before the charger.
There is nothing I can do here, because in order to fix this "last meter" problem, I would have to know how the charger works, what its contact pads look like, and so on. There are many ways of solving that challenge; for example, we could use the approach heat-guided missiles apply (except that, as our robot moves in 2D rather than 3D, it requires 2, not 3-4, heat sensors). The approach is exactly the same as with the camera, except heat sensors work at ultra-short distances.
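To make the heat-seeking idea concrete, here is a hypothetical sketch of what the two-sensor steering could look like. None of this exists in the provided code; it only illustrates the differential principle described above (turn toward the hotter side):

```python
def heat_steering(left_reading, right_reading, gain=0.8):
    """Differential steering from two forward-facing heat sensors.
    Positive result means turn left (ROS convention: positive angular.z
    is counter-clockwise); zero when both sensors read the same."""
    total = left_reading + right_reading
    if total == 0:
        return 0.0  # no heat source detected; hold course
    # Normalized difference keeps the command independent of absolute
    # signal strength, which grows rapidly as the bot closes in.
    return gain * (left_reading - right_reading) / total
```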
In short, this section is based on the previous section, with additional improvements.