This functionality is implemented in the Navigation.py file.
First of all, we have to decide how to position our landmarks. A simple experiment shows that, with the camera our robot uses and a marker size of 1 m, the distance at which the OpenCV algorithms recognize a marker with confidence is about 20 meters. This is understandable, as the camera is just 1000x500 pixels. Still, I am not going to change it at the moment (though for a real-life robot, I definitely should). It means we have to position the landmarks about 20 m apart.
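For a rough sanity check of that 20 m figure, here is a back-of-the-envelope estimate of the maximum detection distance. The horizontal field of view and the minimum number of pixels needed across a marker are my assumptions, not values from the project, so treat this as a sketch only:

```python
import math

# Rough estimate of the maximum aruco detection distance.
# Assumptions (not from the project): horizontal FOV of ~80 degrees and
# a minimum of ~25 pixels across the marker for reliable detection.
def max_detection_distance(image_width_px=1000, marker_size_m=1.0,
                           hfov_deg=80.0, min_marker_px=25):
    # The visible scene width at distance d is 2 * d * tan(hfov / 2), so the
    # marker spans image_width_px * marker_size_m / scene_width pixels.
    # Solving "marker span >= min_marker_px" for d gives:
    return (image_width_px * marker_size_m) / \
           (2 * min_marker_px * math.tan(math.radians(hfov_deg) / 2))

print(round(max_detection_distance(), 1))  # roughly 20-25 m with these numbers
```

With these (assumed) numbers the estimate lands in the same 20-something-meter range as the experiment.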
Well, as I mentioned, this is not exactly an outdoor approach. Though, if we use a 4000x3000 camera, we get 24 times as many pixels, which should improve the detection range considerably (I have not tested this yet); plus, our robot can use an additional camera with a zoom lens... and so on.
Second, Gazebo has some kind of problem that I do not quite understand yet. Sometimes the aruco functionality simply does not work, and to fix it, we need to either restart the application or use an additional command during compilation :)
Let's hope it will work better when I switch to a newer Ubuntu / ROS2 / Gazebo.
In the code, there are commented-out print statements:
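The exact lines in Navigation.py may differ; roughly, they look like this (marker ID and distance variable names are illustrative):

```python
# Commented-out debug output for detected aruco markers:
# for marker_id, distance in detected_markers.items():
#     print(f"aruco {marker_id}: distance = {distance:.2f} m")
# if not detected_markers:
#     print("None")
```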
If you uncomment them, you will see the distance printed for each detected aruco marker, or "None" if no marker was detected in the current frame.
As you can see, half of the time either the camera points in the wrong direction, or the markers are too far away. Still, the UKF is able to hold on between successful measurements. If you get a longer listing, you will see measurements for two or even three markers in the same frame... but usually it is one at a time. In our setup the accuracy is not great, but as I mentioned, it can be improved.
The Navigation.py code is well documented, so I am not going to repeat it here.
First of all, we need to do something with the design of our robot. I have already mentioned that it could use a better camera. Also, if you look at the design of, say, modern delivery drones, you will see a lot of extras which are useful... yet add to the price as well as to the required processing power. For example:
On the one hand, such an approach immediately gives us a 360-degree view (more markers to attach our position to) as well as a ground view (the bottom cameras in the picture above are probably there to look for potential obstacles on the surface).
On the other hand, this design is not for a single processor. Humans and animals use an alternative approach: they can look in different directions. Should I put the robot's camera on some kind of robotic arm? Maybe.
Next, I will probably create some kind of house (a Gazebo model) to test indoor navigation. With markers much closer to the camera, we can achieve better accuracy even with the existing design.
Using non-aruco markers will require either some kind of advanced (NN-based, perhaps) image recognition, or some alternative design for the markers. Well, nothing prevents us from making markers visible in IR light only, so they do not spoil the view of our property, or we can use (again, IR) lamps flashing at different frequencies, or... There are many possible solutions, some already offered on the Internet. I think I will go with using an NN for advanced image recognition.
The current design uses only the distance to the marker, ignoring the angle. This is to be fixed as a priority.
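For reference, the angle is already available from the same aruco pose estimate we use for the distance. A minimal sketch (the function and variable names here are illustrative, not from Navigation.py; it assumes the standard OpenCV aruco pose estimation call and a calibrated camera):

```python
import math
import numpy as np
import cv2

def marker_range_and_bearing(corners, marker_length, camera_matrix, dist_coeffs):
    # Pose of the detected marker in the camera frame (z forward, x to the right).
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, marker_length, camera_matrix, dist_coeffs)
    x, y, z = tvecs[0][0]
    distance = float(np.linalg.norm((x, y, z)))
    bearing = math.atan2(x, z)   # horizontal angle off the camera axis, radians
    return distance, bearing
```

Feeding the bearing to the UKF together with the distance would constrain the position much better than the distance alone.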
Also, there is a lot to add to the way the UKF gets its data. In many cases, a sensor provides more data than I use. For example, in imu_callback I ignore msg.orientation; when working with GPS, I ignore the speed value it can provide; and in the case of the odometer, I ignore msg.twist.twist. This is understandable, as I was trying to keep the case simple, but it definitely is something to add.
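As one example, the IMU orientation quaternion could be turned into a yaw measurement for the filter. This is only a sketch: the callback mirrors a typical sensor_msgs/Imu subscriber, and update_yaw() is a hypothetical hook, not an existing method in Navigation.py:

```python
import math

def imu_callback(self, msg):
    # Standard quaternion-to-yaw conversion (rotation about the Z axis).
    q = msg.orientation
    yaw = math.atan2(2.0 * (q.w * q.z + q.x * q.y),
                     1.0 - 2.0 * (q.y * q.y + q.z * q.z))
    # self.ukf.update_yaw(yaw)  # hypothetical: feed yaw to the UKF as a measurement
```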
Finally, something should be done about using the Sun and stars (probably not in my flat-world project), as well as about using other robots as landmarks.