ROS2 and Gazebo: 2.5D Navigation

Kalman Filter: a longer theory

What is it all about:

The estimation error for a sensor reading can be expressed in terms of the true value and the estimated value:

err = x_true - x_estimated

Then the error covariance is

P = E(err @ err.T)
Where P is the error covariance matrix, err is the estimation error, and err.T is the same error vector, transposed.
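As a quick numeric sanity check (a sketch with a hypothetical 2-D error vector; the standard deviations are made up), P can be approximated by averaging the outer products of sampled errors:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 2-D estimation errors: std 0.5 on position, 0.1 on velocity
errs = rng.normal(0.0, [0.5, 0.1], size=(10_000, 2))  # each row: err = x_true - x_estimated
# Sample estimate of P = E(err @ err.T): average of the outer products
P = (errs[:, :, None] @ errs[:, None, :]).mean(axis=0)
```

For independent errors the off-diagonal entries approach zero, and the diagonal approaches the error variances (here about 0.25 and 0.01).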

The basic idea of a Kalman filter is to choose the Kalman gain K so that P is minimized: we want a low average estimation error. As new data arrive, the Kalman gain adjusts automatically, keeping the estimate optimal.

Terminology and basic concepts:

State Vector (x): represents the variables of interest (state of a system) at a given time step. Example: x could represent position and velocity in a tracking problem.

Process model: Describes how the state evolves from one time step to the next.

x_predicted = F @ x + B @ u + w
F is the state transition matrix, which describes how the state changes from one time step to the next.
B is the control input matrix, which describes how the control input u affects the state.
u is the control vector, representing external inputs to the system (e.g., acceleration).
w is the process noise, representing uncertainty in the process model.
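As a sketch of the process model (assuming a 1-D constant-velocity system with acceleration as the control input; dt and the numbers are illustrative, and the noise term w is omitted from the deterministic prediction):

```python
import numpy as np

dt = 0.1  # time step in seconds (assumed)
# State x = [position, velocity]
F = np.array([[1.0, dt],
              [0.0, 1.0]])      # state transition matrix
B = np.array([[0.5 * dt**2],
              [dt]])            # control input matrix (maps acceleration to state change)
u = np.array([0.2])             # control vector: commanded acceleration, m/s^2
x = np.array([0.0, 1.0])        # current state: at origin, moving at 1 m/s
x_predicted = F @ x + B @ u     # deterministic part of x_predicted = F @ x + B @ u + w
```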

Measurement model: Describes how the measurements relate to the state.


z = H @ x + v
H is the measurement matrix, which maps the state into the measurement space.
v is the measurement noise, representing uncertainty in the measurements.
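A minimal sketch of H, assuming a [position, velocity] state and a hypothetical sensor that measures position only:

```python
import numpy as np

# State x = [position, velocity]; the sensor reports only position
H = np.array([[1.0, 0.0]])          # measurement matrix: picks out the position component
x_predicted = np.array([2.5, 0.8])  # some predicted state
z_predicted = H @ x_predicted       # what we expect the sensor to report (noise v excluded)
```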

Covariance matrices:
P: Error covariance matrix of the state estimate.
Q: Process noise covariance matrix.
R: Measurement noise covariance matrix.

The Algorithm

1. Prediction Step.

State Prediction:

We use the dynamic model to propagate the state estimate forward in time:
x_predicted = F @ x + B @ u

Covariance Prediction:

P_predicted = F @ P @ F.T + Q

P_predicted is the predicted error covariance matrix at the next time step.

So, at each step, the Kalman filter gives us a normal distribution describing the estimate. Note that at this stage we work with "_predicted" values, that is, values computed before the current step's measurement is incorporated.
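The prediction step above can be sketched as a small function (the function name and the constant-velocity example values are assumptions, not part of the original):

```python
import numpy as np

def predict(x, P, F, Q, B=None, u=None):
    """One Kalman prediction step: propagate the state and its covariance."""
    x_predicted = F @ x
    if B is not None and u is not None:
        x_predicted = x_predicted + B @ u
    P_predicted = F @ P @ F.T + Q   # uncertainty grows by the process noise Q
    return x_predicted, P_predicted

# Example: constant-velocity model, dt = 0.1 s, no control input
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
Q = 0.01 * np.eye(2)
x_p, P_p = predict(np.array([0.0, 1.0]), np.eye(2), F, Q)
```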

2. Update Step:

Measurement Residual:

y = z - H @ x_predicted
y is the measurement residual, representing the difference between the actual measurement and the predicted measurement.

Residual Covariance:

S = H @ P_predicted @ H.T + R
S is the residual covariance, representing the uncertainty of the residual.

Kalman Gain:

K = P_predicted @ H.T @ np.linalg.inv(S)
K is the Kalman gain, which determines how much the predictions should be corrected based on the measurement residual.

State Update:

x = x_predicted + K @ y
x is the updated state estimate after incorporating the measurements.

Covariance Update:

P = (np.eye(P_predicted.shape[0]) - K @ H) @ P_predicted
P is the updated error covariance matrix after incorporating the measurements.
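The whole update step can be collected into one function (a minimal sketch; the function name is an assumption):

```python
import numpy as np

def update(x_predicted, P_predicted, z, H, R):
    """One Kalman update step: correct the prediction with a measurement z."""
    y = z - H @ x_predicted                   # measurement residual
    S = H @ P_predicted @ H.T + R             # residual covariance
    K = P_predicted @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x_predicted + K @ y                   # updated state estimate
    I = np.eye(P_predicted.shape[0])
    P = (I - K @ H) @ P_predicted             # updated error covariance
    return x, P
```

For example, with P_predicted = I, H = [[1, 0]] and R = [[1]] (prediction and sensor equally uncertain), a position measurement moves the estimate exactly halfway toward the measurement: the gain blends prediction and measurement according to their relative uncertainty.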

UKF

Now, as we are going to use the UKF (Unscented Kalman Filter), there are two additional points to cover.

First of all, our system is non-linear; we moved away from the linear model the moment we used a non-linear state transition function f, also known as the system model. For example, any use of trigonometry in the model makes the system non-linear.
The state transition function is a function (not a matrix anymore!) that takes the current state of the system, for example coordinates and speed, and returns the next state, dt seconds later.

We also have a non-linear sensor model h.
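A sketch of what such non-linear models might look like for a planar robot (the unicycle-style f and the range/bearing h below are illustrative assumptions, not the exact models used later):

```python
import numpy as np

def f(x, dt):
    """Non-linear state transition: x = [px, py, heading, speed].
    The trigonometry on the heading is what makes this model non-linear."""
    px, py, theta, v = x
    return np.array([px + v * np.cos(theta) * dt,
                     py + v * np.sin(theta) * dt,
                     theta,
                     v])

def h(x):
    """Non-linear sensor model: range and bearing from the origin to the robot."""
    px, py, _, _ = x
    return np.array([np.hypot(px, py), np.arctan2(py, px)])
```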

(C) snowcron.com, all rights reserved
