So far, our Kalman filter has used only one element of the data provided by the IMU sensor: orientation. It was the easiest bit of data to integrate into the filter, as it requires no transformation between what the sensor returns and what goes into the robot's state.

Acceleration (ax, ay; we still ignore z for now) is different. The robot's state does not include acceleration (we could add it, but let's say we do not want to). More than that: the Hx() function in our code takes the state and returns the expected sensor data, and there is no way to get acceleration from x, y and theta alone. So we need to do something about it.

An obvious solution is to keep the previous state of the robot. And since we keep it elsewhere (not in the robot's state itself), we can also keep the speed (the one we received as a command input). As a result, at any point we will have the current and previous speed, separated by the time period dt, so we can calculate acceleration.
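As a minimal sketch of this idea (the function and variable names here are illustrative, not taken from kalman_1_accel.py): decompose the commanded speed into vx and vy using the heading, then approximate acceleration as the change in velocity over dt:

```python
from math import cos, sin

def finite_diff_accel(vel, heading, vel_prev_x, vel_prev_y, dt):
    """Approximate (ax, ay) from the current commanded speed/heading
    and the stored velocity components of the previous step.
    Illustrative helper, not part of the filter code below."""
    vx = vel * cos(heading)
    vy = vel * sin(heading)
    return (vx - vel_prev_x) / dt, (vy - vel_prev_y) / dt

# Example: robot driving along the x axis, speeding up
# from 1.0 m/s to 1.5 m/s over dt = 0.1 s
ax, ay = finite_diff_accel(1.5, 0.0, 1.0, 0.0, 0.1)
```

This is exactly the finite-difference approximation the code below uses; the only extra wrinkle there is where the previous velocity components are stored.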

Here is another problem. The move() function that we use to model the robot's position should update this data stored from the previous step. However, move() is also called by the Kalman filter itself, a few times during each step! So we need to make sure the update happens only once, when the next step (cycle) begins. This is done using a cycle counter.
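In isolation, the guard pattern looks like this (a sketch with hypothetical names; the real code uses module-level counters the same way):

```python
# Once-per-cycle guard: the model function may be called many times
# per cycle, but the bookkeeping must run only once per cycle.
nPrevStep = 0       # last cycle in which the bookkeeping ran
nCurrentStep = 0    # incremented by the outer loop, once per cycle
nUpdates = 0        # how many times the bookkeeping actually ran

def model_step():
    global nPrevStep, nUpdates
    if nPrevStep < nCurrentStep:
        nUpdates += 1              # the work we want exactly once
        nPrevStep = nCurrentStep   # mark this cycle as done

for cycle in range(3):
    nCurrentStep += 1
    for call in range(5):          # the filter calls the model repeatedly
        model_step()
```

After three cycles of five calls each, the bookkeeping has run exactly three times, once per cycle.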

Now we are ready to take a look at the code of kalman_1_accel.py:

```
# (c) robotics.snowcron.com
# Based on Kalman and Bayesian Filters in Python, Roger R Labbe Jr

# import required modules:

import numpy as np
from numpy.random import randn

import copy

import scipy.stats as stats
from scipy.linalg import block_diag

from math import sin, cos, tan, atan2, sqrt, pi

import matplotlib.pyplot as plt

from filterpy.kalman import UnscentedKalmanFilter as UKF
from filterpy.common import Q_discrete_white_noise
from filterpy.kalman import JulierSigmaPoints
from filterpy.kalman import unscented_transform, MerweScaledSigmaPoints
from filterpy.stats import plot_covariance_ellipse

# Sensor data:
#
# gps: x,y
#
# imu: orientation (roll, pitch, yaw), angular velocity (vx, vy, vz),
#      linear acceleration (ax, ay, az)
# imu currently used: orientation (yaw), linear acceleration (ax, ay)
#
# landmarks: data in format [dist_1, bearing_1, dist_2, bearing_2,...]
#
# Here we list sensor name (like "gps") and number of data elements
# it returns (in case of GPS, it is 2: x and y)
dictSensors = { }

# As the robot moves, some landmarks become obscured. The following is a
# list of boolean values, one per landmark, indicating its visibility
arrShowLandmarks = []

# Stored predictions from the previous step. They are used if a particular
# landmark is not visible at this step: we then skip the prediction for it
# and reuse the old value.
arr_npPrediction = None

# Stored values from the previous step
dictState = {}

# Robot's state: coordinates and orientation:
# x = [x, y, θ]T
nNumOfStateParams = 3

# Kalman filter cycle counters. "move()" can be called by us in the Kalman
# cycle, but also by the filter itself. In the first case we need to update
# the stored values (speed, acceleration); in the second, we don't. To
# distinguish the two, we use cycle counters.
nPrevKalmanStep = 0
nCurrentKalmanStep = 0

# From state, command and stored prev. values, calculate ax, ay, vel, heading
def getAccelerations(u, x, dt):
    vel = u[0]
    angularVelZ = u[1]   # not used here, but part of the command input
    heading = x[2]

    ax = (vel * cos(heading) - dictState["vel_prev_x"]) / dt
    ay = (vel * sin(heading) - dictState["vel_prev_y"]) / dt

    return ax, ay, vel, heading

# --- Model of robot movement

# control input u is the commanded velocity and steering angle
# u = [v, alpha]T
def move(x, dt, u):
    global nPrevKalmanStep
    global nCurrentKalmanStep

    ax, ay, vel, heading = getAccelerations(u, x, dt)
    angularVelZ = u[1]

    dist = vel * dt

    # Update the stored values only once per cycle: when move() is called
    # by the filter internally, the counters are equal and this is skipped
    if(nPrevKalmanStep < nCurrentKalmanStep):
        dictState["ax"] = ax
        dictState["ay"] = ay
        dictState["vel_prev_x"] = vel * cos(heading)
        dictState["vel_prev_y"] = vel * sin(heading)

        nPrevKalmanStep = nCurrentKalmanStep

    # Simple unicycle motion model
    return np.array([
        x[0] + dist * cos(heading),
        x[1] + dist * sin(heading),
        normalize_angle(heading + angularVelZ * dt)
    ])

# ---

def normalize_angle(x):
    x = x % (2 * np.pi)         # force in range [0, 2 pi)
    if x > np.pi:               # move to [-pi, pi)
        x -= 2 * np.pi
    return x

# ---

# The measurement model must convert the state x = [x, y, θ]T
# into a range and bearing to the landmark. If p is the position
# of a landmark, the range r is r=sqrt((px-x)^2 + (py-y)^2)
# We assume that the sensor provides bearing relative to the
# orientation of the robot, so we must subtract
# the robot's orientation from the bearing to get the sensor
# reading: angle = atan2(py - x[1], px - x[0])

# The residual is y = z - h(x). Suppose z has a bearing of 1° and
# h(x) is 359°. Subtracting them gives -358°. This will throw off the
# computation of the Kalman gain, because the correct angular
# difference is 2°. So we will have to write code to correctly compute
# the bearing residual.

# as the robot maneuvers different landmarks
# will be visible, so we need to handle a variable number of measurements.
# The function for the residual in the measurement will be passed an array
# of several measurements, one per landmark.
def residual_h(a, b):
    global arr_npPrediction

    residuals = []

    i = 0

    if("gps" in dictSensors):
        residuals.append(a[i] - b[i])
        residuals.append(a[i + 1] - b[i + 1])
        i += dictSensors["gps"]

    if("imu_orientation" in dictSensors):
        # Orientation (yaw)
        residuals.append(normalize_angle(a[i] - b[i]))
        i += 1

    if("imu_accel" in dictSensors):
        # Linear acceleration along x-axis
        residuals.append(a[i] - b[i])
        i += 1

        # Linear acceleration along y-axis
        residuals.append(a[i] - b[i])
        i += 1

    if("landmarks" in dictSensors):
        for j in range(0, dictSensors["landmarks"], 2):
            if(arrShowLandmarks[j//2] == True):
                residuals.append(a[i + j] - b[i + j])   # distance
                residuals.append(normalize_angle(a[i + j + 1] - b[i + j + 1]))  # bearing
            else:
                # Landmark is hidden: reuse the stored prediction
                residuals.append(arr_npPrediction[i + j])
                residuals.append(arr_npPrediction[i + j + 1])

        i += dictSensors["landmarks"]

    return np.array(residuals)

# ---

def residual_x(a, b):
    y = a - b
    y[2] = normalize_angle(y[2])
    return y

# the measurement model
# takes a state variable and returns the measurement
# that would correspond to that state
def Hx(x, landmarks, u, dt):
    global arr_npPrediction

    hx = []

    if("gps" in dictSensors):
        hx.extend(x[[0, 1]])

    if("imu_orientation" in dictSensors):
        hx.extend([x[2]])

    if("imu_accel" in dictSensors):
        # Here we calculate accelerations based on new data and data
        # from the prev. step
        ax, ay, _, _ = getAccelerations(u, x, dt)

        hx.extend([ax])
        hx.extend([ay])

    if("landmarks" in dictSensors):
        for lmark in landmarks:
            px, py = lmark
            dist = sqrt((px - x[0])**2 + (py - x[1])**2)
            angle = atan2(py - x[1], px - x[0])
            hx.extend([dist, normalize_angle(angle - x[2])])

    arr_npPrediction = np.array(hx)

    return np.array(hx)

# ---

def state_mean(sigmas, Wm):

    x = np.zeros(nNumOfStateParams)

    x[0] = np.sum(np.dot(sigmas[:, 0], Wm))     # x
    x[1] = np.sum(np.dot(sigmas[:, 1], Wm))     # y

    sum_sin = np.sum(np.dot(np.sin(sigmas[:, 2]), Wm))
    sum_cos = np.sum(np.dot(np.cos(sigmas[:, 2]), Wm))
    x[2] = atan2(sum_sin, sum_cos)              # theta

    return x

# ---

def z_mean(sigmas, Wm):
    z_count = sigmas.shape[1]
    x = np.zeros(z_count)

    i = 0
    if("gps" in dictSensors):
        x[i] = np.sum(np.dot(sigmas[:, i], Wm))
        x[i+1] = np.sum(np.dot(sigmas[:, i+1], Wm))
        i += dictSensors["gps"]

    if("imu_orientation" in dictSensors):
        sum_sin = np.sum(np.dot(np.sin(sigmas[:, i]), Wm))
        sum_cos = np.sum(np.dot(np.cos(sigmas[:, i]), Wm))
        x[i] = atan2(sum_sin, sum_cos)                              # yaw
        i += 1

    if("imu_accel" in dictSensors):
        x[i] = np.sum(np.dot(sigmas[:, i], Wm))                     # ax
        i += 1

        x[i] = np.sum(np.dot(sigmas[:, i], Wm))                     # ay
        i += 1

    if("landmarks" in dictSensors):
        for z in range(i, z_count, 2):
            x[z] = np.sum(np.dot(sigmas[:, z], Wm))

            sum_sin = np.sum(np.dot(np.sin(sigmas[:, z+1]), Wm))
            sum_cos = np.sum(np.dot(np.cos(sigmas[:, z+1]), Wm))
            x[z+1] = atan2(sum_sin, sum_cos)

        i += dictSensors["landmarks"]

    return x

# ---

def run_localization(initPos, cmds, landmarks, sigma_vel, sigma_steer, sigma_gps,
        sigma_imu_orientation, sigma_imu_angular_vel, sigma_imu_accel, sigma_range,
        sigma_bearing, dt, ellipse_step=1, step=10):

    global arrShowLandmarks

    global nPrevKalmanStep
    global nCurrentKalmanStep

    # Initial values, will be re-assigned in the cycle below
    dictState["vel_prev_x"] = 0
    dictState["vel_prev_y"] = 0

    plt.figure()

    points = MerweScaledSigmaPoints(n=nNumOfStateParams, alpha=.00001,
        beta=2, kappa=0, subtract=residual_x)

    dim_z = 0
    for val in dictSensors.values():
        dim_z += val

    ukf = UKF(dim_x=nNumOfStateParams, dim_z=dim_z,
        fx=move, hx=Hx, dt=dt, points=points, x_mean_fn=state_mean,
        z_mean_fn=z_mean, residual_x=residual_x, residual_z=residual_h)
    ukf.x = np.array(initPos)

    # initial uncertainties (variances) in the robot's state vector
    ukf.P = np.diag([.1, .1, .05])

    # measurement noise, dimensions (dim_z x dim_z)
    # Note the way we build the sigmas array depending on available sensors
    arrSigmas = []
    if("gps" in dictSensors):
        arrSigmas.extend([sigma_gps**2, sigma_gps**2])
    if("imu_orientation" in dictSensors):
        arrSigmas.extend([sigma_imu_orientation**2])
    if("imu_accel" in dictSensors):
        arrSigmas.extend([sigma_imu_accel**2, sigma_imu_accel**2])

    # Note: these are initial values. I will adjust them in a cycle below
    # as the robot moves and distances to landmarks change, which means
    # sigmas for landmark distance change as well
    if("landmarks" in dictSensors):
        for lmark in landmarks:
            distance_adjusted_sigma_range = sigma_range * sqrt(lmark[0]**2 + lmark[1]**2)
            arrSigmas.extend([distance_adjusted_sigma_range**2, sigma_bearing**2])

    ukf.R = np.diag(arrSigmas)

    # process noise covariance matrix for the state vector
    ukf.Q = np.eye(nNumOfStateParams)*0.0001
    # Alternative:
    # ukf.Q[0:2, 0:2] = Q_discrete_white_noise(2, 1, var=0.0001)
    # ukf.Q[2, 2] = 0.0001
    sim_pos = ukf.x.copy()

    # plot landmarks
    if len(landmarks) > 0:
        plt.scatter(landmarks[:, 0], landmarks[:, 1], marker='s', s=60)

    track = []

    for i, u in enumerate(cmds):
        # Set random elements in arrShowLandmarks to False (hide landmarks)
        nHideLandmarks = np.random.randint(0, 4)
        arrShowLandmarks = np.ones(len(landmarks), dtype=bool)
        arrFalse = np.random.choice(np.arange(len(landmarks)), size=nHideLandmarks, replace=False)
        arrShowLandmarks[arrFalse] = False

        nCurrentKalmanStep += 1
        sim_pos = move(sim_pos, dt / step, u)

        track.append(sim_pos)

        gpsPos = sim_pos[:2] + np.random.normal(0, sigma_gps, size=(2))
        imuYaw = sim_pos[2] + np.random.normal(0, sigma_imu_orientation, size=(1))

        # Here we use stored accelerations, but as move() was just called,
        # they are current values, not values from the prev. cycle
        imuAccelX = dictState["ax"] + np.random.normal(0, sigma_imu_accel, size=(1))
        imuAccelY = dictState["ay"] + np.random.normal(0, sigma_imu_accel, size=(1))

        if i % step == 0:
            if("landmarks" in dictSensors):
                # index of the first landmark entry in the measurement vector
                j = 0
                if("gps" in dictSensors):
                    j += dictSensors["gps"]
                if("imu_orientation" in dictSensors):
                    j += dictSensors["imu_orientation"]
                if("imu_accel" in dictSensors):
                    j += dictSensors["imu_accel"]
                for lmark in landmarks:
                    distance_adjusted_sigma_range = sigma_range * sqrt((sim_pos[0] - lmark[0])**2 + (sim_pos[1] - lmark[1])**2)
                    arrSigmas[j] = distance_adjusted_sigma_range**2
                    arrSigmas[j + 1] = sigma_bearing**2
                    j += 2
                ukf.R = np.diag(arrSigmas)

            ukf.predict(u=u)
            if i % ellipse_step == 0:
                plot_covariance_ellipse((ukf.x[0], ukf.x[1]), ukf.P[0:2, 0:2], std=6, facecolor='k', alpha=0.3)

            x, y = sim_pos[0], sim_pos[1]
            z = []

            if("gps" in dictSensors):
                z.extend(gpsPos[:2])
            if("imu_orientation" in dictSensors):
                z.extend(imuYaw)
            if("imu_accel" in dictSensors):
                z.extend(imuAccelX)
                z.extend(imuAccelY)
            if("landmarks" in dictSensors):
                for lmark in landmarks:
                    dx, dy = lmark[0] - x, lmark[1] - y
                    d = sqrt(dx**2 + dy**2) + randn()*sigma_range
                    bearing = atan2(lmark[1] - y, lmark[0] - x)
                    a = (normalize_angle(bearing - sim_pos[2] + randn()*sigma_bearing))
                    z.extend([d, a])

            ukf.update(z, landmarks=landmarks, u=u, dt=dt)

            if i % ellipse_step == 0:
                plot_covariance_ellipse(
                    (ukf.x[0], ukf.x[1]), ukf.P[0:2, 0:2], std=6,
                    facecolor='g', alpha=0.8)

    track = np.array(track)
    plt.plot(track[:, 0], track[:, 1], color='k', lw=2)
    plt.axis('equal')
    plt.title("UKF Robot localization")
    plt.show()

    return ukf

# ---

def turn(v, t0, t1, steps):
    # Produce a list of [v, steering] commands, changing the steering
    # angle linearly from t0 to t1 degrees (as in Labbe's book)
    return [[v, a] for a in np.linspace(np.radians(t0), np.radians(t1), steps)]

# ---

#landmarks = np.array([])
landmarks = np.array([[5, 10], [10, 5], [15, 15], [20, 5], [0, 30], [50, 30], [40, 10]])

# Comment in/out any combination
dictSensors = {
    #"gps": 2,
    #"imu_orientation": 1,
    "imu_accel": 2,
    #"landmarks": 2 * len(landmarks)
}

# Initial state: x, y and heading (45 degrees, matching the comment below)
initPos = [0., 0., np.pi / 4]

# accelerate from a stop
cmds = [[v, .0] for v in np.linspace(0.001, 1.1, 50)]

# Drive straight at 45 degrees (note: initial heading is 45 degrees)
cmds.extend([cmds[-1]] * 50)

# turn right
v = cmds[-1][0]
cmds.extend(turn(v, 0, -30, 50))
cmds[-1][1] = 0
cmds.extend([cmds[-1]]*100)

# more right
cmds.extend(turn(v, 0, -15, 50))
cmds[-1][1] = 0
cmds.extend([cmds[-1]]*100)

# turn left
cmds.extend(turn(v, 0, 30, 50))
cmds[-1][1] = 0
cmds.extend([cmds[-1]]*100)

# more left
cmds.extend(turn(v, 0, 15, 50))
cmds[-1][1] = 0
cmds.extend([cmds[-1]]*100)

# more left
cmds.extend(turn(v, 0, 50, 50))
cmds[-1][1] = 0
cmds.extend([cmds[-1]]*400)

ukf = run_localization(initPos, cmds, landmarks,
sigma_vel=0.1,
sigma_steer=0.1,
sigma_gps = 10.0,
sigma_imu_orientation = 0.3,
sigma_imu_angular_vel = 0.3,
sigma_imu_accel = 1.0,
sigma_range=1,
sigma_bearing=0.3,
dt=0.1,
step=1, ellipse_step=50)

print('final covariance', ukf.P.diagonal())

```

Now let's test the filter with different sensor combinations. First, IMU orientation only:

```
dictSensors = {
    #"gps": 2,
    "imu_orientation": 1,
    #"imu_accel": 2,
    #"landmarks": 2 * len(landmarks)
}

```

Next, IMU linear acceleration only:

```
dictSensors = {
    #"gps": 2,
    #"imu_orientation": 1,
    "imu_accel": 2,
    #"landmarks": 2 * len(landmarks)
}

```

Finally, GPS combined with IMU acceleration:

```
dictSensors = {
    "gps": 2,
    #"imu_orientation": 1,
    "imu_accel": 2,
    #"landmarks": 2 * len(landmarks)
}

```

Note that the implementation is not complete. I still have to add IMU angular velocity, and the way acceleration is used is a bit wrong. The filter itself is fine, but the way I provide data to it (the model of the sensor) assumes that "linear acceleration" only occurs when we, well... linearly accelerate. Which is wrong.

If a robot with an IMU (Inertial Measurement Unit) is moving along a circular path at a constant speed, the linear acceleration reported by the IMU will be non-zero. An object moving in a circle, even at constant speed, experiences a centripetal acceleration directed towards the center of the circle; it is this acceleration that continuously changes the direction of motion and keeps the object on the curved path.
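A quick numeric check of this (not part of the filter code): for circular motion at constant speed v and angular velocity ω, the radius is r = v/ω and the centripetal acceleration is a = v²/r = v·ω, non-zero even though the speed never changes:

```python
from math import isclose

v = 1.0          # constant speed, m/s
omega = 0.5      # angular velocity, rad/s
r = v / omega    # radius of the resulting circle, m

a_centripetal = v ** 2 / r   # same value as v * omega
```

A naive finite difference of the speed alone would report zero here, which is exactly the flaw in the current sensor model.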

So we need to make sure we account for this when passing simulated accelerations to the Kalman filter. I will return to this later.