How to Tune a Kalman Filter
By Fredrik Bagge Carlson • Jun 15, 2023
The celebrated Kalman filter finds applications in many fields of engineering and economics. While many are familiar with the basic concepts of the Kalman filter, almost equally many find the "tuning parameters" associated with it nonintuitive and difficult to choose. While several parameters can be tuned in a real-world application of a Kalman filter, we will focus on the most important ones: the process and measurement noise covariance matrices.
The Kalman filter
The Kalman filter is a Bayesian estimation algorithm that estimates the state $$x$$ of a linear dynamical system, subject to Gaussian noise $$e$$ acting on the measurements as well as Gaussian noise $$w$$ acting on the dynamics. More precisely, let the dynamics of a discrete-time linear dynamical system be given by
$$x_{k+1} = {Ax_k + Bu_k + w_k}$$
$$y_k = {Cx_k + Du_k + e_k}$$
where $$x_k \in \Bbb R^{n_x}$$ is the state of the system at time $$k$$, $$u_k \in \Bbb R^{n_u}$$ is an external input, $$A$$ and $$B$$ are the state-transition and input matrices respectively, $$C$$ is the output matrix, and $$w_k \sim N(0, R_1)$$ and $$e_k \sim N(0, R_2)$$ are normally distributed process-noise and measurement-noise terms, respectively. A state estimator like the Kalman filter allows us to estimate $$x$$ given only noisy measurements $$y \in \Bbb R^{n_y}$$, i.e., without necessarily having measurements of all the components of $$x$$ available.[obs] For this reason, state estimators are sometimes referred to as virtual sensors: they allow us to estimate what we cannot measure.
The Kalman filter is popular for several important reasons. For one, it is the optimal estimator in the mean-square sense if the system dynamics are linear (possibly time-varying) and the noise acting on the system is Gaussian. In most practical applications, neither of these conditions holds exactly, but they often hold sufficiently well for the Kalman filter to remain useful.[nonlin] A perhaps even more useful property of the Kalman filter is that the posterior probability distribution over the state remains Gaussian throughout the operation of the filter, making it efficient to compute and store.
[obs]: Under a technical condition on the observability of the system dynamics.
[nonlin]: Several nonlinear state estimators exist as well.
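For concreteness, the predict/update cycle that implements the filter for the model above can be sketched in a few lines. The post's own code uses JuliaSim; the following is a minimal NumPy sketch with hypothetical names, omitting the $$D$$ term:

```python
import numpy as np

def kalman_step(x, P, y, u, A, B, C, R1, R2):
    """One predict/update cycle of a discrete-time Kalman filter.
    x, P: current state estimate and its covariance.
    R1, R2: process- and measurement-noise covariances."""
    # Predict: propagate the estimate and its covariance through the dynamics
    x = A @ x + B @ u
    P = A @ P @ A.T + R1
    # Update: correct the prediction using the measurement y
    e = y - C @ x                      # innovation
    S = C @ P @ C.T + R2               # innovation covariance
    K = P @ C.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ e
    P = (np.eye(len(x)) - K @ C) @ P   # posterior covariance
    return x, P
```

Note how the state of the filter is just the pair $$(x, P)$$, the mean and covariance of a Gaussian; this is the property that makes the filter cheap to compute and store.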
What does "tuning the Kalman filter" mean?
To make use of a Kalman filter, we obviously need the dynamical model of the system given by the four matrices $$A, B, C$$ and $$D$$. We furthermore require a choice of the covariance matrices $$R_1$$ and $$R_2$$, and it is here that a lot of aspiring Kalman-filter users get stuck. The covariance matrix of the measurement noise is often rather straightforward to estimate: just collect some measurement data when the system is at rest and compute the sample covariance. However, we often lack any feeling for what the process noise covariance, $$R_1$$, should be.
In this blog post, we will try to give some intuition for how to choose the process noise covariance matrix $$R_1$$. We will come at this problem from a disturbance-modeling perspective, i.e., trying to reason about what disturbances act on the system and how, and what these imply for the structure and value of the covariance matrix $$R_1$$.
Disturbance modeling
Intuitively, a disturbance acting on a dynamical system is some form of unwanted input. If you are trying to control the temperature in a room, it may be someone opening a window, or the sun shining on your roof. If you are trying to keep the rate of inflation at 2%, the disturbance may be a pandemic.
The linear dynamics assumed by the Kalman filter, here on discrete-time form
$$x_{k+1} = Ax_k + Bu_k + w_k$$
make it look like we have very little control over the shape and form of the disturbance $$w$$, but there are a lot of possibilities hiding behind this equation.
In the equation above, the disturbance $$w$$ has the same dimension as the state $$x$$, implying that the covariance matrix $$R_1 \in \Bbb R^{n_x \times n_x}$$ has $$n_x^2$$ parameters. We start by noting that covariance matrices are symmetric and positive semidefinite; this means that only an upper or lower triangle of $$R_1$$ contains free parameters. We further note that we can restrict the influence of $$w$$ to a subset of the equations by introducing an input matrix $$B_w$$ such that
$$x_{k+1} = Ax_k + Bu_k + B_w \tilde w_k$$
where $$w = B_w \tilde w$$ and $$\tilde w$$ may have a smaller dimension than $$w$$. To give a feeling for why this might be relevant, we consider a very basic system: the double integrator.
A double integrator appears whenever Newton's second law applies,
$$f = ma$$
This law states that the acceleration $$a$$ of a system is proportional to the force $$f$$ acting on it. On continuous-time state-space form,[tf] this looks like
$$\dot p(t) = v(t)$$
$$m\dot v(t) = f(t)$$
or on the familiar matrix form:
$$ \dot x = \left\lbrack \matrix{0 & 1\cr 0 & 0} \right\rbrack x + \left\lbrack \matrix{0 \cr \frac 1m} \right\rbrack f$$
where $$x = [p, v]^T$$.
Now, what disturbances could possibly act on this system? The relation between velocity $$v$$ and position $$p$$ is certainly deterministic, and we cannot disturb the position of a system other than by continuously changing the velocity first (otherwise an infinite force would be required). This means that any disturbance acting on this system must take the form of a disturbance force, i.e., $$w = B_w w_f$$ where $$B_w = \left\lbrack \matrix{0 \cr \frac 1m} \right\rbrack$$. A disturbance force $$w_f$$ may be something like friction, air resistance, or an unknown external force. This means that the disturbance has a single degree of freedom only, and we can write the dynamics of the system as
$$ \dot x = \left\lbrack \matrix{0 & 1\cr 0 & 0} \right\rbrack x + \left\lbrack \matrix{0 \cr \frac 1m} \right\rbrack f + \left\lbrack \matrix{0 \cr \frac 1m} \right\rbrack w_f$$
where $$w_f$$ is a scalar. This further means that the covariance matrix $$R_1$$ has a single free parameter only, and we can write it as
$$R_1 = \sigma_w^2B_wB_w^T = \frac{\sigma_w^2}{m^2}\left\lbrack \matrix{0 & 0 \cr 0 & 1} \right\rbrack$$
where $$\sigma_w^2$$ is the variance of the disturbance $$w_f$$. This is now our tuning parameter that we use to trade off the filter response time vs. the noise in the estimate.
What may initially have appeared as a tuning parameter $$R_1$$ with three parameters to tune, has now been reduced to a single parameter by reasoning about how a possible disturbance acts on the system dynamics! The double integrator is a very simple example, but it illustrates the idea that the structure of the disturbance covariance matrix $$R_1$$ is determined by the structure of the system dynamics and the form of the disturbance.
[tf]: As a transfer function in the Laplace domain, the double integrator looks like $$P(s)/F(s) = \frac{1}{s^2}$$, where $$P$$ is the Laplace transform of the position and $$F$$ that of the force.
But white noise, really?
Having had a look at the structural properties of the dynamics noise, let's now consider its spectrum. Noise $$w_k \sim N(0, R_1)$$, where $$w_k$$ is uncorrelated with $$w_j$$ for $$j \ne k$$, is called white noise in analogy with white light, i.e., it "contains all frequencies", or "has a flat spectrum". White noise can often be a reasonable assumption for measurement noise, but much less so for dynamics noise. If we come back to the example of the temperature-controlled room, the disturbance implied by the sun shining on the roof is likely dominated by low frequencies: the sun goes up in the morning and down in the evening, and clouds that may block the sun for a while do not move infinitely fast. For a disturbance like this, modeling it as white noise may not be the best choice.
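To make the distinction concrete, one can filter white noise through a first-order low-pass filter and compare sample autocorrelations: white noise is uncorrelated from sample to sample, while the filtered noise is dominated by low frequencies. A NumPy sketch (the filter coefficient 0.95 is an arbitrary example value):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
w = rng.standard_normal(N)   # white noise: flat spectrum, uncorrelated samples

# First-order low-pass filter W: x[k] = a*x[k-1] + (1 - a)*w[k]
a = 0.95
wt = np.empty(N)
x = 0.0
for k in range(N):
    x = a * x + (1 - a) * w[k]
    wt[k] = x   # colored, low-frequency-dominated noise

def lag1_corr(s):
    """Correlation between consecutive samples."""
    return np.corrcoef(s[:-1], s[1:])[0, 1]

print(lag1_corr(w))   # close to 0: white
print(lag1_corr(wt))  # close to a: strongly correlated, i.e., colored
```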
Fear not, we can easily give color to our noise and still write the resulting model on the form
$$\dot x = Ax + Bu + B_ww$$
Let's say that our linear system $$P$$ can be depicted in block-diagram form as follows:
│w
▼
┌─────┐
│ W │
└──┬──┘
│w̃
│ ┌─────┐
u ▼ │ │ y
──+─►│ P ├───►
│ │
└─────┘
Here, the white noise $$w$$ is filtered through another linear system $$W$$ to produce $$\tilde w$$. If $$w$$ has a flat white spectrum, the spectrum of $$\tilde w$$ will be colored by the frequency response of $$W$$. Thus, if we want to model that the system is affected by low-frequency noise $$\tilde w$$, we can choose $$W$$ as some form of low-pass filter. If we write $$W$$ on state-space form as
$$\dot x_w = A_wx_w + B_ww$$
$$\tilde w = C_wx_w$$
we can form an augmented system model $$P_a$$ as follows:
$$\dot x = A_ax_a + B_au + B_{aw}w$$
$$y = C_ax_a$$
where
$$A_a = \left\lbrack \matrix{A & C_w \cr 0 & A_w} \right\rbrack, \ \ \ B_a = \left\lbrack \matrix{B \cr 0} \right\rbrack, \ \ \ B_{aw} = \left\lbrack \matrix{0 \cr B_w} \right\rbrack, \ \ \ C_a = \left\lbrack \matrix{C & 0} \right\rbrack$$
and
$$x_a = \left\lbrack \matrix{x \cr x_w} \right\rbrack$$
The augmented model has a state vector that comprises both the state vector of the original system $$P$$ and the state vector $$x_w$$ of the disturbance model $$W$$. If we run a Kalman filter with this augmented model, the filter will estimate both the state of the original system $$P$$ and the state of the disturbance model $$W$$ for us! We have built what is called a disturbance observer.
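The block structure above is straightforward to assemble programmatically. A NumPy sketch with a hypothetical function name (in the post's Julia tooling one would build the augmented StateSpace model directly):

```python
import numpy as np

def augment(A, B, C, Aw, Bw, Cw):
    """Augment the plant (A, B, C) with a disturbance model (Aw, Bw, Cw),
    following the block structure in the text. Cw is assumed to map the
    disturbance state x_w into the plant state equation."""
    nx, nw = A.shape[0], Aw.shape[0]
    nu, ny = B.shape[1], C.shape[0]
    Aa  = np.block([[A,                  Cw],
                    [np.zeros((nw, nx)), Aw]])
    Ba  = np.vstack([B, np.zeros((nw, nu))])
    Baw = np.vstack([np.zeros((nx, Bw.shape[1])), Bw])
    Ca  = np.hstack([C, np.zeros((ny, nw))])
    return Aa, Ba, Baw, Ca
```

For the double integrator with an integrating disturbance force, the plant has two states, the disturbance model a single one, and the augmented model three.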
It may at this point be instructive to reflect upon why we performed this additional step of modeling the disturbance. By including the disturbance model $$W$$, we tell the Kalman filter what frequency-domain properties the disturbance has, and the filter can use these properties to make better predictions of the state of the system. This brings us to another key benefit of using a state estimator: it can perform sensor fusion.
Sensor fusion
By making use of models of the dynamics, disturbances, and measurement noise, the state estimator performs something often referred to as "sensor fusion". As the name suggests, sensor fusion is the process of combining information from multiple sensors to produce a more accurate estimate of the state of the system. In the case of the Kalman filter, the estimator combines information from the dynamics model, the measurement model, and the disturbance models. We will contrast this approach with two common state-estimation heuristics:

- Differentiation for velocity estimation
- Complementary filtering for orientation estimation
Velocity is notoriously difficult to measure directly in most applications, but measuring position is often easy. A naive approach to estimating velocity is to differentiate the position measurements. However, there are a couple of problems associated with this approach. First, the noise in the position measurements will be amplified by the differentiation; second, only differentiating the measured position ignores any information about the input to the system. Intuitively, if we know how the system behaves in response to an input, we should be able to use this knowledge to form a better estimate of both the position and the velocity.
Indeed, a Kalman filter allows you to estimate the velocity, taking into account both the input to the system and the noisy position measurement. If the model of the system were perfect, we would not even need a measurement; the model and the input would be sufficient to compute what the velocity will be. In practice, models are never perfect, and we thus make use of the "fusion aspect" of the state estimator to incorporate the two different sources of information, the model and the measurement, to produce a better estimate of the velocity.
A slightly more complicated example is the complementary filter. This filter is often used with inertial measurement units (IMUs) containing accelerometers and gyroscopes to estimate the orientation of a system. Accelerometers are often very noisy, but they measure the correct orientation on average. Gyroscopes are often very accurate, but their reading slowly drifts over time. The complementary filter combines the information from the accelerometer and the gyroscope to produce a more accurate estimate of the orientation. This is done by low-pass filtering the accelerometer measurement to get rid of the high-frequency measurement noise, and high-pass filtering the gyroscope measurement to get rid of the low-frequency drift.
We can arrive at a filter with these same properties, together with a dynamical model that indicates the system's response to control inputs, by using a Kalman filter. In this case, we would include two different disturbance models, one acting on the accelerometer output $$y_a$$ and one on the gyroscope output $$y_g$$, like this:
│wa
▼
┌─────┐
│ Wa │
└──┬──┘
│
┌─────┐ ▼
u │ ├─+─► ya
───►│ P │
│ ├─+─► yg
└─────┘ ▲
│
┌──┴──┐
│ Wg │
└─────┘
▲
│wg
$$W_a$$ would here be chosen as some form of high-pass filter to indicate that the accelerometer measurement is corrupted by high-frequency noise. Conversely, $$W_g$$ would be chosen as a low-pass filter, since the drift over time can be modeled as a low-frequency disturbance acting on the measurement. The models of the disturbances are thus the complements of the filters we would have applied if we had performed the filtering manually.
The complementary filter makes the "complementary assumption" $$W_g = 1 - W_a$$, i.e., $$W_a$$ and $$W_g$$ sum to one. This is a simple and often effective heuristic, but the naive complementary filter does not make any use of the input signal $$u$$ to form the estimate, and will thus suffer from phase loss, sometimes called lag, in response to inputs. This is particularly problematic when there are communication delays between the sensor and the state estimator: during the delay time, the sensor measurements contain no information at all about the system's response to inputs.
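For reference, the classic complementary filter itself is only a few lines. A sketch for a single scalar angle (hypothetical names, an assumed simple setup): the gyro rate is integrated and trusted on short time scales, while the accelerometer-derived angle corrects slowly.

```python
import numpy as np

def complementary_filter(ya, yg, alpha, Ts):
    """ya: accelerometer-derived angle (noisy), yg: gyro rate (drifting),
    alpha in (0, 1): how much to trust the integrated gyro short-term."""
    theta = np.zeros(len(ya))
    theta[0] = ya[0]
    for k in range(1, len(ya)):
        # High-pass path: integrate the gyro rate from the previous estimate
        gyro_pred = theta[k - 1] + Ts * yg[k]
        # Low-pass path: blend in the noisy accelerometer angle
        theta[k] = alpha * gyro_pred + (1 - alpha) * ya[k]
    return theta
```

Note that no control input $$u$$ appears anywhere; this is exactly the phase-loss limitation discussed above, which the Kalman-filter formulation avoids.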
Discretization
So far, I have switched between writing dynamics in continuous time, i.e., on the form
$$\dot x(t) = Ax(t) + Bu(t)$$
and in discrete time
$$x_{k+1} = Ax_k + Bu_k$$
Physical systems are often best modeled in continuous time, while some systems, notably those living inside a computer, are inherently discrete time. Kalman filters are thus most often implemented in discrete time, and any continuous-time model must be discretized before it can be used in a Kalman filter. For control purposes, models are often discretized using a zero-order-hold (ZoH) assumption, i.e., input signals are assumed to be constant between sampling instants. This is often a valid assumption for control inputs, but not always for disturbance inputs. If the sample rate is fast in relation to the time constants of the system, the discretization method used does not matter all that much. For the purposes of this tutorial, we will use the ZoH assumption for all inputs, including disturbances.
To learn the details of ZoH discretization, consult Discretization of linear state-space models (wiki). Here, we will simply state a convenient way of computing this discretization using the matrix exponential. Let $$A_c$$ and $$B_c$$ be the continuous-time dynamics and input matrices, respectively. Then, the discrete-time dynamics and input matrices are given by
$$\left\lbrack \matrix{A_d & B_d \cr 0 & I} \right\rbrack = \exp \left\lgroup \left\lbrack \matrix{A_c & B_c \cr 0 & 0} \right\rbrack T_s \right\rgroup$$
where $$A_d$$ and $$B_d$$ are the discrete-time dynamics and input matrices, respectively, and $$T_s$$ is the sample interval. The $$I$$ in the bottom right corner is the identity matrix. To discretize the input matrix for a disturbance model, we simply replace $$B$$ with $$B_w$$, or put all the $$B$$ matrices together by horizontal concatenation and discretize them all at once.
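The matrix-exponential formula translates directly to code. A sketch using SciPy (in the post's Julia code, the c2d function performs the corresponding job):

```python
import numpy as np
from scipy.linalg import expm

def c2d_zoh(Ac, Bc, Ts):
    """Zero-order-hold discretization via the matrix exponential:
    expm([[Ac, Bc], [0, 0]] * Ts) = [[Ad, Bd], [0, I]]."""
    nx, nu = Ac.shape[0], Bc.shape[1]
    M = np.zeros((nx + nu, nx + nu))
    M[:nx, :nx] = Ac
    M[:nx, nx:] = Bc
    Md = expm(M * Ts)
    return Md[:nx, :nx], Md[:nx, nx:]   # Ad, Bd
```

For the double integrator this reproduces the familiar result with $$A_d$$ containing $$T_s$$ above the diagonal and $$B_d$$ containing $$T_s^2/2$$ and $$T_s$$.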
Discretizing the continuous time model of the double integrator with a disturbance force, we get
$$x_{k+1} = \left\lbrack \matrix{1 & T_s \cr 0 & 1} \right\rbrack x_k + \frac 1 m \left\lbrack \matrix{T_s^2/2 \cr T_s} \right\rbrack f + \frac 1 m \left\lbrack \matrix{T_s^2/2 \cr T_s} \right\rbrack w_f$$
$$y_k = [1 \ \ \ 0]x_k + e_k$$
with the corresponding covariance matrix
$$R_1 = \frac{\sigma_{w_f}^2}{m^2} \left\lbrack \matrix{\frac{T_s^4}4 & \frac{T_s^3}2 \cr \frac{T_s^3}2 & T_s^2} \right\rbrack$$
This may look complicated, but it still has only a single tuning parameter, $$\sigma_{w_f}$$.
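In code, this covariance is just a rank-one outer product of the discretized disturbance input matrix with itself, scaled by the single tuning parameter. A NumPy sketch with arbitrary example numbers:

```python
import numpy as np

Ts, m, sigma_w = 0.01, 1.0, 1.0            # sample time, mass, tuning parameter

Bwd = (1 / m) * np.array([[Ts**2 / 2],     # ZoH-discretized disturbance
                          [Ts]])           # input matrix
R1 = sigma_w**2 * (Bwd @ Bwd.T)            # the full 2x2 covariance matrix

# R1 is rank one: despite its four entries it has a single degree of freedom
print(np.linalg.matrix_rank(R1))  # 1
```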
Putting it all together
We will now put the lessons from above together and apply them to a slightly more complicated example. This time, we will consider a double-mass model, where two masses are connected by a spring and a damper, and an input force can be applied to one of the masses.
This is a common model of transmission systems, where the first mass represents the inertia of a motor, while the second mass represents the inertia of the load. The spring and damper represent the dynamics of the transmission, e.g., the gearbox and transmission shafts etc.
The model, without disturbances, is given by
$$\dot x = \left\lbrack \matrix{0 & 1 & 0 & 0 \cr -k/m_1 & -c/m_1 & k/m_1 & c/m_1 \cr 0 & 0 & 0 & 1 \cr k/m_2 & c/m_2 & -k/m_2 & -c/m_2} \right\rbrack x \ \ + \ \left\lbrack \matrix{0 \cr 1/m_1 \cr 0 \cr 0} \right\rbrack u$$
$$y = \left\lbrack \matrix{1 & 0 & 0 & 0} \right\rbrack x + e$$
where $$x = [p_1,v_1,p_2,v_2]$$ is the state vector, $$u=f$$ is the input force and $$y=p_1$$ is the measured position of the first mass.[vel] The parameters $$m_1, m_2, k,$$ and $$c$$ are the masses, spring constant, and damping constant, respectively.
What disturbances could act on such a system? One could imagine a friction force acting on the masses; indeed, most systems with moving parts are subject to friction. Friction is often modeled as a low-frequency disturbance, in particular Coulomb friction, which is constant as long as the velocity does not cross zero, at which point it changes sign. We thus adopt an integrating model of this disturbance.
To model the fact that we are slightly uncertain about the dynamics of the flexible transmission, we could model a disturbance force acting on the spring. This could account for an uncertainty in a linear spring constant, but also model the fact that the transmission is not perfectly linear, i.e., it might be a stiffening spring or contain some backlash etc. The question is, what frequency properties should we attribute to this disturbance? Backlash is typically a lowfrequency disturbance, but uncertainties in the stiffness properties of the spring would likely affect higher frequencies as well. We thus let this disturbance have a flat spectrum and omit a model of its frequency properties.
With the friction disturbance $$w_f$$ and the spring disturbance $$w_s$$, we can write the model as
$$\dot x = A_ax_a + B_au + B_{w_f}w_f + B_{w_s}w_s$$
$$y = C_ax_a$$
where
$$A_a = \left\lbrack \matrix{A & C_{w_f} \cr 0 & A_{w_f}} \right\rbrack , \ \ B_a = \left\lbrack \matrix{B \cr 0} \right\rbrack, \ \ B_{w_f} = \left\lbrack \matrix{0 \cr 0 \cr 0 \cr 0 \cr 1/m_1} \right\rbrack , \ \ B_{w_s} = \left\lbrack \matrix{0 \cr 1/m_1 \cr 0 \cr 1/m_2 \cr 0} \right\rbrack, \ \ C_a = \left\lbrack \matrix{C & 0} \right\rbrack$$
and $$x_a = \left\lbrack \matrix{x \cr x_{w_f}} \right\rbrack$$. When modeling $$w_f$$ as an integrating disturbance $$W_f = 1/(m_1s)$$, we get the dynamics
$$A_{w_f} = 0, \ \ B_{w_f} = 1/m_1, \ \ C_{w_f} = 1$$
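Assembling these blocks numerically, with example parameter values $$m_1 = m_2 = 1$$, $$k = 100$$, $$c = 1$$ (a NumPy sketch of the model that the JuliaSim code constructs below):

```python
import numpy as np

m1 = m2 = 1.0; k = 100.0; c = 1.0   # example parameter values

A = np.array([[0.0,    1.0,    0.0,   0.0],
              [-k/m1, -c/m1,   k/m1,  c/m1],
              [0.0,    0.0,    0.0,   1.0],
              [k/m2,   c/m2,  -k/m2, -c/m2]])
B = np.array([[0.0], [1/m1], [0.0], [0.0]])
C = np.array([[1.0, 0.0, 0.0, 0.0]])

# Integrating friction-disturbance model W_f = 1/(m1*s):
Awf, Bwf, Cwf = 0.0, 1/m1, 1.0

# Augmented 5-state model: the disturbance state feeds the
# velocity equation of the first mass, and itself integrates (Awf = 0)
Aa = np.zeros((5, 5)); Aa[:4, :4] = A; Aa[1, 4] = Cwf
Ba = np.vstack([B, [[0.0]]])
Ca = np.hstack([C, [[0.0]]])
```

The fifth state is the estimated friction force; driving only this state with process noise gives the low-rank $$R_1$$ structure discussed earlier.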
[vel]: It is unfortunately common that only a single position is measurable, and oftentimes the sensor is located at the motor output. We are thus unable to directly measure what we often ultimately care about: the position and velocity of the load side.
In code
To demonstrate how constructing a Kalman filter for the double-mass system would look using JuliaSim, we start by defining the dynamics in the form of a StateSpace model, as well as all the input matrices.
StateSpace{ControlSystemsBase.Continuous, Float64}
A =
    0.0   1.0     0.0   1.0  0.0
 -100.0  -1.0   100.0   1.0  1.0
    0.0   0.0     0.0   1.0  0.0
  100.0   1.0  -100.0  -1.0  0.0
    0.0   0.0     0.0   0.0  0.0
B =
 0.0
 1.0
 0.0
 0.0
 0.0
C =
 1.0  0.0  0.0  0.0  0.0
D =
 0.0

Continuous-time state-space model
In practice, models are never perfect; we thus create another version of the model where the spring constant $$k$$ is 10% larger. We will use this perturbed model to test how well the Kalman filter estimates the state of the system in the presence of model mismatch.
We then define the covariance matrices according to the above reasoning and discretize them using the c2d function. Finally, we construct a KalmanFilter object.
Here, we added a small amount of diagonal covariance ($$10^{-8}$$) to $$R_1$$ to make it strictly positive definite. This improves the numerics and acts as a "catch-all" covariance that prevents the estimator from malfunctioning completely in case we have forgotten to model some disturbance affecting the system.
When running a Kalman filter in a real-world application, we often receive and process measurements in real time. However, for the purposes of this blog post, we generate some inputs and outputs by simulating the system. We construct a PID controller Cfb and close the loop around this controller to simulate how the system behaves under feedback. We let the reference position $$r$$ be an integrated square signal. To obtain both control input and measured output from the simulation, we form the feedback interconnection using the function feedback_control.
To simulate a Coulomb friction disturbance, we extract the velocity of the first mass, $$v_1 = x_2$$, and compute the friction force as $$k_f \operatorname{sign}(v_1)$$. We then apply this disturbance to the input as seen by the Kalman filter in order to simulate the effect of the disturbance on the estimator. We then call forward_trajectory to perform Kalman filtering along the entire recorded trajectory, and plot the result alongside the true state trajectory as obtained by the function lsim.
This figure shows the 5 state variables $$x_a = [p_1, v_1, p_2, v_2, x_{w_f}]$$ of the augmented system model $$P_a$$, as well as the input signal $$u$$ and the measured output $$y$$. The notation $$x(t \mid t-1)$$ denotes the estimate of $$x(t)$$ given data available at time $$t-1$$. In the plot of the estimated disturbance state $$x_5 = x_{w_f}$$, we also plot the true friction force from the simulation. Analyzing the results, we see that the state estimator needs a couple of samples to catch up to a change in the friction disturbance, but it correctly converges to a (noisy) estimate of the friction force in steady state.
We have thus built and simulated a disturbance observer that is capable of estimating the friction force as well as the full state of the dynamical system, based on a dynamical model and a measurement of the position of one of the masses only!
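The same disturbance-observer idea can be reproduced in a few dozen lines for the simpler double-integrator example: augment the model with an integrating disturbance state, then watch the Kalman filter recover an unknown constant force from position measurements alone. This is a NumPy/SciPy sketch, not the JuliaSim script used above, and all tuning values are arbitrary examples:

```python
import numpy as np
from scipy.linalg import expm

Ts = 0.01
# Augmented continuous-time model with state [p, v, w_f]: the disturbance
# force w_f drives the velocity, and is itself modeled as an integrator
Ac = np.array([[0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0],
               [0.0, 0.0, 0.0]])
Ad = expm(Ac * Ts)              # ZoH discretization (no input in this demo)
C = np.array([[1.0, 0.0, 0.0]])

R1 = np.diag([1e-8, 1e-8, 1e-4])  # only the disturbance state is noise-driven
R2 = np.array([[1e-4]])

rng = np.random.default_rng(1)
x_true = np.array([0.0, 0.0, 0.5])  # unknown constant disturbance force: 0.5
xh, P = np.zeros(3), np.eye(3)
west = np.empty(3000)
for k in range(3000):
    x_true = Ad @ x_true                              # simulate the truth
    y = C @ x_true + 1e-2 * rng.standard_normal(1)    # noisy position only
    xh = Ad @ xh                                      # predict
    P = Ad @ P @ Ad.T + R1
    S = C @ P @ C.T + R2                              # update
    K = P @ C.T @ np.linalg.inv(S)
    xh = xh + K @ (y - C @ xh)
    P = (np.eye(3) - K @ C) @ P
    west[k] = xh[2]

print(west[-500:].mean())  # close to the true disturbance force 0.5
```

Starting from an estimate of zero, the disturbance state converges to the unknown force, mirroring (in miniature) the friction-estimation result above.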
Concluding remarks
In this blog post, we have discussed how to give meaning to the all-important covariance matrix of the dynamics noise appearing in a Kalman filter. By reasoning about what disturbances we expect to act on the system, both due to disturbance inputs and due to model mismatch, we can construct a covariance matrix that is consistent with our expectations. Here, we modeled disturbances in order to improve the estimation of the state of the system; however, in some applications the disturbance estimate itself is the primary quantity of interest. Examples of disturbance estimation include estimation of external contact forces acting on a robot, and room-occupancy estimation (a person contributes approximately 100 W of heat power to their surroundings). With all these examples in mind, it is no wonder that people sometimes call state estimators "virtual sensors"!
Potential improvements
Although we managed to estimate the friction force acting on the double-mass system above, there is still plenty of room for improvement. If we had an accurate, nonlinear model of the friction, we could likely cancel out the friction disturbance much faster than our simple integrating model did.[int] Even without an accurate model, making use of the fact that the friction force behaves in a complicated way when the velocity crosses zero would likely improve the performance of the estimator. This could easily be done in a Kalman-filtering framework by giving the covariance a little bump when the velocity crosses or approaches zero. An example of such an adaptive covariance approach is given in the Adaptive Kalman-filter tutorial.
[int]: Using a Kalman filter with an integrating disturbance model in a state-feedback controller results in a controller with integral action. See Integral action for more details.
Further reading
To learn more about state estimation in JuliaSim, see the following resources:

- Estimation of Kalman filters directly from data using state-space model estimation.
- In this blog post, we considered state estimation with a linear dynamics model. State estimation can be used also with nonlinear models, even with DAE models!
- In the example above, we treated $$\sigma_f^2$$ and $$\sigma_s^2$$ as tuning parameters to be manually selected. However, it is possible to use data to estimate these parameters using, e.g., maximum-likelihood estimation. This blog post is long enough as it is, so that's a story for another time. (Here's part of that story if you are really curious: Parameter estimation for state estimators.)