We humans have been filtering things for virtually our entire history. Water filtering is a simple example: we can filter impurities from water as simply as using our hands to skim dirt and leaves off the top of the water. Another example is filtering out noise from our surroundings. If we paid attention to all the little noises around us we would go crazy. We learn to ignore superfluous sounds (traffic, appliances, etc.) and focus on important sounds, like the voice of the person we're speaking with.
There are also many examples in engineering where filtering is desirable. Radio communications signals are often corrupted with noise, and a good filtering algorithm can remove the noise from electromagnetic signals while still retaining the useful information. Another example is line voltage. Many countries require in-home filtering of the line voltage used to power personal computers and peripherals; without filtering, power fluctuations would drastically shorten the lifespan of the devices.
Kalman filtering was developed around 1960, although it has its roots as far back as Carl Friedrich Gauss in 1795. Kalman filtering has been applied in areas as diverse as aerospace, marine navigation, nuclear power plant instrumentation, demographic modeling, manufacturing, and many others. This paper uses a tutorial, example-based approach to explain Kalman filtering.
Consider the problem of estimating the variables of some system. In dynamic systems (that is, systems that vary with time) the system variables are often called state variables. Assume that the system variables, represented by the vector x, are governed by the equation x_{k+1} = A x_k + w_k, where w_k is random process noise and the subscripts on the vectors represent the time step. For instance, if our dynamic system consists of a spacecraft which is accelerating with random bursts of gas from its reaction control system thrusters, the vector x might consist of position p and velocity v. Then the system equation would be given by Equation 1:

    [ p_{k+1} ]   [ 1  T ] [ p_k ]   [ T^2/2 ]
    [ v_{k+1} ] = [ 0  1 ] [ v_k ] + [ T     ] a_k        (Equation 1)
where a_k is the random time-varying acceleration, and T is the time between step k and step k+1. Now suppose we can measure the position p. Then our measurement at time k can be denoted z_k = p_k + v_k, where v_k is random measurement noise. (Note that v_k here denotes the measurement noise, not the velocity.)
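To make the model concrete, here is a small simulation sketch of this system in Python. The numerical values of T and the two noise standard deviations are illustrative assumptions, not values taken from the paper's program:

```python
import random

random.seed(0)

# Illustrative parameter values (assumptions for this sketch):
T = 0.1          # time between step k and step k+1, seconds
sigma_a = 0.5    # std dev of the random acceleration bursts, ft/sec^2
sigma_z = 10.0   # std dev of the position measurement noise, ft

p, v = 0.0, 0.0              # true position and velocity
positions, measurements = [], []
for k in range(100):
    # x_{k+1} = A x_k + w_k, with the random acceleration a_k entering
    # through w_k = [T^2/2, T]^T * a_k
    a = random.gauss(0.0, sigma_a)
    p, v = p + T * v + 0.5 * T * T * a, v + T * a
    # z_k = p_k + v_k (v_k is the measurement noise, not the velocity)
    z = p + random.gauss(0.0, sigma_z)
    positions.append(p)
    measurements.append(z)
```

Running this produces a noisy measurement sequence scattered widely around the true, slowly drifting position, which is exactly the situation the filter must cope with.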
The question addressed by the Kalman filter is this: given our knowledge of the behavior of the system, and given our measurements, what is the best estimate of position and velocity? We know how the system behaves according to the system equation, and we have measurements of the position, so how can we determine the best estimate of the system variables? Surely we can do better than taking each measurement at face value, especially if we suspect that we have a lot of measurement noise.
The Kalman filter is formulated as follows. Suppose we assume that the process noise w_k is white with a covariance matrix Q. Further assume that the measurement noise v_k is white with a covariance matrix R, and that it is not correlated with the process noise. We might want to formulate an estimation algorithm such that the following statistical conditions hold:

1. The expected value of our state estimate is equal to the expected value of the true state. That is, on average, our estimate is unbiased.
2. The estimation algorithm minimizes the expected value of the square of the estimation error. That is, on average, it gives the smallest possible estimation error.
It so happens that the Kalman filter is the estimation algorithm which satisfies these criteria. There are many alternative ways to formulate the Kalman filter equations. One of the formulations is given in Equations 2-5 as follows:

    S_k = H P_k H^T + R                                      (Equation 2)
    K_k = A P_k H^T S_k^{-1}                                 (Equation 3)
    P_{k+1} = A P_k A^T + Q - A P_k H^T S_k^{-1} H P_k A^T   (Equation 4)
    x̂_{k+1} = A x̂_k + K_k (z_k - H x̂_k)                     (Equation 5)

where x̂_k denotes the estimate of x_k, and H is the measurement matrix (H = [1 0] in our example, since we measure only the position).
In the above equations, the superscript –1 indicates matrix inversion and the superscript T indicates matrix transposition. S is called the covariance of the innovation, K is called the gain matrix, and P is called the covariance of the prediction error.
Equation 5 is fairly intuitive. The first term used to derive the state estimate at time k+1 is just A times the state estimate at time k. This would be the state estimate if we didn't have a measurement. In other words, the state estimate propagates in time just like the state vector (see Equation 1). The second term in Equation 5 is called the correction term, and it represents how much to correct the propagated estimate on the basis of our measurement. If the measurement noise is much greater than the process noise, K will be small (that is, we won't give much credence to the measurement). If the measurement noise is much smaller than the process noise, K will be large (that is, we will give a lot of credence to the measurement).
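One recursion of the filter for our two-state example can be sketched in Python as follows. Since the displayed equations do not survive in this copy of the paper, this sketch uses the standard one-step a-priori recursion consistent with the symbols the text defines (S the innovation covariance, K the gain matrix, P the prediction-error covariance); taking H = [1 0] is an assumption consistent with measuring only the position. The 2x2 matrix algebra is written out by hand:

```python
def kalman_step(xhat, P, z, A, Q, R):
    """One a-priori Kalman recursion for a 2-state system measuring state 0.

    With H = [1, 0]:
      S = H P H^T + R
      K = A P H^T S^{-1}
      xhat' = A xhat + K (z - H xhat)
      P'    = A P A^T + Q - (A P H^T) S^{-1} (A P H^T)^T
    """
    # Innovation covariance S (a scalar here: one measured quantity)
    S = P[0][0] + R
    # A P H^T is the first column of A P
    APh = [A[0][0]*P[0][0] + A[0][1]*P[1][0],
           A[1][0]*P[0][0] + A[1][1]*P[1][0]]
    # Gain matrix K (a 2x1 column here)
    K = [APh[0] / S, APh[1] / S]
    # Innovation: measured position minus estimated position
    innov = z - xhat[0]
    # State update: propagate with A, then correct with the gain
    xnew = [A[0][0]*xhat[0] + A[0][1]*xhat[1] + K[0]*innov,
            A[1][0]*xhat[0] + A[1][1]*xhat[1] + K[1]*innov]
    # Covariance update: A P A^T + Q - (A P H^T)(A P H^T)^T / S
    AP = [[sum(A[i][m]*P[m][j] for m in range(2)) for j in range(2)]
          for i in range(2)]
    Pnew = [[sum(AP[i][m]*A[j][m] for m in range(2)) + Q[i][j]
             - APh[i]*APh[j]/S for j in range(2)] for i in range(2)]
    return xnew, Pnew, K

# The gain behaves as the text describes: a noisy measurement (large R)
# earns a small gain, a clean one (small R) a large gain.
A = [[1.0, 0.1], [0.0, 1.0]]
Q = [[0.001, 0.0], [0.0, 0.01]]
P0 = [[100.0, 0.0], [0.0, 100.0]]
_, _, K_noisy = kalman_step([0.0, 0.0], P0, 0.0, A, Q, 100.0)
_, _, K_clean = kalman_step([0.0, 0.0], P0, 0.0, A, Q, 0.01)
```

The numeric values of A, Q, and P0 above are arbitrary choices for the demonstration.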
Let's look at an example. The system represented by Equation 1 was simulated on a computer with random bursts of acceleration which had a standard deviation of 0.5 feet/sec^2. The position was measured with an error of 10 feet (one standard deviation). The figure below shows how well the Kalman filter was able to estimate the position, in spite of the large measurement noise.
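A self-contained sketch of this experiment in Python is given below, using the noise levels stated in the text (0.5 ft/sec^2 acceleration bursts, 10 ft measurement error) and the time step and duration quoted for the MATLAB program. The initial error covariance and H = [1 0] are assumptions of the sketch, not values from the paper:

```python
import math
import random

random.seed(1)

dt, duration = 0.1, 10.0        # step size and run length from the text
sigma_a, sigma_z = 0.5, 10.0    # accel burst and measurement noise std devs
steps = int(round(duration / dt))

A = [[1.0, dt], [0.0, 1.0]]
# Process noise covariance Q = E[w w^T], where w = [dt^2/2, dt]^T * a_k
Q = [[sigma_a**2 * dt**4 / 4, sigma_a**2 * dt**3 / 2],
     [sigma_a**2 * dt**3 / 2, sigma_a**2 * dt**2]]
R = sigma_z**2

p, v = 0.0, 0.0               # true position and velocity
x = [0.0, 0.0]                # state estimate
P = [[R, 0.0], [0.0, 1.0]]    # initial error covariance (an assumed guess)

meas_err, est_err = [], []
for k in range(steps):
    z = p + random.gauss(0.0, sigma_z)   # z_k = p_k + measurement noise
    meas_err.append(z - p)
    est_err.append(x[0] - p)

    # One Kalman recursion with H = [1 0] (we measure only position)
    S = P[0][0] + R
    APh = [A[0][0]*P[0][0] + A[0][1]*P[1][0],
           A[1][0]*P[0][0] + A[1][1]*P[1][0]]   # A P H^T
    K = [APh[0] / S, APh[1] / S]                # gain matrix
    innov = z - x[0]
    x = [A[0][0]*x[0] + A[0][1]*x[1] + K[0]*innov,
         A[1][0]*x[0] + A[1][1]*x[1] + K[1]*innov]
    AP = [[sum(A[i][m]*P[m][j] for m in range(2)) for j in range(2)]
          for i in range(2)]
    P = [[sum(AP[i][m]*A[j][m] for m in range(2)) + Q[i][j]
          - APh[i]*APh[j]/S for j in range(2)] for i in range(2)]

    # Propagate the true state with a fresh random acceleration burst
    a = random.gauss(0.0, sigma_a)
    p, v = p + dt*v + 0.5*dt*dt*a, v + dt*a

rms = lambda errs: math.sqrt(sum(e*e for e in errs) / len(errs))
rms_meas, rms_est = rms(meas_err), rms(est_err)
```

As in the plot, the RMS estimation error comes out well below the roughly 10-foot RMS measurement error, since the filter smooths the measurement noise while tracking the slow drift of the true position.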
The MATLAB® program that I used to generate the above results is available for downloading. Don't worry if you don't know MATLAB; it's an easy-to-read language, almost like pseudocode. The MATLAB parameters I used to generate the above results were alpha = 1, duration = 10, and dt = 0.1. If you use MATLAB to run the program you will get different results every time because of the random noise that is simulated, but the results will be similar to the above plot. Click here to download the MATLAB file. Kevin Wilder implemented this Kalman filtering example in a Mathcad® spreadsheet, which was later corrected by Richard Wall. The spreadsheet and a PDF description are available for download.
Kalman filtering is a huge field whose depths we cannot hope to begin to plumb in such a brief paper as this. Thousands of papers and dozens of textbooks have been written on this subject since its inception in 1960. Some issues that complicate the application of the Kalman filter are the following.
I've written a textbook that condenses everything I've learned about Kalman filtering over the past 20 years into 500 pages. The book web site is http://academic.csuohio.edu/simond/estimation, which provides more MATLAB code and additional tutorial articles. There have also been dozens of other books written about Kalman filtering. Two texts that I've found particularly helpful in my study and application of Kalman filters are the following.
The use of Kalman filters requires a lot of matrix algebra. Some helpful books on this topic are the following.
Some of the classic papers on Kalman filtering have been reprinted by the IEEE Press in "Kalman Filtering: Theory and Application," edited by H. Sorenson, 1985. Peter D. Joseph's web page was a useful resource on the topic of Kalman filtering until 2009, but unfortunately it no longer seems to exist. Dr. Joseph had worked with Kalman filters since their inception in 1960, and coauthored one of the earliest texts on the subject in 1968. His web page included lessons for the beginning, intermediate, and advanced student. Here are some other web pages on the subject.
Copyright 1998-2010 Innovatia Software. All Rights Reserved.
Last Revised: May 10, 2010