Lecture 11: Kalman Filters
CS 344R: Robotics
Benjamin Kuipers

Up To Higher Dimensions
Our previous Kalman Filter discussion was of a simple one-dimensional model. Now we go up to higher dimensions:
- State vector: $x \in \mathbb{R}^n$
- Sense vector: $z \in \mathbb{R}^m$
- Motor vector: $u \in \mathbb{R}^l$
First, a little statistics.
Expectations
Let x be a random variable. The expected value E[x] is the mean:
$$E[x] = \int x \, p(x) \, dx \approx \bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i$$
It is the probability-weighted mean of all possible values, and the sample mean approaches it as N grows.
The expected value of a vector x is taken component by component:
$$E[\mathbf{x}] = \bar{\mathbf{x}} = [\bar{x}_1, \ldots, \bar{x}_n]^T$$
Variance and Covariance
The variance is
$$\sigma^2 = E[(x-\bar{x})^2] = \frac{1}{N}\sum_{i=1}^{N}(x_i-\bar{x})^2$$
The covariance matrix is $C = E[(\mathbf{x}-E[\mathbf{x}])(\mathbf{x}-E[\mathbf{x}])^T]$, with entries
$$C_{ij} = \frac{1}{N}\sum_{k=1}^{N}(x_{ik}-\bar{x}_i)(x_{jk}-\bar{x}_j)$$
Divide by N−1 instead of N to make the sample variance an unbiased estimator for the population variance.

Biased and Unbiased Estimators
Strictly speaking, the sample variance
$$\sigma^2 = E[(x-\bar{x})^2] = \frac{1}{N}\sum_{i=1}^{N}(x_i-\bar{x})^2$$
is a biased estimate of the population variance. An unbiased estimator is
$$s^2 = \frac{1}{N-1}\sum_{i=1}^{N}(x_i-\bar{x})^2$$
But: "If the difference between N and N−1 ever matters to you, then you are probably up to no good anyway." [Press, et al.]
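A small numpy illustration of the biased and unbiased estimators (the particular distribution and seed here are just for demonstration; numpy's ddof argument selects the divisor N − ddof):

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(loc=5.0, scale=2.0, size=10)  # true variance = 4

x_bar = samples.mean()          # sample mean
biased = samples.var(ddof=0)    # divides by N
unbiased = samples.var(ddof=1)  # divides by N-1
print(x_bar, biased, unbiased)
```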
Covariance Matrix
Along the diagonal, the $C_{ii}$ are variances. The off-diagonal $C_{ij}$ are essentially correlations.
$$C = \begin{bmatrix} \sigma_1^2 & C_{1,2} & \cdots & C_{1,N} \\ C_{2,1} & \sigma_2^2 & & \vdots \\ \vdots & & \ddots & \\ C_{N,1} & \cdots & & \sigma_N^2 \end{bmatrix}$$

Independent Variation
x and y are Gaussian random variables (N=100), generated with $\sigma_x = 1$ and $\sigma_y = 3$. Their sample covariance matrix:
$$C_{xy} = \begin{bmatrix} 0.90 & 0.44 \\ 0.44 & 8.82 \end{bmatrix}$$
Dependent Variation
c and d are random variables, generated from the same x and y with c = x + y and d = x − y. Their sample covariance matrix:
$$C_{cd} = \begin{bmatrix} 10.62 & -7.93 \\ -7.93 & 8.84 \end{bmatrix}$$
Note the off-diagonal entries are negative: cov(c,d) = var(x) − var(y), and var(y) is the larger of the two.
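A quick numpy check of these two slides (the seed is arbitrary, and a finite sample will only approximate the slides' numbers):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100
x = rng.normal(0.0, 1.0, N)  # sigma_x = 1
y = rng.normal(0.0, 3.0, N)  # sigma_y = 3

# Independent variation: off-diagonal terms should be near zero
print(np.cov(x, y))

# Dependent variation: c and d share the same underlying x and y
c, d = x + y, x - y
print(np.cov(c, d))          # off-diagonal ~ var(x) - var(y) < 0
```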
Discrete Kalman Filter
Estimate the state $x \in \mathbb{R}^n$ of a linear stochastic difference equation
$$x_k = A x_{k-1} + B u_k + w_{k-1}$$
where the process noise w is drawn from N(0,Q), with covariance matrix Q,
with a measurement $z \in \mathbb{R}^m$
$$z_k = H x_k + v_k$$
where the measurement noise v is drawn from N(0,R), with covariance matrix R.
A and Q are n×n. B is n×l. R is m×m. H is m×n.

Estimates and Errors
- $\hat{x}_k \in \mathbb{R}^n$ is the estimated state at time-step k.
- $\hat{x}_k^-$ is the estimate after prediction, before observation.
- Errors: $e_k^- = x_k - \hat{x}_k^-$ and $e_k = x_k - \hat{x}_k$
- Error covariance matrices: $P_k^- = E[e_k^- (e_k^-)^T]$ and $P_k = E[e_k e_k^T]$
The Kalman Filter's task is to update $\hat{x}_k$ and $P_k$.

Time Update (Predictor)
Update the expected value of x:
$$\hat{x}_k^- = A \hat{x}_{k-1} + B u_k$$
Update the error covariance matrix P:
$$P_k^- = A P_{k-1} A^T + Q$$
Our previous one-dimensional statements were simplified versions of the same idea:
$$\hat{x}^-(t_3) = \hat{x}(t_2) + u[t_3 - t_2]$$
$$\sigma^{-2}(t_3) = \sigma^2(t_2) + \sigma_w^2[t_3 - t_2]$$
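As a concrete illustration, here is a minimal numpy sketch of the predictor step; the function and variable names are mine, not from the slides:

```python
import numpy as np

def predict(x_hat, P, A, B, u, Q):
    """Time update: propagate the state estimate and its covariance."""
    x_pred = A @ x_hat + B @ u  # x_k^- = A x_{k-1} + B u_k
    P_pred = A @ P @ A.T + Q    # P_k^- = A P_{k-1} A^T + Q
    return x_pred, P_pred
```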
Measurement Update (Corrector)
Update the expected value:
$$\hat{x}_k = \hat{x}_k^- + K_k(z_k - H \hat{x}_k^-)$$
The innovation is $z_k - H \hat{x}_k^-$.
Update the error covariance matrix:
$$P_k = (I - K_k H) P_k^-$$
Compare with the previous one-dimensional form:
$$\hat{x}(t_3) = \hat{x}^-(t_3) + K(t_3)(z_3 - \hat{x}^-(t_3))$$
$$\sigma^2(t_3) = (1 - K(t_3)) \, \sigma^{-2}(t_3)$$

The Kalman Gain
The optimal Kalman gain $K_k$ is
$$K_k = P_k^- H^T (H P_k^- H^T + R)^{-1} = \frac{P_k^- H^T}{H P_k^- H^T + R}$$
Compare with the previous one-dimensional form:
$$K(t_3) = \frac{\sigma^{-2}(t_3)}{\sigma^{-2}(t_3) + \sigma_3^2}$$
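A matching sketch of the gain and corrector step, again in numpy with assumed names; it solves a linear system rather than forming the matrix inverse explicitly, relying on the symmetry of S and P:

```python
import numpy as np

def correct(x_pred, P_pred, z, H, R):
    """Measurement update: blend the prediction with the observation."""
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = np.linalg.solve(S, H @ P_pred).T  # K = P^- H^T S^{-1}
    x_hat = x_pred + K @ (z - H @ x_pred) # correct with the innovation
    P = (np.eye(P_pred.shape[0]) - K @ H) @ P_pred
    return x_hat, P
```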
Extended Kalman Filter
Suppose the state-evolution and measurement equations are non-linear:
$$x_k = f(x_{k-1}, u_k) + w_{k-1}$$
$$z_k = h(x_k) + v_k$$
The process noise w is drawn from N(0,Q), with covariance matrix Q. The measurement noise v is drawn from N(0,R), with covariance matrix R.

The Jacobian Matrix
For a scalar function y = f(x),
$$\Delta y = f'(x) \, \Delta x$$
For a vector function y = f(x), the Jacobian plays the role of the derivative:
$$\Delta y = J \, \Delta x, \qquad \begin{bmatrix} \Delta y_1 \\ \vdots \\ \Delta y_n \end{bmatrix} = \begin{bmatrix} \frac{\partial f_1}{\partial x_1}(x) & \cdots & \frac{\partial f_1}{\partial x_n}(x) \\ \vdots & & \vdots \\ \frac{\partial f_n}{\partial x_1}(x) & \cdots & \frac{\partial f_n}{\partial x_n}(x) \end{bmatrix} \begin{bmatrix} \Delta x_1 \\ \vdots \\ \Delta x_n \end{bmatrix}$$
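When an analytic Jacobian is awkward to derive, a forward-difference approximation is a common fallback. This sketch (the helper name and step size eps are my choices) just applies the definition above column by column:

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Approximate J_ij = df_i/dx_j by forward differences."""
    fx = f(x)
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (f(x + dx) - fx) / eps
    return J
```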
Linearize the Non-Linear
Let A be the Jacobian of f with respect to x:
$$A_{ij} = \frac{\partial f_i}{\partial x_j}(x_{k-1}, u_k)$$
Let H be the Jacobian of h with respect to x:
$$H_{ij} = \frac{\partial h_i}{\partial x_j}(x_k)$$
Then the Kalman Filter equations are almost the same as before!

EKF Update Equations
Predictor step:
$$\hat{x}_k^- = f(\hat{x}_{k-1}, u_k)$$
$$P_k^- = A P_{k-1} A^T + Q$$
Kalman gain:
$$K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}$$
Corrector step:
$$\hat{x}_k = \hat{x}_k^- + K_k(z_k - h(\hat{x}_k^-))$$
$$P_k = (I - K_k H) P_k^-$$
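Putting the pieces together, a minimal EKF cycle in numpy; f, h, and the Jacobian helpers (e.g. the numerical_jacobian sketch above) are assumed to be supplied by the caller:

```python
import numpy as np

def ekf_step(x_hat, P, u, z, f, h, jac_f, jac_h, Q, R):
    """One EKF cycle: nonlinear predict, linearized covariance update, correct."""
    # Predictor: propagate through the nonlinear model
    A = jac_f(x_hat, u)    # A_ij = df_i/dx_j at (x_{k-1}, u_k)
    x_pred = f(x_hat, u)
    P_pred = A @ P @ A.T + Q
    # Corrector: linearize the sensor model at the prediction
    H = jac_h(x_pred)      # H_ij = dh_i/dx_j at x_k^-
    S = H @ P_pred @ H.T + R
    K = np.linalg.solve(S, H @ P_pred).T
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(P.shape[0]) - K @ H) @ P_pred
    return x_new, P_new
```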
Catch The Ball Assignment
- State evolution is linear (almost). What is A? (B = 0.)
- The sensor equation is non-linear. What is y = h(x)? What is the Jacobian H(x) of h with respect to x?
- Errors are treated as additive. Is this OK?
- What are the covariance matrices Q and R?
A sketch of one possible A appears below.

TTD
- Intuitive explanations for $APA^T$ and $HPH^T$ in the update equations.
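Not the assignment solution, but to hint at the shape of A: under the assumption of a planar ball state $[x, y, \dot{x}, \dot{y}]^T$ with time step dt (both my choices, not specified in the slides), a constant-velocity discretization gives the A below; gravity enters as a known additive term, so it does not appear in A itself.

```python
import numpy as np

dt = 0.1  # assumed time step
# x_k = A x_{k-1} + g_term, with state [x, y, vx, vy]
A = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]])
g_term = np.array([0, -0.5 * 9.81 * dt**2, 0, -9.81 * dt])
```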