Hi,
I wanted to put up a quick note on how to use Kalman filters in OpenCV 2.2 with the C++ API, because all I could find online used the old C API. Plus, the kalman.cpp example that ships with OpenCV is kind of crappy and really doesn't explain how to use the Kalman filter.
I'm no expert on Kalman filters, though; this is just a quick hack I got going as a test for a project. It worked, so I'm posting the results.
The Filter
So I wanted to do a 2D tracker that is more immune to noise. For that I set up a Kalman filter with 4 dynamic parameters and 2 measurement parameters (no control): the measurement is the 2D location of the object, and the dynamic state is the 2D location plus the 2D velocity. Pretty simple, and it makes the transition matrix simple as well.
KalmanFilter KF(4, 2, 0);
KF.transitionMatrix = *(Mat_<float>(4, 4) << 1,0,1,0,   0,1,0,1,  0,0,1,0,  0,0,0,1);
Mat_<float> measurement(2,1);
measurement.setTo(Scalar(0));

// init...
KF.statePre.at<float>(0) = mouse_info.x;
KF.statePre.at<float>(1) = mouse_info.y;
KF.statePre.at<float>(2) = 0;
KF.statePre.at<float>(3) = 0;
setIdentity(KF.measurementMatrix);
setIdentity(KF.processNoiseCov, Scalar::all(1e-4));
setIdentity(KF.measurementNoiseCov, Scalar::all(1e-1));
setIdentity(KF.errorCovPost, Scalar::all(.1));
Cool, moving on to the dynamic part.
So I set up a mouse callback to get the mouse position every “frame” (a 100ms wait), and feed that into the filter:
// First predict, to update the internal statePre variable
Mat prediction = KF.predict();
Point predictPt(prediction.at<float>(0), prediction.at<float>(1));

// Get mouse point
measurement(0) = mouse_info.x;
measurement(1) = mouse_info.y;
Point measPt(measurement(0), measurement(1));

// The "correct" phase that is going to use the predicted value and our measurement
Mat estimated = KF.correct(measurement);
Point statePt(estimated.at<float>(0), estimated.at<float>(1));
All the rest is garnish (see the code).
The important bit is to see that predict() happens before correct(). This is according to the excellent Kalman filter tutorial I found; look carefully at Figure 1-2, it will sort you out. Also take a look at OpenCV's internal implementation of the Kalman filter and see that it follows these steps closely, especially Mat& KalmanFilter::predict(const Mat& control) and Mat& KalmanFilter::correct(const Mat& measurement).
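For reference, here is a quick sketch of that two-step cycle in the usual textbook notation (not OpenCV's variable names), with A the transition matrix, H the measurement matrix, and Q, R the process and measurement noise covariances:

Time update ("predict"):
x_k^- = A x_{k-1} + B u_k
P_k^- = A P_{k-1} A^T + Q

Measurement update ("correct"):
K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}
x_k = x_k^- + K_k (z_k - H x_k^-)
P_k = (I - K_k H) P_k^-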
Another good place I found that helped me formulate the parameters for the filter is this place. Again, take everything with a grain of salt, because Kalman filters are very versatile; you just need to know how to formulate them correctly.
Result
Using velocity:
Not using velocity:
Some Video
Code
As usual, grab the code off the SVN:
svn co http://morethantechnical.googlecode.com/svn/trunk/mouse_kalman/main.cpp
Enjoy,
Roy.
50 replies on “Simple Kalman filter for tracking using OpenCV 2.2 [w/ code]”
Really cool kalman filter demo ..
Hi,
I don’t get why the transition matrix is set to:
1,0,1,0, 0,1,0,1, 0,0,1,0, 0,0,0,1
Why such values?
Thanks for the tutorial.
Stéphane
@Stéphane
Yes, I forgot to explain why…
It's simple: the first two rows are the X and Y values, and the next two are the X and Y velocity.
So the X and Y location change according to the former location plus the velocity, while the velocity keeps its last value (it might be interesting to try values < 1 to model a decay in velocity). Also note that time variance is not represented in the system, i.e. if the samples arrive at S frames/second and the velocity is V pixels/second, the transition should be expressed in pixels/frame (or the velocity terms should be scaled by the time step).
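As a sketch of that last point (assuming a variable dt that holds the time step in whatever units the velocity state uses; this is not in the original code):

// Constant-velocity model with an explicit time step dt:
// x' = x + vx*dt,  y' = y + vy*dt,  vx' = vx,  vy' = vy
float dt = 0.1f; // e.g. the ~100ms "frame" wait used in this demo
KF.transitionMatrix = *(Mat_<float>(4, 4) <<
    1, 0, dt, 0,
    0, 1, 0, dt,
    0, 0, 1,  0,
    0, 0, 0,  1);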
Nice code! I would just suggest a little modification to improve the tracking and better feel to it. Using http://en.wikipedia.org/wiki/Equations_of_motion
KalmanFilter KF(6, 2, 0);
Mat_<float> state(6, 1); /* (x, y, Vx, Vy, Ax, Ay) */
Mat processNoise(6, 1, CV_32F);
…
KF.statePre.at<float>(0) = mouse_info.x;
KF.statePre.at<float>(1) = mouse_info.y;
KF.statePre.at<float>(2) = 0;
KF.statePre.at<float>(3) = 0;
KF.statePre.at<float>(4) = 0;
KF.statePre.at<float>(5) = 0;
KF.transitionMatrix = *(Mat_<float>(6, 6) << 1,0,1,0,0.5,0, 0,1,0,1,0,0.5, 0,0,1,0,1,0, 0,0,0,1,0,1, 0,0,0,0,1,0, 0,0,0,0,0,1);
KF.measurementMatrix = *(Mat_<float>(2, 6) << 1,0,1,0,0.5,0, 0,1,0,1,0,0.5);
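For context, the 0.5 entries in that transition matrix come from the constant-acceleration equations of motion, written here with a unit time step (dt = 1):

x'  = x  + vx*dt + 0.5*ax*dt^2
vx' = vx + ax*dt
ax' = ax
(and the same for y, vy, ay)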
Hello Roy. I was wondering, have you applied kalman filtering to optical flow or any other such application. I am trying to use it with Optical flow to reduce the error in flow tracking, but I am not able to figure out how to formulate the Kalman state space representation.
Thank you, I’ve been scouring the internet for months looking for a basic implementation. Because of this I was able to learn how to use the OpenCV Kalman Filter for my project. It still needs some fine tuning, but I managed to get it to track the Blob cog.
If you’re ever in Malta, email me and I’ll buy you a drink 🙂
My question is related to the initialization with KF.statePre; in the example shown:
KF.statePre.at<float>(0) = mouse_info.x;
KF.statePre.at<float>(1) = mouse_info.y;
KF.statePre.at<float>(2) = 0;
KF.statePre.at<float>(3) = 0;
KF.statePre.at<float>(4) = 0;
KF.statePre.at<float>(5) = 0;
To make use of variable speed and acceleration, do I need to set KF.statePre.at<float>(2) = mouse_velocity.x and KF.statePre.at<float>(3) = mouse_velocity.y and calculate these myself, or does the model calculate these variables automatically through KF.transitionMatrix?
I tried this sample, but I see that the Kalman-filtered tracker has a lot of lag (there is a delay until it reaches the real position), which is clearly not suitable for real-time needs.
Is it possible to fix this?
Thanks for providing your excellent code.
I'm thinking of narrowing down the search area for a face localization application in a video stream.
At first, in frame 0 (say time t0), my application detects the position of a face as (x0, y0). Then it searches the whole area of the next frame again to locate the face at t1, which gives (x1, y1), and so on... so the computational cost of such a blind search is high.
Assuming I have (x0, y0) and (x1, y1) as the previous and current locations of the face, could you please let me know how I can modify your code to predict the next location of the face (instead of the mouse location)?
Sorry, maybe this is a simple question, but I'm not so experienced in programming. Please let me know which parts of your program should be replaced with the data I have: x0, y0 and x1, y1.
Thanks, Martin
Thanks for sharing the code.
Very nice example to start Kalman filtering with OpenCV.
But there is one small mistake:
The initial guess (the statePre values set in the initialization snippet above) should be stored in KF.statePost instead, as the prediction step uses the corrected state to predict the next state (statePre):
KF.statePre = KF.transitionMatrix * KF.statePost + …
Or did I misunderstand anything?
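A minimal sketch of that initialization against the 4-state setup from the post (statePre is also seeded here so that the very first predict() returns something sensible):

// Seed the corrected state (statePost) with the first observation;
// predict() computes statePre from statePost, so this is what matters.
KF.statePost.at<float>(0) = mouse_info.x;
KF.statePost.at<float>(1) = mouse_info.y;
KF.statePost.at<float>(2) = 0;
KF.statePost.at<float>(3) = 0;
KF.statePre = KF.statePost.clone();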
Excellent blog my friend.
Just one question.
How did you change the params to switch between using velocity and not using velocity?
Cheers
Wow really cool!
Maybe you could help me:
I'm working on a project to track objects over some frames.
I detect the blobs using color information. Because of shadows they sometimes disappear for a few frames and appear again later. How is it possible to track them and give the blobs an ID, so that the same blob keeps its own ID?
Thx
Regarding the frequent question of how to take velocity into account (which gets rid of the "lag" some posters refer to):
Contrary to the example in the post above, the code at the end of main.cpp does not include velocity (or: dx, dy) in the transition matrix.
So instead of this:
x y dx dy
| 1 0 0 0 |
| 0 1 0 0 |
| 0 0 1 0 |
| 0 0 0 1 |
change the matrix so that it calculates (x + dx) and (y + dy):
x y dx dy
| 1 0 1 0 |
| 0 1 0 1 |
| 0 0 1 0 |
| 0 0 0 1 |
There ya go! 🙂
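In code, against the 4-state setup from the original post, that change is just this one assignment (a sketch):

// x' = x + dx, y' = y + dy, dx' = dx, dy' = dy
KF.transitionMatrix = *(Mat_<float>(4, 4) <<
    1, 0, 1, 0,
    0, 1, 0, 1,
    0, 0, 1, 0,
    0, 0, 0, 1);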
Thanks for this great example code.
How would I change the code to work with 3D points (XYZ)?
Do I have to extend this code somehow?
KalmanFilter KF(6, 2, 0);
Mat_<float> state(6, 1); /* (x, y, Vx, Vy, Ax, Ay) */
Mat processNoise(6, 1, CV_32F);
@stephan
Basically, yes.
Just add z and Vz parameters to the state, noise covariances, etc.
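A rough sketch of what that 3D version could look like (assuming a measured 3D point; this is not code from the post):

// 6 dynamic params (x, y, z, vx, vy, vz), 3 measured (x, y, z), no control
KalmanFilter KF(6, 3, 0);
KF.transitionMatrix = *(Mat_<float>(6, 6) <<
    1,0,0, 1,0,0,
    0,1,0, 0,1,0,
    0,0,1, 0,0,1,
    0,0,0, 1,0,0,
    0,0,0, 0,1,0,
    0,0,0, 0,0,1);
Mat_<float> measurement(3,1);
measurement.setTo(Scalar(0));
setIdentity(KF.measurementMatrix);                    // 3x6, observes x, y, z
setIdentity(KF.processNoiseCov, Scalar::all(1e-4));
setIdentity(KF.measurementNoiseCov, Scalar::all(1e-1));
setIdentity(KF.errorCovPost, Scalar::all(.1));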
Thanks Roy.
So I got it working for a 3D point.
Now I am noticing that when the point does a circular/orbit movement, the Kalman filter does not make a future prediction with a location ahead of the actual point.
It seems that, since orbiting is a constant change of direction, the prediction does not work.
But is there a way to use a Kalman filter for exactly that: have a 3D point rotate around a center and predict its next position? I know the standard OpenCV example in the book does something similar by filtering the angle, but I need to keep my points and not convert them to angles.
Thx. S.
Hi,
There is something I don't understand. Where are we calculating the velocity? I understood that to get the next position we have x + dx, but how do we get this dx? How does the filter know that the third parameter is dx and not dy?
I want to make a prediction on something that is not a position but a size. I can't figure out where to add it (because I still want to know the position and velocity).
Thanks.
Can you tell me:
If the process state I am going to estimate is random, then A (the transition matrix), R, the process noise covariance and the measurement covariance will not be constant.
In that case, how does the Kalman filter tune its parameters?
Thanks for this excellent example.
Why do you have some parts of the code commented out? Like:
//measurement += KF.measurementMatrix*state;
//state = KF.transitionMatrix*state + processNoise;
What's the meaning of that?
Does it have any utility or make any difference in the prediction/update?
Thanks.
Hello.
Thank you for sharing.
This sample takes 2 measurement parameters. The velocity depends on these two measurements (and on the previous two measurements and the time), so I assume we don't need to provide 4 measurement parameters (positionX, positionY, velocityX, velocityY).
But how does OpenCV know how to calculate the velocity?
Thanks for this excellent example.
I do not have a measurement ready every frame, but I do want to use a prediction from the filter every frame. Do I then just feed the prediction instead of the measurement into the correct method, to keep the timing OK?
Thanks.
Well, I think it's working... But sometimes, when imshow is called (the second call), a new overlapping window is created! Then, when I move the mouse over one window, the program draws in another one. Does anybody know why?
Hi Roy,
It's great to see that you got the Kalman filter working. I have been trying to use the Kalman filter to smooth the values found by the Haar cascade face detection in OpenCV. The face cascade in OpenCV is used to detect the face and return the face position (x and y coordinates) and the face dimensions (height and width). But this data is quite jumpy, and I am trying to smooth it out using the Kalman filter.
I ended up having 8 dynamParams and 4 measureParams, and 8 parameters for the state matrix. I replaced “mouse_info.x and y” in your code with the x and y coordinates of the detected face and added the width and height parameters as well.
This code did not work for me though and I was wondering if you could guide me onto the right way of doing this.
Thanks so much,
Harshita
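For what it's worth, one hypothetical way to formulate that (a sketch only, not Harshita's or the post's actual code) is a state of [x, y, w, h, vx, vy, vw, vh] measured by the raw [x, y, w, h] coming from the cascade:

KalmanFilter KF(8, 4, 0);
// state: [x, y, w, h, vx, vy, vw, vh]; each of x, y, w, h drifts by its own velocity
KF.transitionMatrix = *(Mat_<float>(8, 8) <<
    1,0,0,0, 1,0,0,0,
    0,1,0,0, 0,1,0,0,
    0,0,1,0, 0,0,1,0,
    0,0,0,1, 0,0,0,1,
    0,0,0,0, 1,0,0,0,
    0,0,0,0, 0,1,0,0,
    0,0,0,0, 0,0,1,0,
    0,0,0,0, 0,0,0,1);
Mat_<float> measurement(4,1);
measurement.setTo(Scalar(0));
setIdentity(KF.measurementMatrix);                    // 4x8, observes x, y, w, h only
setIdentity(KF.processNoiseCov, Scalar::all(1e-4));
setIdentity(KF.measurementNoiseCov, Scalar::all(1e-1));
setIdentity(KF.errorCovPost, Scalar::all(.1));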
How would I change the code to work with 3D points (XYZ)?
And how can I include the velocities in the parameters? I didn't quite understand that yet.
Thanks for your help.
Paulo
Hi,
Thank you very much for this example, it’s very useful!!
However, I can't get it working... I just copy-pasted the source code and it compiles, but when I execute it the program crashes at KF.predict() (I use QtCreator 4.7.4 and OpenCV 2.4.2, and it doesn't tell me what the error is).
I also tried the sample from the OpenCV SVN and it doesn't work either...
Any hints about what might be happening?
Thanks in advance!
Thanks, this really helped me a lot implementing my Kalman filters.
But one should initialize the state using statePost instead of statePre, to avoid the first prediction yielding all zeros.
Regards, Matthäus
I used the same concept and wrote code in C/OpenCV.
But it gives runtime errors. Please help me.
My Code:
#include "cv.h"
#include "highgui.h"
#include <vector>
#include <iostream>
using namespace std;
CvPoint point=cvPoint(-1,-1);
CvPoint last_point;
vector<CvPoint> mousev,kalmanv;
void my_mouse_callback(int event, int x, int y, int flags, void* param)
{
CV_EVENT_LBUTTONDBLCLK;
last_point=point;
point=cvPoint(x,y);
}
IplImage *img;
int main()
{
// Initialize Kalman filter object, window, number generator, etc
IplImage* img = cvCreateImage( cvSize(500,500), 8, 3 );
CvKalman* kalman = cvCreateKalman( 4, 2, 0 );
CvMat* state = cvCreateMat( 4, 1, CV_32FC1 );
CvMat* processnoise=cvCreateMat(4,1,CV_32FC1);
//CvMat* measurement =cvCreateMat(2,1,CV_8SC1);
int measurement[2];
char code = (char)-1;
cvNamedWindow( "kalman", 1 );
cvSetMouseCallback("kalman",my_mouse_callback,NULL);
while(1)
{
if(point.x<0 || point.ystate_pre->data.fl,F1,sizeof(F1));
float trans_vals[] = {1,0,0,0,0,1,0,0,0,0,1,0,0,0,0,1};
CvMat tran;
cvInitMatHeader(&tran,4,4,CV_32FC1,trans_vals);
kalman->transition_matrix = cvCloneMat(&tran);
cvSetIdentity( kalman->measurement_matrix);
cvSetIdentity( kalman->process_noise_cov, cvRealScalar(1e-5) );
cvSetIdentity( kalman->measurement_noise_cov, cvRealScalar(1e-1) );
cvSetIdentity( kalman->error_cov_post, cvRealScalar(1) );
mousev.clear();
kalmanv.clear();
while(1)
{
const CvMat* prediction = cvKalmanPredict(kalman,0);
int x1= (int)prediction->data.fl[0];
int y1 =(int)prediction->data.fl[1];
CvPoint predictionPt = cvPoint(x1,y1);
measurement[0]= point.x;
measurement[1] = point.y;
CvPoint measPt = cvPoint(measurement[0],measurement[1]);
mousev.push_back(measPt);
float vals[]={measurement[0],measurement[1]};
CvMat measurement;
cvInitMatHeader(&measurement,1,2,CV_32FC1,vals);
const CvMat* estimated= cvKalmanCorrect( kalman,&measurement);
int x11= (int)estimated->data.fl[0];
int y11 =(int)estimated->data.fl[1];
CvPoint statePt=cvPoint(x11,y11);
kalmanv.push_back(statePt);
cvLine( img, cvPoint(statePt.x - 5, statePt.y - 5 ),cvPoint( statePt.x + 5, statePt.y + 5 ), cvScalar(255,255,255), 2,16);
cvLine( img, cvPoint(statePt.x + 5, statePt.y - 5 ),cvPoint( statePt.x - 5, statePt.y + 5 ), cvScalar(255,255,255), 2,16);
cvLine( img, cvPoint(measPt.x - 5, measPt.y - 5 ),cvPoint( measPt.x + 5, measPt.y + 5 ), cvScalar(255,255,255), 2,16);
cvLine( img, cvPoint(measPt.x + 5, measPt.y - 5 ),cvPoint( measPt.x - 5, measPt.y + 5 ), cvScalar(255,255,255), 2,16);
for (int i = 0; i < mousev.size()-1; i++)
{
cvLine(img, mousev[i], mousev[i+1], cvScalar(255,255,0), 1);
}
for (int i = 0; i 0 )
break;
//cvWaitKey(0);
}
if( code == 27 || code == 'q' || code == 'Q' )
break;
}
return 0;
}
@andy
Please provide the errors you get, else it’s impossible to understand the problem.
Thanks a lot for the reply... Fortunately I solved the problem, sorry for the trouble!
I have another problem: I have used the following two values for the transition matrix:
1> 1,0,1,0, 0,1,0,1, 0,0,1,0, 0,0,0,1
2> 1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1
But I am getting the same output... please help me.
Hey Roy, please help me out!
@andy
Your problem is then likely not in the transition matrix (if you try these two and the output is the same).
Check out the other parts of the code and make sure there are no bugs.
My Working Code is:
#include "cv.h"
#include "highgui.h"
#include <vector>
#include <iostream>
#include "math.h"
using namespace std;
CvPoint point=cvPoint(-1,-1);
CvPoint last_point;
vector<CvPoint> mousev,kalmanv;
void my_mouse_callback(int event, int x, int y, int flags, void* param)
{
CV_EVENT_LBUTTONDBLCLK;
last_point=point;
point=cvPoint(x,y);
}
IplImage *img;
int main()
{
IplImage* img = cvCreateImage( cvSize(500,500), 8, 3 );
CvKalman* kalman = cvCreateKalman( 4, 2, 0 );
CvMat* state = cvCreateMat( 4, 1, CV_32FC1 );
CvMat* processnoise=cvCreateMat(4,1,CV_32FC1);
CvMat* measurement =cvCreateMat(2,1,CV_32FC1);
cvZero(measurement);
char code = (char)-1;
cvNamedWindow( "kalman", 1 );
cvSetMouseCallback("kalman",my_mouse_callback,NULL);
while(1)
{
if(point.x<0 || point.ystate_pre->data.fl[0]=point.x;
kalman->state_pre->data.fl[1]=point.y;
kalman->state_pre->data.fl[2]=0;
kalman->state_pre->data.fl[3]=0;
float trans_vals[] = {1,0,1,0,0,1,0,1,0,0,1,0,0,0,0,0};
/*CvMat tran;
cvInitMatHeader(&tran,4,4,CV_32FC1,trans_vals);
kalman->transition_matrix = cvCloneMat(&tran);*/
CvMat* tran=new CvMat;
cvInitMatHeader(tran,4,4,CV_32FC1,trans_vals);
memcpy(static_cast(kalman->transition_matrix->data.fl),static_cast(tran),sizeof(tran));
cvSetIdentity( kalman->measurement_matrix);
cvSetIdentity( kalman->process_noise_cov, cvRealScalar(1e-4) );
cvSetIdentity( kalman->measurement_noise_cov, cvRealScalar(1e-1) );
cvSetIdentity( kalman->error_cov_post, cvRealScalar(.1) );
mousev.clear();
kalmanv.clear();
while(1)
{
const CvMat* prediction = cvKalmanPredict(kalman,0);
float x1= (float)prediction->data.fl[0];
float y1= (float)prediction->data.fl[1];
CvPoint2D32f predictionPt = cvPoint2D32f(x1,y1);
measurement->data.fl[0]=point.x;
measurement->data.fl[1] =point.y;
CvPoint measPt = cvPoint(measurement->data.fl[0],measurement->data.fl[1]);
mousev.push_back(measPt);
const CvMat* estimated= cvKalmanCorrect( kalman,measurement);
float x11= estimated->data.fl[0];
float y11 =estimated->data.fl[1];
printf("gandla");
CvPoint statePt=cvPoint(x11,y11);
kalmanv.push_back(statePt);
cvZero(img);
cvLine( img, cvPoint(statePt.x - 5, statePt.y - 5 ),cvPoint( statePt.x + 5, statePt.y + 5 ), cvScalar(255,255,255), 2,16);
cvLine( img, cvPoint(statePt.x + 5, statePt.y - 5 ),cvPoint( statePt.x - 5, statePt.y + 5 ), cvScalar(255,255,255), 2,16);
cvLine( img, cvPoint(measPt.x - 5, measPt.y - 5 ),cvPoint( measPt.x + 5, measPt.y + 5 ), cvScalar(255,255,255), 2,16);
cvLine( img, cvPoint(measPt.x + 5, measPt.y - 5 ),cvPoint( measPt.x - 5, measPt.y + 5 ), cvScalar(255,255,255), 2,16);
for (int i = 0; i < mousev.size()-1; i++)
{
cvLine(img, mousev[i], mousev[i+1], cvScalar(255,255,0), 1);
}
for (int i = 0; i 0 )
break;
//cvWaitKey(0);
}
if( code == 27 || code == 'q' || code == 'Q' )
break;
}
return 0;
}
Andy, thanks for the post; I am using this successfully in my implementation. BTW, do you have any hints on how I could replace OpenCV with my own code? Perhaps you have a site you could point me to?
Kind Regards
Hello Roy and others,
First, thanks for sharing this code!
And now a question: where is the initial state of the estimated position set?
Because the estimated prediction starts at (x,y) = (0,0), but I want to initialize it at another position!
Can you help me?
I really don't know if it is my FPS that is too slow, but the Kalman prediction is converging too slowly to the true point. Unlike this video: http://www.youtube.com/watch?v=SxtY1jQJ2fc … it takes 15 seconds to cross the screen and reach my mouse point.
Any ideas?
My latest OpenCV Kalman filter code is set up like so:
cv::KalmanFilter KF;
cv::Mat_<float> measurement;
KF.init(4, 2, 0);
setIdentity(KF.transitionMatrix);
measurement = Mat_<float>::zeros(2,1);
KF.statePre.setTo(0);
setIdentity(KF.measurementMatrix);
setIdentity(KF.processNoiseCov, Scalar::all(.05)); // adjust this for faster convergence - but higher noise
setIdentity(KF.measurementNoiseCov, Scalar::all(1e-1));
setIdentity(KF.errorCovPost, Scalar::all(.1));
Then on each new measurement:
float dt = ...; // time since last measurement in millisec
KF.transitionMatrix.at<float>(0,2) = dt;
KF.transitionMatrix.at<float>(1,3) = dt;
Mat prediction = KF.predict();
Mat estimated = KF.correct(measurement);
statePt.x = estimated.at<float>(0);
statePt.y = estimated.at<float>(1);
cout << estimated << endl;
Works ok for me when tracking a mouse.
After some experimentation, I changed:
setIdentity(KF.processNoiseCov, Scalar::all(1e-4));
to
setIdentity(KF.processNoiseCov, Scalar::all(1e-2));
Some parts of the code are commented out, like:
randn(processNoise, Scalar(0), Scalar::all(sqrt(KF.processNoiseCov.at<float>(0, 0))));
state = KF.transitionMatrix*state + processNoise;
Why is this so ?
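Those lines appear to be leftovers from OpenCV's own kalman.cpp sample, where there is no real input and the "true" process is simulated instead. A rough sketch of how they are used there (only needed for such a simulation, not when you feed real mouse measurements):

// Simulate the process: advance the "true" state with random process noise,
// then build a noisy measurement of it (as in OpenCV's kalman.cpp demo).
randn(processNoise, Scalar(0), Scalar::all(sqrt(KF.processNoiseCov.at<float>(0, 0))));
state = KF.transitionMatrix*state + processNoise;
randn(measurement, Scalar::all(0), Scalar::all(sqrt(KF.measurementNoiseCov.at<float>(0, 0))));
measurement += KF.measurementMatrix*state;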
Why could it be that setting dt=100ms, which is the update-rate, results in good estimations, while setting it to 1 leads to total randomness?
I used the example I found on Wikipedia:
http://en.wikipedia.org/wiki/Kalman_filter#Example_application.2C_technical
My (constant) Kalman gain is the following:
K = [dt^4/4, dt^3/2]^T * Q/(dt*dt*dt*dt*Q/4 + R)
with Q being the process noise (of the acceleration) and R being the measurement noise.
Why doesn’t the estimation converge towards the real position (always slightly off), maybe because of wrong assumptions of Q and R?
Thank you so much!
Hi Roy,
Thanks.
With the noise parameter it works OK for the latency/delay.
But if I use the dt in the transition matrix it predicts "too much": if I move the mouse in a straight line and then stop, the Kalman cross goes forward too far and then comes back to the mouse.
Why did you use dt in the transition matrix?
Also, in the screenshots, what did you mean by "using velocity"? Using velocity in the measurement, or only in the dynamics? I tried to use velocity in the measurement as well (not sure if it really makes sense), but I didn't manage to figure out the right values for the matrices and I get either a
segmentation fault or
OpenCV Error: Assertion failed in matmul.cpp (the rows and columns don't match up).
Anyway, with the noise parameter it works great, thanks!
What if I want to keep predicting without entering new data? How should I set the matrices of the Kalman filter?
Currently I'm just feeding the filter with the state it predicted, but that seems kind of stupid. I don't deeply understand the Kalman filter, but I feel like I should be able to set the measurement matrix correctly and just call KF.predict() repeatedly to see more steps ahead.
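A sketch of one common way to handle frames with no fresh measurement (not from the post; haveMeasurement and obs are hypothetical names): call predict() every frame, but only call correct() when a real observation exists. Feeding the prediction back through correct() tells the filter its guess was confirmed, which artificially shrinks the error covariance.

Mat prediction = KF.predict();
if (haveMeasurement) {                               // hypothetical flag
    measurement(0) = obs.x;                          // hypothetical observation
    measurement(1) = obs.y;
    Mat estimated = KF.correct(measurement);
    trackedPt = Point(estimated.at<float>(0), estimated.at<float>(1));
} else {
    // No correction this frame: the prediction is the best estimate and the
    // error covariance keeps growing to reflect the added uncertainty.
    trackedPt = Point(prediction.at<float>(0), prediction.at<float>(1));
    // Recent OpenCV versions already copy statePre/errorCovPre into the
    // *Post members inside predict(); doing it explicitly is harmless.
    KF.statePre.copyTo(KF.statePost);
    KF.errorCovPre.copyTo(KF.errorCovPost);
}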
This is an answer to Cauli, who asked a few posts ago about the speed of the algorithm:
Try making the adjustments suggested by Anant in one of the first posts. It adds acceleration and velocity to the model, and that will make the convergence much faster.
Hi all,
I've changed Roy's excellent code somewhat. In a nutshell, I've simulated some loss of measurement values when clicking the left mouse button. I've added a line for drawing the prediction values, so when no measurement values are available, the prediction is still shown. If Roy feels comfortable with it, I could submit a shared link so you can check it out.
Greetings,
Andrés
Can you post your code, Andres?
Hello !
Thx for sharing this awesome code !
First, I want to know who is Andres ?
Next, I was asking myself: why don't you, Roy, use the C++ API, which gives you a very good library? Is there any reason?
Regards,
VOTF
————————————————————————————————
Geeks making the world a bit better.
Just Do It with a justaway !
Yo VOTF,
The tag is just an error; Roy is using the right library for the Kalman filter!
:p
Nyalgo !
#########################################################
In the name of the neo armstrong cyclone jet armstrong cannon !
Bravo and thanks, Roy! Your example helped me understand how to do Kalman filtering in OpenCV. I have translated it into Python for anyone interested:
https://github.com/simondlevy/OpenCV-Python-Hacks/tree/master/Kalman2D
Regarding Stephan Schulz's point about 3D rotation above, where the Kalman filter does not predict the rotation:
I would suggest that if the motion were formulated using R, delta_R, theta, delta_theta, phi, delta_phi, i.e. a kind of radial units, then the filter would adapt to predicting the correct R and rotational velocity. This would make it more applicable to predicting rotation around planets etc. than a Cartesian coordinate system, which would "like" to predict straight lines.
I’m only just looking into getting this working but very interesting post.