Hi!
Long time no post… MIT is kicking my ass with work. But it was amazing to come back to so many comments with people anxious to get OpenCV going mobile!
Anyway, I just wanted to share my work on object detection using OpenCV2.1 on Android.
Although it seems like a trivial task, since you can just compile OCV2.1 as a native lib and use JNI to access it, I actually haven't seen too many people claim to have done it nicely and also share code… (Ahem, computer-vision-software.com, share the knowledge!)
This is a quickie so I'll be brief. I followed the android-opencv project's instructions for compiling (using the Crystax NDK), and successfully ran their example CVCamera app on my device. A good starting point.
But the API they suggest is so cumbersome… it took me a while to figure out, and in the end I couldn't be bothered rewriting the silly parts, so I just used it as is.
To save you some time, what I basically did was add a function to detect objects:
int Detector::detectAndDrawObjects(int idx, image_pool* pool) {
    vector<Rect> objects;
    const static Scalar colors[] = {
        CV_RGB(0,0,255), CV_RGB(0,128,255), CV_RGB(0,255,255), CV_RGB(0,255,0),
        CV_RGB(255,128,0), CV_RGB(255,255,0), CV_RGB(255,0,0), CV_RGB(255,0,255)
    };

    double scale = 2.0;
    Mat* _img = pool->getImage(idx);

    // downscale the frame, rotate it to landscape and mirror it
    // (the camera delivers portrait frames -- see the note below)
    Mat tmp;
    resize(*_img, tmp, Size(_img->cols/2.0, _img->rows/2.0));
    double angle = -90.0;
    Point2f src_center(tmp.rows/2.0, tmp.rows/2.0);
    Mat rot_mat = getRotationMatrix2D(src_center, angle, 1.0/scale);
    Mat dst;
    warpAffine(tmp, dst, rot_mat, Size(tmp.rows, tmp.cols));
    flip(dst, dst, 1);

    Mat img = dst;
    Mat gray, smallImg; //( cvRound (img.rows/scale), cvRound(img.cols/scale), CV_8UC1 );
    cvtColor(img, gray, CV_BGR2GRAY);
    smallImg = gray;
    equalizeHist(smallImg, smallImg);

    // run the Haar cascade detector on the equalized grayscale image
    int minobjsize = 40;
    this->cascade.detectMultiScale(smallImg, objects, 1.1, 2,
        0
        | CV_HAAR_FIND_BIGGEST_OBJECT
        //| CV_HAAR_DO_ROUGH_SEARCH
        | CV_HAAR_SCALE_IMAGE,
        Size(minobjsize, minobjsize));

    stringstream ss;
    ss << objects.size() << " objects, " << smallImg.cols << "x" << smallImg.rows;
    putText(img, ss.str(), Point(20,20), FONT_HERSHEY_PLAIN, 1.0, Scalar(0,255,0), 2);

    int i = 0;
    scale = 1.0;
    for (vector<Rect>::const_iterator r = objects.begin(); r != objects.end(); r++, i++) {
        Point center;
        Scalar color = colors[i % 8];
        int radius;
        center.x = cvRound((r->x + r->width*0.5)*scale);
        center.y = cvRound((r->y + r->height*0.5)*scale);
        radius = cvRound((r->width + r->height)*0.25*scale);
        circle(img, center, radius, color, 3, 8, 0);

        stringstream ss1;
        ss1 << r->x << "," << r->y;
        putText(img, ss1.str(), Point(20,30), FONT_HERSHEY_PLAIN, 1.0, Scalar(0,255,0), 2);
    }

    // whole area
    rectangle(img, Point(0,0), Point(img.cols-1, img.rows-1), Scalar(0,255,0), 3);
    // a [minobjsize]x[minobjsize] rect in the center
    rectangle(img,
        Point(img.cols/2.0 - minobjsize/2.0, img.rows/2.0 - minobjsize/2.0),
        Point(img.cols/2.0 + minobjsize/2.0, img.rows/2.0 + minobjsize/2.0),
        Scalar(0,255,0), 3);

    dst.copyTo(*_img);
    return objects.size();
}
Excuse my messy code, it’s just a modification of facedetect.cpp from OCV examples.
However, one move I had to make was rotating the frame, because the silly Samsung Galaxy delivers frames in “portrait” rather than “landscape” (that's what the warpAffine op is for). Or rather, it's android-opencv's problem with how it delivers the bytes… but either way I had to deal with it. The rest is pretty standard stuff.
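(By the way, if all you need is the 90-degree rotation, a simpler alternative to warpAffine is transpose + flip. This is just a sketch, and the flip code may need tweaking for your particular device orientation:)

// Sketch: rotate a portrait frame to landscape with transpose + flip
// (rotation-only alternative to the warpAffine above; flip code 0/1 may need tweaking per device)
Mat rotated;
transpose(tmp, rotated);    // swap rows and columns
flip(rotated, rotated, 1);  // flip around the vertical axis -> 90-degree clockwise rotation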
So what happens on the Java side? Nothing much… just a call to the JNI function:
class DetectorProcessor implements NativeProcessor.PoolCallback {
    @Override
    public void process(int idx, image_pool pool, long timestamp, NativeProcessor nativeProcessor) {
        Log.i("Detector", "Detector process start");
        int num = processor.detectAndDrawObjects(idx, pool);
        Log.i("Detector", "Detector process end, found " + num + " objects");
        // probably should do something with these objects now..
    }
}
The processor gets invoked in a timely fashion by adding it to android-ocv's “callback stack”:
LinkedList<PoolCallback> defaultcallbackstack = new LinkedList<PoolCallback>();
defaultcallbackstack.addFirst(new DetectorProcessor());
mPreview.addCallbackStack(defaultcallbackstack);
This will run the JNI call on every frame…
The JNI wrapper is generated by SWIG, following android-ocv:
/*
 * include the headers required by the generated cpp code
 */
%{
#include "Detector.h"
#include "image_pool.h"
using namespace cv;
%}

//import the android-cv.i file so that swig is aware of all that has been previously defined
//notice that it is not an include....
%import "android-cv.i"

//make sure to import the image_pool as it is referenced by the Processor java generated class
%typemap(javaimports) Detector "
import com.opencv.jni.image_pool;// import the image_pool interface for playing nice with
                                 // android-opencv
"

//this is exactly as in "Detector.h"
class Detector {
public:
    Detector();
    virtual ~Detector();
    bool initCascade(const char* filename);
    int detectAndDrawObjects(int idx, image_pool* pool);
};
Almost forgot – loading the classifier cascade!
This proved a bit tricky: just adding the XML to the “assets” doesn't let the native code access it through the regular file system interface. So I did a little workaround: I make a temporary copy of it, and then (once I have an accessible File object) I load it into the cascade object using its absolute path:
try {
    InputStream is = getAssets().open("cascade-haar-40.xml");
    File tempfile = File.createTempFile("detector", "");
    Log.i("Detector", "Tempfile:" + tempfile.getAbsolutePath());

    FileOutputStream fos = new FileOutputStream(tempfile);
    byte[] b = new byte[1024];
    int read = -1;
    while ((read = is.read(b, 0, 1024)) > 0) {
        Log.i("Detector", "read " + read);
        fos.write(b, 0, read);
    }
    fos.close();
    is.close();

    boolean res = processor.initCascade(tempfile.getAbsolutePath());
    Log.i("Detector", "initCascade: " + res);
    tempfile.delete(); // no longer needed
} catch (IOException e) {
    e.printStackTrace();
}
That's some simple object detection on Android, right? And it runs at a decent frame rate too (>10 FPS on a Samsung Galaxy S).
I'll try to upload video proof soon (shooting a video of a video is not so simple :), and maybe the complete source.
Thanks for tuning in…
Roy.
33 replies on “OpenCV2.1 on Android quickey with Haar object detection [w/ code]”
Great post, you are a lifesaver! Thanks for sharing, Roy! I had many troubles on a similar topic that I could not solve. This is very helpful!!
Do you think you can send me a copy of the source at [email protected]?
Thanks in advance
What type is cascade? How should I define/declare it?
Sorry, I am very new to OpenCV.
Hi.
I'm following the Android-OpenCV project, but I can't find a minimalist tutorial anywhere on how to use JNI with one's own .cpp functions in Android/Java (like you did with your Detector.cpp file).
Thanks in advance for your help.
Hi Viish
Yes, they are lacking some basic tutorials, but it's a rather simple thing for an experienced Java programmer.
Anyway, in the CVCamera project you have all you need, but you need to change it to do what your application needs to do.
So, you create your own .cpp and .h and .i (.i is for SWIG, and should be similar to .h with a few more things, check out CVCamera).
And then you run “build.sh”, which should create a wrapper class with JNI calls and a compiled .so library that gets packaged with your project into the APK.
Then in Java you basically just call the wrapper class.
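For reference, a Detector.h along these lines should do the trick (just a sketch; the includes may differ depending on your OpenCV 2.1 build, and the CascadeClassifier member is there because initCascade() and detectAndDrawObjects() use it):

// Detector.h -- sketch of the header behind the .i file in the post above
#ifndef DETECTOR_H
#define DETECTOR_H

#include <cv.h>          // OpenCV 2.1-style umbrella header; adjust to your build
#include "image_pool.h"  // from android-opencv

class Detector {
public:
    Detector();
    virtual ~Detector();
    bool initCascade(const char* filename);
    int detectAndDrawObjects(int idx, image_pool* pool);
private:
    cv::CascadeClassifier cascade;  // loaded by initCascade(), used for detectMultiScale()
};

#endif // DETECTOR_H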
Hope this helps a bit.
Roy
What is the original size of the image you are passing to the detect function?
Thank you so much, now I can detect faces 😉
Hey Roy,
Thanks for your post.
I am trying to implement Lucas-Kanade optical flow using the same CVCamera template.
For this, I am trying to pull the current and previous images out of the image_pool, using:
Mat* img = pool->getImage(idx); and Mat* _img = pool->getImage(idx-1); with a check for the first run. But it doesn't seem to work.
I also tried to save the previous image in a static variable, but the application crashes after the first frame.
Do you have any suggestions on this?
Thanks
The image pool only holds one frame, the current one, so using getImage(idx-1) will not work.
They put the image pool mechanism in place to allow a buffer of multiple images, but in the end it's not implemented, so only one image is stored. That's what I gathered from looking at the code…
A good way would be to save the previous frame as a member variable on the object, rather than a static variable. Try that.
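Something along these lines should do (just a rough sketch, assuming you add a “Mat prevGray” member to the Detector class; the function name and parameters are illustrative):

// Sketch: Lucas-Kanade optical flow between the previous and current frame,
// keeping the previous frame as a class member (prevGray) rather than a static.
int Detector::trackOpticalFlow(int idx, image_pool* pool) {
    Mat* img = pool->getImage(idx);
    Mat gray;
    cvtColor(*img, gray, CV_BGR2GRAY);

    if (!prevGray.empty()) {
        vector<Point2f> prevPts, nextPts;
        vector<uchar> status;
        vector<float> err;
        goodFeaturesToTrack(prevGray, prevPts, 100, 0.01, 10);  // pick points to track
        if (!prevPts.empty())
            calcOpticalFlowPyrLK(prevGray, gray, prevPts, nextPts, status, err);
        // ...draw or otherwise use the tracked points here...
    }

    gray.copyTo(prevGray);  // remember this frame for the next call
    return 0;
}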
Best
Roy.
Hey Roy,
Thanks a lot dude! It worked. You saved my day..
Thanks
Hi!
Thanks for the great post Roy!
I’m also working on a similar project.
Do you know a way to get OpenCV's Mat over to Java?
And Addie, how did you get the optical flow working?
Could you maybe share a bit of your code?
Thanks
Fai
Hi,
This tutorial seems to be exactly what I'm looking for, but I seem to be having a couple of problems. How exactly do you initialize the cascade? Any information you can give me would be greatly appreciated!
Thanks,
Dave
On the C++ side it’s just:
bool Detector::initCascade(const char* filename) {
return cascade.load(filename);
}
This will fire from the Java side, and initialize the cascade.
Hi,
My program is very slow at loading the cascade on Android 2.1 (about 40 sec),
but very fast on Android 2.2 (about 400 ms).
The cascade file is about 1 MB.
Do you know how to load the cascade faster on Android 2.1?
Hi Roy,
Thanks for the info above. I have a query. When I run the CVCamera application on my target device, the camera frames come in very slowly. What I mean is that when I use the phone's normal camera it works perfectly, but with the CVCamera application the camera frames are delayed. Can you suggest why this problem occurs and what a possible solution could be?
Hi Roy, I would like to ask: on the Java side, is the DetectorProcessor class created in an existing Java file or a new Java file?
DetectorProcessor is an inner class of the activity, so it’s in the same java file.
Roy.
Hi Roy,
I am new to OpenCV and C++. May I ask how you create the Detector.h file?
Thanks
John
Hi Roy,
I have followed the code you gave above, but I am still encountering an error at processor.initCascade(tempfile.getAbsolutePath()); in the try/catch part. Do you have any suggestions for this?
Hey Roy,
When I try to compile the C++ files I always get an error: “cannot convert ‘cv::Mat’ to ‘cv::Mat*’ in initialization”. The line causing this error is: “Mat* _img = pool->getImage(idx);”
If I change it to
“Mat* _img;
*_img = pool->getImage(idx);” I can build the files, but the app crashes every time I call the detectAndDraw function. Could you help me with this problem?
Christian.
The API of the android-opencv library must have changed.
So you should use “Mat _img = pool->getImage(idx);” instead (this is a shallow copy of the Mat header and does not copy the pixel memory, so it's OK).
Then change everything in the rest of the file accordingly.
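For example, the start and end of the function would become roughly this (just a sketch, assuming getImage() now returns a Mat by value):

// Sketch of the adjusted code, assuming getImage() returns a Mat by value
Mat _img = pool->getImage(idx);
Mat tmp;
resize(_img, tmp, Size(_img.cols / 2, _img.rows / 2));  // use '.' instead of '->'
// ... the same processing as before ...
dst.copyTo(_img);  // note: with a by-value Mat this may not propagate back to the pool's image;
                   // you may need to write the result back another way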
Roy.
@John
This work is not really intended for complete beginners in C++.
However, I think you should not be discouraged!
Go ahead and read some C++ tutorials, and then try to figure out what's going on here.
Good Luck,
Roy.
Okay, I changed everything and the detector seems to work, because I get 1 or 0 back depending on whether there is a face in the image. But there is no feedback in the app, only in the debugger. Do you have a suggestion as to what could cause this?
I found a solution: after deleting the part that rotates the image, everything works fine.
When I build, an error comes up:
‘class Detector’ has no member named ‘cascade’ in Detector.cpp
Any ideas why?
Everything works fine now. Thanks for this awesome tutorial!
pan: I get the same message, how did you fix it? Thanks
Pan: currently I'm new to Android, and I don't quite get what your code is about. Do you have any other simple example that would help me understand it better? I have to do a webcam video stream with face detection for a project. Please reply ASAP 😡 Thanks a lot, it would be a great help.
Hi, I'm doing the CVCamera project but it has some problems. So, is it possible to have your project? Anybody, please help me. I'm a beginner on Android. Thank you very much. My email: [email protected]
@Hai
OpenCV for Android has moved past this, and now has far better support and a far better API for Android applications.
Check out: http://opencv.itseez.com/doc/tutorials/introduction/android_binary_package/android_binary_package.html#android-binary-package
Hi, Roy
Nice work you've done! Please, I would appreciate it if you could send me the full code of this application so I can integrate it into my own work. Thanks.
Hey Roy, can you send me the source and some test sample images of your object detection code?
@rahul, Sorry I don’t have the code anymore… But it looks like OpenCV for Android has gone very far since then, and they actually released a sample with object detection:
https://code.ros.org/svn/opencv/trunk/opencv/samples/android/face-detection/
Hi, Roy
Can you help with a function to detect smiles on Android? I tried using OpenCV but I can't detect them accurately.