Hi
Been working hard at a project for school the past month, implementing one of the more interesting works I’ve seen in the AR arena: Parallel Tracking and Mapping (PTAM) [PDF]. This is a work by Georg Klein [homepage] and David Murray from Oxford University, presented at ISMAR 2007.
When I first saw it on YouTube [link] I immediately saw the immense potential – mobile markerless augmented reality. I thought I should get to know this work a bit more closely, so I chose to implement it as part of the advanced computer vision course given by Dr. Lior Wolf [link] at TAU.
The work is very extensive, and is clearly the result of deep research in the field, so I set out to implement a few selected features: stereo initialization, tracking, and small map upkeep. I chose not to implement relocalization and full map handling.
This post is kind of a tutorial for 3D reconstruction with OpenCV 2.0. I will show practical use of the functions in cvtriangulation.cpp, which are not documented and in fact incomplete. Furthermore, I’ll show how to easily combine OpenCV and OpenGL for 3D augmentations, something that is only briefly covered in the docs or online.
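To give a flavor of the triangulation step, here is a minimal sketch of how matched points from two views can be turned into 3D points, given the two 3x4 camera projection matrices. The wrapper name and structure are mine, not the post’s exact code; the underlying routine is the one from cvtriangulation.cpp, which newer OpenCV versions expose as cv::triangulatePoints:

#include <opencv2/opencv.hpp>
#include <vector>

// Illustrative triangulation wrapper: P1, P2 are the 3x4 projection matrices,
// pts1/pts2 are the matched 2D points in each view.
std::vector<cv::Point3d> triangulate(const cv::Mat& P1, const cv::Mat& P2,
                                     const std::vector<cv::Point2f>& pts1,
                                     const std::vector<cv::Point2f>& pts2)
{
    cv::Mat points4D;                                   // 4xN homogeneous output
    cv::triangulatePoints(P1, P2, pts1, pts2, points4D);
    points4D.convertTo(points4D, CV_64F);

    std::vector<cv::Point3d> out;
    for (int i = 0; i < points4D.cols; i++) {
        double w = points4D.at<double>(3, i);           // divide out the homogeneous coordinate
        out.push_back(cv::Point3d(points4D.at<double>(0, i) / w,
                                  points4D.at<double>(1, i) / w,
                                  points4D.at<double>(2, i) / w));
    }
    return out;
}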
Here are the steps I took and the things I learned in the process of implementing the work.
Update: A nice patch by yazor fixes the video mismatching – thanks! Also, a nice application by Zentium called “iKat” is doing some kick-ass mobile markerless augmented reality.
Hi All
It looks like it’s finally here – a way to grab the raw data of the camera frames on the iPhone OS 3.x.
Update: Apple officially supports this in iOS 4.x using AVFoundation, here’s sample code from Apple developer.
A gifted hacker named John DeWeese was nice enough to comment on a post from May ’09 with his method of hacking the APIs to get the frames. Though cumbersome, it looks like it should work, but I haven’t tried it yet. I promise to try it soon and share my results.
Way to go John!
Some code would be awesome…
Roy.
Hi
I wanted to do the simplest recoloring/color-transfer I could find – and the internet is just a bust. Nothing free, good and usable available online… So I implemented the simplest color transfer algorithm in the world – Histogram Matching.
Here’s the implementation with OpenCV
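The idea in a nutshell: for each channel, build the cumulative histograms (CDFs) of the source and reference images, then map each source level to the reference level with the closest CDF value. Below is a compact sketch of that with OpenCV’s C++ API – the function names (histogramMatch, matchChannel) are illustrative, not necessarily the ones in the repo:

#include <opencv2/opencv.hpp>
#include <vector>

// Match one 8-bit channel of 'src' to the histogram of 'ref' via CDF lookup.
static cv::Mat matchChannel(const cv::Mat& src, const cv::Mat& ref)
{
    // accumulate the two histograms
    int srcHist[256] = {0}, refHist[256] = {0};
    for (int y = 0; y < src.rows; y++)
        for (int x = 0; x < src.cols; x++)
            srcHist[src.at<uchar>(y, x)]++;
    for (int y = 0; y < ref.rows; y++)
        for (int x = 0; x < ref.cols; x++)
            refHist[ref.at<uchar>(y, x)]++;

    // normalized cumulative distributions
    double srcCdf[256], refCdf[256], sAcc = 0, rAcc = 0;
    for (int i = 0; i < 256; i++) { sAcc += srcHist[i]; srcCdf[i] = sAcc; }
    for (int i = 0; i < 256; i++) { rAcc += refHist[i]; refCdf[i] = rAcc; }
    for (int i = 0; i < 256; i++) { srcCdf[i] /= sAcc; refCdf[i] /= rAcc; }

    // for every source level find the reference level with the closest CDF value
    uchar lut[256];
    int j = 0;
    for (int i = 0; i < 256; i++) {
        while (j < 255 && refCdf[j] < srcCdf[i]) j++;
        lut[i] = (uchar)j;
    }

    cv::Mat out;
    cv::LUT(src, cv::Mat(1, 256, CV_8U, lut), out);
    return out;
}

// Transfer the color "feel" of 'ref' onto 'src' by matching each BGR channel.
cv::Mat histogramMatch(const cv::Mat& src, const cv::Mat& ref)
{
    std::vector<cv::Mat> s, r, matched(3);
    cv::split(src, s);
    cv::split(ref, r);
    for (int c = 0; c < 3; c++)
        matched[c] = matchChannel(s[c], r[c]);
    cv::Mat result;
    cv::merge(matched, result);
    return result;
}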
Justin Talbot has done a tremendous job implementing the GrabCut algorithm in C [link to paper, link to code]. What I was missing, though, was the option to load ANY kind of file, not just PPMs and PGMs.
So I tweaked the code a bit to receive a filename and determine how to load it: use the internal P[P|G]M loaders, or offload the work to the OpenCV image loaders, which accept many more types. If the OpenCV route is used, the IplImage is converted to the internal GrabCut code representation.
// Dispatch on the file extension: use the internal PGM/PPM loaders, otherwise fall back to OpenCV
Image<Color>* load( std::string file_name )
{
    if( file_name.find( ".pgm" ) != std::string::npos )
    {
        return loadFromPGM( file_name );
    }
    else if( file_name.find( ".ppm" ) != std::string::npos )
    {
        return loadFromPPM( file_name );
    }
    else
    {
        return loadOpenCV(file_name);
    }
}

// Write the GrabCut mask into an 8-bit IplImage (flipped vertically): zero mask values become white
void fromImageMaskToIplImage(const Image<Real>* image, IplImage* ipli)
{
    for(int x=0;x<image->width();x++)
    {
        for(int y=0;y<image->height();y++)
        {
            //Color c = (*image)(x,y);
            Real r = (*image)(x,y);
            CvScalar s = cvScalarAll(0);
            if(r == 0.0)
            {
                s.val[0] = 255.0;
            }
            cvSet2D(ipli, ipli->height - y - 1, x, s);
        }
    }
}

// Convert an 8-bit BGR IplImage into GrabCut's Image<Color> with values in [0,1] (flipped vertically)
Image<Color>* loadIplImage(IplImage* im)
{
    Image<Color>* image = new Image<Color>(im->width, im->height);
    for(int x=0;x<im->width;x++)
    {
        for(int y=0;y<im->height;y++)
        {
            CvScalar v = cvGet2D(im, im->height - y - 1, x);
            Real R, G, B;
            R = (Real)((unsigned char)v.val[2])/255.0f;
            G = (Real)((unsigned char)v.val[1])/255.0f;
            B = (Real)((unsigned char)v.val[0])/255.0f;
            (*image)(x,y) = Color(R,G,B);
        }
    }
    return image;
}

// Load any format OpenCV knows, then convert to the internal representation
Image<Color>* loadOpenCV(std::string file_name)
{
    IplImage* im = cvLoadImage(file_name.c_str(), 1);
    Image<Color>* i = loadIplImage(im);
    cvReleaseImage(&im);
    return i;
}
Well, there’s nothing fancy here, but it does give you a fully working GrabCut implementation on top of OpenCV… so there’s the contribution.
GrabCutNS::Image<GrabCutNS::Color>* imageGC = GrabCutNS::loadIplImage(orig);
GrabCutNS::Image<GrabCutNS::Color>* maskGC = GrabCutNS::loadIplImage(mask);

GrabCutNS::GrabCut *grabCut = new GrabCutNS::GrabCut( imageGC );
grabCut->initializeWithMask(maskGC);
grabCut->fitGMMs();
//grabCut->refineOnce();
grabCut->refine();

IplImage* __GCtmp = cvCreateImage(cvSize(orig->width,orig->height),8,1);
GrabCutNS::fromImageMaskToIplImage(grabCut->getAlphaImage(),__GCtmp);

//cvShowImage("result",image);
cvShowImage("tmp",__GCtmp);
cvWaitKey(30);
I also added the GrabCutNS namespace, to differentiate the Image class from the rest of the code (that probably has an Image already).
Code is as usual available online in the SVN repo.
Enjoy!
Roy.
Hi everyone
Last weekend I attended GeekCon 2009, a tech conference, with a friend and colleague, Arnon (not Arnon from the blog, who recently had a birthday – Happy B-Day Arnon!). Each team that attended had to create a project it could complete during the two days of the conference. Our project was called “RunVas”, and the basic idea was to let people paint by running around. We wanted to combine computer vision with a little artistic angle.
Here are some more details
Switching, merging or swapping – call it what you like – it’s a pain to pull off. You need to spend a lot of time tuning the colors, blending the edges and smudging to get a decent result. So I wrote a plugin for the wonderful GIMP program that helps with this process. The merge is done using a blending algorithm that blends the colors from the original image into the pasted image.
I’ll write a little bit about coding GIMP plugins, which is very simple, and some about the algorithm and its sources.
Let’s see how it’s done
Hi
I had very high hopes for iPhoneOS 3.1 in the AR arena. With all the hype about it, I naturally thought that with 3.1 developers would be able to bring marker-detection AR to the App Store – meaning, using legal and published APIs. But looking around 3.1’s APIs, I wasn’t able to find anything that would allow this.
Not all AR is banned. In fact AR apps like Layar will be very much possible, as they rely on compass & gyro to create the AR effect. These don’t require processing the live video feed from the camera, only overlaying data over it. This can be done easily with the new cameraOverlayView property of UIImagePickerController. All you need to do is create a transparent view with the required data, and it will be overlaid on the camera preview.
Sadly, to get marker-detection abilities developers must still hack the system (camera callback rerouting) or use very slow methods (UIGetScreenImage). I can only hope Apple will see the potential of letting developers manipulate the live video feed.
Roy.
Hi
OpenCV is by far my favorite CV/image processing library. When I found an OpenCV port to the iPhone, and saw that someone had even tried to get it to do face detection, I just had to try it for myself.
In this post I’ll try to run through the steps I took in order to get OpenCV running on the iPhone, and then how to get OpenCV’s face detection to play nice with iPhoneOS’s image buffers and video feed (not yet OS 3.0!). Then I’ll talk a little about optimization.
Update: Apple officially supports camera video pixel buffers in iOS 4.x using AVFoundation, here’s sample code from Apple developer.
Update: I do not have the xcodeproj file for this project, please don’t ask for it. Please see here for compiling OpenCV for the iPhone SDK 4.3.
Let’s begin
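Before getting into the iPhone-specific plumbing, here is roughly what the OpenCV half of the face detection looks like with the classic Haar cascade C API of that era. This is a minimal sketch with an illustrative cascade file name and parameters, not necessarily the exact code used in this project:

#include <cv.h>
#include <highgui.h>

// Detect faces in a BGR frame with a Haar cascade and draw rectangles around them.
void detectFaces(IplImage* frame, CvHaarClassifierCascade* cascade, CvMemStorage* storage)
{
    // the detector works on grayscale, equalized images
    IplImage* gray = cvCreateImage(cvGetSize(frame), 8, 1);
    cvCvtColor(frame, gray, CV_BGR2GRAY);
    cvEqualizeHist(gray, gray);

    cvClearMemStorage(storage);
    CvSeq* faces = cvHaarDetectObjects(gray, cascade, storage,
                                       1.2, 2, CV_HAAR_DO_CANNY_PRUNING,
                                       cvSize(30, 30));

    for (int i = 0; i < (faces ? faces->total : 0); i++) {
        CvRect* r = (CvRect*)cvGetSeqElem(faces, i);
        cvRectangle(frame,
                    cvPoint(r->x, r->y),
                    cvPoint(r->x + r->width, r->y + r->height),
                    CV_RGB(255, 0, 0), 2);
    }
    cvReleaseImage(&gray);
}

// Typical one-time setup (cascade path is illustrative):
// CvHaarClassifierCascade* cascade =
//     (CvHaarClassifierCascade*)cvLoad("haarcascade_frontalface_alt.xml", 0, 0, 0);
// CvMemStorage* storage = cvCreateMemStorage(0);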
Hi
The graphics course I took at TAU really expanded my knowledge of 3D rendering, and specifically using OpenGL to do so. The final task of the course, aside from the exam, was to write a 3D game. We were given 3 choices for types of games: worms-like, xonix-like and lightcycle-like. We chose to write our version of Worms in 3D.
I’ll try to take you through some of the problems we encountered, the decisions we made, and show as much code as possible. I’m not, however, gonna take you through the simple (yet grueling) work of actually getting meshes onto the screen or moving them around – these subjects are covered extensively online.
The whole game is implemented in Java using JOGL and SWT for 3D rendering. The code is of course available entirely online.
Hi
I saw the stats for the blog a while ago and it seems that the augmented reality topic is hot! 400 clicks/day, that’s awesome!
So I wanted to share with you my latest development in this field – cross-compiling the AR app to the iPhone. It proved easier than I originally thought, although it took a while to get it working smoothly.
Basically all I did was take NyARToolkit, compile it for armv6 arch, combine it with Norio Namura’s iPhone camera video feed code, slap on some simple OpenGL ES rendering, and bam – Augmented Reality on the iPhone.
Update: Apple officially supports camera video pixel buffers in iOS 4.x using AVFoundation, here’s sample code from Apple developer.
This is how I did it…