Hi
Another quickie on how to use the Kinect (libfreenect) with OpenCV 2.1. I've already seen people do it, but haven't seen any code.
UPDATE (12/29): OpenKinect posted very good C++ code for using libfreenect with the OpenCV 2.x APIs: here it is. Plus, their git repo now has very clean C code: here it is.
So here it goes
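If all you want is the pixels in a cv::Mat, libfreenect's sync wrapper is probably the shortest path. Here's a minimal sketch, assuming the libfreenect_sync API (header location varies by install) and a Kinect at index 0; the OpenCV calls are 2.x-era:

```cpp
// Grab one RGB and one depth frame via libfreenect's sync wrapper and
// wrap them in cv::Mat headers (no copying).
#include <libfreenect_sync.h> // may be <libfreenect/libfreenect_sync.h>
#include <opencv2/opencv.hpp>

int main() {
    void *rgb = 0, *depth = 0;
    uint32_t ts;

    // 640x480 RGB, packed 8-bit channels in RGB order
    if (freenect_sync_get_video(&rgb, &ts, 0, FREENECT_VIDEO_RGB) < 0) return 1;
    // 640x480 depth, 11-bit values stored in 16-bit words
    if (freenect_sync_get_depth(&depth, &ts, 0, FREENECT_DEPTH_11BIT) < 0) return 1;

    cv::Mat rgbMat(480, 640, CV_8UC3, rgb), bgrMat;
    cv::cvtColor(rgbMat, bgrMat, CV_RGB2BGR); // OpenCV displays BGR

    cv::Mat depthMat(480, 640, CV_16UC1, depth), depthShow;
    depthMat.convertTo(depthShow, CV_8UC1, 255.0 / 2047.0); // scale 11-bit range for display

    cv::imshow("rgb", bgrMat);
    cv::imshow("depth", depthShow);
    cv::waitKey(0);

    freenect_sync_stop();
    return 0;
}
```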
Hi,
Just wanted to share a bit of code using OpenCV's camera extrinsic parameter recovery – camera position and rotation – solvePnP (or its C counterpart, cvFindExtrinsicCameraParams2). I wanted to get simple planar object surface recovery for augmented reality, but without using any of the AR libraries – rather, dig into some OpenCV and OpenGL code myself.
This can serve as a primer, or tutorial on how to use OpenCV with OpenGL for AR.
Update 2/16/2015: I wrote another post on OpenCV-OpenGL AR, this time using the fine QGLViewer – a very convenient Qt OpenGL widget.
The program is just straightforward optical-flow-based tracking, fed manually with four points – the planar object's corners – and solving for the camera pose every frame. Plain vanilla AR.
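The per-frame core is only a handful of calls. A rough sketch of that step (the function name and the unit-square model here are placeholders, not the post's exact code):

```cpp
// Recover camera pose from the four tracked corners of a planar object.
#include <opencv2/opencv.hpp>

void poseFromPlane(const std::vector<cv::Point2f>& imgPts, // tracked corners
                   const cv::Mat& camMatrix, const cv::Mat& distCoeffs,
                   cv::Mat& rotMat, cv::Mat& tvec)
{
    // The planar object's corners in its own coordinate frame (Z = 0)
    std::vector<cv::Point3f> objPts;
    objPts.push_back(cv::Point3f(0, 0, 0));
    objPts.push_back(cv::Point3f(1, 0, 0));
    objPts.push_back(cv::Point3f(1, 1, 0));
    objPts.push_back(cv::Point3f(0, 1, 0));

    cv::Mat rvec;
    cv::solvePnP(cv::Mat(objPts), cv::Mat(imgPts), camMatrix, distCoeffs, rvec, tvec);

    // OpenGL wants a 3x3 rotation matrix, not an axis-angle vector
    cv::Rodrigues(rvec, rotMat);
}
```

rotMat and tvec then go straight into the OpenGL modelview matrix for the augmentation.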
Well, the whole cpp file is ~350 lines, but only 20 or so of them are interesting… actually far fewer. Let's see what's up.
Hi!
Long time no post… MIT is kicking my ass with work. But it was amazing to come back to so many comments from people eager to get OpenCV going mobile!
Anyway, just wanted to share my work on object detection using OpenCV 2.1 on Android.
Hi,
I'll present a quick and simple implementation of image recoloring – in fact, more like color transfer between images – using OpenCV in a C++ environment. The basis of the algorithm is learning the source color distribution with a GMM using EM, and then applying the changes to the target color distribution. It's fairly easy to implement with OpenCV, as all the "tools" are built in.
I was inspired by Lior Shapira's work on image appearance manipulation presented at Eurographics 09, and by a work on recoloring for the colorblind by Huang et al. presented at ICASSP 09. Both works deal with color manipulation using Gaussian Mixture Models.
Update 5/28/2015: Adrien contributed code that works with OpenCV v3! Thanks! https://gist.github.com/adriweb/815c1ac34a0929292db7
Let’s see how it’s done!
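As a taste of how little code the learning step takes, here's a minimal sketch of fitting the GMM – written against the OpenCV 3.x cv::ml::EM API (the post's own code predates it; see Adrien's gist above for a full v3 port):

```cpp
// Fit a k-component GMM to an image's pixels in L*a*b* space.
#include <opencv2/opencv.hpp>
#include <opencv2/ml.hpp>

cv::Ptr<cv::ml::EM> fitGMM(const cv::Mat& bgr, int k = 5)
{
    cv::Mat lab;
    cv::cvtColor(bgr, lab, cv::COLOR_BGR2Lab);

    // One row per pixel, one column per channel, as floats
    cv::Mat samples = lab.reshape(1, lab.rows * lab.cols);
    samples.convertTo(samples, CV_32F);

    cv::Ptr<cv::ml::EM> em = cv::ml::EM::create();
    em->setClustersNumber(k);
    em->trainEM(samples);

    // em->getMeans() now holds the k cluster centers; matching the source
    // clusters to the target clusters drives the actual color transfer.
    return em;
}
```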
ICP – Iterative Closest Point – is a trivial algorithm for matching object templates to noisy data. It's also super easy to program, so it's good material for a tutorial. The goal is to take a known set of points (usually defining a curve or an object's exterior) and register it, as well as possible, to a set of other points – usually a larger, noisier set in which we would like to find the object. The basic algorithm is described very briefly on Wikipedia, but there are a ton of papers on the subject.
I’ll take you through the steps of programming it with OpenCV.
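To set expectations: the whole loop fits comfortably in one function. A bare-bones 2D sketch of a single iteration – brute-force matching, a Kabsch (SVD) rigid fit, and no reflection check; real code wants a k-d tree and a convergence test:

```cpp
#include <opencv2/opencv.hpp>
#include <cfloat>
#include <vector>

// One ICP iteration: match model points to the scene, then move the
// model by the best rigid transform. Call repeatedly until the mean
// matching distance stops improving.
void icpStep(std::vector<cv::Point2f>& model, const std::vector<cv::Point2f>& scene)
{
    // 1. Match each model point to its closest scene point (brute force)
    std::vector<cv::Point2f> matched(model.size());
    for (size_t i = 0; i < model.size(); i++) {
        float best = FLT_MAX;
        for (size_t j = 0; j < scene.size(); j++) {
            float dx = model[i].x - scene[j].x, dy = model[i].y - scene[j].y;
            float d2 = dx * dx + dy * dy;
            if (d2 < best) { best = d2; matched[i] = scene[j]; }
        }
    }

    // 2. Center both point sets
    cv::Point2f mu1(0, 0), mu2(0, 0);
    for (size_t i = 0; i < model.size(); i++) { mu1 += model[i]; mu2 += matched[i]; }
    mu1 = mu1 * (1.0 / model.size());
    mu2 = mu2 * (1.0 / model.size());

    // 3. Cross-covariance, then SVD gives the optimal rotation (Kabsch);
    //    a det(R) < 0 reflection check is omitted for brevity
    cv::Mat H = cv::Mat::zeros(2, 2, CV_64F);
    for (size_t i = 0; i < model.size(); i++) {
        cv::Mat p = (cv::Mat_<double>(2, 1) << model[i].x - mu1.x, model[i].y - mu1.y);
        cv::Mat q = (cv::Mat_<double>(2, 1) << matched[i].x - mu2.x, matched[i].y - mu2.y);
        H += q * p.t();
    }
    cv::SVD svd(H);
    cv::Mat R = svd.u * svd.vt;

    // 4. Apply the rotation and the translation (mu2 - R*mu1) to the model
    for (size_t i = 0; i < model.size(); i++) {
        cv::Mat p = (cv::Mat_<double>(2, 1) << model[i].x - mu1.x, model[i].y - mu1.y);
        cv::Mat q = R * p;
        model[i] = cv::Point2f((float)q.at<double>(0) + mu2.x,
                               (float)q.at<double>(1) + mu2.y);
    }
}
```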
This is a tutorial on using Graph Cuts and Gaussian Mixture Models for image segmentation with OpenCV in a C++ environment.
Update 10/30/2017: See a new implementation of this method using OpenCV-Python, PyMaxflow, SLIC superpixels, Delaunay and other tricks.
Been working on my master's thesis for a while now, and the path of my work came across image segmentation. Naturally I became interested in max-flow graph cut algorithms, being the "hottest fish in the fish market" right now – if the fish market were the image segmentation scene.
So I went looking for a C++ implementation of graph cut, only to find out that OpenCV already implemented it in v2.0 as part of its GrabCut implementation. But I wanted to explore a bit, so I found this implementation by Olga Veksler, which is built upon Kolmogorov's framework for max-flow algorithms. I was also inspired by Shai Bagon's usage example of this implementation for Matlab.
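For reference, the built-in route I mention above takes just a few lines – a minimal sketch using cv::grabCut, seeded with a rectangle that stands in for a real user selection:

```cpp
// Graph-cut segmentation via OpenCV's built-in GrabCut.
#include <opencv2/opencv.hpp>

cv::Mat segmentWithGrabCut(const cv::Mat& img, const cv::Rect& rect)
{
    cv::Mat mask, bgModel, fgModel;
    cv::grabCut(img, mask, rect, bgModel, fgModel,
                5 /* iterations */, cv::GC_INIT_WITH_RECT);

    // Keep pixels labeled foreground or probably-foreground
    cv::Mat fg = (mask == cv::GC_FGD) | (mask == cv::GC_PR_FGD);
    cv::Mat out;
    img.copyTo(out, fg);
    return out;
}
```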
Let’s jump in…
Update: check out my new post about this https://www.morethantechnical.com/2012/10/17/head-pose-estimation-with-opencv-opengl-revisited-w-code/
Hi
Just wanted to share a small thing I did with OpenCV – Head Pose Estimation (sometimes known as Gaze Direction Estimation). Many people try to achieve this, and there are a ton of papers covering it, including a recent overview of almost all known methods.
I implemented a very quick & dirty solution based on OpenCV's internal methods that produced surprising results (I expected it to fail), so I decided to share. It is based on 3D–2D point correspondence – fitting the detected 2D points to a 3D model of the head. OpenCV provides a magical method – solvePnP – that does this, given some calibration parameters that I completely disregarded.
Here’s how it’s done
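In essence it boils down to a single solvePnP call. A sketch only – the five model points and the guessed intrinsics below are rough placeholders, not the post's actual numbers:

```cpp
// Head pose from 2D facial landmarks and a crude generic 3D face model.
#include <opencv2/opencv.hpp>

void estimateHeadPose(const std::vector<cv::Point2f>& imagePts, // detected landmarks
                      const cv::Size& frameSize,
                      cv::Mat& rvec, cv::Mat& tvec)
{
    // Generic face model in arbitrary units: left eye, right eye,
    // nose tip, left mouth corner, right mouth corner
    std::vector<cv::Point3f> modelPts;
    modelPts.push_back(cv::Point3f(-30,  40,  0));
    modelPts.push_back(cv::Point3f( 30,  40,  0));
    modelPts.push_back(cv::Point3f(  0,   0, 30));
    modelPts.push_back(cv::Point3f(-25, -40,  0));
    modelPts.push_back(cv::Point3f( 25, -40,  0));

    // Guessed intrinsics: focal length ~ image width, principal point at center
    double f = frameSize.width;
    cv::Mat K = (cv::Mat_<double>(3, 3) << f, 0, frameSize.width  / 2.0,
                                           0, f, frameSize.height / 2.0,
                                           0, 0, 1);

    // Empty distortion coefficients: assume no lens distortion
    cv::solvePnP(cv::Mat(modelPts), cv::Mat(imagePts), K, cv::Mat(), rvec, tvec);
}
```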
Hi
Been working hard at a project for school the past month, implementing one of the more interesting works I've seen in the AR arena: Parallel Tracking and Mapping (PTAM) [PDF]. This is work by Georg Klein [homepage] and David Murray from Oxford University, presented at ISMAR 2007.
When I first saw it on YouTube [link], I immediately saw the immense potential – mobile markerless augmented reality. I thought I should get to know this work a bit more closely, so I chose to implement it as part of an advanced computer vision course given by Dr. Lior Wolf [link] at TAU.
The work is very extensive, and clearly the result of deep research in the field, so I set out to achieve a few selected features: stereo initialization, tracking, and small-scale map upkeep. I chose not to implement relocalization and full map handling.
This post is kind of a tutorial for 3D reconstruction with OpenCV 2.0. I will show practical use of the functions in cvtriangulation.cpp, which are undocumented and in fact incomplete. Furthermore, I'll show how to easily combine OpenCV and OpenGL for 3D augmentations, something that is only briefly described in the docs or online.
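For instance, the triangulation function can be wrapped like this – a sketch against the 2.x-era C API (assuming CV_64F matrices; the C interface is gone from later OpenCV versions):

```cpp
// Triangulate matched image points from two views into 3D.
// P1, P2: 3x4 projection matrices; pts1, pts2: 2xN matched points.
#include <opencv2/opencv.hpp>

cv::Mat triangulate(const cv::Mat& P1, const cv::Mat& P2,
                    const cv::Mat& pts1, const cv::Mat& pts2)
{
    int n = pts1.cols;
    cv::Mat pts4D(4, n, CV_64F);

    // The C API wants CvMat headers; these share data with the cv::Mats
    CvMat cP1 = P1, cP2 = P2, cpts1 = pts1, cpts2 = pts2, cp4D = pts4D;
    cvTriangulatePoints(&cP1, &cP2, &cpts1, &cpts2, &cp4D);

    // Divide by the homogeneous coordinate to get Euclidean 3D points
    for (int i = 0; i < n; i++)
        pts4D.col(i) /= pts4D.at<double>(3, i);

    return pts4D.rowRange(0, 3).clone();
}
```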
Here are the steps I took and the things I learned in the process of implementing the work.
Update: A nice patch by yazor fixes the video mismatching – thanks! Also, a nice application by Zentium called "iKat" is doing some kick-ass mobile markerless augmented reality.
Hi All
It looks like it's finally here – a way to grab the raw data of the camera frames on iPhone OS 3.x.
Update: Apple officially supports this in iOS 4.x using AVFoundation; here's sample code from Apple Developer.
A gifted hacker named John DeWeese was nice enough to comment on a post from May '09 with his method of hacking the APIs to get the frames. Though cumbersome, it looks like it should work, but I haven't tried it yet. I promise to try it soon and share my results.
Way to go John!
Some code would be awesome…
Roy.