This is a tutorial on using Graph-Cuts and Gaussian-Mixture-Models for image segmentation with OpenCV in a C++ environment.
Update 10/30/2017: See a new implementation of this method using OpenCV-Python, PyMaxflow, SLIC superpixels, Delaunay triangulation and other tricks.
I’ve been working on my master’s thesis for a while now, and the path of my work came across image segmentation. Naturally I became interested in max-flow graph-cut algorithms, being the “hottest fish in the fish market” right now – if the fish market were the image segmentation scene.
So I went looking for a C++ implementation of graph cut, only to find out that OpenCV already implemented it in v2.0 as part of their GrabCut implementation. But I wanted to explore a bit, so I found this implementation by Olga Veksler, which is built upon Kolmogorov’s framework for max-flow algorithms. I was also inspired by Shai Bagon’s Matlab usage example of this implementation.
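For reference, the built-in version can be driven in just a few lines – a minimal sketch with the modern C++ API and placeholder inputs (image path and rectangle); the rest of this post uses Veksler’s library instead:

```cpp
// Minimal sketch: OpenCV's built-in GrabCut, initialized from a rectangle.
// The image path and rectangle below are placeholders.
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat img = cv::imread("input.jpg");
    cv::Mat mask, bgdModel, fgdModel;   // grabCut fills these in
    cv::Rect rect(50, 50, 200, 200);    // rough box around the foreground

    cv::grabCut(img, mask, rect, bgdModel, fgdModel,
                5 /* iterations */, cv::GC_INIT_WITH_RECT);

    // Keep pixels labeled foreground or probably-foreground.
    cv::Mat fg = (mask == cv::GC_FGD) | (mask == cv::GC_PR_FGD);
    cv::imwrite("segmented.png", fg);
    return 0;
}
```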
Let’s jump in…
Update: check out my new post about this https://www.morethantechnical.com/2012/10/17/head-pose-estimation-with-opencv-opengl-revisited-w-code/
Hi
Just wanted to share a small thing I did with OpenCV – Head Pose Estimation (sometimes known as Gaze Direction Estimation). Many people try to achieve this and there are a ton of papers covering it, including a recent overview of almost all known methods.
I implemented a very quick & dirty solution based on OpenCV’s internal methods that produced surprising results (I expected it to fail), so I decided to share. It is based on 3D-2D point correspondences, fitting the detected 2D image points to a 3D model. OpenCV provides a magical method – solvePnP – that does this, given some calibration parameters that I completely disregarded.
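To give a feel for it, here’s a minimal sketch of such a solvePnP call – every number below is a placeholder (a crude head model, hand-picked image points, and a guessed pinhole camera with focal length about the image width):

```cpp
// Minimal sketch of pose-from-correspondences with cv::solvePnP.
// All values are placeholders, not measured data.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    std::vector<cv::Point3f> modelPoints = {
        {  0.f,   0.f,   0.f},   // nose tip
        {-30.f, -30.f, -30.f},   // left eye corner
        { 30.f, -30.f, -30.f},   // right eye corner
        {-20.f,  40.f, -30.f},   // left mouth corner
        { 20.f,  40.f, -30.f}    // right mouth corner
    };
    std::vector<cv::Point2f> imagePoints = {
        {320.f, 240.f}, {290.f, 210.f}, {350.f, 210.f},
        {300.f, 280.f}, {340.f, 280.f}
    };

    // Guessed camera: focal ~ image width, principal point at the center.
    cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) <<
        640, 0, 320,
        0, 640, 240,
        0, 0, 1);
    cv::Mat distCoeffs = cv::Mat::zeros(4, 1, CV_64F); // ignore distortion

    cv::Mat rvec, tvec; // head rotation (Rodrigues vector) and translation
    cv::solvePnP(modelPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);

    std::cout << "rotation:\n" << rvec << "\ntranslation:\n" << tvec << std::endl;
    return 0;
}
```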
Here’s how it’s done
Hi
Been working hard at a project for school the past month, implementing one of the more interesting works I’ve seen in the AR arena: Parallel Tracking and Mapping (PTAM) [PDF]. This is a work by Georg Klein [homepage] and David Murray from Oxford University, presented at ISMAR 2007.
When I first saw it on YouTube [link] I immediately saw the immense potential – mobile markerless augmented reality. I thought I should get to know this work a bit more closely, so I chose to implement it as part of an advanced computer vision course given by Dr. Lior Wolf [link] at TAU.
The work is very extensive, and clearly the result of deep research in the field, so I set out to implement a few selected features: stereo initialization, tracking, and small-map upkeep. I chose not to implement relocalization and full map handling.
This post is kind of a tutorial for 3D reconstruction with OpenCV 2.0. I will show practical use of the functions in cvtriangulation.cpp, which are undocumented and in fact incomplete. Furthermore, I’ll show how to easily combine OpenCV and OpenGL for 3D augmentations, something that is only briefly described in the docs or online.
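In today’s OpenCV the same functionality is exposed as cv::triangulatePoints; to show the shape of the API, here’s a minimal sketch with made-up projection matrices and matches:

```cpp
// Minimal sketch of two-view triangulation with cv::triangulatePoints
// (the modern entry point for the code in cvtriangulation.cpp).
// Projection matrices and point matches below are placeholders.
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // 3x4 projection matrices: identity pose, and a small baseline along X.
    cv::Mat P1 = cv::Mat::eye(3, 4, CV_64F);
    cv::Mat P2 = cv::Mat::eye(3, 4, CV_64F);
    P2.at<double>(0, 3) = -0.1;

    // Matched (normalized) image points, one 2xN matrix per view.
    cv::Mat pts1 = (cv::Mat_<double>(2, 2) << 0.10, 0.20,
                                              0.10, 0.15);
    cv::Mat pts2 = (cv::Mat_<double>(2, 2) << 0.05, 0.15,
                                              0.10, 0.15);

    cv::Mat points4D; // homogeneous output, 4xN
    cv::triangulatePoints(P1, P2, pts1, pts2, points4D);

    // De-homogenize to plain 3D points (one point per row).
    cv::Mat points3D;
    cv::convertPointsFromHomogeneous(points4D.t(), points3D);
    std::cout << points3D << std::endl;
    return 0;
}
```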
Here are the steps I took and the things I learned in the process of implementing the work.
Update: A nice patch by yazor fixes the video mismatch – thanks! Also, a nice application by Zentium called “iKat” is doing some kick-ass mobile markerless augmented reality.
Hi All
It looks like it’s finally here – a way to grab the raw data of the camera frames on the iPhone OS 3.x.
Update: Apple officially supports this in iOS 4.x using AVFoundation, here’s sample code from Apple developer.
A gifted hacker named John DeWeese was nice enough to comment on a post from May ’09 with his method of hacking the APIs to get the frames. Though cumbersome, it looks like it should work, but I haven’t tried it yet. I promise to try it soon and share my results.
Way to go John!
Some code would be awesome…
Roy.
Hi
I wanted to do the simplest recoloring/color-transfer I could find – and the internet is just a bust. Nothing free, good and usable available online… So I implemented the simplest color-transfer algorithm in the world – histogram matching.
Here’s the implementation with OpenCV
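The gist, as a simplified grayscale sketch (file names are placeholders; a color image just runs the same mapping on each channel): match the two cumulative histograms and turn the match into a lookup table.

```cpp
// Simplified grayscale sketch of histogram matching: build both CDFs,
// then map each source level to the reference level with the closest CDF.
#include <opencv2/opencv.hpp>

static void cdf256(const cv::Mat& gray, float out[256]) {
    int hist[256] = {0};
    for (int y = 0; y < gray.rows; y++)
        for (int x = 0; x < gray.cols; x++)
            hist[gray.at<uchar>(y, x)]++;
    float total = (float)gray.total(), cum = 0.f;
    for (int i = 0; i < 256; i++) { cum += hist[i]; out[i] = cum / total; }
}

int main() {
    cv::Mat src = cv::imread("source.jpg", cv::IMREAD_GRAYSCALE);
    cv::Mat ref = cv::imread("reference.jpg", cv::IMREAD_GRAYSCALE);

    float cdfSrc[256], cdfRef[256];
    cdf256(src, cdfSrc);
    cdf256(ref, cdfRef);

    // For each source level, pick the reference level with the closest CDF.
    cv::Mat lut(1, 256, CV_8U);
    for (int i = 0; i < 256; i++) {
        int j = 0;
        while (j < 255 && cdfRef[j] < cdfSrc[i]) j++;
        lut.at<uchar>(i) = (uchar)j;
    }

    cv::Mat matched;
    cv::LUT(src, lut, matched); // apply the mapping to every pixel
    cv::imwrite("matched.png", matched);
    return 0;
}
```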
Hi
OpenCV is by far my favorite CV/image-processing library. When I found an OpenCV port to the iPhone, and saw that someone had even tried to get it to do face detection, I just had to try it for myself.
In this post I’ll try to run through the steps I took to get OpenCV running on the iPhone, and then how to get OpenCV’s face detection to play nice with iPhone OS’s image buffers and video feed (not yet OS 3.0!). Then I’ll talk a little about optimization.
Update: Apple officially supports camera video pixel buffers in iOS 4.x using AVFoundation, here’s sample code from Apple developer.
Update: I do not have the xcodeproj file for this project, please don’t ask for it. Please see here for compiling OpenCV for the iPhone SDK 4.3.
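The detection call itself is plain OpenCV and platform-independent, so here it is first as a minimal desktop sketch (modern C++ API; the cascade and image paths are placeholders – point them at the XML shipped with OpenCV and any test photo):

```cpp
// Minimal sketch of Haar-cascade face detection with the C++ API.
// Cascade and image paths are placeholders.
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::CascadeClassifier face_cascade;
    if (!face_cascade.load("haarcascade_frontalface_alt.xml"))
        return -1;

    cv::Mat img = cv::imread("photo.jpg");
    cv::Mat gray;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(gray, gray); // helps detection under uneven lighting

    std::vector<cv::Rect> faces;
    face_cascade.detectMultiScale(gray, faces, 1.1, 3, 0, cv::Size(30, 30));

    for (const cv::Rect& r : faces)
        cv::rectangle(img, r, cv::Scalar(0, 255, 0), 2);
    cv::imwrite("faces.png", img);
    return 0;
}
```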
Let’s begin
This is a Java port of Rob Hess’ implementation of SIFT that I did for a project @ work.
However, I couldn’t port the actual extraction of SIFT descriptors from images, as it relies very heavily on OpenCV. So all that I actually ported to native Java is the kd-tree feature-matching part; the rest is JNI calls to Rob’s code.
I wrote this more as a tutorial on Rob’s work, with an easy JNI interface for Java.
You can find the sources here: https://www.morethantechnical.com/extupload/code/JavaSIFT.zip
Here’s how to use it:
Hi
I recently did a small project combining a Java web service with OpenCV processing. I tried to transfer the picture from the Java environment (as a BufferedImage) to OpenCV (IplImage) as seamlessly as possible. This proved a bit tricky, especially the Java part, where you need to create your own buffer for the image, but it worked out nicely.
Let me show you how I did it
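To give a taste, here’s a rough sketch of the native (JNI) side, assuming the Java code hands over the BufferedImage pixels as a tightly packed BGR byte array plus dimensions – the class and method names below are hypothetical:

```cpp
// Rough sketch of the native side for OpenCV 2.x. Assumes the Java code
// passes the pixels of a TYPE_3BYTE_BGR BufferedImage as a byte array.
// The Java class/method names in the exported symbol are hypothetical.
#include <jni.h>
#include <opencv2/opencv.hpp>   // in 2.x this also pulls in the C API

extern "C" JNIEXPORT void JNICALL
Java_MyProcessor_process(JNIEnv* env, jobject,
                         jbyteArray pixels, jint width, jint height) {
    jbyte* data = env->GetByteArrayElements(pixels, NULL);

    // Wrap the Java buffer in an IplImage header: no pixel copy.
    IplImage* img = cvCreateImageHeader(cvSize(width, height), IPL_DEPTH_8U, 3);
    cvSetData(img, data, width * 3);   // assumes no row padding

    cvSmooth(img, img, CV_GAUSSIAN, 5, 5); // any in-place processing here

    cvReleaseImageHeader(&img);                     // frees the header only
    env->ReleaseByteArrayElements(pixels, data, 0); // copy results back to Java
}
```

The nice part is that nothing gets copied on the native side: the IplImage header simply points into Java’s own byte buffer, and releasing the array writes the processed pixels straight back.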