While looking for a very simple way to start up an OpenGL visualizer for quick 3D hacks, I discovered an excellent library called libQGLViewer, and I want to quickly show how easy it is to set up a 3D environment with it. The library provides an easy-to-use, feature-rich Qt widget that you can embed in your UIs or use stand-alone (this may sound like a marketing pitch, but they are not paying me anything 🙂).
This is based on the library’s own examples at http://www.libqglviewer.com/examples/index.html, and on some of the examples that come with the library source itself.
Let’s see how it’s done.
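To give a taste before diving in, here is a minimal sketch along the lines of the library's simpleViewer example (the class name, triangle and window title are just placeholders): subclass QGLViewer, put your OpenGL calls in draw(), and you get camera navigation, keyboard shortcuts and snapshots for free.

```cpp
// Minimal libQGLViewer viewer: subclass, override init() and draw().
#include <QApplication>
#include <QGLViewer/qglviewer.h>

class Viewer : public QGLViewer {
protected:
    virtual void init() {
        setSceneRadius(10.0);              // helps the default camera frame the scene
    }
    virtual void draw() {
        // Plain OpenGL calls go here; a single triangle stands in for your 3D hack.
        glBegin(GL_TRIANGLES);
        glVertex3f(-1.0f, 0.0f, 0.0f);
        glVertex3f( 1.0f, 0.0f, 0.0f);
        glVertex3f( 0.0f, 1.0f, 0.0f);
        glEnd();
    }
};

int main(int argc, char** argv) {
    QApplication app(argc, argv);
    Viewer viewer;
    viewer.setWindowTitle("quick 3D hack");
    viewer.show();
    return app.exec();
}
```

The same widget can just as well be dropped into a bigger Qt UI instead of being the whole window.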
So I was contacted earlier by someone asking about the Head Pose Estimation work I put up a while back. And I remembered that I needed to go back to that work and fix some things, so it was a great opportunity.
I ended up making it a bit nicer, and it’s also a good chance for us to review some OpenCV-OpenGL interoperation. Things like getting a projection matrix in OpenCV and translating it to an OpenGL ModelView matrix are very handy.
Let’s get down to the code.
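As a teaser, here is a hedged sketch (not the exact code from the project, and the function name is mine) of the kind of conversion involved: take the rvec/tvec that cv::solvePnP gives you and pack them into a column-major OpenGL ModelView matrix, flipping the axes that differ between the two camera conventions.

```cpp
// Convert a cv::solvePnP pose (rvec/tvec, CV_64F as solvePnP returns by default)
// into a column-major 4x4 OpenGL ModelView matrix.
#include <opencv2/opencv.hpp>

void poseToGLModelView(const cv::Mat& rvec, const cv::Mat& tvec, double mv[16]) {
    cv::Mat R;
    cv::Rodrigues(rvec, R);                           // 3x3 rotation from the rotation vector

    // OpenCV's camera looks down +Z with +Y down; OpenGL looks down -Z with +Y up,
    // so flip the Y and Z rows before packing.
    cv::Mat flip = (cv::Mat_<double>(3, 3) << 1, 0, 0,  0, -1, 0,  0, 0, -1);
    cv::Mat Rgl = flip * R;
    cv::Mat tgl = flip * tvec;

    // OpenGL matrices are column-major: mv[col*4 + row].
    for (int r = 0; r < 3; ++r) {
        for (int c = 0; c < 3; ++c)
            mv[c * 4 + r] = Rgl.at<double>(r, c);
        mv[3 * 4 + r] = tgl.at<double>(r);
        mv[r * 4 + 3] = 0.0;
    }
    mv[15] = 1.0;
}
```

You would then load this onto the MODELVIEW stack with glLoadMatrixd() and render the head model.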
Hello
Sorry for the bombardment of posts, but I want to share some stuff I’ve been working on lately, so when I find time I just shoot the posts out.
So this time I’ll talk briefly about how to estimate a rigid transformation between two point clouds that are potentially also of different scale. You will end up with a rigid transformation (rotation and translation) plus a scale factor, so in fact it will be a Similarity Transformation. We will first find the right scale, and then find the right transformation, given that one exists (and in any case we will find the best transformation there is).
Hiya
Just catching up on some blogging, and I want to share a snippet of OpenCV code that checks whether your (badly) triangulated 3D points came out co-planar, which indicates a botched triangulation. It’s a very simplistic method, only a few lines, and it is also part of my Structure from Motion Toy Library project.
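The flavor of the check is something like the following sketch (not necessarily the post's exact criterion, and the threshold is arbitrary): run an SVD on the centered cloud and see whether the smallest singular value, i.e. the extent along the fitted plane's normal, is negligible compared to the largest.

```cpp
// Flag a triangulated cloud as (nearly) co-planar via the singular values
// of the centered point matrix.
#include <opencv2/opencv.hpp>
#include <vector>

bool looksCoplanar(const std::vector<cv::Point3d>& pts, double ratio = 0.02) {
    CV_Assert(pts.size() >= 4);

    // Compute the centroid, then stack the centered points as rows.
    cv::Point3d c(0, 0, 0);
    for (size_t i = 0; i < pts.size(); ++i) c += pts[i];
    c *= 1.0 / pts.size();

    cv::Mat P((int)pts.size(), 3, CV_64F);
    for (int i = 0; i < P.rows; ++i) {
        P.at<double>(i, 0) = pts[i].x - c.x;
        P.at<double>(i, 1) = pts[i].y - c.y;
        P.at<double>(i, 2) = pts[i].z - c.z;
    }

    // Singular values = extents of the cloud along its principal axes (descending).
    cv::Mat w, u, vt;
    cv::SVD::compute(P, w, u, vt);

    // Tiny extent along the plane normal means the cloud is essentially flat.
    return w.at<double>(2) < ratio * w.at<double>(0);
}
```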
Hi
I’ve been working feverishly to straighten up the Structure from Motion Toy Library and make it more robust. During my experiments with different methods I wanted to try a different way of decomposing the Essential matrix into a rotation R and a translation t, other than Hartley and Zisserman’s SVD-based approach. That’s when I came upon a 1990 paper by Berthold Horn that retraces the steps of Longuet-Higgins, who came up with the derivation of the Essential matrix. It gives a closed-form solution that works pretty well, and here it is implemented with the Eigen math library (a very good library to get to know).
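To give the gist, here is a hedged Eigen sketch of the closed-form idea, assuming the convention E = [b]x * R: the baseline comes from b*b^T = 1/2 * tr(E*E^T) * I - E*E^T, and the rotation from a cofactor identity. Keep in mind that b (and E itself) is only determined up to sign, so in practice you still test the candidate solutions with a cheirality check, which is left out here.

```cpp
// Closed-form decomposition of an essential matrix into baseline b and rotation R,
// assuming E = [b]x * R (sign/cheirality disambiguation omitted).
#include <Eigen/Dense>
#include <cmath>

void decomposeE(const Eigen::Matrix3d& E, Eigen::Vector3d& b, Eigen::Matrix3d& R) {
    // Baseline: b*b^T = 1/2 * tr(E*E^T) * I - E*E^T.
    Eigen::Matrix3d EEt = E * E.transpose();
    Eigen::Matrix3d bbt = 0.5 * EEt.trace() * Eigen::Matrix3d::Identity() - EEt;

    // Use the largest diagonal entry of b*b^T for numerical stability.
    Eigen::Vector3d diag = bbt.diagonal();
    Eigen::Index i;
    diag.maxCoeff(&i);
    b = bbt.row(i).transpose() / std::sqrt(bbt(i, i));   // up to sign

    // Cofactor matrix of E: each row is the cross product of the other two rows.
    Eigen::Matrix3d cofE;
    cofE.row(0) = E.row(1).cross(E.row(2));
    cofE.row(1) = E.row(2).cross(E.row(0));
    cofE.row(2) = E.row(0).cross(E.row(1));

    Eigen::Matrix3d B;                                   // skew-symmetric [b]x
    B <<     0, -b(2),  b(1),
          b(2),     0, -b(0),
         -b(1),  b(0),     0;

    // Rotation: (b.b) * R = Cof(E) - [b]x * E.
    R = (cofE - B * E) / b.squaredNorm();
}
```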
Hello
This time I’ll discuss a basic implementation of a Structure from Motion method, following the steps Hartley and Zisserman lay out in “The Bible”: “Multiple View Geometry”. I will show how simple it is to implement their linear method in OpenCV.
I treat this as a kind of tutorial, or a toy example, of how to perform Structure from Motion in OpenCV.
See related posts on using Qt instead of FLTK, triangulation and decomposing the essential matrix.
Update 2017: For a more in-depth tutorial see the new Mastering OpenCV book, chapter 3. Also see a recent post on upgrading to OpenCV3.
Let’s get down to business…
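To set expectations, here is a compressed sketch of the core linear steps (K and the matched points are assumed given, and this is not the toy example's exact code): fundamental matrix from the matches, essential matrix via the calibration, and one of the Hartley-Zisserman SVD candidates for the second camera matrix [R|t]. Enforcing E's singular values and picking the right candidate with a cheirality test are omitted here.

```cpp
// From matched features to one candidate camera matrix [R|t], assuming K is CV_64F.
#include <opencv2/opencv.hpp>
#include <vector>

cv::Matx34d cameraFromMatches(const std::vector<cv::Point2f>& pts1,
                              const std::vector<cv::Point2f>& pts2,
                              const cv::Mat& K) {
    cv::Mat F = cv::findFundamentalMat(pts1, pts2, cv::FM_RANSAC, 3.0, 0.99);
    cv::Mat E = K.t() * F * K;                        // essential matrix

    cv::SVD svd(E);
    cv::Matx33d W(0, -1, 0,  1, 0, 0,  0, 0, 1);      // Hartley-Zisserman's W matrix
    cv::Mat R = svd.u * cv::Mat(W) * svd.vt;          // one of four R/t combinations
    cv::Mat t = svd.u.col(2);                         // (R = U*W*Vt or U*Wt*Vt, t = ±u3)

    return cv::Matx34d(R.at<double>(0,0), R.at<double>(0,1), R.at<double>(0,2), t.at<double>(0),
                       R.at<double>(1,0), R.at<double>(1,1), R.at<double>(1,2), t.at<double>(1),
                       R.at<double>(2,0), R.at<double>(2,1), R.at<double>(2,2), t.at<double>(2));
}
```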
Hi
I sense that a lot of people are looking for a simple triangulation method with OpenCV, when they have two images and matching features.
While OpenCV contains the function cvTriangulatePoints in the triangulation.cpp file, it is not documented, and uses the arcane C API.
Luckily, Hartley and Zisserman describe a simple method for linear triangulation in their excellent book “Multiple View Geometry” (in many cases considered to be “The Bible” of 3D reconstruction). The method is actually discussed even earlier in Hartley’s article “Triangulation”.
I implemented it using the new OpenCV 2.3+ C++ API, which makes it super easy, and here it is before you.
Edit (4/25/2015): In a new post I am using OpenCV’s cv::triangulatePoints() function. The code is available online in a gist.
Edit (6/5/2014): See some of my more recent work on structure from motion in this post on SfM and that post on the recent Qt GUI and SfM library.
Update 2017: See the new Mastering OpenCV3 book with a deeper discussion, and a more recent post on the implications of using OpenCV3 for SfM.
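The heart of it is the linear (DLT) step, sketched below for the homogeneous flavor of the method (the post builds a closely related linear system, so treat this as the idea rather than the verbatim code): u and u1 are matched image points in normalized coordinates, written homogeneously as (x, y, 1), P and P1 are the 3x4 camera matrices, and the 3D point is the null vector of the stacked equations from x × (P X) = 0.

```cpp
// Homogeneous linear triangulation: stack two equations per view and take
// the null space of the 4x4 system with an SVD.
#include <opencv2/opencv.hpp>

cv::Mat_<double> triangulateDLT(const cv::Point3d& u,  const cv::Matx34d& P,
                                const cv::Point3d& u1, const cv::Matx34d& P1) {
    cv::Matx44d A(u.x*P(2,0)-P(0,0),   u.x*P(2,1)-P(0,1),   u.x*P(2,2)-P(0,2),   u.x*P(2,3)-P(0,3),
                  u.y*P(2,0)-P(1,0),   u.y*P(2,1)-P(1,1),   u.y*P(2,2)-P(1,2),   u.y*P(2,3)-P(1,3),
                  u1.x*P1(2,0)-P1(0,0), u1.x*P1(2,1)-P1(0,1), u1.x*P1(2,2)-P1(0,2), u1.x*P1(2,3)-P1(0,3),
                  u1.y*P1(2,0)-P1(1,0), u1.y*P1(2,1)-P1(1,1), u1.y*P1(2,2)-P1(1,2), u1.y*P1(2,3)-P1(1,3));

    cv::SVD svd(cv::Mat(A), cv::SVD::MODIFY_A | cv::SVD::FULL_UV);
    cv::Mat_<double> X = svd.vt.row(3).t();           // null vector = homogeneous 3D point
    X /= X(3);                                        // de-homogenize (last coordinate -> 1)
    return X;
}
```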
Hi!
I’ve been working on implementing a face image relighting algorithm using spherical harmonics, one of the most elegant methods I’ve seen lately.
I start by aligning a face model with OpenGL to automatically get the canonical face normals, which brushed up my knowledge of GLSL. Then I move on to estimating the “spharmonics” of real faces, and to relighting.
Let’s start!
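As a small taste, here is an assumed helper (not the post's code) that evaluates the first nine real spherical-harmonic basis functions at a unit normal, the basis typically used for Lambertian relighting, with the standard constants from Ramamoorthi and Hanrahan. With per-pixel normals from the aligned model, estimating the nine lighting coefficients becomes a linear least-squares fit over the face pixels.

```cpp
// First nine real SH basis functions evaluated at a unit normal (nx, ny, nz).
#include <array>

std::array<double, 9> shBasis(double nx, double ny, double nz) {
    std::array<double, 9> Y = {
        0.282095,                            // Y00  (constant term)
        0.488603 * ny,                       // Y1,-1
        0.488603 * nz,                       // Y1,0
        0.488603 * nx,                       // Y1,1
        1.092548 * nx * ny,                  // Y2,-2
        1.092548 * ny * nz,                  // Y2,-1
        0.315392 * (3.0 * nz * nz - 1.0),    // Y2,0
        1.092548 * nx * nz,                  // Y2,1
        0.546274 * (nx * nx - ny * ny)       // Y2,2
    };
    return Y;
}
```

The relit intensity at a pixel is then roughly the albedo times the dot product of these nine values with the nine lighting coefficients.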
I’ve seen some examples of people who build motion parallax capable screens using Kinect, but as usual – they don’t share the code. Too bad.
Well this is your chance to see how it’s done, and it’s fairly simple as well.
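The core trick is an off-axis (head-coupled) projection driven by the tracked head position. Here is a hedged, fixed-function OpenGL sketch of that part alone, with the screen's physical size and the head position measured in the same units and the screen plane at z = 0 (the parameter names are my own); the Kinect's only job is to supply (ex, ey, ez) every frame from its head tracking.

```cpp
// Head-coupled perspective: asymmetric frustum from the viewer's position
// relative to the physical screen (screen centered at the origin, z = 0, ez > 0).
#include <GL/gl.h>

void applyHeadCoupledProjection(double screenW, double screenH,
                                double ex, double ey, double ez,
                                double zNear, double zFar) {
    // Project the screen rectangle, as seen from the eye, onto the near plane.
    double s = zNear / ez;
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum((-screenW * 0.5 - ex) * s, ( screenW * 0.5 - ex) * s,
              (-screenH * 0.5 - ey) * s, ( screenH * 0.5 - ey) * s,
              zNear, zFar);

    // Move the world so the camera sits at the tracked head position.
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslated(-ex, -ey, -ez);
}
```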