Categories
Uncategorized

New Theme, New Toolbar, New Year

Hi dear blog readers
We are trying out a new theme for the blog. We wanted to move away from WordPress's default and bundled themes and into something a bit different.
You will also notice a toolbar at the bottom of the page. It should help you spread the MoreThanTechnical word by giving you more social networking tools!
So tell us what you think of it! Should we keep it, trash it, or maybe you have an idea of your own?
BTW, tomorrow is New Year's Eve by the Jewish calendar, and in Israel we are celebrating the new year (“Rosh Ha’shana”). So these changes to the blog could not have come at a better time – a time for new beginnings.
BTW 2 – I’m working on a post about next-generation image editing, so stay tuned.
Thanks!
Roy & Arnon

Categories
graphics, Mobile phones, video

iPhoneOS 3.1 will not allow marker-based AR

Hi
I had very high hopes for iPhoneOS 3.1 in the AR arena. With all the hype about it, I naturally assumed that with 3.1 developers would be able to bring marker-detection AR to the App Store – meaning, using legal, published APIs. But looking around 3.1’s APIs, I wasn’t able to find anything that allows this.
Not all AR is banned, though. AR apps like Layar will still be very much possible, as they rely on the compass & gyro to create the AR effect. These don’t require processing the live video feed from the camera, only overlaying data on top of it. This can be done easily with the new cameraOverlayView property of UIImagePickerController: all you need to do is create a transparent view with the required data, and it will be overlaid on the camera preview.
Sadly, to get marker-detection capabilities developers must still hack the system (camera callback rerouting) or use very slow methods (UIGetScreenImage). I can only hope Apple will see the potential of letting developers manipulate the live video feed.
Roy.

Categories
graphics, Mobile phones, programming, video, vision

Near realtime face detection on the iPhone w/ OpenCV port [w/code,video]

Hi
OpenCV is by far my favorite CV/image processing library. When I found an OpenCV port to the iPhone, and saw that someone had even tried to get it to do face detection, I just had to try it for myself.
In this post I’ll run through the steps I took to get OpenCV running on the iPhone, and then show how to get OpenCV’s face detection to play nicely with iPhoneOS’s image buffers and video feed (not yet OS 3.0!). Then I’ll talk a little about optimization.
Update: Apple officially supports camera video pixel buffers in iOS 4.x using AVFoundation; here’s sample code from the Apple developer site.
Update: I do not have the xcodeproj file for this project, so please don’t ask for it. Please see here for compiling OpenCV for the iPhone SDK 4.3.
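The iPhone-specific buffer plumbing is what the post is really about, but the OpenCV side of the detection is standard: load a Haar cascade and run cvHaarDetectObjects on every frame. Here is a minimal sketch using the OpenCV 1.x C API of that era (the cascade path is a placeholder; the real post wires this up to the phone's video feed):

    // Sketch of the core OpenCV 1.x face-detection call (not the iPhone glue).
    #include <opencv/cv.h>
    #include <opencv/highgui.h>

    void detect_faces(IplImage* frame)
    {
        // The frontal-face cascade ships with OpenCV; the path is a placeholder.
        static CvHaarClassifierCascade* cascade = (CvHaarClassifierCascade*)
            cvLoad("haarcascade_frontalface_default.xml", 0, 0, 0);
        static CvMemStorage* storage = cvCreateMemStorage(0);

        // The classifier works on a single-channel image.
        IplImage* gray = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);
        cvCvtColor(frame, gray, CV_BGR2GRAY);
        cvEqualizeHist(gray, gray);
        cvClearMemStorage(storage);

        // A 1.2 scale step and a 30x30 minimum face keep it fast enough for a phone.
        CvSeq* faces = cvHaarDetectObjects(gray, cascade, storage,
                                           1.2, 2, CV_HAAR_DO_CANNY_PRUNING,
                                           cvSize(30, 30));

        for (int i = 0; i < (faces ? faces->total : 0); i++) {
            CvRect* r = (CvRect*)cvGetSeqElem(faces, i);
            cvRectangle(frame, cvPoint(r->x, r->y),
                        cvPoint(r->x + r->width, r->y + r->height),
                        CV_RGB(255, 0, 0), 2, 8, 0);
        }
        cvReleaseImage(&gray);
    }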
Let’s begin

Categories
3d, graphics, gui, Java, opengl, programming, school, video

Advanced topics in 3D game building [w/ code, video]

Hi
The graphics course I took at TAU really expanded my knowledge of 3D rendering, and specifically of using OpenGL to do it. The final task of the course, aside from the exam, was to write a 3D game. We were given 3 choices for the type of game: Worms-like, Xonix-like and light-cycle-like. We chose to write our own version of Worms in 3D.
I’ll try to take you through some of the problems we encountered, the decisions we made, and show as much code as possible. I’m not, however, gonna take you through the simple (yet grueling) work of actually rendering meshes to the screen or moving them around; these subjects are covered extensively online.
The whole game is implemented in Java using JOGL and SWT for 3D rendering. The code is of course available entirely online.

Categories
3d, graphics, Mobile phones, opengl, programming, video

Augmented reality on the iPhone using NyARToolkit [w/ code]

Hi
I saw the stats for the blog a while ago and it seems that the augmented reality topic is hot! 400 clicks/day, that’s awesome!
So I wanted to share with you my latest development in this field: cross-compiling the AR app to the iPhone. It proved easier than I originally thought, although it took a while to get it working smoothly.
Basically all I did was take NyARToolkit, compile it for the armv6 arch, combine it with Norio Namura’s iPhone camera video feed code, slap on some simple OpenGL ES rendering, and bam – Augmented Reality on the iPhone.
Update: Apple officially supports camera video pixel buffers in iOS 4.x using AVFoundation; here’s sample code from the Apple developer site.
This is how I did it…

Categories
3d, graphics, opengl, programming, video

Augmented Reality with NyARToolkit, OpenCV & OpenGL

Hi
I have been playing around with NyARToolkit’s CPP implementation in the last week, and I got some nice results. I tried to keep it as “casual” as I could and not get into the crevices of every library; instead, I wanted to get results, and fast.
First, NyARToolkit is a derivative of the wonderful ARToolkit by the talented people @ HIT Lab NZ & HIT Lab at the University of Washington. NyARToolkit, however, has been ported to many other platforms, like Java, C# and even Flash (Papervision3D?), and in the process was made object oriented, in contrast to ARToolkit’s procedural approach. The NyARToolkit folks have done a great job, so I decided to build from there.
NyART doesn’t provide any video capturing or 3D rendering in its CPP implementation (the other ports do), so I set out to build them on my own. OpenCV is like a second language to me, so I decided to use its video grabbing wrapper for Win32. For 3D rendering I used the straightforward GLUT library, which does an excellent job of ridding the programmer of all the Win#@$#@ API mumbo-jumbo-CreateWindowEx crap.
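As a rough idea of that scaffolding (a minimal sketch, not the actual code from the post), an OpenCV-grabs / GLUT-draws loop looks something like this; the marker detection and 3D overlay would slot into the display callback:

    // Minimal OpenCV capture + GLUT display skeleton.
    #include <opencv/cv.h>
    #include <opencv/highgui.h>
    #include <GL/glut.h>

    static CvCapture* capture = 0;

    static void display()
    {
        IplImage* frame = cvQueryFrame(capture);   // grab the next camera frame
        if (!frame) return;

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        // OpenCV frames are stored top-down while glDrawPixels draws bottom-up,
        // so flip vertically while drawing the BGR frame as the background.
        glRasterPos2f(-1.0f, 1.0f);
        glPixelZoom(1.0f, -1.0f);
        glDrawPixels(frame->width, frame->height, GL_BGR_EXT, GL_UNSIGNED_BYTE,
                     frame->imageData);

        // ... marker detection + 3D overlay rendering would go here ...

        glutSwapBuffers();
    }

    static void idle() { glutPostRedisplay(); }     // keep pulling frames

    int main(int argc, char** argv)
    {
        capture = cvCaptureFromCAM(0);              // OpenCV's capture wrapper

        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
        glutInitWindowSize(640, 480);
        glutCreateWindow("NyARToolkit + OpenCV + GLUT");
        glutDisplayFunc(display);
        glutIdleFunc(idle);
        glutMainLoop();
        return 0;
    }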
So let’s dive in….

Categories
graphics, programming, vision

Porting Rob Hess's SIFT impl. to Java

This is a Java port of Rob Hess’ implementation of SIFT that I did for a project @ work.
However, I couldn’t port the actual extraction of SIFT descriptors from images, as it relies very heavily on OpenCV. So all I actually ported to native Java is the KD-tree feature matching part; the rest is done through JNI calls to Rob’s code.
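For orientation, the matching flow in Rob's C code that the Java side mirrors goes roughly like this (a sketch from memory of his library; names such as sift_features, kdtree_build, kdtree_bbf_knn and descr_dist_sq may differ slightly between versions):

    /* Rough sketch of the C matching flow the Java port mirrors. */
    #include <stdlib.h>
    #include <opencv/cv.h>
    #include "sift.h"
    #include "imgfeatures.h"
    #include "kdtree.h"

    void match_images(IplImage* img1, IplImage* img2)
    {
        struct feature *feat1, *feat2, **nbrs;
        int n1 = sift_features(img1, &feat1);   /* descriptor extraction (needs OpenCV) */
        int n2 = sift_features(img2, &feat2);

        /* Build a KD-tree over the second image's features -- the part ported to Java. */
        struct kd_node* kd_root = kdtree_build(feat2, n2);

        for (int i = 0; i < n1; i++) {
            /* Best-bin-first search for the 2 nearest neighbours (200 checks max). */
            int k = kdtree_bbf_knn(kd_root, feat1 + i, 2, &nbrs, 200);
            if (k == 2) {
                double d0 = descr_dist_sq(feat1 + i, nbrs[0]);
                double d1 = descr_dist_sq(feat1 + i, nbrs[1]);
                if (d0 < 0.49 * d1)             /* Lowe's ratio test */
                    feat1[i].fwd_match = nbrs[0];
            }
            free(nbrs);
        }
        kdtree_release(kd_root);
    }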
I wrote this more as a tutorial on Rob’s work, with an easy JNI interface to Java.
You can find the sources here: https://www.morethantechnical.com/extupload/code/JavaSIFT.zip
Here’s how to use it:

Categories
graphics, gui, programming, vision, work

Combining Java's BufferedImage and OpenCV's IplImage

Hi
I recently did a small project combining a Java web service with OpenCV processing. I tried to transfer the picture from the Java environment (as a BufferedImage) to OpenCV (as an IplImage) as seamlessly as possible. This proved a bit tricky, especially the Java part, where you need to create your own buffer for the image, but it worked out nicely.
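To give a taste of the native half (a sketch only, assuming the Java side hands the pixels over as a tightly packed BGR byte array; the JNI class and method names here are made up), wrapping those bytes in an IplImage header without copying looks like this:

    // Hypothetical JNI entry point: wraps a BGR byte array coming from a Java
    // BufferedImage in an IplImage header without copying the pixels.
    #include <jni.h>
    #include <opencv/cv.h>

    extern "C"
    JNIEXPORT void JNICALL Java_ImageBridge_process(JNIEnv* env, jobject,
                                                    jbyteArray pixels,
                                                    jint width, jint height)
    {
        jbyte* data = env->GetByteArrayElements(pixels, 0);

        // 3-channel, 8-bit header pointing straight at the Java bytes;
        // assumes rows are tightly packed (width * 3 bytes per row).
        IplImage* img = cvCreateImageHeader(cvSize(width, height), IPL_DEPTH_8U, 3);
        cvSetData(img, data, width * 3);

        // ... run whatever OpenCV processing is needed on img ...

        cvReleaseImageHeader(&img);
        // Mode 0 copies any changes back into the Java array; JNI_ABORT would discard them.
        env->ReleaseByteArrayElements(pixels, data, 0);
    }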
Let me show you how I did it

Categories
graphics, programming, video, work

iPhone camera frame grabbing and a real-time MeanShift tracker

Hi
Just wanted to report on a breakthrough in my iPhone-CV digging. I found a true real-time frame grabber for the iPhone preview frames (15fps of ~400×300 video) and successfully integrated this video feed with a pure C++ implementation of the MeanShift tracking algorithm. The whole setup runs in real time, under a few constraints of course, and gives nice results.
Update: Apple officially supports camera video pixel buffers in iOS 4.x using AVFoundation; here’s sample code from the Apple developer site.
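For a feel of the tracking half, here is the same idea expressed with OpenCV's C API on the desktop (a sketch only; the post uses a pure C++ MeanShift implementation on the phone, but the algorithm is the same): back-project the target's hue histogram into each frame and let the mean-shift window climb toward the densest region.

    // One tracking step: back-project the target's hue histogram and run MeanShift.
    #include <opencv/cv.h>

    CvRect track_one_frame(IplImage* frameBGR, CvHistogram* hueHist, CvRect window)
    {
        IplImage* hsv      = cvCreateImage(cvGetSize(frameBGR), IPL_DEPTH_8U, 3);
        IplImage* hue      = cvCreateImage(cvGetSize(frameBGR), IPL_DEPTH_8U, 1);
        IplImage* backproj = cvCreateImage(cvGetSize(frameBGR), IPL_DEPTH_8U, 1);

        cvCvtColor(frameBGR, hsv, CV_BGR2HSV);
        cvSplit(hsv, hue, 0, 0, 0);                   // keep only the hue plane

        // Probability map of "how much does each pixel look like the target".
        cvCalcBackProject(&hue, backproj, hueHist);

        // Shift the window until it stops moving (or 10 iterations).
        CvConnectedComp comp;
        cvMeanShift(backproj, window,
                    cvTermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 10, 1), &comp);

        cvReleaseImage(&hsv);
        cvReleaseImage(&hue);
        cvReleaseImage(&backproj);
        return comp.rect;                             // the new object location
    }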
So let’s dig in…

Categories
3d, graphics, programming, school

Tracing wild rays

Hi
I haven’t published in a while; I’ve been swamped with a project for uni, work and my writing…
But the good thing about keeping busy is that after a while you have something to show for it! So here’s what I’ve been working on for the Comp. Graphics course – a Ray Tracer.