Dieter Vandoren

What?! No posts since June 2010? Yep, time flies (like a jet fighter) when you're having fun being flooded with work…

Anyway, I’ve been messing around with the Kinect as motion tracking input for my upcoming Integration.03 project (see int.02 and int.01). My development platform is Max, at least for now. The jit.freenect.grab external enables the Kinect in Jitter.

I immediately hit an obstacle: the Kinect's depth data, when rendered directly as a point cloud or mesh in Jitter, is seriously distorted because of the depth sensing method, lens distortion and the camera perspective. For the depth data to be useful to me I have to undistort it so that it renders in a perspective-correct way (e.g. floor and ceiling parallel, walls perpendicular to the floor, etc.).

I found the math to do this at the OpenKinect wiki (thanks Kyle and the OFx folks!) and developed 2 methods for getting it done in Jitter: 1) the jit.expr way and 2) the GLSL shader + jit.gl.slab way. The latter is the faster one, as the calculation is done on the graphics card's GPU, yet not as fast as I had hoped for. The bottleneck is probably writing and reading the 640×480 matrix to/from GPU memory. I'd love to hear it if people have ideas for speeding it up!
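For anyone not working in Max, here's a rough sketch of the same reprojection math in Python/NumPy (this is not the patch itself, just the depth-to-world conversion as documented on the OpenKinect wiki; the intrinsics below are the wiki's approximate calibration values, so calibrating your own unit will give better results):

```python
import numpy as np

# Approximate Kinect depth-camera intrinsics from the OpenKinect wiki.
# These vary per unit -- calibrate your own Kinect for best results.
FX, FY = 594.21, 591.04   # focal lengths, in pixels
CX, CY = 339.5, 242.7     # principal point, in pixels

def raw_depth_to_meters(raw):
    """Convert the Kinect's 11-bit raw depth values to meters
    (the inverse-linear fit published on the OpenKinect wiki)."""
    return 1.0 / (raw * -0.0030711016 + 3.3309495161)

def depth_to_point_cloud(raw_depth):
    """Reproject a 640x480 raw depth frame into metric XYZ space,
    undoing the camera perspective so walls and floors come out straight."""
    h, w = raw_depth.shape
    # Pixel coordinate grids: u runs along width, v along height.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = raw_depth_to_meters(raw_depth.astype(np.float64))
    # Scale x and y by depth: the farther a pixel is, the more
    # lateral distance one pixel of image space covers.
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.dstack((x, y, z))
```

This is essentially what the jit.expr and GLSL versions compute per pixel; the scaling of x and y by z is exactly the step William asks about in the comments below.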

I thought I'd share the patch, since getting it to work was an instructive process for me and perhaps others can learn from it too. Here's the patch (with the 2 methods) and the GLSL shader: freenect-undistortion.zip.

7 Responses

  1. daanbr

    thanks for sharing!

  2. daanbr

    In my opinion, if it weren’t for those nasty shadows and such, it’s almost the perfect sensor. Are you thinking about using multiple kinects yet? 

  3. dtr

    absolutely! i’ve been doing some tests but the jit.freenect.grab external seems buggy in that respect. i can’t get 2 kinects working simultaneously. others have had more luck though. see: http://cycling74.com/forums/topic.php?id=31621  

  4. [...] the Kinect depth map in MaxJitter by mp on Apr 13, 2012 • 1:53 pm No Comments 2 methods for undistorting the Kinect depth map in MaxJitter by Dieter Vandoren. I immediately found an obstruction in that the Kinect’s depth [...]

  5. William Turkel

    Hi Dieter, Thanks for sharing your work with the Kinect!  It has really helped me get started with processing point clouds.  I have a question about the jit.expr that you use, which I posted in the Cycling 74 forum http://cycling74.com/forums/topic.php?id=29469&page=2  It seems like you are scaling the x and y axes by the z (depth) information … I got much less distorted looking results by changing the jit.expr, but I am probably missing something.

  6. marlus

hi, thanks for the article. do you have any patch or experiment using jit.openni?


Leave a Reply