What?! No posts since June 2010? Yep, time flies (like a jet fighter) when you're having fun being flooded with work…
Anyway, I’ve been messing around with the Kinect as a motion-tracking input for my upcoming Integration.03 project (see int.02 and int.01). My development platform is Max, at least for now. The jit.freenect.grab external makes the Kinect available in Jitter.
I immediately hit an obstacle: the Kinect’s depth data, when rendered directly as a point cloud or mesh in Jitter, is seriously distorted due to the depth-sensing method, lens distortion and the camera perspective. For the depth data to be useful to me, I have to undistort it so that it renders in a perspective-correct way (e.g. floor and ceiling parallel, walls perpendicular to the floor, etc.).
I found the math to do this on the OpenKinect wiki (thanks Kyle and the OFx folks!) and developed two methods for getting it done in Jitter: 1) the jit.expr way and 2) the GLSL shader + jit.gl.slab way. The latter is the faster of the two, since the calculation runs on the graphics card’s GPU, though it’s still not as fast as I had hoped. The bottleneck is probably writing and reading the 640×480 matrix to/from GPU memory. I’d love to hear from people who have ideas for speeding it up!
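To give an idea of what both methods compute per pixel, here’s the undistortion math written out in plain Python. This is only an illustrative sketch of the formulas published on the OpenKinect wiki, not a transcript of my jit.expr expression or GLSL shader; the intrinsic constants are the approximate values from the wiki, not calibrated for any particular Kinect.

```python
# Approximate Kinect camera intrinsics from the OpenKinect wiki
# (not measured from my own device).
FX, FY = 594.21, 591.04   # focal lengths, in pixels
CX, CY = 339.5, 242.7     # principal point (optical center), in pixels

def raw_to_meters(raw_depth):
    """Convert an 11-bit raw Kinect depth reading to meters
    (formula from the OpenKinect wiki)."""
    return 1.0 / (raw_depth * -0.0030711016 + 3.3309495161)

def depth_to_world(i, j, raw_depth):
    """Reproject pixel (i, j) with the given raw depth into
    metric (x, y, z) space, undoing the camera perspective."""
    z = raw_to_meters(raw_depth)
    x = (i - CX) * z / FX
    y = (j - CY) * z / FY
    return (x, y, z)
```

Applying `depth_to_world` to every cell of the 640×480 depth matrix yields a point cloud in which flat surfaces stay flat, which is exactly what jit.expr and the slab shader do in parallel for the whole matrix.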
I thought I’d share the patch, since getting it to work was an instructive process for me and perhaps others can learn from it too. Here’s the patch (with both methods) and the GLSL shader: freenect-undistortion.zip.