I wanted to open up a discussion on this technique. What's involved:
1. Stack the projector and Kinect in physical space and align their lenses as closely as possible.
2. Map depth values from the Kinect to actual scene depth and render in world space (see the sketch after this list).
3. [Transform / distort mesh?]
4. Create a camera in the scene using the projector's parameters (FOV/lens angle & lens shift; may require an asymmetric frustum). This is necessary as it's unlikely the lens angle of the Kinect (~57 degrees for the IR camera) will match the lens angle of your projector.
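Here's a minimal sketch of step 2, assuming the approximate depth-camera intrinsics and raw-depth-to-meters fit published by Nicolas Burrus (a proper calibration of your own unit would be more accurate, so treat the constants as placeholders). It converts one raw 11-bit depth sample at depth-image pixel (u, v) into a point in the Kinect's camera space:

    struct Point3 { float x, y, z; };

    // Approximate Kinect depth-camera intrinsics (Burrus' published values)
    const float fx = 594.21f, fy = 591.04f;   // focal lengths in pixels
    const float cx = 339.31f, cy = 242.74f;   // principal point in pixels

    Point3 depthPixelToCameraSpace( int u, int v, int rawDepth11bit )
    {
        // linear fit of 1/z against the raw 11-bit depth value -> meters
        float z = 1.0f / ( rawDepth11bit * -0.0030711016f + 3.3309495161f );

        Point3 p;
        p.x = ( u - cx ) * z / fx;
        p.y = ( v - cy ) * z / fy;
        p.z = z;
        return p;
    }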
If anyone can recommend built-in Cinder functions / external libraries that might be useful, or offer any perspective on the problem as a whole, I'd really appreciate it.
It doesn't seem like step 3 ought to be necessary, but Kimchi and Chips mention a Padé transformation (they may just be referring to the world->projector transform, though, which I think is handled in step 4). I'm currently stuck on step 4, trying to get a camera in Cinder working with an asymmetric frustum.
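For what it's worth, this is the rough off-axis frustum setup I've been trying, written against plain fixed-function OpenGL so the math is visible. The lens-shift convention is my own (a shift of +1 slides the frustum center by half its width/height), so adjust it to however your projector specifies shift:

    #include <cmath>
    // assumes an OpenGL header is already included (e.g. cinder/gl/gl.h)

    void setProjectorFrustum( float vFovDegrees, float aspect,
                              float shiftX, float shiftY,
                              float nearZ, float farZ )
    {
        // symmetric frustum window at the near plane
        float top    = nearZ * tanf( vFovDegrees * 0.5f * 3.1415926535f / 180.0f );
        float bottom = -top;
        float right  = top * aspect;
        float left   = -right;

        // slide the near-plane window to get the asymmetric (lens-shifted) frustum
        float dx = shiftX * ( right - left ) * 0.5f;
        float dy = shiftY * ( top - bottom ) * 0.5f;
        left += dx;  right += dx;
        bottom += dy;  top += dy;

        glMatrixMode( GL_PROJECTION );
        glLoadIdentity();
        glFrustum( left, right, bottom, top, nearZ, farZ );
        glMatrixMode( GL_MODELVIEW );
    }

If I remember right, newer versions of Cinder also expose something like CameraPersp::setLensShift(), which should amount to the same thing, but I haven't verified which release it landed in.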
The above ignores lens distortion completely :)
Anyways, I think this is a really promising use of Kinect + Cinder and wanted to see if anyone else is thinking along these lines.
I am trying to do the same kind of thing using the same kind of method as "augmented engineering" or "byo3D" (the original project behind augmented engineering, I guess). This means calibrating the projector with the Kinect RGB camera. Then you can do everything you suggested more accurately: just render the point cloud or mesh from the Kinect from the viewpoint of the projector, using the projection matrix you calibrated... I didn't fully understand the Padé transformation, but I'll have a look; the OpenCV calibration seems to me like a safe and easy way to do it. Good luck and keep me updated on your progress.
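In case it helps, this is roughly how I turn calibrated intrinsics (fx, fy, cx, cy and the image size w x h, as returned by OpenCV) into an off-axis OpenGL frustum for that render-from-the-projector step. It assumes OpenCV's convention of the image origin at top-left with y pointing down, and an OpenGL camera looking down -Z with y up; the calibrated extrinsics would then go into the modelview matrix with the same y/z flip applied:

    // assumes an OpenGL header is already included (e.g. cinder/gl/gl.h)
    void setProjectionFromIntrinsics( float fx, float fy, float cx, float cy,
                                      float w, float h, float nearZ, float farZ )
    {
        // project the image borders onto the near plane to get the frustum window
        float left   = -cx * nearZ / fx;
        float right  = ( w - cx ) * nearZ / fx;
        float top    =  cy * nearZ / fy;          // image y is flipped relative to GL
        float bottom = -( h - cy ) * nearZ / fy;

        glMatrixMode( GL_PROJECTION );
        glLoadIdentity();
        glFrustum( left, right, bottom, top, nearZ, farZ );
        glMatrixMode( GL_MODELVIEW );
    }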
Cool, let me know how you do with the OpenCV calibration. I'm workshopping the treatment this weekend so I decided to do a rough calibration — lens angle & lens shift only, no barrel distortion. It's not perfect but it's doable. I'd love to incorporate a full calibration though.
Right now the other challenge is that because of how Cinder-Kinect maps the depth values to greyscale, there's no way to reconstruct the depth params with high accuracy (see http://forum.libcinder.org/#topic/23286000000453029 ). That will probably be a necessary component of this type of work.
Calibration with OpenCV is not really difficult. You have to calibrate the Kinect's intrinsic parameters first (the RGB camera) with a basic chessboard. Then you stick the chessboard on a white plane onto which you project another chessboard with your projector. Using the printed chessboard and the calibrated Kinect you can compute the plane equation and therefore the projected chessboard's 3D coordinates... which leads to an intrinsic calibration of the projector and the relative transformation between projector and camera.
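A minimal sketch of that first stage (intrinsic calibration of the Kinect RGB camera) using the standard OpenCV calls; the board dimensions, square size, frame count, and file names are placeholders for whatever you actually capture:

    #include <opencv2/opencv.hpp>
    #include <iostream>
    #include <string>
    #include <vector>

    int main()
    {
        const cv::Size boardSize( 9, 6 );   // inner-corner count of the printed chessboard (placeholder)
        const float squareSize = 0.025f;    // square edge length in meters (placeholder)

        // one known board layout in board coordinates, reused for every view
        std::vector<cv::Point3f> board;
        for( int y = 0; y < boardSize.height; ++y )
            for( int x = 0; x < boardSize.width; ++x )
                board.push_back( cv::Point3f( x * squareSize, y * squareSize, 0.0f ) );

        std::vector<std::vector<cv::Point3f> > objectPoints;
        std::vector<std::vector<cv::Point2f> > imagePoints;
        cv::Size imageSize;

        for( int i = 0; i < 15; ++i ) {     // ~15 captured RGB frames, placeholder file names
            cv::Mat img = cv::imread( "rgb_" + std::to_string( i ) + ".png", cv::IMREAD_GRAYSCALE );
            if( img.empty() )
                continue;
            imageSize = img.size();

            std::vector<cv::Point2f> corners;
            if( cv::findChessboardCorners( img, boardSize, corners ) ) {
                cv::cornerSubPix( img, corners, cv::Size( 11, 11 ), cv::Size( -1, -1 ),
                    cv::TermCriteria( cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01 ) );
                imagePoints.push_back( corners );
                objectPoints.push_back( board );
            }
        }

        // recovers the camera matrix (fx, fy, cx, cy) and lens distortion coefficients
        cv::Mat cameraMatrix, distCoeffs;
        std::vector<cv::Mat> rvecs, tvecs;
        double rms = cv::calibrateCamera( objectPoints, imagePoints, imageSize,
                                          cameraMatrix, distCoeffs, rvecs, tvecs );
        std::cout << "RMS reprojection error: " << rms << "\n" << cameraMatrix << std::endl;
        return 0;
    }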
Regarding the depth problem with Cinder-Kinect, you might want to consider using OpenNI, as it uses firmware information to translate the depth map into real-world coordinates and doesn't depend on a hardcoded calibration done on another Kinect... It also has a handy capability for distorting the depth map to fit your RGB image directly, so you don't need to calibrate the transformation between those two yourself.
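If it's useful, here is a rough sketch of those two OpenNI features using the 1.x C++ wrapper (error checking stripped, and details may differ between releases, so take it as a starting point rather than working code):

    #include <XnCppWrapper.h>

    int main()
    {
        xn::Context context;
        context.Init();

        xn::DepthGenerator depth;
        xn::ImageGenerator image;
        depth.Create( context );
        image.Create( context );

        // register the depth map onto the RGB image, so you don't have to
        // calibrate the IR<->RGB transform yourself
        if( depth.IsCapabilitySupported( XN_CAPABILITY_ALTERNATIVE_VIEW_POINT ) )
            depth.GetAlternativeViewPointCap().SetViewPoint( image );

        context.StartGeneratingAll();
        context.WaitAndUpdateAll();

        xn::DepthMetaData depthMD;
        depth.GetMetaData( depthMD );

        // convert the center pixel from projective (u, v, depth in mm) to
        // real-world millimeters using the device's own calibration
        XnPoint3D proj, real;
        proj.X = (XnFloat)( depthMD.XRes() / 2 );
        proj.Y = (XnFloat)( depthMD.YRes() / 2 );
        proj.Z = (XnFloat)depthMD( depthMD.XRes() / 2, depthMD.YRes() / 2 );
        depth.ConvertProjectiveToRealWorld( 1, &proj, &real );

        context.Release();
        return 0;
    }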