I have managed to code a node which outputs XYZ values for the Kinect’s depth data (Microsoft drivers). With this, I managed to create a point cloud using quads and a camera to view them in space. (see attached picture)
I now need a way to move the camera in an orbit-like path (circle/ellipse) around my point cloud, with the cloud as the center the camera is looking at. The idea is to let the viewer get a better grasp of the actual 3D volume being updated in real time.
I have tried various combinations of translate and yaw/pan settings on the camera, but have not gotten any decent results yet.
Could anyone be kind enough to show me how to achieve this?
Thanks.
P.S. I intend to upload my node to github for other users to try it if anyone is interested.
In the end I was able to calibrate the camera rotation around any desired point using an axis and some common sense.
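For anyone trying the same thing, the underlying math is just placing the camera on a circle (or ellipse) around the target point and always looking back at it. Here is a minimal sketch in plain Python (the function name and parameters are my own, not vvvv nodes), assuming a Y-up coordinate system:

```python
import math

def orbit_camera(center, radius, angle, height=0.0):
    """Return (camera_position, look_at_target) for an orbit around `center`.

    center: (x, y, z) point the camera circles around and looks at
    radius: orbit radius in world units
    angle:  orbit angle in radians (animate this over time)
    height: vertical offset of the camera above the center
    """
    cx, cy, cz = center
    # Position on a circle in the XZ plane around the center point.
    x = cx + radius * math.cos(angle)
    z = cz + radius * math.sin(angle)
    position = (x, cy + height, z)
    # The look-at target is simply the orbit center itself.
    return position, center
```

Feeding the angle from a running time value (e.g. an LFO in vvvv terms) and using the returned position with a look-at transform gives the orbit effect; using different radii for the cos and sin terms turns the circle into an ellipse.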
I noticed that for some reason, vvvv has a hardcoded resolution of 320x240 on the Kinect node for the depth camera.
I tweaked the code to change this to 640x480 and got a lot more data out of the Kinect.
However, my laptop does not even have dedicated video memory, and my patch is not using the GPU to handle the XYZ coordinates; it relies strictly on the CPU, so once I pass the 32k quad count the framerate gets very choppy.
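For context, 640x480 is 307,200 depth samples, roughly ten times a 32k quad budget, so on the CPU some downsampling of the depth grid is unavoidable. A quick sketch (plain Python, hypothetical helper name) for picking a sampling stride that keeps the quad count under a budget:

```python
def stride_for_budget(width, height, max_quads):
    """Smallest integer stride s so (width // s) * (height // s) <= max_quads.

    Sampling every s-th pixel in both axes keeps the quad count
    under the budget while covering the whole depth image.
    """
    s = 1
    while (width // s) * (height // s) > max_quads:
        s += 1
    return s
```

With the full 640x480 depth image and a 32k budget this suggests taking every 4th pixel in each direction, which still gives a much denser cloud than the original 320x240 stream at full resolution.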
Lots of improvements to implement.
In any case, is anyone interested in this node exposing XYZ coords based on the depth image stream? Not sure if it will be of any use to anyone.