Hi
I am trying to retrieve the Kinect's point cloud values. It somehow works if I use the Depth texture and Pipet to get the z value for each pixel, but isn't that a double iteration for nothing? How can I extract the values of each point directly?
I also tried kinect-point-cloud-shader, but it doesn't seem to work with the drivers I currently have installed.
Yes, that is my point too: I can do that with Pipet, no problem, but it seems to me like double work == bottleneck.
The Kinect uses the depth values to draw the depth texture, and with Pipet we turn the texture back into values by iterating over each pixel, doh!
So, is it possible to get those values directly?
tx
Simone
If you need to use all of the values (i.e. a depth value for each pixel of the texture at once), the only fast way to use that information is in a shader.
The data returned in the depth map from the Kinect is NOT a “point cloud”. For each pixel it is the distance from the Kinect to the point that projects onto that pixel of a plane representing the Kinect's view.
In other words, imagine a piece of paper being held in front of the Kinect that corresponds to its view. Divide that paper up into a 640x480 grid, where each element of that grid is a shade of grey that corresponds to the distance of whatever the Kinect sees through that grid element.
So you can see that the X/Y of that grid is NOT the real X/Y position of what the Kinect is seeing at that element/angle; it's just an index into that grid.
That grid can be converted into real-world XYZ “point cloud” data, but that has to be done externally after you get the depth grid data. That’s what I did in my patch in contributions: openniskeleton-plugin-with-depth-output
Doing this inside a shader is non-trivial, at least to me with my limited knowledge of shaders. The conversion requires returning three floating point numbers (XYZ), not just a plane of data, as once the conversion is done there can be multiple points at the same X and/or Y and/or Z (pick two).
Until then, I've worked up another mod to the plugin that does the point-cloud computation inside it, filtered by a desired bounding box; it returns a spread of 3D vectors and works a lot faster than Pipet. I'll post that as soon as I get it tweaked a bit more.
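For what it's worth, the per-element math of that conversion is just a pinhole-camera back-projection. Here is a minimal HLSL-style sketch of the idea (not the plugin's actual code); the focal lengths and principal point are placeholder assumptions for a 640x480 depth camera, so substitute your own calibration values:

```hlsl
// Minimal sketch: back-project one depth sample to camera-space XYZ.
// fx/fy/cx/cy are placeholder intrinsics (assumptions) -- use your own calibration.
static const float fx = 594.2;   // focal length in pixels, x
static const float fy = 591.0;   // focal length in pixels, y
static const float cx = 320.0;   // principal point, x
static const float cy = 240.0;   // principal point, y

// uv: texture coordinate in [0..1], depthMM: depth sample in millimetres
float3 DepthToCameraSpace(float2 uv, float depthMM)
{
    float z  = depthMM * 0.001;        // millimetres -> metres
    float px = uv.x * 640.0;           // pixel column in the depth grid
    float py = uv.y * 480.0;           // pixel row in the depth grid
    float x  = (px - cx) * z / fx;     // back-project along the ray
    float y  = (cy - py) * z / fy;     // flip y so +y points up
    return float3(x, y, z);
}
```

The math per sample is the same whether it runs on the CPU in a plugin or on the GPU in a shader; as noted above, the shader difficulty is mostly in getting three values per pixel back out.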
Yes sir! I've found that out myself lately, because you can see how the x,y data always goes from one side of the texture to the other, no matter the distance from the camera.
I see you need to drastically scale down the number of points.
So, is a shader the only way to go to get the original full-resolution points?
Thanks for that; with so many Kinect-related patches it can get a bit confusing…
S.
Take a look at RGBDemo for the Kinect - it does real-time point cloud generation, even for two Kinects at once, and lets you merge them into one! I haven't looked at the code yet to see how hard it would be to integrate, but it is based on OpenNI, so at least some of the work has been done.
In the meantime, though, I have the new dynamic plugin working properly, and it is MUCH faster than Pipet. I also added a transform input so you can specify your Kinect camera position and get the data relative to an arbitrary surface, and you can specify the bounding box in those coordinates as well, so you only deal with the points you're interested in, which can be a huge performance improvement. As soon as I get the help file done I'll upload it.
I have attached a quick shader-based point-cloud-from-depth-map thing. It does something similar to all the current particle engines: generate a big mesh of quads and use the depth map to offset individual parts of the mesh.
Oh, and do the undistortion based on the depth camera projection matrix (you get this with the specs for your cam and the projector node).
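For anyone who wants to roll their own version of that idea, the core is a vertex shader that samples the depth texture per vertex of the big quad mesh and pushes the vertex out along its camera ray. A rough sketch follows; the sampler setup, intrinsic values and transform names are assumptions for illustration, not the attached shader's actual code:

```hlsl
// Rough sketch: displace a 640x480 grid mesh by a Kinect depth map (vs_3_0).
// DepthTex, the intrinsics and tWVP are placeholders (assumptions).
float4x4 tWVP : WORLDVIEWPROJECTION;

texture DepthTex;
sampler DepthSamp = sampler_state
{
    Texture   = (DepthTex);
    MipFilter = NONE;
    MinFilter = POINT;
    MagFilter = POINT;
};

// placeholder intrinsics for the undistortion (use your depth camera's values)
static const float fx = 594.2, fy = 591.0, cx = 320.0, cy = 240.0;

struct VsIn  { float4 Pos : POSITION; float2 UV : TEXCOORD0; };
struct VsOut { float4 Pos : POSITION; float2 UV : TEXCOORD0; };

VsOut VS(VsIn In)
{
    VsOut Out;

    // sample the depth map in the vertex shader (requires vs_3_0 / tex2Dlod);
    // assumed here: depth in metres stored in the red channel
    float depth = tex2Dlod(DepthSamp, float4(In.UV, 0, 0)).r;

    // back-project the grid vertex along its ray, i.e. the undistortion step
    float px = In.UV.x * 640.0;
    float py = In.UV.y * 480.0;
    float3 camPos = float3((px - cx) * depth / fx,
                           (cy - py) * depth / fy,
                           depth);

    Out.Pos = mul(float4(camPos, 1), tWVP);
    Out.UV  = In.UV;
    return Out;
}
```

Every quad of the mesh carries its own texture coordinate, so each one ends up at the reconstructed 3D position of its depth sample, which is the "offset individual parts of the mesh" idea described above.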