Unfortunately I don’t have a Kinect right now so I can’t recreate your setup. But I feel like your Kinect image looks pretty dark…
So I tested your patch with a Kinect depth image I found online and everything seems to work =)
i’ve made myself a pointcloud setup using 2 user contributions, one of which i can’t find anymore, unfortunately.
here’s the setup. neatly arranged and commented… but not properly tested yet, since i don’t have my kinect at home right now. it should work though, it always did for me.
oh, if you want to try to fix your setup, you might want to use a ChangeFormat node on the Kinect DepthImage. it’s usually L16 grayscale, and i’ve had that be problematic once.
(i think it shouldn’t in your case… but doesn’t hurt to try.)
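a rough illustration (plain Python, not a vvvv patch) of why an L16 depth texture tends to look so dark: the raw depth is millimetres stored in 16 bits, so even a few metres is only a small fraction of full white. the 4000 mm range below is an assumed typical value, not anything read from your patch.

```python
# Sketch: why an L16 depth texture looks "dark". Raw Kinect depth is
# millimetres in 16 bits, so 4 m (4000) is only 4000/65535 ~ 6% of full
# white when sampled as a 0..1 texture. Rescaling to the sensor's
# useful range brightens it for preview.

DEPTH_MAX_MM = 4000.0   # rough useful Kinect range (assumption)

def normalize_depth(raw_l16):
    """Scale a raw 16-bit depth value (mm) into 0..1 for display."""
    return min(raw_l16 / DEPTH_MAX_MM, 1.0)

print(normalize_depth(2000))   # mid-range -> 0.5
print(normalize_depth(65535))  # clamped to 1.0
```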
oh, and: make sure you have the addonpack installed, and that you use vvvv 32bit (since a lot of nodes from the addons still need to be ported). stuff you find and download here depends on them all the time.
k, good luck. let me know if any of this worked for u
Hiya ((user:dl-110)) and ((user:Szaben)), thanks for trying to give me a hand even though you don’t have a Kinect.
At present I get the depthmap, formatted as “L16 grayscale”, by using Depth (Kinect Microsoft).
As ((user:Szaben)) said, the “L16 grayscale” format is not the correct way to get the pointcloud. I’ve tried all the different formats using ChangeFormat (EX9.Texture), but it seems I still can’t get the correct depthmap I wanted (as ((user:dl-110)) showed above).
though, in your setup, are you sure you want to set renderMode to “Point”? what “Point” does is display only the corners of a Mesh, and depending on how your PointCloud Shader works, it might produce 3 ghost-points for every Point you really want rendered.
Try setting it back to “Solid” and controlling the Point’s size in another way.
so then: to get a spread of Vectors, you can use the “KinectD2RGB” shader and then just use “Pipet (Ex9)” on the resulting Texture. Each pixel represents a 3D vector within the pointcloud (r=x, g=y, b=z).
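to make the r=x, g=y, b=z idea concrete, here’s a plain-Python sketch (not a vvvv patch) of the pinhole back-projection that a depth-to-world shader of this kind typically performs per pixel. the intrinsics are assumed typical Kinect v1 values, not anything read from your setup.

```python
# Sketch: converting one Kinect depth pixel to a 3D point with the
# pinhole camera model -- roughly what a KinectD2RGB-style shader
# computes per pixel. Intrinsics are assumed Kinect v1 ballpark values.

FX, FY = 594.2, 591.0   # focal lengths in pixels (assumption)
CX, CY = 339.5, 242.7   # principal point (assumption)

def depth_to_xyz(u, v, depth_mm):
    """Map pixel (u, v) with raw depth in millimetres to metres in 3D."""
    z = depth_mm / 1000.0      # mm -> m
    x = (u - CX) * z / FX      # back-project along x
    y = (v - CY) * z / FY      # back-project along y
    return (x, y, z)

# A pixel at the principal point sits straight ahead on the z axis:
print(depth_to_xyz(339.5, 242.7, 2000))  # -> (0.0, 0.0, 2.0)
```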
But beware! handling your PointData in a Shader, or in a Spread of Vectors, will result in immensely different performance. While your Graphics card (Shader approach) can handle up to millions of points a frame, your CPU (Spread approach) surely can’t. I think a good machine will have enough power to render a Kinect Pointcloud on the CPU, but you might need to downscale the resolution - a lot - to get decent performance.
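to put a number on the downscaling: sampling every Nth pixel in both axes cuts the point count by N*N. a plain-Python sketch (node names and resolutions are illustrative, not from your patch):

```python
# Sketch: downscaling the point spread before CPU processing.
# Sampling every Nth pixel in both axes cuts the point count by N*N,
# e.g. 640x480 with step 4 -> 160x120 = 19200 points, not 307200.

def downsample_indices(width, height, step):
    """Yield (u, v) pixel coordinates on a coarse grid."""
    for v in range(0, height, step):
        for u in range(0, width, step):
            yield (u, v)

pts = list(downsample_indices(640, 480, 4))
print(len(pts))  # 160 * 120 = 19200
```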
for more information on how to control a fully GPU based particle system, check out particlesgpu-shader-library
…this is sort of advanced stuff. but if you wrote the shaders you are using yourself, it might be just the right address for you.
__
do you just want to resize/move the entire Pointcloud, or do you want individual Points to react to something? maybe define your goals more clearly and i’ll be happy to help you out.
I myself had (still have) a project where I want all the Points of a Pointcloud to interact with a second set of Points in terms of attraction, but i got lost in the immense load of calculations necessary.
There are ways…! as i mentioned, you either go fully GPU-based with your particle system, or you reduce your Point-Count drastically.