Hi there,
I am totally new to vvvv (well, I've played with it for the last six weeks or so) and want to create a small private project using a Kinect2. Maybe at some point I can make it professional and public, but for now it's very experimental, and I want to see whether I can build it in vvvv rather than having to jump over to Unity, since vvvv looked more promising.
I have a programming background, and 3D and compositing (eyeon/BMD Fusion) are fine as well. The project already works perfectly in Fusion, just not in realtime, and doing it in realtime would be fantastic. So the proof of concept is done.
I managed to get live depth data out of the Kinect2 and manipulate the image in realtime, but I am stuck on a few problems for which I haven't found a real answer so far. Searching for tutorials and the like didn't get me as far as I needed, so now I need to ask the pros here. These are fairly simple things, but maybe I haven't quite grasped the vvvv concepts yet.
The Kinect2 nodes output only DX11 textures. How can I convert the DX11 depth texture, for instance, into a grayscale image which I can then manipulate, e.g. with a noise field or similar?
I haven’t understood how I can get from a DX11 texture to a 2D image for further processing. Is that explained somewhere? Sample patches would be most interesting, as this is the way I have learned everything so far…
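To make the first question more concrete: if a texture shader is the right route, something like the following pixel-shader sketch is what I imagine for remapping the depth into grayscale. The texture/sampler names and the MinDepth/MaxDepth parameters (in metres) are just my own guesses, not actual pins of the Kinect2 nodes:

```
// Rough sketch: remap raw Kinect2 depth to a 0..1 grayscale image.
// DepthTex, LinearSampler, MinDepth and MaxDepth are assumed names/values.
Texture2D DepthTex : register(t0);
SamplerState LinearSampler : register(s0);

cbuffer Params : register(b0)
{
    float MinDepth; // assumed near limit, e.g. 0.5 m
    float MaxDepth; // assumed far limit, e.g. 4.5 m
};

float4 PS(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    float depth = DepthTex.Sample(LinearSampler, uv).r;                  // raw depth value
    float gray  = saturate((depth - MinDepth) / (MaxDepth - MinDepth));  // normalise to 0..1
    return float4(gray, gray, gray, 1.0);                                // output as grayscale
}
```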
I managed to apply a noise in DX11 instead, but I was not able to scale the noise in 2D at all; I want it to be bigger, to reduce its detail.
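What I mean by "bigger" is basically stretching the sampling coordinates (or lowering the frequency) before the noise lookup, roughly like this; NoiseTex and Scale are placeholder names of mine, not existing pins:

```
// Sketch: make 2D noise coarser by stretching the sampling coordinates.
Texture2D NoiseTex : register(t0);
SamplerState LinearSampler : register(s0);

cbuffer Params : register(b0)
{
    float Scale; // e.g. 4.0 -> noise features appear 4x larger, i.e. less detail
};

float4 PS(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    float n = NoiseTex.Sample(LinearSampler, uv / Scale).r; // larger Scale = coarser noise
    return float4(n, n, n, 1.0);
}
```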
And how can I mirror the Kinect2 output so that it matches the real world? I tried scaling by -1 on the x axis, but then I only get black screens… Which node is the right one, and how do I use it?
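If there is no dedicated node, I could imagine doing the mirroring in a texture shader by flipping the x coordinate of the lookup, roughly like this (just my own sketch):

```
// Sketch: horizontal mirror by flipping the u coordinate of the texture lookup.
Texture2D InputTex : register(t0);
SamplerState LinearSampler : register(s0);

float4 PS(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    float2 mirrored = float2(1.0 - uv.x, uv.y); // flip left/right
    return InputTex.Sample(LinearSampler, mirrored);
}
```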
Is there a way to make a freeze frame from the depth channel, so that I can subtract the live image from the frozen depth frame for further processing? That would be important for me.
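What I am after is essentially background subtraction: keep one frozen depth frame and compare the live depth against it per pixel, e.g. along these lines. How FrozenTex would actually get filled (some freeze-frame node?) is exactly the part I don't know:

```
// Sketch: difference between the live depth frame and a frozen reference frame.
Texture2D LiveTex   : register(t0);
Texture2D FrozenTex : register(t1); // assumed to hold the frozen depth frame
SamplerState LinearSampler : register(s0);

float4 PS(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    float live   = LiveTex.Sample(LinearSampler, uv).r;
    float frozen = FrozenTex.Sample(LinearSampler, uv).r;
    float diff   = abs(live - frozen); // depth change relative to the frozen frame
    return float4(diff, diff, diff, 1.0);
}
```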
Can I save a picture or record a video to disk?
How can I read the value of a single pixel/depth sample, so that I can modify the image based on the measured depth, for example?
I found that I can directly edit the fx shader code, and that helps for some of the things I need, but coming from Fusion compositing I am stuck on how to transform from 3D DX11 space into 2D screen space and get reliable "sizing", such as 1920x1080 images, etc.
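For the 3D-to-2D part, I assume what I need is the standard projection step: multiply a world-space position by the view-projection transform, do the perspective divide, and map the -1..+1 result into pixel coordinates. A rough sketch, where ViewProj and the 1920x1080 target size are my own assumptions:

```
// Sketch: project a 3D world-space point into 2D pixel coordinates.
// ViewProj would come from whatever camera/renderer is in use.
float2 WorldToPixel(float3 worldPos, float4x4 ViewProj)
{
    float4 clip = mul(float4(worldPos, 1.0), ViewProj); // to clip space
    float2 ndc  = clip.xy / clip.w;                     // perspective divide -> -1..+1
    float2 uv   = float2(ndc.x * 0.5 + 0.5,             // -1..+1 -> 0..1
                         0.5 - ndc.y * 0.5);            // flip y for screen space
    return uv * float2(1920.0, 1080.0);                 // 0..1 -> pixel coordinates
}
```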
I also stumbled over various demo patches in tutorials which do not seem to work as they should in the 50beta35 x64 version I use. Should I go back to an older version?
Is there any documentation on the DX11 stuff beyond this page:
?
Looking forward to some insights from you!
Thanks
Stromax