Camera "orbit" movement

Hi,

I have managed to code a node that outputs XYZ values for the Kinect’s depth data (Microsoft drivers). With this, I created a point cloud using quads, plus a camera to view them in space (see attached picture).
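In case it helps to picture what the node does internally, here is a rough C# sketch of back-projecting a depth frame into XYZ points. The field-of-view constants are only assumptions (roughly Kinect v1 values), and my actual node may do the mapping a bit differently:

```csharp
using System;

// Minimal sketch: back-project a Kinect depth frame (millimetres) into XYZ triples.
// The FOV constants are assumptions (approximate Kinect v1 depth camera values).
static class DepthToXyz
{
    const int Width = 320, Height = 240;
    const double FovX = 58.5 * Math.PI / 180.0;   // assumed horizontal field of view
    const double FovY = 45.6 * Math.PI / 180.0;   // assumed vertical field of view

    // Returns x, y, z (metres) packed as consecutive triples, one per pixel.
    public static float[] Project(ushort[] depthMm)
    {
        double fx = (Width / 2.0) / Math.Tan(FovX / 2.0);   // focal length in pixels
        double fy = (Height / 2.0) / Math.Tan(FovY / 2.0);
        var xyz = new float[Width * Height * 3];

        for (int y = 0; y < Height; y++)
        for (int x = 0; x < Width; x++)
        {
            int i = y * Width + x;
            double z = depthMm[i] / 1000.0;                  // millimetres -> metres
            xyz[i * 3 + 0] = (float)((x - Width / 2.0) * z / fx);
            xyz[i * 3 + 1] = (float)(-(y - Height / 2.0) * z / fy); // flip so +Y is up
            xyz[i * 3 + 2] = (float)z;
        }
        return xyz;
    }
}
```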

I now need a way to move the camera in an orbit-like path (circle/ellipse), with my point cloud as the center the camera is looking at. The idea is to let the viewer get a better grasp of the actual 3D volume being updated in real time.

I have tried various combinations of translates and yaw/pan settings on the camera, but have not gotten any decent results yet.

Could anyone be kind enough to show me how to achieve this?

Thanks.

P.S. I intend to upload my node to GitHub for other users to try, if anyone is interested.

https://vvvv.org/sites/default/files/imagecache/large/images/PointCloud_0.jpg

@Dottore uploaded a really nice camera you might like to try: contribution:camera-(transform-orbit-fly)

Also you may find it easier in general to transform your pointcloud data to the world origin (0,0,0) and orbit around that.

Hey, have a look at the LookAt node (ahahaha) and the Cartesian node.
I attached an example.

For an ellipse, just change the Length-Pin of Cartesian over time.
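If it helps, this is essentially what the Cartesian + LookAt combination works out to, written as a C# sketch (centre, radii and height are placeholder values, not anything from the patch):

```csharp
using System;

// Sketch of an orbit camera: the eye travels on a circle or ellipse around the
// point-cloud centre and always looks at that centre.
static class OrbitCamera
{
    // Eye position at angle t (radians) on an ellipse around 'center'.
    public static (double X, double Y, double Z) Eye(
        double t,
        (double X, double Y, double Z) center,
        double radiusX, double radiusZ, double height)
    {
        return (center.X + radiusX * Math.Cos(t),
                center.Y + height,
                center.Z + radiusZ * Math.Sin(t));
    }
}
```

Feed that eye position into LookAt with the centre as the target and animate t over time; keeping radiusX equal to radiusZ gives a circle, letting them differ gives the ellipse, which is what animating the Length pin does.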

camera_orbit.v4p (9.4 kB)


Thank you both so much for the replies.

I ended up understanding how to properly place my point cloud for a regular camera to work.

Both nodes mentioned provide a lot of cool extra features and parameters which are definitely worth taking a look at.

I will upload a video of my results soon.

Now I want to try to map the colors of an animated texture onto the points in the cloud. Wish me luck!

Thanks for the help!

Hello again,

Here is a Vine clip of what I managed to get done with this.

https://vine.co/v/hqjjF6mUYEl?fb_action_ids=10151904468647018&fb_action_types=vine-app%3Apost&fb_source=other_multiline&action_object_map=%7B%2210151904468647018%22%3A220588698091387%7D&action_type_map=%7B%2210151904468647018%22%3A%22vine-app%3Apost%22%7D&action_ref_map=%5B%5D

In the end I was able to calibrate the camera rotation around any desired point using an axis and some common sense.
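For reference, the "common sense" part boils down to rotating the camera's offset from the pivot and adding the pivot back. A rough C# sketch of that idea (the pivot, start position and the choice of the Y axis are just placeholders):

```csharp
using System;

// Rotate a camera position around an arbitrary pivot point about the Y axis:
// take the offset from the pivot, rotate it, then add the pivot back.
static class OrbitAroundPoint
{
    public static (double X, double Y, double Z) Rotated(
        (double X, double Y, double Z) eye,
        (double X, double Y, double Z) pivot,
        double angle) // radians around the Y axis
    {
        double ox = eye.X - pivot.X, oz = eye.Z - pivot.Z;   // offset from pivot
        double c = Math.Cos(angle), s = Math.Sin(angle);
        return (pivot.X + c * ox + s * oz,
                eye.Y,                                       // keep the height
                pivot.Z - s * ox + c * oz);
    }
}
```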

I noticed that, for some reason, vvvv's Kinect node has a hardcoded resolution of 320x240 for the depth camera.

I tweaked the code to change this to 640x480 and got a lot more data out of the Kinect.

However, my laptop does not even have dedicated video memory, and my patch does not use the GPU to handle all the XYZ coords; it relies strictly on the CPU, so once I pass the 32k quad count the framerate gets very choppy.

Lots of improvements to implement.

In any case, is anyone interested in this node exposing XYZ coords based on the depth image stream? Not sure if it will be of any use to anyone.

Again, thanks for the help!

Could you share the way to get the 640x480 output?

Sure thing,

Do you have vvvv’s source code and a compiler at hand?

If not, follow this post to get it all set up:

vvvv sdk

Once you have the addonpack solution compiling, browse the solution and head down to:

nodes>Devices>MSKinect>Nodes

In there you will need to modify two files called:

KinectRuntimeNode.cs
KinectDepthTextureNode.cs

I think it is safe to say that a basic find and replace from 320 to 640 and from 240 to 480 on those two files should do the trick.
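To give an idea of what those replacements end up touching (illustrative only; the exact lines in the two files may differ), the change is essentially the depth stream format plus any buffer sizes that were hardcoded to match:

```csharp
using Microsoft.Kinect;

// Illustrative sketch of the 320->640 / 240->480 change: switch the stream
// format enum and any width/height constants that assumed the smaller frame.
static class DepthAt640
{
    public static void EnableDepth(KinectSensor sensor)
    {
        // was: sensor.DepthStream.Enable(DepthImageFormat.Resolution320x240Fps30);
        sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
    }

    // was: const int Width = 320, Height = 240;
    public const int Width = 640, Height = 480;
}
```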

Recompile and open your fresh copy of vvvv to give it a try.

Let me know if you run into any problems.

Good Luck!