Working with several kinects

Hi,
We just started a semester project on an interactive dance performance. We would like to track the dancers' skeletons (up to two) across the whole stage with kinects. The stage is not that big, so three kinects in a row with slightly overlapping frustums should be enough.
We already did some tests with one kinect and the OpenNI Kinect nodes in vvvv.

So, now my question: What do you think is the best way to use 3 kinects to get continuous tracking across the whole stage?

I think there are two approaches:

  • Using one machine for every kinect, merging all the information on one of these machines and distilling only two skeletons out of it. That's quite expensive just for the tracking… (see the rough OSC sketch after this list)

  • Using one machine for all kinects. Cheaper and probably more convenient, because we don't need OSC or anything else to bring the data together. BUT: I read that it would be difficult to use OpenNI with more than one kinect. Are the Microsoft nodes perhaps the better choice?
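To make the first option a bit more concrete, here is a rough, untested C# sketch of how each tracking machine could pack its joint positions into OSC messages and send them over UDP to the machine doing the merging. The address pattern, port and class names are made up, and in practice you would probably use an existing OSC library or vvvv's own OSC support instead of hand-packing the bytes:

```csharp
// Sketch of the per-machine sender in option 1: each tracking PC packs its
// joint positions into an OSC message and sends it via UDP to the merging PC.
using System;
using System.Collections.Generic;
using System.Net.Sockets;
using System.Text;

static class OscJointSender
{
    static readonly UdpClient Udp = new UdpClient();

    // Send e.g. "/kinect/1/head" with x, y, z as floats (address is made up).
    public static void SendJoint(string host, int port, string address, float x, float y, float z)
    {
        var bytes = new List<byte>();
        WritePaddedString(bytes, address);
        WritePaddedString(bytes, ",fff");        // OSC type tag: three floats
        WriteBigEndianFloat(bytes, x);
        WriteBigEndianFloat(bytes, y);
        WriteBigEndianFloat(bytes, z);
        Udp.Send(bytes.ToArray(), bytes.Count, host, port);
    }

    // OSC strings are null-terminated and padded to a multiple of 4 bytes.
    static void WritePaddedString(List<byte> bytes, string s)
    {
        var ascii = Encoding.ASCII.GetBytes(s);
        bytes.AddRange(ascii);
        int pad = 4 - (ascii.Length % 4);
        for (int i = 0; i < pad; i++) bytes.Add(0);
    }

    // OSC floats are 32-bit big-endian.
    static void WriteBigEndianFloat(List<byte> bytes, float f)
    {
        var b = BitConverter.GetBytes(f);
        if (BitConverter.IsLittleEndian) Array.Reverse(b);
        bytes.AddRange(b);
    }
}
```

On the merging machine you would then listen on that port, collect the joints per kinect and distill the two stage-wide skeletons from them.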

Did anybody experiment with that already? Thanks for any advice.

you basically have only one option: 3 machines
and for my taste i would use the ms sdk, since you don't need any pose to start tracking and it outputs a list of inferred joints. but not sure you will have a quick setup like that… depends on how you place the kinects tho
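To illustrate what that looks like on the code side, here is a small, untested sketch against the Kinect for Windows SDK v1.x in C#: the skeleton stream starts tracking without any calibration pose, and each joint carries a tracking state so you can see which joints were only inferred (guessed) rather than actually tracked:

```csharp
// Untested sketch, Kinect for Windows SDK v1.x: enable skeleton tracking
// (no pose needed) and print which joints the SDK could only infer.
using System;
using System.Linq;
using Microsoft.Kinect;

class SkeletonDemo
{
    static Skeleton[] skeletons;

    static void Main()
    {
        var sensor = KinectSensor.KinectSensors
            .FirstOrDefault(s => s.Status == KinectStatus.Connected);
        if (sensor == null) return;

        sensor.SkeletonStream.Enable();        // no calibration pose needed
        skeletons = new Skeleton[sensor.SkeletonStream.FrameSkeletonArrayLength];
        sensor.SkeletonFrameReady += OnSkeletonFrame;
        sensor.Start();
        Console.ReadLine();
        sensor.Stop();
    }

    static void OnSkeletonFrame(object sender, SkeletonFrameReadyEventArgs e)
    {
        using (var frame = e.OpenSkeletonFrame())
        {
            if (frame == null) return;
            frame.CopySkeletonDataTo(skeletons);

            foreach (var s in skeletons.Where(s => s != null && s.TrackingState == SkeletonTrackingState.Tracked))
            {
                foreach (Joint joint in s.Joints)
                {
                    // Inferred joints are guessed by the SDK (e.g. occluded limbs).
                    if (joint.TrackingState == JointTrackingState.Inferred)
                        Console.WriteLine(joint.JointType + " is inferred");
                }
            }
        }
    }
}
```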

Yes, I think you are right. Using 3 machines is probably the best way. The new OpenNI doesn't need a start pose, I think. At least we don't need to strike the pose and it's recognizing our skeletons… Did you also test the ms sdk at a longer distance, like 5 meters? I heard it should be worse at that distance…

the vux kinect plugin has an index pin that i think is supposed to be used with multiple kinects. i haven't tested it yet, but it's worth a check if you have 2 devices there

not with skeleton tracking though. skeleton-wise it's one kinect, two players at the same time, up to five ready to go

You could also try the nyko wide angle lens for the kinect; they work ok, although at the extremes of the field of view they get iffy…
I'd also recommend the ms sdk drivers, really very good :)

Thanks for all your answers, guys! I will try the ms sdk as well and I will think of a concept for using three kinects with three machines and still getting only one wide image…

I'm getting into testing multiple kinects; I will take the 1-kinect-per-pc option and try to merge in a Boygroup. Any help welcome

i can report 6 kinects connected to 1 PC (8-core cpu) with ms-kinect drivers, tracking 1 skeleton each (haven't checked with 2 skeletons each, but i guess that would also work). also briefly tested 8 kinects, which worked, but there the 8 cores maxed out completely.

the only important thing: each kinect needs a dedicated usb controller. typically a motherboard has 2 controllers. we used this 4-usb-controller card: Amazon.de (two of those when we tested the 8 kinects)

note: we did not merge the depth-images, each kinect had a totally different view.
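For reference, a minimal, untested C# sketch of that kind of setup with the ms sdk (v1.x): it simply enumerates all connected sensors and starts depth and skeleton tracking on each one independently, assuming, as noted above, that every kinect sits on its own usb controller:

```csharp
// Untested sketch: start every connected Kinect v1 sensor on its own.
using System;
using Microsoft.Kinect;

class MultiKinect
{
    static void Main()
    {
        foreach (KinectSensor sensor in KinectSensor.KinectSensors)
        {
            if (sensor.Status != KinectStatus.Connected) continue;

            sensor.SkeletonStream.Enable();
            sensor.DepthStream.Enable();
            sensor.Start();
            Console.WriteLine("started kinect " + sensor.UniqueKinectId);
        }
        Console.ReadLine();   // keep running; each sensor fires its own frame events
    }
}
```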


in a multi-kinect setup with 1 subject:
it will be a challenging attempt to fuse all the tracked points together so that a 3d model comes out of it.
I would start a project involving a full 3d scan of a person inside a cylinder, with rgbd sensors moving up and down in a spiral; it could scan the entire body in under a minute. Fusion rocks

can confirm what joreg wrote. with the ms sdk you need the kinects on separate USB busses. I only tested with 3 Kinects, but that ran smoothly on the same machine.

I did start writing wrapper logic for tracking a person across multiple kinects. Unfortunately other work came up before I could finish. If you like you can have a look at the project. It's a WPF C# project with OSC output.

The logic I was trying to implement requires that the kinects slightly overlap, so the skeleton is always visible. In the overlap between kinects I compare joint positions to identify the same user (a rough sketch of that comparison is below, after the link).

you can find it here: http://schnitzel.dk/rasmus/files/MultiKinect.zip
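The core of that matching idea is tiny. The sketch below is untested, all names and the threshold are made up, and it assumes the skeletons from the different kinects have already been transformed into one common stage coordinate system (which needs a calibration step that is not shown here):

```csharp
// Sketch: two skeletons seen by neighbouring, overlapping kinects are treated
// as the same person if their reference joints (e.g. the spine) are close
// enough in the shared stage coordinate system.
using System;

struct Vec3
{
    public float X, Y, Z;
    public Vec3(float x, float y, float z) { X = x; Y = y; Z = z; }

    public static float Distance(Vec3 a, Vec3 b)
    {
        float dx = a.X - b.X, dy = a.Y - b.Y, dz = a.Z - b.Z;
        return (float)Math.Sqrt(dx * dx + dy * dy + dz * dz);
    }
}

static class SkeletonMatcher
{
    // If the spine positions of two skeletons from neighbouring kinects are
    // closer than this, treat them as the same person (threshold is a guess).
    const float SameUserThreshold = 0.3f; // meters

    public static bool IsSameUser(Vec3 spineFromKinectA, Vec3 spineFromKinectB)
    {
        return Vec3.Distance(spineFromKinectA, spineFromKinectB) < SameUserThreshold;
    }
}
```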

hello, I am testing two kinects with the microsoft kinect node.

vvvv 30.2 x86; kinect for windows driver, sdk and runtime are all 1.7.

the kinect node's pin says there are 2 kinects, but how do I get two textures?

i tried spreading the index pin but it doesn't seem to work.

then i tried two nodes; one of them worked but the other was red.

my machine is a laptop… maybe they are plugged into the same usb bus?

I tried different arrangements but no luck. there are other usb devices connected to my machine as well.

then the microsoft kinect node stopped working altogether. restarted the machine many times but no luck. anyone have a similar experience?

and now I am getting "exception occurred in TMPluginWrapperNode.Evaluate: cannot access a disposed object" in the tty renderer since I enabled the exceptionDialog.

I think I'll try the x64 one… and see if I can make it work there.

i ended up deleting all the openni-related drivers and primesense, and also downloaded the developer toolkit, and it seems to work again now… sorry for the fuss. will report if i can find a two-kinect setup that seems stable on a laptop.

thanks to @joreg, i used a different usb bus and it worked :)

Apart from connecting them to the pc, also keep this in mind:
Multiple Kinect interference

@joreg do you still know whether that worked inside vvvv with more than one kinect, or did you have to use other SDKs to make use of each kinect on its own?
I am doing a student project right now and I am trying to figure out how to make use of two kinect2s inside of vvvv.
So I think I have to buy something like this Amazon.de
to make use of more than one kinect.
Am I right in thinking that I would need one for every kinect that I use?

thx for the help :)

Ah, I saw that connecting more than one kinect2 to one pc is not possible. That is only possible with the first version of the kinect?? dayum!

@knoeterich
It's possible, but only with VL.Kinect2 and if both kinects are connected to separate usb3 buses… related:

@antokhio aaaawesome!!! that sounds very good! thanks so much :)

@antokhio Hi, I am still struggling with using multiple K2s on one machine in beta. Although I successfully installed the VL.Devices.Kinect2 nuget package to use in VL in beta, I don't see any way to connect more than one device (looking into the Kinect2 VL node it seems it's still just using the DefaultSensor).
I need to use skeleton data, which seems to be unavailable using libfreenect2, am I right?