I want to shoot a head close-up with multiple Kinect sensors to increase detail and avoid the shadowing caused by a single point of view. Has anybody here tried something similar?
I am interested in problems caused by interfering Kinect signals. AFAIK, at Node 17 the interactive mesh wall used K1 sensors with tiny rumble devices to avoid this. I can use K2 or AK. Using multiple machines for recording would not be a problem.
I imagine a fixture with sensors mounted around a sweet spot, shooting a reference object to manually align the point clouds, and then writing all points with offsets into one new texture for particle emission.
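The merge step you describe could be sketched roughly like this, assuming you have already recovered a rigid transform (rotation + translation) per sensor from the reference object; all names here are hypothetical, and this only shows the math, not the texture writing:

```python
import numpy as np

def merge_point_clouds(clouds, transforms):
    """Map each sensor's point cloud into a shared world frame
    and concatenate them into one cloud.

    clouds:     list of (N_i, 3) arrays, one per sensor
    transforms: list of (R, t) pairs, R a 3x3 rotation matrix,
                t a 3-vector, mapping sensor space to world space
    """
    merged = []
    for points, (R, t) in zip(clouds, transforms):
        merged.append(points @ R.T + t)  # x_world = R @ x_sensor + t
    return np.vstack(merged)

# Example: two sensors looking at the same world point from opposite
# sides. Sensor B is rotated 180 degrees around Y and offset along Z.
cloud_a = np.array([[0.0, 0.0, 1.0]])
cloud_b = np.array([[0.0, 0.0, 1.0]])
R_a, t_a = np.eye(3), np.zeros(3)
R_b = np.array([[-1.0, 0.0,  0.0],
                [ 0.0, 1.0,  0.0],
                [ 0.0, 0.0, -1.0]])
t_b = np.array([0.0, 0.0, 2.0])
world = merge_point_clouds([cloud_a, cloud_b], [(R_a, t_a), (R_b, t_b)])
# Both sensors see the same world point, so the merged cloud
# contains two coincident points at (0, 0, 1).
```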
The rumble devices were necessary to avoid interference between the infrared dot patterns that the K1 projected (a problem when running multiple devices). With multiple K2s this is most likely not an issue, and the same goes for the AK — there you can even sync multiple devices and control a timed offset for when each image is taken.
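To illustrate the timed-offset idea: if each depth camera's illuminator is active for a known exposure window, you can stagger the subordinate devices so that no two illuminators fire at the same time within one frame period. A rough sketch — the exposure and guard numbers below are made up for illustration, not official specs:

```python
def subordinate_delays_usec(num_devices, exposure_usec, guard_usec=160):
    """Return a firing delay (microseconds, relative to the master)
    for each device, so that the depth exposures never overlap."""
    slot = exposure_usec + guard_usec
    return [i * slot for i in range(num_devices)]

# 4 devices with a hypothetical 1000 us depth exposure:
delays = subordinate_delays_usec(4, 1000)
# delays == [0, 1160, 2320, 3480] -- all four exposures still fit
# comfortably inside one ~33 ms frame period at 30 fps.
```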
With the standard drivers you can only use one K2 per Windows machine. You don't have this limitation with the AK (there you are more limited by USB bandwidth; you might need more than one USB controller to handle the load).
The easiest setup is one powerful PC with multiple AK devices connected via USB (I personally tried four at once). Be aware of the hassle with maximum cable lengths. Long USB cables that actually work are available, but very expensive.
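A back-of-the-envelope calculation shows why USB bandwidth gets tight with four devices. Assuming an AK-style depth stream of 640×576 16-bit pixels plus a 720p 2-bytes-per-pixel color stream, both at 30 fps (the real modes and any on-device compression differ, so treat this purely as an estimate):

```python
def stream_mb_per_s(width, height, bytes_per_pixel, fps):
    """Raw (uncompressed) bandwidth of one image stream in MB/s."""
    return width * height * bytes_per_pixel * fps / 1e6

depth = stream_mb_per_s(640, 576, 2, 30)    # ~22 MB/s per device
color = stream_mb_per_s(1280, 720, 2, 30)   # ~55 MB/s per device
per_device = depth + color                  # ~77 MB/s
four_devices = 4 * per_device               # ~310 MB/s raw
# That is already close to what a single USB 3 controller can
# sustain in practice, hence the advice to spread devices across
# multiple controllers.
```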
A different approach would be the VL.Espace toolkit, where you can send the images over IP/NetMQ. You could have multiple AK "thin clients" that just forward the captured images to a different machine, which receives them and rebuilds the point cloud. Less hassle with cable lengths (Ethernet is cheap) but more with available bandwidth. VL.Espace also has a demo patch that shows how to spatially calibrate the AKs using VL.OpenCV and a checkerboard.
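The thin-client idea boils down to serializing each captured frame with enough metadata to rebuild it on the receiving machine. VL.Espace does this for you via NetMQ; the sketch below only shows the framing principle with a hypothetical, stdlib-only header (device id, frame number, width, height) prefixed to the raw pixel bytes:

```python
import struct

# Fixed-size little-endian header: device_id (u16), frame_no (u32),
# width (u16), height (u16) -- 10 bytes total, no padding.
HEADER = struct.Struct("<HIHH")

def pack_frame(device_id, frame_no, width, height, pixels):
    """Prefix raw pixel bytes with the header for sending over a socket."""
    return HEADER.pack(device_id, frame_no, width, height) + pixels

def unpack_frame(message):
    """Split a received message back into header fields and pixel bytes."""
    device_id, frame_no, width, height = HEADER.unpack_from(message)
    return device_id, frame_no, width, height, message[HEADER.size:]

# Round-trip a tiny fake 2x2 depth frame (16-bit pixels):
pixels = struct.pack("<4H", 100, 200, 300, 400)
msg = pack_frame(1, 42, 2, 2, pixels)
restored = unpack_frame(msg)
```

On the wire you would hand `msg` to whatever transport you use (NetMQ/ZeroMQ on the vvvv side); the receiver reverses the process and feeds the pixels back into a point-cloud rebuild.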