VL.Espace
A VL toolkit for spatial applications
what’s this?
In short: (for now) it’s Kinect over IP and some handy gimmicks.
No nuget release yet; to use it, please check out the repo and use the nifty GammaLauncher to start VVVV with it as a package repository.
First public demo at the 26th meetup.
longer description:
This toolkit is meant to simplify the creation of spatial applications (utilizing tracking equipment like RGB-D cams) and to make them more flexible by bringing them onto the network. While many more features would be cool, for now you can do this:
- a computer with a connection to a Kinect can make the device available on the network via VL.IO.NetMQ, so that it can be accessed by one or more subscribers
- another client computer can subscribe to multiple Kinects connected to multiple different computers and merge the data to build a spatial application with realtime tracking
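Under the hood this follows the publish/subscribe pattern that VL.IO.NetMQ wraps. As a rough illustration only (this is not the toolkit's actual node set; the address, topic name and payload below are made up), a minimal NetMQ pub/sub round trip in C# looks like this:

```csharp
// Minimal sketch of the pub/sub pattern VL.IO.NetMQ builds on.
// Address, topic and payload are illustrative only.
using System;
using System.Text;
using System.Threading;
using NetMQ;
using NetMQ.Sockets;

class PubSubSketch
{
    static void Main()
    {
        // "Server" side: the machine with the Kinect attached publishes frames.
        using var pub = new PublisherSocket();
        pub.Bind("tcp://*:5556");

        // "Client" side: any machine on the network subscribes to the topics it needs.
        using var sub = new SubscriberSocket();
        sub.Connect("tcp://127.0.0.1:5556");
        sub.Subscribe("skeleton");

        Thread.Sleep(500); // give the subscription time to propagate

        // Publish one message: a topic frame followed by a payload frame.
        pub.SendMoreFrame("skeleton").SendFrame(Encoding.UTF8.GetBytes("joint data ..."));

        // Receive it on the subscriber.
        string topic = sub.ReceiveFrameString();
        byte[] payload = sub.ReceiveFrameBytes();
        Console.WriteLine($"{topic}: {payload.Length} bytes");
    }
}
```

Each stream (colour, depth, skeletons, …) can be published under its own topic, so a client only subscribes to the data it actually needs.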
why is this handy?
- delivering data over IP eliminates the problems arising from the short USB cable lengths typical for this kind of scenario
- no device dependencies are necessary on the receiving end (usually the main patch), which can decrease application complexity
- potentially (given good hardware) this would allow scaling installations to utilize many devices at once
does this really work as advertised?
Well, to a certain degree. Data handling and network transmission follow a best-effort approach, so it really depends on your environment. Also, there are no guarantees regarding latency and synchronisation, so don't expect a super-tight experience when using multiple devices.
All of this is EXPERIMENTAL (so test your setup thoroughly when using it).
which devices work?
For now there is support for:
- AzureKinect and its body tracking
  - comes with means to reconstruct pointclouds on the remote computer (see the sketch after this list)
- Kinect2 and its body tracking
  - just basic functionality like image and skeleton send/receive
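Conceptually, reconstructing the pointcloud on a remote machine means unprojecting each received depth pixel with the camera intrinsics, which can be shipped over the network alongside the frames. A rough CPU-side C# sketch of that math (the types and names here are illustrative, not the toolkit's actual implementation):

```csharp
// Sketch of the idea behind remote pointcloud reconstruction: unproject each
// depth pixel with the pinhole intrinsics of the depth camera.
// The Intrinsics record and method names are made up for illustration.
using System.Numerics;

record Intrinsics(float Fx, float Fy, float Cx, float Cy);

static class PointCloudSketch
{
    // depthMillimeters: row-major depth image, 0 = invalid pixel
    public static Vector3[] FromDepth(ushort[] depthMillimeters, int width, int height, Intrinsics k)
    {
        var points = new Vector3[width * height];
        for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
        {
            float z = depthMillimeters[y * width + x] * 0.001f; // mm -> m
            points[y * width + x] = z > 0
                ? new Vector3((x - k.Cx) / k.Fx * z, (y - k.Cy) / k.Fy * z, z)
                : Vector3.Zero;
        }
        return points;
    }
}
```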
what else is there?
- a helper to calibrate multiple cams to a common origin in space (thanks to VL.OpenCV)
- a compute-shader-based OneEuroFilter for smoothing the depth image per pixel (see the sketch after this list)
- mDNS integration for easy device discovery in dynamic networks (in progress)
- basic helppatches demonstrating functionality (in progress)
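For reference, the One Euro Filter (Casiez et al.) is a simple adaptive low-pass filter: the cutoff frequency rises with the signal's speed, so slowly changing depth values are smoothed heavily while fast changes stay responsive. A plain C# sketch of the per-value logic follows; the toolkit applies the same idea per pixel in a compute shader, and the default parameter values below are just illustrative:

```csharp
// CPU reference sketch of the One Euro Filter used for depth smoothing.
// Parameter names follow the original paper by Casiez et al.
using System;

class OneEuroFilter
{
    readonly float minCutoff, beta, dCutoff;
    float? prevX, prevDx;

    public OneEuroFilter(float minCutoff = 1.0f, float beta = 0.007f, float dCutoff = 1.0f)
    {
        this.minCutoff = minCutoff;
        this.beta = beta;
        this.dCutoff = dCutoff;
    }

    static float Alpha(float cutoff, float dt)
    {
        // smoothing factor of a first-order low-pass filter for a given cutoff
        float tau = 1f / (2f * MathF.PI * cutoff);
        return 1f / (1f + tau / dt);
    }

    public float Filter(float x, float dt)
    {
        if (prevX is null) { prevX = x; prevDx = 0f; return x; }

        // low-pass filter the derivative, then adapt the cutoff to the speed
        float dx = (x - prevX.Value) / dt;
        float aD = Alpha(dCutoff, dt);
        float dxHat = aD * dx + (1f - aD) * prevDx.Value;

        float cutoff = minCutoff + beta * MathF.Abs(dxHat);
        float a = Alpha(cutoff, dt);
        float xHat = a * x + (1f - a) * prevX.Value;

        prevX = xHat;
        prevDx = dxHat;
        return xHat;
    }
}
```

Tuning works as in the paper: lower `minCutoff` to remove more jitter at rest, raise `beta` to reduce lag during fast motion.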
how do i start?
Sorry, for now documentation is sparse.
A nuget release is planned; the sources can be accessed here. The easiest way to start is to check out the repo and add the entire folder to the package repositories in GammaLauncher.
Code is separated into multiple sub-packages, so you only have to reference what an application actually needs (e.g. a computer connected to a Kinect needs the device dependencies, while a consumer of the data on a different computer doesn't).
The best way to try it out is with an AzureKinect device (which has most of the functionality covered):
- start the AzureKinectServer headless on a computer with the device(s) attached
- in a second VVVV instance (local or on a different computer), start the AzureKinectReceiver helppatch and adjust the IP if necessary to get communication going
future vision?
So many things would fit in here, like:
- more supported RGB-D devices (ZED cam, Orbbec, LeapMotion)
- integration of different tracking device families like OpenXR-based devices, LIDARs, …
- a unified skeleton for bodytracking that abstracts over the output from different devices. Ideally this should make devices exchangeable to a certain degree and allow for mixed device environments (see the sketch after this list)
- related: merging skeletons of the same person seen by different devices
- improved handling of images (resizing, compressing) on the server side
- VL.Devices.AzureKinect could be extended to provide MJPEG data (would save resources)
- bridge to FUSE
- improved synchronisation
- multicasting for network transmission to multiple receivers (should theoretically be possible with NetMQ), which would reduce network traffic on the sender
- integrate projector calibration
- …
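To make the "unified skeleton" idea a bit more concrete, here is a purely hypothetical C# sketch; none of these types exist in VL.Espace, they only illustrate what a device-agnostic abstraction could look like:

```csharp
// Hypothetical sketch of a device-agnostic skeleton; not part of the toolkit.
using System.Collections.Generic;
using System.Numerics;

enum JointId { Head, Neck, SpineChest, Pelvis, ShoulderLeft, ShoulderRight /* ... */ }

record Joint(JointId Id, Vector3 Position, Quaternion Orientation, float Confidence);

record Skeleton(int BodyId, string SourceDevice, IReadOnlyDictionary<JointId, Joint> Joints);

interface ISkeletonSource
{
    // every supported device (AzureKinect, Kinect2, ...) would map its native
    // joints onto the common JointId set
    IEnumerable<Skeleton> GetSkeletons();
}
```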
contributions welcome!
Bug reports, usability and performance improvements, feature extensions: all welcome! Please use this forum or the GitHub issues for communication.
thanks!
A big thank you to the entire VVVV community for being an inspiration, sharing code and being helpful in many ways! In particular, some ideas from @catweasel, @lasal and @ravazquez found their way into parts of the project.