I have been playing around with Kinect/Projector calibration with Rulr (this video definitely helped a lot) and the results are pretty insane!
Once you have your basic Rulr patch set up (the video walks you through building it), it’s literally a matter of 30 seconds and a couple of snapshots to have it perfectly matched.
Now I’ve got the projector’s matrix exported and imported into vvvv properly, and while it’s aaaalmost there, it’s nowhere near as clean as when using NodeThroughView in Rulr (which basically renders the pointcloud through the projector’s view).
Attached are two real-life pictures using a chessboard, which is a good precision test when it gets projected back onto itself.
The "clean" one is with NodeThroughView inside Rulr.
The "dirty" one is using the template Kinect/Projector import-to-vvvv patch provided with Rulr here (I just had to update it to dx11.pointcloud’s latest PointCloudBuffer, because with the version provided with Rulr it’s just completely off for some reason; my update is attached).
How can the results differ that much when the process is basically the same in both programs (i.e. just using the matrix transform to view the pointcloud from the projector’s POV)?
Or am I missing a subtlety here?
Or did I screw something up when updating to the latest PointCloudBuffer?
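For reference, here is what I assume both tools are doing when they "watch the pointcloud from the projector's POV": multiply each world point by the projector's view and projection matrices, then do the perspective divide. A minimal numpy sketch of my understanding (the row-vector convention and the matrix names are my own assumptions, not taken from either patch):

```python
import numpy as np

def project_point(world_xyz, view, proj):
    """Project a world-space point into the projector's clip space.

    Assumes a DirectX-style row-vector convention (p' = p * M), as vvvv
    uses; an OpenGL-style pipeline would use column vectors instead, so
    matrices may need transposing when moving between conventions.
    """
    p = np.append(np.asarray(world_xyz, dtype=float), 1.0)  # homogeneous point
    clip = p @ view @ proj                                   # world -> view -> clip
    ndc = clip[:3] / clip[3]                                  # perspective divide
    return ndc  # x, y in [-1, 1]; z is the depth inside the projector frustum

# Example: a point one metre in front of an identity "projector"
view = np.eye(4)
proj = np.eye(4)
print(project_point([0.0, 0.0, 1.0], view, proj))
```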
One possible hint:
There are many ways people go from Depth to World (3D XYZ) using the Kinect - and most of them are wrong (but work fine for most situations). Which method are you using to go from Depth to World? Can I see the source code of the method used?
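Roughly, for illustration only (the numbers below are placeholders, not real calibration values, and neither function is the actual code from Rulr or dx11.pointcloud):

```python
import numpy as np

# Illustrative numbers only -- not real Kinect calibration values.
WIDTH, HEIGHT = 512, 424
FOV_X, FOV_Y = np.radians(70.6), np.radians(60.0)   # nominal field of view
FX, FY = 365.0, 365.0                               # focal lengths in pixels
CX, CY = 256.0, 212.0                               # principal point in pixels

def depth_to_world_fov(u, v, depth_m):
    """Approximate: spread pixels evenly across a nominal FOV.
    Ignores the real principal point, focal length and lens distortion."""
    x = (u / WIDTH - 0.5) * 2.0 * np.tan(FOV_X / 2.0) * depth_m
    y = (0.5 - v / HEIGHT) * 2.0 * np.tan(FOV_Y / 2.0) * depth_m
    return np.array([x, y, depth_m])

def depth_to_world_intrinsics(u, v, depth_m):
    """Pinhole model using the depth camera's calibrated intrinsics --
    essentially what a proper coordinate mapper does (minus distortion)."""
    x = (u - CX) / FX * depth_m
    y = (CY - v) / FY * depth_m
    return np.array([x, y, depth_m])

# The two agree at the image centre and disagree more towards the edges:
print(depth_to_world_fov(500, 400, 2.0))
print(depth_to_world_intrinsics(500, 400, 2.0))
```

With made-up numbers like these the two only disagree by a centimetre or so at the edge of the frame, but with real optics (lens distortion, off-centre principal point) the error can be much larger - exactly the kind of error that would show up once you reproject onto physical objects.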
Please no zips or patches, since I cannot check them whilst away from my desk. Ideally:
- a screenshot
- GitHub links to the source code of the nodes you are using (or otherwise upload it to something I can see on my phone, etc.)
My starting point is the example patch you provide on GitHub here.
This is the first screenshot. It uses the “PointCloudBuffer (DX11.PointCloud Kinect World)” node provided in the …/modules folder.
It gives very funny results though; the second image attached is a screenshot of the view I had from my living room… It’s like all the points below the floor grid are split into 3 different orthogonal parts :/
Trying to fix that, the “least bad” results I’ve got so far come from modifying this patch a bit:
It seems to use the Depth and FOV to generate world coordinates internally. This is NOT ACCURATE (but is fine for most graphical purposes).
Can you hack together a new “PointCloud (DX11.PointCloud Kinect)” from the one currently in DX11.PointCloud master and the one I made which uses the World as input?
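To be clear about what I mean by “uses the World as input”: the XYZ for each pixel comes straight from the Kinect runtime’s own depth-to-camera-space mapping, and the node only has to pack that per-pixel world map into a point buffer, with no FOV approximation on our side. A rough sketch of the idea (names, shapes and the invalid-pixel convention are just illustrative):

```python
import numpy as np

def build_point_buffer(world_map):
    """Flatten a per-pixel world-position map (H x W x 3, metres) into a
    point buffer, dropping pixels the runtime marked invalid (assumed here
    to be all-zero XYZ; some runtimes use -inf or NaN instead).
    """
    points = world_map.reshape(-1, 3)
    valid = np.all(np.isfinite(points), axis=1) & np.any(points != 0.0, axis=1)
    return points[valid]

# Tiny fake 4x4 world map, just so the example runs
world_map = np.zeros((4, 4, 3))
world_map[1, 2] = [0.1, 0.2, 1.5]
print(build_point_buffer(world_map))  # -> [[0.1 0.2 1.5]]
```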
Mmm, wouldn’t the World-based approach bring me back exactly to the version attached with the Main.v4p you have in Rulr’s Platform Examples?
What I don’t get is why I get such strange results out of the box with this base patch that you’re probably using all the time… (again, the “exploded” pointcloud shots attached in my previous post use this patch)
Does it work fine for you? It’s just the latest version in Rulr’s GitHub.
Maybe you use another version of your PointCloudBuffer World now, and forgot to update it under /modules?
It kind of reminds me of the problems you get when world or texture coordinates don’t use the same conventions ([-1;1] vs [0;1], flipped axes, etc.) and positions end up wrapping around, but I don’t get why you wouldn’t have the same symptoms on your side…
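Just to be explicit about the kind of mismatch I mean (purely hypothetical conversions, not what your nodes actually do):

```python
import numpy as np

def ndc_to_uv(ndc_xy):
    """[-1, 1] projection-space coordinates -> [0, 1] texture coordinates,
    with the Y axis flipped (DirectX-style: v grows downwards)."""
    x, y = ndc_xy
    return np.array([x * 0.5 + 0.5, 0.5 - y * 0.5])

def uv_to_ndc(uv):
    """Inverse mapping: [0, 1] texture coordinates -> [-1, 1] coordinates."""
    u, v = uv
    return np.array([u * 2.0 - 1.0, 1.0 - v * 2.0])

# Feeding a [0, 1] value into something that expects [-1, 1] (or vice versa,
# or with Y un-flipped) shifts and scales the whole image, which looks a lot
# like the offset/"wrapped" projection I'm seeing:
print(ndc_to_uv([0.0, 0.0]))  # centre of the screen -> (0.5, 0.5)
print(uv_to_ndc([1.0, 1.0]))  # bottom-right texel  -> (1.0, -1.0)
```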