Kinect RGB Offset

Hi all,

I am struggling to create a textured Kinect depth emitter for FUSE. The main problem is the RGB offset. I currently use a Kinect2 but also have an Azure somewhere.

Texture formats (because I need to record and play back):
RGB: R8G8B8A8_UNorm_SRgb
DEPTH: R16_UNorm
WORLD: R32G32B32A32_Float

Test1: sampled the Kinect world texture's RGB as XYZ positions; this produces the correct pointcloud, but I'm unable to fix the RGB alignment. Following this thread, I tested @lasal's Kinect shader, but "MapRGBToDepth" produces these colors (is this for the Azure maybe?), regardless of whether I plug in the depth or the world texture.

Test2: just sample from the depth texture and offset the particles on Z. The image seems totally "squished" together. Do I have to offset this exponentially? Or, the real question: how exactly are the depth values stored in this texture?

K2PlusFuse.zip (3.7 MB)

IIRC the depth texture stores values linearly as millimeters. However, since you get the texture as R16_UNorm (i.e. 16-bit quantization mapped to a 0..1 range), you have to multiply the sampled values by (2^16 - 1) / 1000 = 65.535 to get the actual z-offset in meters.
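To illustrate the math (a minimal sketch, not FUSE/shader code): a UNorm16 texture maps the raw 16-bit integer, which here encodes depth in millimeters, onto the 0..1 range by dividing by 65535, so you just undo that quantization and convert millimeters to meters.

```python
# Sketch: recover metric depth from an R16_UNorm depth sample.
# Assumption (per the thread): the raw 16-bit value is depth in millimeters.

UNORM16_MAX = 2**16 - 1  # 65535, the integer that UNorm maps to 1.0

def unorm16_to_meters(sampled: float) -> float:
    """Undo the UNorm quantization, then convert millimeters to meters."""
    raw_mm = sampled * UNORM16_MAX  # back to the raw millimeter value
    return raw_mm / 1000.0          # millimeters -> meters

# Equivalently: meters = sampled * 65.535
# A point 2 m away is stored as 2000 mm -> sampled value 2000/65535:
print(unorm16_to_meters(2000.0 / UNORM16_MAX))  # ≈ 2.0
```

In a shader you would apply the same scale factor of 65.535 to the sampled red channel before using it as the Z offset.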


Thanks @motzi, exactly the answer I was looking for!

Yes, relevant info that should be somewhere in the documentation,
and preferably also shown e.g. when you hover over the depth texture.