Hi all,
I am struggling to create a textured Kinect depth emitter for FUSE. The main problem is the RGB offset. I currently use a Kinect2, but I also have an Azure somewhere.
Texture formats (because I need to record and play back):
RGB: R8G8B8A8_UNorm_SRgb
DEPTH: R16_UNorm
WORLD: R32G32B32A32_Float
Test1: sample the Kinect world texture and use its RGB values as XYZ positions. This produces the correct pointcloud, but I am unable to fix the RGB colors. Regarding this thread, I tested @lasal's Kinect shader, but "MapRGBToDepth" produces these colors (is this for the Azure maybe?), regardless of whether I plug in the depth or the world texture. A minimal sketch of what I'm doing is below.
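For reference, this is roughly what Test1 boils down to (an HLSL-style sketch of my setup, not lasal's shader; `WorldTex`, `RgbTex`, `Sampler` and `SampleParticle` are just my placeholder names):

```hlsl
// Sketch of Test1: read XYZ from the world texture and color the
// particle by sampling the RGB texture at the same UV. This is where
// the offset shows up: the depth/world and RGB cameras have different
// intrinsics and fields of view, so a 1:1 UV lookup cannot line up.
Texture2D<float4> WorldTex;   // R32G32B32A32_Float, XYZ positions
Texture2D<float4> RgbTex;     // R8G8B8A8_UNorm_SRgb
SamplerState Sampler;

void SampleParticle(float2 uv, out float3 pos, out float4 col)
{
    pos = WorldTex.SampleLevel(Sampler, uv, 0).xyz; // correct pointcloud
    col = RgbTex.SampleLevel(Sampler, uv, 0);       // wrong colors (offset)
}
```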
Test2: just sample from the depth texture and offset the particles on Z. The image looks totally "squished" together. Do I have to offset this exponentially? Or rather, the real question is: how exactly are the depth values stored in this texture?
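Here is Test2 with my current guess about the scaling (assuming the Kinect2 depth frame is raw millimeters written into R16_UNorm, so sampling returns mm / 65535; the 65.535 factor below is that assumption, not something I have verified; `DepthTex` and `Displace` are again placeholder names):

```hlsl
// Sketch of Test2: displace a grid of particles on Z by the sampled
// depth. If the depth is raw millimeters stored as R16_UNorm, sampling
// returns value / 65535, so the whole scene collapses into a thin slab
// unless it is rescaled -- which would explain the "squished" look.
Texture2D<float> DepthTex;    // R16_UNorm
SamplerState Sampler;

float3 Displace(float2 uv)
{
    float d = DepthTex.SampleLevel(Sampler, uv, 0); // normalized 0..1
    float meters = d * 65.535;                      // 65535 mm -> meters (assumed)
    return float3(uv * 2.0 - 1.0, meters);          // offset on Z only
}
```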
K2PlusFuse.zip (3.7 MB)