Hey, newbie here.
I’m working on a project where I want to move and rotate a pointcloud based on a VR headset’s rotation/position in space. For now I’m trying to understand the basics of the camera/object relationship using a sample SDF object. I got it to always face the camera using the “Billboard” node, but I can’t figure out how to correctly calculate the translation for the object so that it always stays in front of the camera at a given distance, no matter how I move the camera in space. Has anyone come across this problem? I’m sure there’s probably a node or some logic I’m missing here.
Hey, for this scenario you would take the inverse of the camera’s ViewMatrix and multiply it with the transformation you want relative to the camera (e.g. a translation along the camera’s forward axis). You don’t need Billboard or Decompose for that.
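Here’s a minimal pure-Python sketch of that idea, just to show the math outside of any node system. Assumptions (not from the patch itself): row-major 4×4 matrices with translation in the last column, and an OpenGL-style camera that looks down its local −Z axis. The function names are illustrative, not actual nodes.

```python
def mat_mul(a, b):
    """Multiply two 4x4 matrices (row-major, nested lists)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(x, y, z):
    """Identity matrix with a translation in the last column."""
    return [[1.0, 0.0, 0.0, x],
            [0.0, 1.0, 0.0, y],
            [0.0, 0.0, 1.0, z],
            [0.0, 0.0, 0.0, 1.0]]

def invert_rigid(m):
    """Invert a rigid-body matrix [R | t]: the inverse is [R^T | -R^T t]."""
    r = [[m[j][i] for j in range(3)] for i in range(3)]   # transpose rotation
    t = [m[i][3] for i in range(3)]
    it = [-sum(r[i][k] * t[k] for k in range(3)) for i in range(3)]
    return [r[i] + [it[i]] for i in range(3)] + [[0.0, 0.0, 0.0, 1.0]]

# Example: camera at (1, 2, 3) with no rotation, so the view matrix is
# just a translation by (-1, -2, -3) (world -> camera space).
view = translation(-1.0, -2.0, -3.0)

distance = 2.0
# Object's world transform = inverse(view) * offset in camera space.
# -Z because of the assumed GL convention; flip the sign if your
# engine's camera looks down +Z.
object_world = mat_mul(invert_rigid(view), translation(0.0, 0.0, -distance))
# object_world now places the object 2 units in front of the camera,
# at world position (1, 2, 1) in this example.
```

The key point is that composing transforms is matrix multiplication, not addition: `inverse(view)` takes you from camera space back to world space, and multiplying the camera-relative offset on the right keeps it “glued” to the camera as it moves.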
First of all, thank you very much for such thorough replies! It helped a lot!
After progressing a bit further I stumbled upon another hurdle and since it’s connected to the same topic I wanted to ask here again instead of creating a new post.
What I have here is a position (x, y, z) and a quaternion rotation. I would like to assemble them into a matrix to achieve the same result as mentioned before. Do I simply create a new matrix and “add” the rotation and position through “Translate” nodes?
How I want it to look: