I’m using ConstantProjection (DX11.Effect) to render from the view of the spectator.
The surfaces being projected onto are plain rectangular screens. The only way I can see to get the image I need to feed to a projector is to place a camera that sees a whole screen and send its output to the corresponding projector.
In this case that is quite a hassle, considering that the projected image is just a flat rectangular image.
I would like to avoid setting up a camera and instead just get the texture that is on the quad. How can I do that?
Sune
UPDATE: I have a functioning setup using cameras, but I think there is some pixel imprecision, so the images do not align exactly.
You could compute a fitting homography from the four corner coordinates of the quad and their projected positions, and use that in a TransformTexture node.
but I suspect there won’t be much performance difference compared to the proper way you outlined.
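For concreteness, here is a minimal sketch of the square-to-quad math behind that suggestion; it is not an existing vvvv node, the function names are mine, and it assumes the quad's corners are given in the order (0,0), (1,0), (1,1), (0,1). The closed form follows Heckbert's projective-mapping derivation; in practice you would build the equivalent matrix on the patch side and feed it to TransformTexture.

// Homography mapping the unit square to an arbitrary quad with corners
// p0..p3 (expected order: p0->(0,0), p1->(1,0), p2->(1,1), p3->(0,1)).
// Degenerate quads (den == 0) are not handled here.
float3x3 SquareToQuad(float2 p0, float2 p1, float2 p2, float2 p3)
{
    float2 d1 = p1 - p2;
    float2 d2 = p3 - p2;
    float2 s  = p0 - p1 + p2 - p3;   // zero for a parallelogram (affine case)

    float den = d1.x * d2.y - d2.x * d1.y;
    float g = (s.x * d2.y - d2.x * s.y) / den;
    float h = (d1.x * s.y - s.x * d1.y) / den;

    return float3x3(
        p1.x - p0.x + g * p1.x,  p3.x - p0.x + h * p3.x,  p0.x,
        p1.y - p0.y + g * p1.y,  p3.y - p0.y + h * p3.y,  p0.y,
        g,                       h,                       1);
}

// Apply: homogeneous multiply followed by the perspective divide.
float2 ApplyHomography(float3x3 H, float2 uv)
{
    float3 q = mul(H, float3(uv, 1));
    return q.xy / q.z;
}

For the inverse direction (quad to square, which is what a texture transform usually needs) you can invert the resulting matrix.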
edit: I am using ConstantProjection with World extensively, and I am sure the technique is precise, as long as the texture transform input is good.
I need the image on the surfaces.
As it is now, I have just placed some virtual projectors perpendicular to the screens, in front of/above them, but it feels like there should be a simpler way of doing this, so I don’t have to make “complicated” calculations to get the projection to fill the projector image exactly.
Hi, the only guess I have so far: you can unwrap the whole thing to texture coordinates.
For that you can try to use this technique; I can’t find the forum post, but here is an example: render2tex.zip (149.7 KB)
But the problem is that you are projecting something on top, so basically you would have to modify the ConstantProjection code so the vertex shader outputs positions derived from the UVs instead of the projected vertex positions…
Basically you can try to replace this line:

Out.PosWVP = mul(In.PosO, tWVP);

with this one (assuming the texture coordinate input is called In.TexCd, as in the standard vvvv DX11 templates):

Out.PosWVP = float4((In.TexCd.xy - 0.5) * 2 * float2(1, -1), 0, 1);
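To make that concrete, here is a minimal sketch of what the modified vertex shader could look like. It is not the actual ConstantProjection source: the struct layout, the semantics and the PosP interpolator are assumptions in the style of the stock vvvv DX11 templates; only the replaced PosWVP line above is taken from the thread.

float4x4 tWVP : WORLDVIEWPROJECTION;

struct VS_IN
{
    float4 PosO : POSITION;
    float4 TexCd : TEXCOORD0;
};

struct vs2ps
{
    float4 PosWVP : SV_Position;
    float4 TexCd : TEXCOORD0;
    float4 PosP : TEXCOORD1;    // projected position, kept for the texture lookup
};

vs2ps VS(VS_IN In)
{
    vs2ps Out = (vs2ps)0;

    // Unwrap: place each vertex at its texture coordinate instead of its
    // projected position. UVs run 0..1 with y pointing down, clip space
    // runs -1..1 with y pointing up, hence the remap and the y flip.
    Out.PosWVP = float4((In.TexCd.xy - 0.5) * 2 * float2(1, -1), 0, 1);

    // The projective lookup in the pixel shader still needs to know where
    // the vertex sits in the spectator's view, so keep computing the old
    // projected position and pass it through an extra interpolator.
    Out.PosP = mul(In.PosO, tWVP);

    Out.TexCd = In.TexCd;
    return Out;
}

Rendered into a texture the size of one screen, this pass then gives you the flat image for that surface directly, without placing any camera.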