Hi, I often deal with the problem of detection in space. In dx9 there were Intersect nodes that worked with meshes; sometimes I built buttons with them, sometimes they let me walk on terrain, etc.
But what about dx11 and detection in space? Most of the time I am actually using things like Length, or = (first pin is the position of the object, second pin the position of the detection point, and epsilon is the size of the cube that does the detecting).
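Just to make clear what I mean, here is roughly that pattern written out as plain code (a minimal sketch with my own function names, not actual vvvv nodes): a sphere check via distance and a cube check via per-axis epsilon comparison.

```python
import math

# Minimal sketch of the "Length / = with epsilon" style of detection.
# Function names are mine, not vvvv node names.

def inside_sphere(obj_pos, detector_pos, radius):
    """Distance check, like taking the Length of the difference of two positions."""
    dx, dy, dz = (obj_pos[i] - detector_pos[i] for i in range(3))
    return math.sqrt(dx * dx + dy * dy + dz * dz) <= radius

def inside_axis_aligned_cube(obj_pos, detector_pos, epsilon):
    """Per-axis '=' with epsilon, i.e. a cube of size 2*epsilon centered on the detector."""
    return all(abs(obj_pos[i] - detector_pos[i]) <= epsilon for i in range(3))

print(inside_sphere((1, 0, 0), (0, 0, 0), 1.5))             # True
print(inside_axis_aligned_cube((1, 0, 0), (0, 0, 0), 0.5))  # False
```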
In Unity you can very easily set up raycasting or detection against meshes; if there were a few nodes that could do these things, it would be an awesome boost in functionality. I know that detecting any mesh that enters some area sounds like a job for a whole physics engine/vvvv50, but I think we would benefit from a few simpler nodes, e.g. node(detection mesh input, detection mesh transformation, mesh input, mesh transformation).
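Roughly what I imagine such a node doing internally, as a CPU sketch (my own helper names, nothing that exists in vvvv): transform the mesh by its world matrix, then run the standard Moeller-Trumbore ray-triangle test against every triangle, which is more or less what the old Intersect node and Unity raycasts give you.

```python
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-8):
    """Moeller-Trumbore ray-triangle intersection; returns distance t or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:                 # ray is parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv_det
    return t if t > eps else None

def raycast_mesh(origin, direction, vertices, triangles, world):
    """Brute-force raycast against a mesh after applying its 4x4 world transform.
    vertices: (N,3) float array, triangles: (M,3) index array."""
    hom = np.hstack([vertices, np.ones((len(vertices), 1))])
    world_verts = (hom @ world.T)[:, :3]   # transform to world space once
    best = None
    for a, b, c in triangles:
        t = ray_triangle(origin, direction, world_verts[a], world_verts[b], world_verts[c])
        if t is not None and (best is None or t < best):
            best = t
    return best  # distance along the ray to the closest hit, or None
```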
Is it hard to do? I know there are, for example, a few nodes for detecting a Kinect point cloud against a few boxes; maybe I can dig around there. But is an operation like this too heavy to execute on the CPU rather than the GPU?
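For scale: a Kinect depth frame is a few hundred thousand points, and testing those against a handful of axis-aligned boxes seems cheap enough on the CPU if it's done in a vectorized way. A rough sketch with made-up data (assuming numpy):

```python
import numpy as np

# Made-up data: ~300k points (roughly a Kinect depth frame) and a few boxes.
points = np.random.rand(640 * 480, 3).astype(np.float32)
boxes = [  # axis-aligned boxes given as (min_corner, max_corner)
    (np.array([0.2, 0.2, 0.2]), np.array([0.4, 0.4, 0.4])),
    (np.array([0.6, 0.1, 0.5]), np.array([0.9, 0.3, 0.8])),
]

for lo, hi in boxes:
    inside = np.all((points >= lo) & (points <= hi), axis=1)
    print("points inside box:", int(inside.sum()))
```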
When I was working on an installation for a client that needed touch interaction with animated meshes, I spent so much time getting the mesh transformations right and setting up Intersect that in the end I think I had to remake the whole project from dx11 back to dx9.
And now, if I have an idea for a game-like thing in vvvv in which I have to, for example, transform a cube to an arbitrary rotation and scale, how do I do detection against it in the simplest way without a physics engine? (For example, a long corridor with scale 10,1,1 and a random rotation.) In the past I even made an array of small cubes that I could arrange into the shape of the detection area I wanted, but that's not really fun.
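The simplest thing I can think of without a physics engine: instead of intersecting against the transformed cube, transform the test point by the inverse of the cube's world matrix and check it against the unit cube in local space. A sketch (my own function names, assuming the usual scale, then rotation, then translation order and a box from -0.5 to 0.5 on each axis):

```python
import numpy as np

def world_matrix(scale, rot_y, translate):
    """Build a world matrix: scale, then rotate around Y, then translate.
    (Only a Y rotation here to keep the sketch short.)"""
    s = np.diag([scale[0], scale[1], scale[2], 1.0])
    c, si = np.cos(rot_y), np.sin(rot_y)
    r = np.array([[  c, 0.0,  si, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [-si, 0.0,   c, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
    t = np.eye(4)
    t[:3, 3] = translate
    return t @ r @ s

def point_in_box(point, world):
    """Bring the point into the cube's local space and test it against
    the unit cube centered at the origin (-0.5..0.5 per axis)."""
    local = np.linalg.inv(world) @ np.append(point, 1.0)
    return bool(np.all(np.abs(local[:3]) <= 0.5))

# The "long corridor": scale (10,1,1) with a random rotation around Y.
corridor = world_matrix((10, 1, 1), np.random.uniform(0, 2 * np.pi), (0, 0, 0))
# Whether this point is inside depends on the random rotation.
print(point_in_box(np.array([3.0, 0.0, 0.0]), corridor))
```

The nice part is that rotating or scaling the detection volume then costs nothing extra, since the inverse matrix takes care of it.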
I am now kinda spoiled by how easy Unity makes this kind of thing.
thanks :3