I am currently trying to understand a bit more of DX11 in vvvv. Right now I am stuck on the alpha channel and how to get it to work with a simple quad.
Even with the help of the Blend node, I think I get only half of the value range to work with on the alpha.
With Blend (DX11.RenderState) everything seems to work fine.
The Blend Advanced help patch mentions some math involved in the pixel color: maybe that math is the culprit (clamping values at some point, maybe?).
Also, Alpha To Coverage = 0 makes things smooth, although I don't know whether in other scenarios you would end up with what you expect.
Nonetheless, there could be something wrong with Blend Advanced.
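For reference, fixed-function alpha blending boils down to one fixed equation per pixel. Here is a minimal sketch in D3D11 effects syntax of what a plain alpha-blend preset roughly corresponds to (the state names are real D3D11 effects states, but the preset itself is illustrative, not the actual internals of the vvvv node):

```hlsl
// Classic "over" alpha blending:
//   out.rgb = src.rgb * src.a + dst.rgb * (1 - src.a)
BlendState AlphaBlend
{
    AlphaToCoverageEnable    = FALSE;  // i.e. Alpha To Coverage = 0
    BlendEnable[0]           = TRUE;
    SrcBlend                 = SRC_ALPHA;
    DestBlend                = INV_SRC_ALPHA;
    BlendOp                  = ADD;
    SrcBlendAlpha            = ONE;
    DestBlendAlpha           = INV_SRC_ALPHA;
    BlendOpAlpha             = ADD;
    RenderTargetWriteMask[0] = 0x0F;
};
```

If alpha only ever seems to reach half strength, it may also be worth checking whether the colors are premultiplied somewhere along the chain; premultiplied input wants SrcBlend = ONE instead of SRC_ALPHA.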
Did you try Blend (DX11.RenderState Advanced)? Also, there is a difference between layer blend and texture blend. If I understand you correctly, you are looking for a blend TextureFX.
Yeah, this was my first try, to get it done with Blend Advanced, but it didn't work out.
As far as I understand it, I need not texture blending but layer blending, so that if one object is behind another I can see it through the first object.
That must be layer blending, right?
That's not exactly right. A 2D blend, where you programmatically blend one picture into another (as in Photoshop), is what you do with a texture blend.
Layer blend is basically a render state which says what to do with a pixel if something has already been written to that pixel; it happens in the output-merger stage, right after rasterization, and those modes are hardcoded inside DirectX, quite limited and used mostly in some special cases. (GPU vendors promise programmable blends for layers in the future…) This is a short description, but you can google render states yourself…
Maybe I got you wrong, but I think you are looking for Photoshop-style blends.
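To make that concrete: a texture blend in this sense is just arithmetic between two sampled pixels in a pixel shader, so any formula you can write becomes a blend mode. A minimal sketch of a Subtract-style blend (texture and sampler names are made up; this is not vvvv's actual TextureFX code):

```hlsl
Texture2D Background : register(t0);
Texture2D Foreground : register(t1);
SamplerState LinearSampler : register(s0);

// Photoshop-style "Subtract" blend of two textures, fully programmable.
float4 PS_Subtract(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    float4 bg = Background.Sample(LinearSampler, uv);
    float4 fg = Foreground.Sample(LinearSampler, uv);
    float3 rgb = saturate(bg.rgb - fg.rgb);  // clamp result to [0,1]
    return float4(rgb, bg.a);                // keep background alpha
}
```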
Is one slice of spreaded quads in 3D space like a picture in Photoshop?
I am not that good with the whole shader thing, but I think I have to get more into it if I want to continue with DX11 ;)
Is this layer blending?
Because this is what I would like to achieve using the Blend Advanced node, with all the other options like Subtract, for example.
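As a side note, the fixed-function pipeline does offer a subtract operator, so a renderstate-level Subtract over geometry exists; a sketch in effects syntax (illustrative only, not necessarily what Blend Advanced sets internally):

```hlsl
// REV_SUBTRACT computes: out = dst * DestBlend - src * SrcBlend,
// i.e. the newly drawn layer is subtracted from what is already
// in the frame buffer.
BlendState SubtractBlend
{
    BlendEnable[0] = TRUE;
    SrcBlend       = ONE;
    DestBlend      = ONE;
    BlendOp        = REV_SUBTRACT;
    SrcBlendAlpha  = ZERO;
    DestBlendAlpha = ONE;
    BlendOpAlpha   = ADD;
};
```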
No, it's geometry.
You get Photoshop-like layers when you mix textures.
DX (and OpenGL is the same) does “more” than Photoshop when it comes to blending, in the sense that you can not only blend the pixels of textures, but also blend the pixels that represent the geometry (onto which the textures are applied).
That's why you have Blend (DX11.RenderState) and Blend (DX11.TextureFX): the first is for geometry, the other for textures.
Why don't you create three renderers? In the first you draw one red [1,0,0] and one cyan [0,0.5,1] quad and animate them (rotating on one axis), using Blend (DX11.RenderState); in the second, just one quad with changing colors. Then you take the textures these first two renderers output, blend them with a TextureFX, and feed the result to a fullscreen quad to be seen in the third renderer.
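The texture-blend step of that experiment could then be any per-pixel formula over the two renderer outputs; for example, a minimal sketch (names are made up) of a classic alpha-over composite:

```hlsl
Texture2D RendererA : register(t0);  // red/cyan quads renderer
Texture2D RendererB : register(t1);  // color-changing quad renderer
SamplerState Linear : register(s0);

// "Over" composite of A on top of B, done per pixel on the two
// rendered textures instead of per draw call by the render state.
float4 PS_Over(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    float4 a = RendererA.Sample(Linear, uv);
    float4 b = RendererB.Sample(Linear, uv);
    float3 rgb   = a.rgb * a.a + b.rgb * (1.0 - a.a);
    float  alpha = a.a + b.a * (1.0 - a.a);
    return float4(rgb, alpha);
}
```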