My patch transforms special video footage into a parallel-perspective animation. For this I use the following workflow:
FileStream (avi) -> VideoTexture -> Buffer (EX9.Texture) -> Quad with LinearSpread
So all frames of the video are loaded into the buffer (up to 1600 frames). I then show a slice of each frame in the buffer with a LinearSpread and animate the position of the slice (i.e. which part of the frames I see in the renderer).
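To make the idea clearer, here is a rough Python/NumPy sketch of that slicing, purely as an illustration of the technique and not the vvvv patch itself (the loader and names are made up for the example):

```python
# Sketch of the slicing idea: sample one vertical column from every buffered
# frame, lay the columns out side by side, and animate which column is sampled.
import numpy as np

def slit_scan(frames: np.ndarray, slice_pos: float) -> np.ndarray:
    """frames: (num_frames, height, width, channels); slice_pos in [0, 1]."""
    num_frames, height, width, channels = frames.shape
    col = int(slice_pos * (width - 1))        # which column to sample (animated over time)
    slices = frames[:, :, col, :]             # one column per frame -> (num_frames, height, channels)
    return np.transpose(slices, (1, 0, 2))    # side by side: (height, num_frames, channels)

# Example usage (hypothetical loader):
# frames = load_all_frames("footage.avi")
# for t in np.linspace(0.0, 1.0, 200):
#     image = slit_scan(frames, t)            # what the renderer would show each tick
```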
This works great so far, but only with low resolutions like PAL/NTSC. As soon as it comes to HD or even higher resolutions, the Buffer node turns red, vvvv runs at 1 fps and misses frames of the video footage.
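Just to give an idea of the amount of data involved, assuming uncompressed 8-bit RGBA textures (4 bytes per pixel; the actual texture format in the patch may differ):

```python
# Back-of-the-envelope memory cost of buffering every frame uncompressed.
def buffer_size_gb(width: int, height: int, frames: int, bytes_per_pixel: int = 4) -> float:
    return width * height * bytes_per_pixel * frames / 1024**3

print(buffer_size_gb(720, 576, 1600))    # PAL, 1600 frames:      ~2.5 GB
print(buffer_size_gb(1920, 1080, 1600))  # 1080p, 1600 frames:   ~12.4 GB
print(buffer_size_gb(1920, 1920, 1600))  # 1920x1920, 1600 frames: ~22 GB
```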
So here are my questions:
- Is there another way of buffering the textures?
- Would it bring more performance to use a directory with all frames of the video as JPGs or PNGs? (That would work for this project, but I am also working on doing it with live camera footage.)
- Does Buffer (EX9.Texture) buffer on the GPU or in RAM?
- Is a 1920x1920 resolution better for the patch, or should I stay at 1920x1080? (I heard it's better to have files like that in vvvv?)