I’m working on real-time texture generation from a spread of colors. Most of the time, this involves a relatively small spread (~15000 colors) to create a small texture (~300x50px). I use a SpreadBuilder to manipulate sub-spreads of colors, as suggested in the Gray Book. At the end of the process, I convert the SpreadBuilder to an immutable type via ToSpread and feed it to DynamicTexture2D.
One thing I’ve noticed is that I get periodic (quite literally) lag spikes, even with this small amount of data. The larger the incoming spread, the more frequent the lags become. Removing ToSpread eliminates the lag completely, although that doesn’t seem to be the intended workflow for DynamicTexture2D.
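For reference, here is roughly what my per-frame flow looks like, sketched in plain C# with List<T> and ImmutableArray<T> standing in for SpreadBuilder and Spread (the TexturePatch class and the RGBA struct are just placeholders; the returned array is what gets fed to DynamicTexture2D):

```csharp
using System.Collections.Generic;
using System.Collections.Immutable;

// A 16-byte color struct (4 floats), standing in for the color type in the patch.
struct RGBA { public float R, G, B, A; }

class TexturePatch
{
    // Created once, like a SpreadBuilder held in the Create operation.
    readonly List<RGBA> builder = new List<RGBA>();

    // Called every frame.
    public ImmutableArray<RGBA> Update(IEnumerable<RGBA> colors)
    {
        builder.Clear();
        builder.AddRange(colors); // sub-spread manipulations happen here

        // ToSpread: copies all ~15000 colors into a fresh immutable array
        // every frame (~15000 * 16 bytes = ~240 kB of new garbage per call).
        return builder.ToImmutableArray();
    }
}
```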
In this case ToSpread makes an unnecessary copy that just fills RAM and puts a burden on the garbage collector, which leads to the hiccups you describe. The great thing about the DynamicTexture… nodes is that they allow you to connect a variety of collections and don’t require a Spread as input.
It would be interesting to understand what makes you feel that the version that works as intended is not the intended way of doing it?!
Well, maybe I got it wrong, but I’ve read this in the Gray Book:
We would like to encourage you to use spread builders only locally: to create spreads. Pass around the spread that you just built. Don’t pass the builder itself. Even when you need to store a spread for later usage: store the spread, not the builder. It helps when reasoning about a patch.
To me that means that spread builders aren’t meant to be used directly outside of spread manipulations (creating spreads, adding/removing items from a spread, etc.).
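In code terms (same .NET stand-ins as in my sketch above, RGBA as defined there), I read that guideline like this:

```csharp
// The builder stays local to the function; only the finished, immutable
// spread is passed around or stored.
ImmutableArray<RGBA> BuildGradient(int count)
{
    var builder = new List<RGBA>(count);   // local, short-lived
    for (int i = 0; i < count; i++)
        builder.Add(new RGBA { R = i / (float)count, A = 1f });
    return builder.ToImmutableArray();     // "pass around the spread you just built"
}
```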
Although passing a spread builder into the dynamic texture works (well, almost), I still see some obscure difference that is a bit hard to track down: after some period of time the dynamic texture stops updating itself. Moreover, this behaviour differs between machines (on some of them it almost never happens). I haven’t noticed such an effect when I was feeding a spread as the input, so that’s what brought me to that conclusion.
You might be removing the data before it is copied to the GPU.
In the patch, it looks like you are clearing the SpreadBuilder directly after the region, as the last operation in the frame.
fill -> copy -> clear
Try the clear as the first operation (of the next frame), directly before you fill it. This way, the data will be present during the rendering call after the main loop update.
Edit: maybe you are adding to the SpreadBuilder every frame? Try initializing it with the size you need and then just set the data, overwriting it every frame instead of clearing.
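In C# terms (List<RGBA> again standing in for the SpreadBuilder, RGBA being a 4-float color struct), the two suggestions look roughly like this:

```csharp
// Variant A: clear at the START of the frame, so last frame's data is
// still alive when the renderer copies it to the GPU after the update.
void UpdateClearFirst(List<RGBA> builder, IReadOnlyList<RGBA> colors)
{
    builder.Clear();                 // clear -> fill -> (render copies later)
    for (int i = 0; i < colors.Count; i++)
        builder.Add(colors[i]);
}

// Variant B: size the builder once, then overwrite in place every frame;
// no Clear, no growth, no reallocation.
void UpdateOverwrite(List<RGBA> builder, IReadOnlyList<RGBA> colors)
{
    for (int i = 0; i < colors.Count; i++)
        builder[i] = colors[i];      // assumes the builder already has this count
}
```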
Hey, thank you for the suggestion! I’ve tried clearing the SpreadBuilder as you proposed, but haven’t noticed a significant difference: the issue still appears randomly from time to time. I’ll test it a bit longer to be sure (it usually happens every 5-20 minutes at “normal” speed).
No, the SpreadBuilder is created once (in the Create operation).
I’ve also tried your idea (initialization with a fixed size) before, but unfortunately it didn’t perform well in my particular case. The output texture size is variable and can be relatively big in some cases. Since that process node is used in different areas of the patch, I thought it would be too wasteful to keep a big SpreadBuilder around for potentially small textures.
I’ve replicated this behaviour in a simple example. The way I generate data in my patch is different, of course, but the logic is the same: create the SpreadBuilder once > add some data > feed it into the DynamicTexture > clear the SpreadBuilder (or do it at the beginning of the next frame, as in the example). Here the DynamicTexture only updates from time to time; I haven’t noticed that in my patch, maybe because of the lower rate of data changes.
Whoops, seems you stumbled over an old one here. The internally used GraphicsData node was not working correctly with mutable collection types: it was using a cache region which does not trigger for a spread builder when its count changes. Fixed in the upcoming preview build, thanks for pointing it out and posting a patch showing it.
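For the curious, the failure mode looks roughly like this; this is only an illustration, not the actual GraphicsData code:

```csharp
// A cache that invalidates only when the input *reference* changes never
// notices a SpreadBuilder being mutated in place.
class NaiveCache
{
    object lastInput;

    public bool NeedsUpload(object input)
    {
        if (ReferenceEquals(input, lastInput))
            return false;   // same builder instance: cache hit, stale GPU data
        lastInput = input;
        return true;
    }
}
// An immutable Spread from ToSpread is a new object each frame, so there
// the cache invalidates correctly, at the price of the copy.
```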
Regarding the lag you get when you call ToSpread: if a newly allocated object is large enough (> 85,000 bytes), the dotnet runtime allocates it on the so-called large object heap (LOH), which in turn only gets cleaned up by a Gen2 garbage collection, the most expensive kind. In other words, if your patch allocates large objects every other frame, it will force a Gen2 collection at some point.
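You can see the threshold at work in plain dotnet:

```csharp
using System;

class LohDemo
{
    static void Main()
    {
        var small = new byte[84000]; // below the 85,000-byte threshold
        var large = new byte[85000]; // at/above the threshold: goes to the LOH

        // LOH objects are reported as generation 2 right after allocation.
        Console.WriteLine(GC.GetGeneration(small)); // 0
        Console.WriteLine(GC.GetGeneration(large)); // 2
    }
}
```

Assuming a 16-byte RGBA color (4 floats), a spread of ~15000 colors is around 240,000 bytes, so every ToSpread copy in a patch like yours lands on the LOH.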
There was some hope on the horizon just recently regarding that topic, with a different garbage collector for dotnet, but after some initial tests we had to abandon it for now. Hopefully Microsoft recognizes the demand for it and adds it as an option to choose from in the future. Until then we’ll need to handle these cases ourselves by, as you did in your patch, allocating such large objects and collections only once and re-using them as best we can.
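In sketch form, with a made-up MaxColors worst case, the allocate-once strategy could look like this:

```csharp
using System;

class ReusedPixelBuffer
{
    const int MaxColors = 64 * 1024;                    // hypothetical worst case

    // One large allocation for the lifetime of the patch: it sits on the
    // LOH, but since nothing new is allocated there, it never forces a
    // Gen2 collection by itself.
    readonly float[] pixels = new float[MaxColors * 4]; // 4 floats per RGBA color

    // Overwrite in place every frame; only the first rgbaData.Length floats are valid.
    public ArraySegment<float> Write(ReadOnlySpan<float> rgbaData)
    {
        rgbaData.CopyTo(pixels);
        return new ArraySegment<float>(pixels, 0, rgbaData.Length);
    }
}
```

Downstream consumers then read only the valid slice, which is the same idea as feeding the SpreadBuilder directly and tracking its Count.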