I told the AI to explain it as if it were me:
This is a very good question. Understanding this concept is a key step to getting better performance out of your patches.
The type hierarchy

First, it is important to understand that a `Spread<T>` *is* an `IReadOnlyList<T>`: it implements that interface. You can think of `IReadOnlyList<T>` as a general contract or job description that says “I am a list that you can only read from.” Since `Spread<T>` fulfills this contract perfectly, it can be connected to such an input without any explicit conversion. The other way around is not true.
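In plain C#, the relationship looks like this. A minimal sketch, assuming `Spread<T>` lives in `VL.Lib.Collections` and that the usual `ToSpread` conversion helper is available:

```csharp
using System.Collections.Generic;
using VL.Lib.Collections; // assumed namespace of Spread<T> and ToSpread

static class SpreadConversions
{
    // Free: a Spread<T> already is an IReadOnlyList<T>; no copy involved.
    public static IReadOnlyList<float> AsList(Spread<float> spread)
        => spread;

    // Not free: the other direction has to copy every element into a new Spread.
    public static Spread<float> AsSpread(IReadOnlyList<float> list)
        => list.ToSpread(); // allocates a new Spread on every call
}
```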
The reason for `IReadOnlyList<T>`: Avoiding GC Pressure

The reason the FFT node and other modern nodes output `IReadOnlyList` is to avoid a performance killer known as Garbage Collector (GC) pressure.

Every time you create a new object, like a `Spread`, you are allocating new memory. When the `Spread` from the last frame is no longer needed, it becomes garbage, and the GC has to kick in to clean up this memory. This cleanup process is not free; it consumes CPU time.
Let’s use a factory analogy:

- An efficient node like FFT is a worker with a reusable tray. For each frame, it just refills the same tray with new data. It’s fast and creates zero waste. This is the `IReadOnlyList` output.
- When you convert this to a `Spread`, you are forcing that worker to grab a brand-new cardboard box for every single frame, copy the data into it, and throw the old box away.
Cleaning up all the discarded boxes is the job of the Garbage Collector, and the cost is cumulative. A single node is no problem, which is why an isolated test shows no issues at all, but if many nodes in your patch do this, the GC has to work constantly in the background. This extra CPU load competes with your patch’s main logic. When your application has to meet a strict 60 fps deadline, this can be the cause of a missed frame, which you perceive as a stutter.
The FFT node’s `IReadOnlyList` output is a zero-allocation strategy: it gives you a fast, read-only “window” into its internal, reusable buffer. By converting it back to a `Spread` right away, you undo this benefit and create the very problem the node was designed to prevent.
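To make the analogy concrete, here is a minimal C# sketch of both workers; `FillWithData` is a placeholder for the actual DSP work:

```csharp
using System;
using System.Collections.Generic;

class FftLikeNode
{
    float[] _buffer = Array.Empty<float>();

    // Cardboard-box pattern: a brand-new array every frame that becomes garbage.
    public float[] UpdateAllocating(int binCount)
    {
        var result = new float[binCount]; // fresh allocation each frame -> GC pressure
        FillWithData(result);
        return result;
    }

    // Reusable-tray pattern: the same buffer is refilled and handed out as a
    // read-only window, so steady-state frames allocate nothing.
    public IReadOnlyList<float> UpdateReusing(int binCount)
    {
        if (_buffer.Length != binCount)
            _buffer = new float[binCount]; // only reallocate when the size changes
        FillWithData(_buffer);
        return _buffer; // a float[] already implements IReadOnlyList<float>
    }

    void FillWithData(float[] target) { /* placeholder for the actual DSP work */ }
}
```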
The real-world problem: Teaching, Legacy Nodes, and Beginner Confusion
So, if `IReadOnlyList` is better, why do so many workshops show an immediate conversion to `Spread`?

From a teaching perspective, this is the main hurdle. Students are taught that `Spread` is the fundamental, friendly, go-to tool. Then they encounter a high-performance node that outputs this scary-looking `IReadOnlyList<T>`. Their immediate impulse is to convert it back to a `Spread` just to get back to familiar ground.
This is neither the student’s nor the teacher’s fault. It’s a natural consequence of encountering a mix of modern, performant nodes alongside older, unoptimized ones that have `Spread<T>` as an input type. This outdated design in legacy nodes forces the user into the allocation, creating a pattern that seems normal but is inefficient.
The key lesson to teach is that the conversion is often unnecessary. You can (and should) use `ForEach`, `GetSlice`, and many other nodes directly on the `IReadOnlyList`. It’s just as powerful for reading data, but more performant.
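In code terms, the interface already supports everything you need for reading; the comments map each line to its rough VL counterpart (a sketch, not the nodes’ actual implementation):

```csharp
using System;
using System.Collections.Generic;

static class Reading
{
    public static void Analyze(IReadOnlyList<float> bins) // e.g. the FFT output, unconverted
    {
        int count = bins.Count;          // ~ Count
        if (count > 0)
            Console.WriteLine(bins[0]);  // ~ GetSlice by index
        foreach (var bin in bins)        // ~ ForEach region
            Console.WriteLine(bin);
    }
}
```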
The best approach for node design is:

- Inputs should be liberal: use `Sequence<T>`, `IReadOnlyList<T>`, etc., to accept Spreads, arrays, etc., without forcing allocations.
- Outputs should be conservative: for high-frequency data, output `IReadOnlyList<T>` for performance and safety. For slower-changing or constant data, `Spread` is great.
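A node that follows both guidelines could look like this hypothetical `MovingAverage` (the name and logic are just an example, not an existing node): it accepts any `IReadOnlyList<float>` as input and hands out a read-only view of an internal list that it reuses every frame:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical node: liberal IReadOnlyList<float> input, conservative
// IReadOnlyList<float> output backed by an internal, reused list.
public sealed class MovingAverage
{
    readonly List<float> _output = new();

    public IReadOnlyList<float> Update(IReadOnlyList<float> input, int window)
    {
        window = Math.Max(1, window); // guard against a degenerate window size
        _output.Clear();              // keeps capacity: no allocation at steady state
        for (int i = 0; i < input.Count; i++)
        {
            int start = Math.Max(0, i - window + 1);
            float sum = 0f;
            for (int j = start; j <= i; j++)
                sum += input[j];
            _output.Add(sum / (i - start + 1));
        }
        return _output; // read-only window onto the reused buffer
    }
}
```

Because `List<T>.Clear` keeps the underlying capacity, the node stops allocating entirely once the list has grown to the input size.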
How to fix it?
The fix for this situation is often easy: open the definition of the legacy node and change the type of its input pin from `Spread<T>` to `IReadOnlyList<T>`. Much is gained by this simple change.
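For a node defined in C#, that change is a single edit to the parameter type; a hypothetical before/after:

```csharp
using System.Collections.Generic;
using VL.Lib.Collections; // assumed namespace of Spread<T>

public static class LegacyNodes
{
    // Before: only a Spread<float> can be connected; everything else
    // must be converted (and therefore copied) first.
    public static float SumOld(Spread<float> input)
    {
        float sum = 0f;
        foreach (var v in input) sum += v;
        return sum;
    }

    // After: Spreads, arrays, and FFT-style outputs all connect without copying.
    public static float Sum(IReadOnlyList<float> input)
    {
        float sum = 0f;
        foreach (var v in input) sum += v;
        return sum;
    }
}
```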
This is a good opportunity for anyone in the community to contribute. If you find such nodes, consider updating them and sharing the fix. Many current and future students will thank you.
Cheers,
tonfilm