Hey y’allll
Back on this very sexy topic.
So I finally put together a little test patch to try to reproduce this strangely long init time when communicating with numerous OSC clients over UDP.
I don’t have the 64 devices I used to talk to with me, so I set up the patch to send to localhost on 256 different ports.
I tried the two methods mentioned earlier:
- method 1: using a regular OSCClient in a ForEach loop for the 256 different destinations (each OSCClient opens and uses its own socket, so 256 total)
- method 2: a modified OSCClient_UdpSocketInput that allows sharing one single socket across the 256 senders.
Each method is enclosed in a ManageProcess so you can enable/disable them separately and compare performance (a rough Python equivalent of both methods is sketched just below).
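For anyone who wants to poke at this outside of VL, here’s a rough Python equivalent of the two methods. The base port and OSC address are made up for the sketch, and only a minimal `,f`-typed OSC message is built by hand:

```python
import socket
import struct
import time

NUM_PORTS = 256      # mirrors the 256 destinations in the test patch
BASE_PORT = 9000     # hypothetical base port, not from the original patch

def osc_message(address: str, value: float) -> bytes:
    """Build a minimal OSC message with a single float32 argument."""
    def pad(b: bytes) -> bytes:
        # OSC strings are null-terminated and padded to a multiple of 4 bytes
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

msg = osc_message("/lfo", 0.5)

# Method 1: one UDP socket per destination, like one OSCClient each
t0 = time.perf_counter()
socks = [socket.socket(socket.AF_INET, socket.SOCK_DGRAM) for _ in range(NUM_PORTS)]
for i, s in enumerate(socks):
    s.sendto(msg, ("127.0.0.1", BASE_PORT + i))
print(f"method 1, {NUM_PORTS} sockets: {time.perf_counter() - t0:.3f}s")

# Method 2: one shared UDP socket for all destinations
t0 = time.perf_counter()
shared = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for i in range(NUM_PORTS):
    shared.sendto(msg, ("127.0.0.1", BASE_PORT + i))
print(f"method 2, shared socket: {time.perf_counter() - t0:.3f}s")
```

On raw sockets both methods should be near-instant for 256 sends, so if the VL version isn’t, the cost presumably sits in the node setup rather than in the OS.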
You’ll notice that with either one, when you hit F9, it takes about 2 to 5 seconds (at least on my machine here, with an i9 CPU) before the dummy LFO starts running.
Even if you disable both sending methods, just having the 256 OSCServers ready for reception produces a significant (almost as long) lag before the patch becomes responsive.
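To check whether the receiving side’s lag comes from the OS socket layer itself, a bind-timing sketch like this (ports again hypothetical) separates raw socket binding from whatever each OSCServer builds on top of it (receive threads, buffers, etc.):

```python
import socket
import time

NUM_PORTS = 256
BASE_PORT = 9000   # hypothetical; use whatever range the patch actually listens on

# Time how long it takes just to create and bind 256 listening UDP sockets
t0 = time.perf_counter()
servers = []
for i in range(NUM_PORTS):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("127.0.0.1", BASE_PORT + i))
    servers.append(s)
print(f"bound {NUM_PORTS} UDP sockets in {time.perf_counter() - t0:.3f}s")
```

Binding a few hundred UDP sockets at OS level is typically a matter of milliseconds, so if that holds on your machine too, a multi-second lag would point at per-server initialization in the patch/runtime rather than at the sockets themselves.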
In my past project (it wasn’t local communication, and it was back in 2022, so a much older vvvv release, an older computer, etc.), the delay was more like 30 seconds or longer (enough to fear vvvv had crashed at every launch, until it came back to life).
Here, a 5-second lag is totally tolerable in my case, but I’m still curious: what takes so long, and why?
Is there a rational/normal explanation for this, or am I putting my finger on something fishy?
I’m trying to avoid any “way worse than that” surprise once I’m back in the context of the full real setup.
Thanks for your insights!
T
Debug OSCClient_UdpSocke 20250213.vl (299.8 KB)