I’ve been doing a lot of neural-network-assisted programming lately. Quite often it lets me ship whole projects without writing a single line of code myself. Right now I’m working a lot with Cursor. Everything you’re discussing here feels very important and very interesting.
I can already see that I need to add some RAG-like ignore rules for *.vl files in Cursor so it doesn’t pull them into the context. But sometimes it still does, and then it suddenly produces insights like: “you already have these nodes connected like this and that”. That’s pretty mind-blowing. It would be great if Cursor could skip raw *.vl files and all the “sugar” / extra noise, and include only the important parts in the context: the relationships between nodes, their structure, their inputs and outputs, and the help texts on the nodes, for example.
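In case it helps anyone trying the same thing: one way to keep raw patch files out of the context is a `.cursorignore` file in the project root (it uses gitignore syntax). As noted above, Cursor doesn’t always fully respect it, but it cuts down most of the noise:

```
# Keep raw vvvv documents out of Cursor's AI context
*.vl
```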
But what I actually want to point out is a topic I’ve raised before, one that I see as key for programming with neural networks:
You can make vvvv much more accessible for neural-network-driven workflows right now just by allowing seamless C# code execution — neural networks constantly generate code for vvvv that can be run with a simple copy-paste, but importing this code from AI into vvvv is a real nightmare at the moment. And thanks to this forum in particular, the models are already quite good at understanding many of vvvv’s limitations and nuances. How about starting not with an MCP server, but with making it easier for AI to drop C# code straight into a vvvv project?
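For context, the copy-paste path already exists in principle: vvvv gamma can reference a .cs file, and the public static methods in it show up as nodes. So “dropping AI-generated C# into a project” could, ideally, be as small as saving one file. A minimal sketch of the kind of code the models produce (`MyNodes` and `Gain` are made-up names, not part of any existing project):

```csharp
// In vvvv gamma, a plain static class in a referenced .cs file is turned
// into nodes: each public static method becomes a node, its parameters
// become input pins and its return value becomes the output pin.
public static class MyNodes
{
    // Shows up as a "Gain" node with two float inputs and one float output.
    public static float Gain(float input, float factor) => input * factor;
}
```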
MCP and RAG would definitely make the whole vibe-coding experience with vvvv MUCH better. I’m sure I don’t even need to “throw ideas” around here about how exactly that experience could be improved (reading the structure of *.vl files without flooding the context with garbage, or a proper database of how different things are built and how they work). I just want to draw some attention to what can already be improved today. The last thing I personally need is an MCP server that patches nodes for me that I can already patch myself.
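As a rough illustration of the “*.vl structure without the garbage” idea: .vl documents are XML underneath, so a summarizer would only have to walk the tree and keep the parts that matter (node names, pins, help texts) while dropping layout and serialization noise. The element and attribute names in this sketch (`Node`, `Pin`, `Name`) are invented for illustration — the real .vl schema is different:

```csharp
using System;
using System.Xml.Linq;

static class VlSummary
{
    static void Main()
    {
        // Hypothetical, simplified .vl-like document; the real schema differs.
        var doc = XDocument.Parse(
            "<Patch><Node Name=\"LFO\"><Pin Name=\"Period\"/></Node></Patch>");

        // Emit only the structural skeleton: node names and their pins.
        foreach (var node in doc.Descendants("Node"))
        {
            Console.WriteLine(node.Attribute("Name")?.Value);
            foreach (var pin in node.Elements("Pin"))
                Console.WriteLine("  pin: " + pin.Attribute("Name")?.Value);
        }
    }
}
```

Something like this, fed into the context instead of the raw file, would give the model exactly the “nodes, connections, inputs and outputs” picture without the bloat.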