@Devvvvs great meetup last night. Excited to see the work on the package manager.
Regarding the changes to automatic package download and execution: I see the huge benefit, particularly for newbies, but I do think there’s a pretty serious security risk.
Malicious nugets are unfortunately very real, and some of them are very nasty.
I appreciate that media arts isn’t considered a particularly high-priority sector for hackers to target, but that doesn’t help me sleep at night.
A worst case scenario:
- An attacker poses as a newbie and posts a patch they “need help with” on the forums or in chat.
- One of our very friendly experts downloads it and runs it, which automatically executes vl code and now potentially nugets and their install scripts too. Here’s an overview of everything a nuget can do when it installs.
- In the truly worst case this is a Shai-Hulud-style attack (a recent series of massive attacks on the node.js ecosystem) specifically targeted at developers: it hunts for developer credentials like git SSH keys.
- It then publishes new packages containing more malware.
- Even more developers get affected and the cycle continues. Some big names got infected this way.
Solutions
Eight ideas
1. Ask before installing nugets
Simply add a step where the user is prompted: “this patch needs to download XYZ packages, continue?”
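To make the idea concrete, here’s a minimal sketch of such a gate. The function name and the (name, version) shape are my own invention for illustration, not gamma’s actual API:

```python
def confirm_install(packages, ask):
    """Prompt before downloading packages (hypothetical sketch).

    `packages` is a list of (name, version) tuples the patch references;
    `ask` is a callable so the prompt can be tested without stdin."""
    listing = ", ".join(f"{name} {version}" for name, version in packages)
    answer = ask(f"This patch needs to download {listing}. Continue? [y/N] ")
    # Default to "no": nothing installs without explicit consent.
    return answer.strip().lower() == "y"

wants = [("Newtonsoft.Json", "13.0.3")]
print(confirm_install(wants, lambda prompt: "n"))  # -> False
print(confirm_install(wants, lambda prompt: "y"))  # -> True
```

The important design point is the default: declining should be the zero-effort path.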
2. Definitive option to ‘open but don’t run’
If a user is suspicious of a vl file then they could open it with an easy menu option like ‘open stopped’, so it doesn’t run as soon as it’s opened. Then the user can review the code.
I think this is already possible: currently you can stop gamma with F8, open a file, and it shouldn’t run for even one frame, right?
All the same I would have more confidence with an explicit ‘open but don’t run’ command in the top left menu.
3. Sandbox-lite
Treat any vl file containing filesystem-write or network-fetch nodes (or other nodes that could be used to install or persist malware) the same as an external package, where gamma checks with the user before running it.
Then at least you get a warning first.
If you think it’s suspicious you can instead click a button like ‘open but don’t run’ and review what’s going on before it runs, perhaps with the suspicious nodes loaded into the error list so you can jump to them.
I guess anything that runs external code directly from C# should also be treated as an external package.
I don’t know if this is possible.
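For illustration, a rough sketch of what flagging “suspicious” nodes could look like, assuming a hypothetical watchlist of node names (real vl node names will differ):

```python
# Hypothetical node-category watchlist; the real vl node names will differ.
WATCHLIST = {"File.Write", "HTTP.Get", "Process.Start", "Shell.Execute"}

def suspicious_nodes(patch_nodes):
    """Return nodes that should trigger the same 'check with the user'
    gate as an external package (sketch under assumed node naming)."""
    return sorted(n for n in patch_nodes if n in WATCHLIST)

patch = ["Math.Add", "HTTP.Get", "File.Write"]
print(suspicious_nodes(patch))  # -> ['File.Write', 'HTTP.Get']
```

The output list is exactly what could be surfaced in the error list so the user can jump to each node before anything runs.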
4. Vulnerability scan
The package manager should use NuGet’s own vulnerability database API and scan every package the user tries to install for known vulnerabilities.
If vulnerabilities are found, overriding that and installing anyway should be high-friction UX, so newbies can’t just click through it.
It should probably also scan for deprecated packages, in case they were deprecated due to a security issue.
Note I don’t think NuGet’s own “nuget install xyz” command automatically does a vulnerability scan first, which seems quite insane to me.
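As a rough illustration, here’s a sketch of checking an installed version against vulnerability entries in roughly the shape nuget.org’s vulnerability index uses (package ids mapped to affected version ranges). The sample data and the minimal range parser are simplified assumptions, not the real client: NuGet ranges have more forms than this handles.

```python
def parse_range(spec):
    """Parse a NuGet-style version range like "[1.0.0, 1.4.2)" or "(, 13.0.1)".
    Returns (low, low_inclusive, high, high_inclusive); None means unbounded.
    Minimal sketch - real NuGet ranges have more forms than this."""
    lo_inc = spec[0] == "["
    hi_inc = spec[-1] == "]"
    lo, hi = (part.strip() for part in spec[1:-1].split(","))
    return (lo or None, lo_inc, hi or None, hi_inc)

def version_key(v):
    # Naive numeric compare; real NuGet versions also have prerelease tags.
    return tuple(int(p) for p in v.split("."))

def in_range(version, spec):
    lo, lo_inc, hi, hi_inc = parse_range(spec)
    v = version_key(version)
    if lo and (v < version_key(lo) or (v == version_key(lo) and not lo_inc)):
        return False
    if hi and (v > version_key(hi) or (v == version_key(hi) and not hi_inc)):
        return False
    return True

# Hypothetical excerpt in the shape of the vulnerability index:
# lowercase package id -> list of advisories with affected version ranges.
vulns = {"examplelib": [{"severity": 2, "versions": "[1.0.0, 1.4.2)"}]}

def audit(package_id, version):
    """Return the advisories that affect this exact package version."""
    return [a for a in vulns.get(package_id.lower(), [])
            if in_range(version, a["versions"])]

print(audit("ExampleLib", "1.2.0"))  # affected -> one advisory
print(audit("ExampleLib", "1.4.2"))  # patched version -> []
```

Anything non-empty from audit() is what should trigger the high-friction warning.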
5. Ongoing vulnerability scan
When a project is opened (or every 24 hours, etc.) the package manager checks all referenced packages for newly disclosed vulnerabilities, again using the NuGet API, and warns the user if any are found.
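The rescan policy itself is trivial; something like this sketch (names and the 24-hour default are mine):

```python
from datetime import datetime, timedelta

def needs_rescan(last_scan, now=None, interval=timedelta(hours=24)):
    """True when the referenced packages should be re-checked against the
    vulnerability feed (hypothetical policy: on open, or every 24 hours)."""
    if last_scan is None:  # never scanned, e.g. a freshly opened project
        return True
    now = now or datetime.utcnow()
    return now - last_scan >= interval

print(needs_rescan(None))                                           # -> True
print(needs_rescan(datetime(2024, 1, 1), datetime(2024, 1, 3)))     # -> True
print(needs_rescan(datetime(2024, 1, 1), datetime(2024, 1, 1, 5)))  # -> False
```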
6. Version Pinning
The default option should always be to install the exact version of a dependency referenced by a patch.
Upgrading a dependency to latest should always be a manual action.
(I think this is how it works already).
This helps block supply-chain attacks where new versions of a previously safe dependency are malicious. Blindly upgrading a whole tree of hundreds of dependencies to latest is a common attack vector in the npm ecosystem.
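A sketch of the resolution rule I mean, where the pinned version always wins and floating to latest is never silent (all names hypothetical):

```python
def resolve(dependency, available):
    """Pick the version to install (sketch). `dependency` is (id, pinned);
    the exact pinned version wins by default - upgrading to latest must
    be an explicit, separate user action."""
    name, pinned = dependency
    if pinned in available.get(name, []):
        return pinned  # exact version the patch references
    raise LookupError(f"{name} {pinned} not found; refusing to silently float to latest")

available = {"SomeLib": ["1.0.0", "1.2.0", "2.0.0"]}
print(resolve(("SomeLib", "1.2.0"), available))  # -> 1.2.0, not 2.0.0
```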
7. Cooldown
Vulnerabilities in brand-new packages are usually discovered within a certain timeframe, so simply adding a cooldown to make people think twice before installing brand-new stuff can slow the spread of major attacks.
The package manager would then display a warning when the user wants to install a package published less than X time ago (7 days by default, but user-configurable).
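Sketched out, the check could be as simple as this (the 7-day cutoff and the wording are placeholders):

```python
from datetime import datetime, timedelta

def cooldown_warning(published, now=None, cooldown=timedelta(days=7)):
    """Return a warning string when a package version was published less
    than `cooldown` ago (7 days by default, user-configurable). Sketch only."""
    now = now or datetime.utcnow()
    age = now - published
    if age < cooldown:
        return f"Published only {age.days} day(s) ago - brand new, think twice."
    return None  # old enough, no warning

print(cooldown_warning(datetime(2024, 6, 1), datetime(2024, 6, 3)))  # warns
print(cooldown_warning(datetime(2024, 1, 1), datetime(2024, 6, 3)))  # -> None
```

The publish date is available per version from the package feed, so no extra data source is needed.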
8. Real Sandbox
This is huge of course, but it becomes even more relevant now that AI agents might accidentally wreck your machine.
In the world of Visual Studio Code and its clones (Cursor, Google Antigravity, etc.) you can work in “SSH” mode: you run the IDE locally, but compilation, execution and package operations all happen on a remote machine. That remote machine is typically a VM but can also be a real machine (which in our case is probably needed to make e.g. shaders work properly on a real GPU).
Notably, this second machine has IO and a file system completely isolated from the developer’s local machine, which probably also holds the developer’s whole life: passwords, logged-in browser sessions, etc.
That’s a big step, BUT we’ve done it before: this is boygrouping from the vvvv beta days. And it was a great feature, letting you program from your laptop across 10 machines rigged up in the air next to beamers. It might even drive revenue, since more vvvv licences would be required? (Although I would appreciate it if the standard licence included remote development on one target.)
I know much of this is hard and will have complications beyond what I’m describing here, but I think that given enough time we will definitely get bitten by malicious nugets, if not by an actual targeted attack on the vvvv community.
Here’s my old post on this topic: