Computer Security Will Improve If It Survives That Long
The next few months are scary
Anthropic has launched Glasswing, a program to help software vendors fix all of their security problems before bad guys use them to take over the world.
For context, the state of computer security is, and always has been, an utter nightmare. There are massive security problems all over the place, just waiting to be discovered. Security researchers find more of them all the time, limited only by the effort they put in. The only reason the entire world hasn’t been hacked into oblivion long ago is that professional security researchers are, for the most part, good people trying to do good and defend rather than attack.
Anthropic’s new model seems to be a substantial advance in AI’s ability to find security problems, but this process has already started. The prior model, Opus, especially with appropriate tooling, is entirely capable of agentically searching even a very mature codebase and turning up a gigantic lump of security problems. This is happening to everything, and has been for over a month now, which seems like forever. A massive glop of security problems is getting found and reported to all the big software projects, and they’re scrambling to fix them all at once. Right now there’s a window of opportunity for bad guys to do the same thing: find security problems in everything with very little effort, and exploit them. It’s very important that the defenders stay ahead of the game.
In the end, this will be a good thing for security. We’re going to have software with many fewer security problems in it. Even though attackers will also have enhanced capabilities for finding problems, the net balance will be fewer security problems found in the wild, because there will hardly be any left to find. But right now we have a python-swallowing-a-horse situation, where everybody is trying to fix everything as quickly as they possibly can after finding what would have been the next few decades’ worth of issues all in one go.
Having something like this take aim at your codebase is just going to become part of the normal development and release process. Nothing ever goes into production without a serious security scan. It's actually better than that, because it's not just going to be searching for security problems; it's going to be searching for bugs. Security problems are a particularly bad kind of bug, but it will be finding bugs in general and improving code quality overall.
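As a sketch of what that release gate might look like: this is a hypothetical example, not any particular tool's interface. Assume some scanner emits a list of findings with severities, and the pipeline refuses to ship if anything at or above a chosen threshold shows up.

```python
# Hypothetical release gate: block a deployment when the security scan
# reports any finding at or above a chosen severity threshold.
# The finding format and severity names here are assumptions for
# illustration, not a real scanner's output schema.

SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def release_allowed(findings, threshold="high"):
    """Return (ok, blocking) where `blocking` lists the findings
    severe enough to stop the release.

    findings: list of dicts like {"id": "VULN-1", "severity": "medium"}
    """
    limit = SEVERITY_RANK[threshold]
    blocking = [f for f in findings
                if SEVERITY_RANK[f["severity"]] >= limit]
    return len(blocking) == 0, blocking

# A medium finding alone passes; a critical one blocks the release.
ok, blocking = release_allowed(
    [{"id": "VULN-1", "severity": "medium"},
     {"id": "VULN-2", "severity": "critical"}]
)
```

In practice this kind of check just becomes one more required step in CI, next to the test suite: the scan runs on every release candidate, and a nonzero count of blocking findings fails the build.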
Everyone assumes that AI produces very low-quality code, which can happen if you use it wrong, but it can also produce very high-quality code if you use it right. It's really not clear what the net result is going to be. Likely we're going to see some codebases with atrocious quality and some with extremely high quality, with no consistency across projects. Just like today. Some projects are going to be a weird combination of both, where you have very well-vetted spaghetti code.

