CanSecWest is a Vancouver security conference which, among other things, holds a browser exploitation contest called Pwn2Own. If you can demonstrate arbitrary code execution against a fully-patched browser, you win cash and — if you’re the first victor — a computer.
Ten days ago, comrade Nils e-mailed to let me know he was going to be at the conference. I couldn’t make it myself, being stuck in Europe for the moment, but ever since that e-mail, I’ve been giggling like a schoolgirl about what I expected Nils would do at Pwn2Own.
What he wound up doing far exceeded my expectations. First, Nils scored against Safari on OS X. Then he scored again, hitting Internet Explorer 8 on Windows 7 (despite ASLR, DEP, and friends), snapping everyone’s head to attention. I was anticipating this might take place; the hardcore Sotirov/Dowd paper set the stage for it last year and Nils is smart enough to do it, yet the fact he pulled it off is still indisputably impressive. But the part no one saw coming: he asked for a third slot and scored against Firefox 3 on OS X, leaving Chrome the only browser to escape unscathed.
One man, two operating systems, three fallen browsers? I have no choice but to officially award comrade Nils the Ivan Krstić Seal of Mad Fucking Props.
And we now return to your regularly scheduled programming.
(Update, March 23rd: I originally believed he scored against Firefox on Windows, which turned out not to be the case. It was on OS X.)
After my HCS talk last week, a grad student who was in attendance mailed to ask for my thoughts about the intersection of security and programming languages.
I’ve received this question with some frequency, and even gave a brief talk about it last year. The subject matter is rather nuanced, and providing an explanation that does it justice would take a lot of effort, so it’s been sitting on my “to properly write about when I have some time” pile for quite a while now. Unfortunately, it recently became clear to me that The Pile is mostly a black hole. Not wishing to sorely disappoint Greg the Grad Student, I sent him the following sketch of an answer.
If I had to grossly overgeneralize, I’d say people looking at language security fall in roughly three schools of thought:
- The “My name is Correctness, king of kings” people say that security problems are merely one manifestation of incorrectness, which is dissonance between what the program is supposed to do and what its implementation actually does. This tends to be the group led by mathematicians, and you can recognize them because their solutions revolve around proofs and the writing and (automatic) verification thereof.
- The “If you don’t use a bazooka, you can’t blow things up” people say that security problems are a byproduct of exposing insufficiently intelligent or well-trained programmers to dangerous language features that don’t come with a safety interlock. You can identify these guys because they tend to make new languages that no one uses, and frequently describe them as “like popular language X but safer”.
- The “We need to change how we fundamentally build software” people say that security problems are the result of having insufficiently fine-grained methods for delegating individual bits of authority to individual parts of a running program, which traditionally results in all parts of a program having all the authority, which means the attack surface becomes a Cartesian product of every part of the program and every bit of authority which the program uses. You can spot these guys because they tend to throw around the phrase “object-capability model”.
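To make the third group’s point a little more concrete, here’s a minimal sketch (my own illustration, not from any particular object-capability system) contrasting ambient authority with a capability-style interface. The function names are hypothetical:

```python
import io

def word_count_ambient(path):
    # Ambient authority: this function can open *any* path the process
    # can reach, so a bug or compromise here exposes the entire
    # filesystem -- the Cartesian-product problem from above.
    with open(path) as f:
        return len(f.read().split())

def word_count_capability(readable):
    # Capability style: the caller hands over exactly one readable
    # object. This function holds no authority beyond that stream, so
    # compromising it gains an attacker nothing else.
    return len(readable.read().split())

print(word_count_capability(io.StringIO("so much authority")))  # -> 3
```

The difference looks trivial in toy code, but applied pervasively it means each part of a program carries only the authority it was explicitly granted.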
Now, while I’m already grossly overgeneralizing, I think the first group is almost useless, the second group is almost irrelevant, and the third group is absolutely horrible at explaining what the hell they’re talking about.
(If I were trying to be less overly general, I’d mention that in some instances the groups overlap substantially, and some subsets of these groups, such as the subset of group 2 that’s working on SFI and sandboxing, are relevant and occasionally produce good work.)
For a bunch of papers in the “mathematicians do it provably correctly” group 1 (though most not focused on security), see the publications section of the Alloy website.
Finally, for the “practice safe hex” group 2, take a look at Cyclone (paper, website), NaCl (paper, website) and Vx32 (paper, website).
Combined, these will give you enough references to chase the subject matter as far down the rabbit hole as you dare descend. Good luck, and may the gods have mercy on your soul.
This Thursday, the fine people at the Harvard Computer Society are hosting one last talk of mine in Boston before I run away and switch coasts. I’ll be focusing on two questions: why are our computers so insecure, and why is it so hard to fix the situation?
While I hope to offer some insights that the technologists in the audience haven’t heard before, this is also my first security talk in a few years that doesn’t require much of a security background. Which is to say, the only prerequisite is a bit of curiosity. The talk is open to the public — hope to see you there!
When: This Thursday, March 5th, 7PM
Where: Harvard Science Center, room 112, 1 Oxford Street, Cambridge, MA
What: The Bitter Tale of Desktop Security: Our 35-year War
Abstract: It’s 2009. About 75% of all corporate machines are infected with at least one piece of malicious code. We’re seeing the emergence of weapons-grade botnets, designer trojans, and smart mobile malware. The black hat community is graduating from a ragtag army of rebels without a cause to a group of well-paid professionals engaging in research-quality work to rake in profits and evade detection. The entrenched players in the security industry have been predictably slow to respond. Now, seemingly bewildered by the new security landscape, they are increasingly claiming that salvation lies in restrictive new systems which threaten to transform your computer into little more than a glorified abacus. There must be a better way.
This session doesn’t require a security background: we will turn to history to try to explain why none of our machines are secure. We’ll then look at the problems of legacy and authority and explain why the road to a secure desktop is fraught with such toil and peril.