I’m going to take two security stories from the last week, one of which I think illustrates how to do it well, and the other how to do it badly. They come from very different areas under the vast umbrella of ‘security’. One is very much physical, and the other very much not. These days, however, the two are inseparably linked.
First up: Google Chrome
Google recently offered up to $60,000 per exploit to anyone who could find bugs in its flagship browser, Chrome. One student did indeed walk away with the prize money. This kind of competition is quite common in security research; the most famous is Pwn2Own. Why is this method favoured? Simple economics. If you find a bug in a major piece of software such as Chrome or Windows, that exploit is worth a lot of money. Stuxnet contained four Windows ‘zero-day’ exploits, each of which would have had a hefty black-market price tag. This simple fact fuelled speculation that Stuxnet was a government initiative, given the vast amount of resources required to build it.
So Google is following a good tradition of offering its software freely to hackers to see if they can exploit it. If they do, the hackers gain financially, Google gains financially (a prize-money oriented approach is fairly cheap in the grand scheme of things: you don’t have to pay anyone who fails to find an exploit!), and users of Google Chrome gain a more secure browser. The only people who lose out are the criminals.
Second: Airport Security
By which of course I mean the TSA. Much has been made of a recent video in which a man takes a metal box through the TSA’s new scanners undetected. Putting aside for a second the fact that airport security is just security theatre, let’s think about the implications. First of all, was this man right to put the security to the test? He has posted a video, which potential terrorists may by now have seen, describing an exploit of the system. Isn’t that a bad thing to do? Well, no. Now that the TSA is aware, it can fix the security hole. Note also that he gave the TSA plenty of warning before going public. It’s entirely possible that terrorists already knew about the exploit, and they certainly wouldn’t have told the TSA. Making this information public is the right thing to do.
The TSA’s response has been strange, though. While I agree that it should discourage people from trying to find exploits (if everyone tried this out, the line for airport security would be even longer!), the text of its response has to be read to be believed:
Imaging technology has been extremely effective [...]. It’s one of the best tools available to detect metallic and non-metallic items, such as… you know… things that go BOOM.
Their jovial reply comes across as very awkward. They trash-talk the man who found the exploit and don’t mention whether they intend to fix it. The overall impression is of an agency holding its hands over its ears and shouting ‘The system is secure’ repeatedly, despite evidence to the contrary. This kind of behaviour is sadly common. In 2010, banks tried to censor research showing that Chip & PIN was insecure. Most of the banks took a very long time to plug the security hole (I can’t tell whether all the major banks have, even now).
It’s not that simple, unfortunately. Open security is a great thing for software (if your software relies on secrecy for security, it’s not secure). However, it gets trickier with physical systems: just letting people try to exploit the system all the time is a recipe for disaster. The systems are designed around the assumption that 99% of the time (at least) they won’t pick up anything. Nearly every positive case will be an innocent mistake on the part of the passenger. Most TSA agents will never see, let alone catch, a malicious traveller in their entire careers. There is no baseline against which to establish meaningful tests.
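To see why such a skewed base rate matters, here is a quick back-of-the-envelope calculation (all the numbers are illustrative assumptions of mine, not real TSA figures): even a scanner that catches 99% of real threats and wrongly flags only 1% of innocent passengers will produce almost nothing but false alarms when genuine threats are vanishingly rare.

```python
# Illustrative base-rate calculation. All figures are made-up assumptions
# for the sake of the example, not real detection statistics.
prevalence = 1e-6           # assume 1 in a million passengers is a genuine threat
sensitivity = 0.99          # assume the scanner flags 99% of real threats
false_positive_rate = 0.01  # assume it wrongly flags 1% of innocent passengers

# Bayes' theorem: probability that a flagged passenger is actually a threat
p_alarm = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
p_threat_given_alarm = sensitivity * prevalence / p_alarm

print(f"{p_threat_given_alarm:.4%}")  # roughly 0.01%: almost every alarm is false
```

Under these assumptions, fewer than one flagged passenger in ten thousand is an actual threat, which is why an agent can go a whole career without a true positive.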
The machines are tested in the field, but by people hired and trained to do so, and the tests are often written by the people who design and make the machines. There’s no equivalent of the free-for-all attack that Google’s Pwnium competition encourages. Running a challenge over the course of a day wouldn’t work either: TSA agents are not dumb pieces of software, so unless the test was completely blind (which is impossible if it’s a public competition) there would be no benefit, as the agents would be much more alert than normal.
Open security for all of these systems is the dream. Currently a lot of airport security relies on people not knowing how it works (behavioural analysis, for example). This is all fine, until it gets leaked, which it always does. You simply can’t keep a secret that big entrusted to so many people. I’m not going to link to where this stuff is, but it’s freely available on the internet and other networks.
Technology is hugely useful here. However intrusive they may seem, machines to detect stress, hidden objects and many other things are the way forward, with the simple caveat that they must be open. I don’t mind an intrusive picture being taken by a machine, as long as I can read the code that performs the obfuscation. Your word that it’s doing so is simply not good enough.
Unfortunately this way of thinking goes against the grain of decades of security thinking at airports, so I’m pessimistic. I can only hope that sooner or later people realise that intrusive security is at this point inevitable, and what we should be campaigning for is not less of it, but to make it open.