The open-source community got a lump of coal in its Yule 1999 stocking from renowned hacker John Carmack, the genius behind id Software and such games as Wolfenstein 3D, Doom, and the Quake series. Carmack's .plan file noted a problem that has swiftly emerged since the Quake 1 source was released under the GPL; it seems that some people have used their ability to modify the Quake client as a mechanism for cheating.

This may at first sight seem a trivial issue — who cares if a few testosterone-pumped teenagers cheat at a shoot-em-up game? But in today's internetworked world, countermeasures against Quake cheating arguably provide an important laboratory model for cases that are decidedly not trivial, such as electronic commerce, securities trading, and banking.

The Quake model is made particularly relevant by its open-source connection. Open-source advocates (including me) have been making a strong argument over the last two years that open-source software such as Linux and Apache is fundamentally more secure than its closed-source competitors. Cryptographers have long understood that no encryption system can really be considered well-tested until it has been energetically and repeatedly attacked by experts who have full knowledge of the algorithms it uses. Open-source advocates argue that there is nothing special about cryptography but its high stakes — that, in general, open peer review is the only road to systems which are not merely accidentally secure by obscurity, but robustly secure by design.

Carmack, therefore, caused a bit of a flutter on Slashdot when he went on to suggest that only a pair of closed-source encryption programs could solve the Quake-cheating problem. The problem, as he correctly pointed out, is that asking the open-source client to verify its own correctness won't work; a sufficiently clever cracker could always write a client that would simulate the right kinds of responses and then cheat.

A debate ensued, with several people pointing out that trusting the client half of a client-server pair is bad security policy whether the client code is open or closed. Fundamentally, there's no way for the server to be sure it isn't talking to a clever simulation of `correct' behavior. Thus, opening the source to Quake 1 didn't create security problems, it merely exposed one that was already present (and exploitable, and for all anyone knew already secretly exploited) in the design of the game.

Carmack weighed in to make clear that the Quake-cheating problem is subtler than many of the debaters were assuming. It's not possible for a cheating client to give a player infinite ammunition or life points; the server does not in fact trust the client about these things, and manages them itself. This is correct design; whether or not it's open-source, a bank should not depend on a customer's client software to tell the bank what the customer's balance is!
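To make that concrete, here is a minimal sketch (in Python, with invented names; nothing like id's actual C code) of what server-side authority over a resource looks like: the server keeps its own counters and honors a client's fire command only if its own record allows it.

    # Illustrative sketch of server-authoritative state; all names are invented.
    class ServerPlayerState:
        def __init__(self):
            self.ammo = 50       # the authoritative counts live on the server...
            self.health = 100    # ...never in the client's memory

        def handle_fire_request(self):
            """Honor a client's 'fire' command only if the server's own ammo allows it."""
            if self.ammo <= 0:
                return False     # whatever the client claims, it is out of ammo
            self.ammo -= 1
            return True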

Carmack observes that "the [cheating] clients/proxies focus on two main areas — giving the player more information than they should have, and performing actions more skillfully."

The serious "more information" cheats depend on a performance hack. In order to hold down the number of updates of the Quake world it has to pass to the client, the server gives the client information about the location of objects and opponents that the player can't yet see, but might be able to see before the next update. The server then counts on the client not to make those things visible until they "should" be (e.g, until the user gets to a particular location in the maze the client is simulating). A cheating client can reveal an opponent seconds before the player would turn the corner and expose himself to fire.

The "more skillfully" cheats substitute the computer's speed and accuracy for tasks that the server and other players expect the player's hands and brain to be performing. Carmack talks about "aim bots" which automatically lock the player's gun onto visible opponents and fire it with inhuman accuracy.

And indeed it's hard to see how either of these sorts of cheats can be prevented given an open-source client and no way independent of the client itself to check that the client is honest. Thus Carmack's suggestion of a closed-source Quake-launcher program that would take a checksum of the client binary, communicate with the server to make sure the binary is on an approved list, and then handle communication with the server over a cryptographically-secured channel.
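Carmack didn't spell out the mechanism, but the checksum step of such a launcher might look something like this sketch (SHA-256 and all the names here are my choices for illustration); the postscript below explains why this check alone is easy to spoof.

    import hashlib

    # Hypothetical sketch of the launcher's binary check.  The server would
    # hold the list of approved digests; here it is just a constant.
    APPROVED_DIGESTS = {
        "0123...deadbeef",   # sha256 of each blessed client build (placeholder)
    }

    def client_is_approved(path="quake-client"):
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return digest in APPROVED_DIGESTS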

Carmack's argument seems watertight. What's wrong with this picture? Are we really looking at a demonstration that closed source is necessary for security? And if not, what can we learn about securing our systems from the Quake case?

I think one major lesson is simple. It's this: if you want a really secure system, you can't trade away security to get performance. Quake makes this trade by sending anticipatory information for the client to cache in order to lower its update rate. Carmack read this essay in draft and commented "With a sub-100 msec ping and extremely steady latency, it would be possible to force a synchronous update with no extra information at all, but in the world of 200-400 msec latency [and] low bandwidth modems, it just plain wouldn't work." So it may have been a necessary choice under the constraints for which Quake was designed, but it violates the first rule of good security design: minimum disclosure.
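For contrast, the fully synchronous scheme Carmack describes as workable only at low, steady latency would look roughly like this (a sketch with invented names, not a description of any real netcode): the server advances the world only after it has every player's committed input, so there is never anticipatory information sitting on a client to leak.

    def run_synchronous_frame(server, connections):
        # Block until every client's input for this frame has arrived
        # (this is why high or jittery latency makes the scheme unplayable).
        inputs = {c.player_id: c.wait_for_input() for c in connections}
        server.advance_world(inputs)
        for c in connections:
            # Strict minimum disclosure: only what this player can see right now.
            c.send(server.visible_state_for(c.player_id))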

When you make that trade, you should expect to get cracked, whether your client is open or closed — and, indeed, Carmack himself points out that the see-around-corners cheat can be implemented by a scanner proxy sitting between a closed client and the server and filtering communications from server to client.

Closing the source of the client may obscure the protocol between client and server, but that won't stop a clever cracker with a packet sniffer and too much time on his hands. Carmack confirms that even without the packet sniffer or access to source there are a variety of ways to flush out anticipatory information, ranging from tweaking the gamma and brightness on your screen to banish shadows to hacking your graphics card's device drivers to do transforms of the world model (such as making walls transparent).

We're back in familiar territory here; the history of computer security is littered with the metaphorical (and in some cases maybe literal) corpses of people who thought security through obscurity was sufficient. Crackers love that kind of naivete and prey on it ruthlessly.

In some similar cases, you might solve this problem by sending an encrypted form of the anticipatory information, and releasing it by transmitting a decryption key at the moment the player should see it. But this won't work when the information has to be used by the client before the player should see it. In general, a security-conscious designer must assume that the player knows the information immediately upon its receipt by the client.
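The release-the-key-later idea is easy to sketch (this uses the third-party Python 'cryptography' package and invented function names; it's my construction, not anything from Quake): ship the data encrypted now, and send the key only when the player becomes entitled to the information.

    from cryptography.fernet import Fernet   # third-party 'cryptography' package

    def prepare_anticipatory_update(payload: bytes):
        """Encrypt data the client may not show yet; send ciphertext now, key later."""
        key = Fernet.generate_key()
        ciphertext = Fernet(key).encrypt(payload)
        return key, ciphertext

    def client_release(key: bytes, ciphertext: bytes) -> bytes:
        """The client can only decode once the server decides to transmit the key."""
        return Fernet(key).decrypt(ciphertext)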

The aim-bot cheat is even trickier to prevent. The difference between human and aim-bot actions is measured only in milliseconds of timing. Changing the protocol to stop it from leaking information won't banish aim-bots. It would take threat monitoring by the server (doing statistical analysis of player action timings) to even detect them, and (as Carmack points out) "that is an arms race that will end with skilled human players eventually getting identified as subtle bots."
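Such monitoring might amount to something like the following (a purely illustrative sketch; the thresholds are invented, and tightening them is exactly the arms race Carmack describes): flag a player whose reactions are both faster and steadier than human reflexes allow.

    import statistics

    def looks_like_aimbot(reaction_times_ms):
        """reaction_times_ms: observed delays from 'opponent visible' to 'on-target shot'."""
        if len(reaction_times_ms) < 30:
            return False                      # not enough evidence yet
        mean = statistics.mean(reaction_times_ms)
        spread = statistics.pstdev(reaction_times_ms)
        return mean < 120 and spread < 15     # inhumanly fast *and* inhumanly steady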

Fortunately, the aim-bot cheat is also much less interesting from a general security point of view. It's hard to imagine anything but a twitch game in which the client user can cheat effectively by altering the millisecond-level timing of command packets. So the real lesson of both cheats may be that a closed-source program like Carmack's hypothetical secured program launcher is indeed a good idea for security — but only if you're a hyperadrenalized space marine on a shooting spree.

(Any computer game at which computers are better than most humans has analogous cheats, some of which aren't detectable even in principle. Carmack observes "correspondence chess has been subverted from its original intent by players using computers." This isn't something security design can fix.)

If Quake had been designed to be open-source from the beginning, the performance hack that makes see-around-corners possible could never have been considered — and either the design wouldn't have depended on millisecond packet timing at all, or aim-bot recognition would have been built into the server from the beginning. This teaches our most important lesson — that open source is the key to security because it changes the behavior of developers.

Open source keeps designers honest. By depriving them of the crutch of obscurity, it forces them towards using methods that are provably secure not only against known attacks but against all possible attacks by an intruder with full knowledge of the system and its source code. This is real security, the kind cryptographers and other professional paranoids respect.

It's the kind of security the Linux kernel and the Apache webserver have, and the kind people victimized by the Melissa and Chernobyl viruses and Back Orifice and the latest Microsoft-crack-of-the-week don't have. If you're betting your personal privacy or your business's critical functions on the integrity of software, it's the kind of security you want, too.

To recap, the real lessons of the Quake cheats are (a) never trust a client program to be honest, (b) you can't have real security if you trade it away to get performance, (c) real security comes not from obscurity but from minimum disclosure and threat monitoring, and most importantly (d) only open source can force designers to use provably secure methods.

So, far from being a telling strike against open source, the case of the Quake cheats actually highlights the kinds of biases and subtle design errors that creep into software when it's designed for closed-source distribution and performance at the expense of security. This trade-off may be acceptable (even necessary) in a shoot-em-up, but it's not tolerable in the running gears of the information economy. Avoiding such hidden security gotchas is, in fact, a good reason for software consumers to demand open source for anything more mission-critical than a Quake game.


The following postscript was added later on the 27th.


Within a day after I first posted this essay, I learned that aim-bot proxies (programs that sit between the Quake server and the client and do the aim-bot thing) had in fact been written for Quake long before the source was released. So the point about closed source not stopping a clever cracker with a packet sniffer is not just theoretical; that crack had already happened.

I also learned that Carmack's closed-source verifier would be almost trivial to spoof — if it just did its check at startup time, it would have no way to verify that the binary that was run is the same one that was checksummed (and the Unix ptrace() call or equivalent could easily be used to break that assumption). A more elaborate version could re-checksum the client binary at random intervals — but ultimately, the `secured' verifier itself could be neutered with a sufficiently clever binary patch that made it always return `OK' to the server. Generating such a patch with a symbolic debugger and a disassembler would not even be particularly hard.
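For what it's worth, that "more elaborate version" would be only a few lines (a hypothetical sketch; as just argued, it proves nothing, since the running process need not match the file on disk and the verifier itself can be patched to always say OK):

    import hashlib, random, time

    def watch_client(path, expected_digest, report):
        """Re-hash the client binary at random intervals and report the verdict."""
        while True:
            time.sleep(random.uniform(5, 60))
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            report(digest == expected_digest)   # a patched verifier could just send True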

There's just no getting around the fact that the execution path on the client machine is going to be under the client-user's control. Thus, good security has to be designed as though any code on the client side is open.


The following post-postscript was added on the 29th.


If you are a gamer disgruntled because you think I have (a) claimed the design of Quake is bogus, or (b) dissed John Carmack, calm down. Carmack himself knows better; he read and approved a draft of this essay, and groks that it's not really about Quake at all. Take ten deep breaths, then reread it carefully.