This essay appeared in the June-July 2004 issue of Analog magazine. It came first in the 2005 Analytical Laboratory awards for fact articles.

There is a revolution going on in software development today that Analog readers should find especially interesting — because that revolution has some roots in the hard-SF tradition, and because it raises some fundamental questions about not just the technological machinery of our computers but the social, economic and political machinery that surrounds software development. Because computers and the Internet are becoming such a vital part of our infrastructure, the way human beings frame answers to those questions may well play a significant part in shaping the future of our civilization.

That revolution is called “open-source development”, its showpieces are the Internet and the Linux operating system, it's founded on a re-discovery of the power of decentralized peer networks for verifying solutions to complex problems, and it finally offers us the prospect of routinely achieving decent reliability and mean-time-between-failure (MTBF) rates in software. That is, using open-source techniques we can achieve error rates at least comparable to those in computer hardware engineering, and in important cases (such as the Internet) comparable to the robustness of large-scale civil engineering. Open source gives us these benefits, however, at the cost of effectively dynamiting the standard secrecy-intensive business models for software production, a change with large economic repercussions — including, very probably, the near-term collapse or radical transformation of Microsoft and most of the rest of the current software industry.

This article is a report from the front lines by a long-time Analog reader who found himself semi-accidentally cast as one of the revolution's theoreticians. I have written this article partly as personal history because that history shows how the tradition of Analog-style hard SF was an important ingredient in the (re)discovery of open source.

The Open Source Idea

The “source” in “open source” refers to what computer scientists call “source code” — the human-readable, human-editable form of a program that computer programmers work with. Most computer users only see “object code”, the opaque block of bits that the computer actually runs. While it is relatively easy to translate source code into object code (typically using a special kind of program called a “compiler”), it is extremely difficult (in many cases, effectively impossible) to translate object code back into readable source code.
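
For the concretely minded, here is about the smallest possible illustration: a hypothetical hello.c, with the traditional Unix compile command shown in its comment.

    /* hello.c -- source code: the human-readable, human-editable form.
     *
     * Compiling it with the traditional Unix C compiler,
     *
     *     cc hello.c -o hello
     *
     * produces "hello", the object-code form: a block of machine
     * instructions the computer can run but no human can comfortably
     * read or modify.  Recovering this source from that binary is,
     * for practical purposes, impossible.
     */
    #include <stdio.h>

    int main(void)
    {
        printf("Hello, world!\n");
        return 0;
    }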

The core idea of open-source development is very simple: open-source programmers have learned that secrecy is the enemy of quality. The most effective way to achieve reliability in software is to publish its source code for active peer review by other programmers and by non-programmer domain experts in the software's application area. This premise implies an inversion of traditional prescriptions for managing software development — a shift from small, vertically organized, rigidly focused groups of developers working on secret code to large, self-selected, diffuse, horizontal, and somewhat chaotic peer communities working on a common base of open source code (as, for example, in the case of the Linux operating system or Apache webserver).

In many other fields, peer review would not be considered a particularly radical idea. Historically, the way we have gotten high reliability of results in engineering and the sciences is by institutionalizing peer review. Physicists don't hide their experimental plans from each other; instead, they skeptically check each other's work. Civil engineers don't build dams or suspension bridges without having the blueprints sanity-checked first by other engineers independent of the original design group.

Early on, however, mainstream software engineering moved in the opposite direction (away from open development and peer review) for two reasons. One was purely economic — companies producing software discovered a business model in which, rather than providing reasonable quality or service, they were able to extract enormous rents simply by using code secrecy to lock in their customers. The other was more technical — software engineers came to believe that large, loosely-managed development groups were a sure recipe for disaster.

For many years, one of the axioms of software engineering was “Brooks's Law”, proposed in Fred Brooks's pioneering work on software project management, The Mythical Man-Month [MMM]. Brooks observed “Adding more programmers to a late software project makes it later.” Software engineers came to believe that a project's vulnerability to bugs and problems scaled with the number of interaction paths between code written by different developers (that is, as the square of the number of developers). This implied that projects with many developers should be expected to collapse under the weight of unplanned and unintended interactions. Reliability, it was thought, could only be achieved by small “surgical teams”, executing rigidly pre-defined specifications and isolated from their peers to avoid distractions.
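
A back-of-the-envelope version of that scaling argument (my arithmetic, not Brooks's own formulation): a team of n developers contains

    \binom{n}{2} = \frac{n(n-1)}{2}

pairwise interaction paths, so a 5-person team has 10 paths along which bugs can breed while a 50-person team has 1,225. That quadratic growth seemed certain to drown large projects in unintended interactions.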

Although Brooks did not advocate code secrecy as such, Brooks's Law seemed to support the practice; if adding more programmers can't help, and isolating your team from distractions is good practice, then why ever reveal your code? The accepted doctrine of commercial software production came to include code secrecy as well as the surgical-team organization.

Unfortunately for that accepted doctrine, the reliability of the closed-source software it produces remained, on the whole, hideously bad. Even leaving aside subtler failures to get the intended function of a program right, gross errors such as crashes and hangs and lost data remained all too common. Gerald Weinberg observed in 1971 that “If architects built houses the way programmers built programs, the first woodpecker to come along would destroy civilization”. In the ensuing thirty years most of the software industry, addicted to its secrecy rents, resolutely avoided even thinking about whether its theory of closed-source development management might be broken; instead, managers took the easy out and blamed the programmers. Software consumers, for their part, were brainwashed and pummeled into a sort of numb acceptance — persuaded that software flakiness was inevitable and they'd just have to pay extortionate prices for the continued privilege of living with it.

In retrospect, the infrastructure of the Internet should probably have taught us better sooner than it did. Almost all of the Internet's core protocols were developed through open-source implementations — and its reliability is extremely good. There have been few enough serious software crashes in the Internet core to count on one hand, and the last one happened in the 1980s. When you have problems with your Internet access, it is invariably a problem with the closed-source software on your PC that happens after the Internet gets the bits to you.

The Internet is a particularly compelling demonstration because it is the largest and most complex single system of cooperating hardware and software in existence. It's multi-platform, heterogeneous, and international, and it has served user populations of widely varying backgrounds through thirty years and many generations of computer hardware and networking technology.

The pattern is simple and compelling. Where we have open-source software, we have peer review and high reliability. Where we don't, reliability suffers terribly. Peer review is a major reason why airplanes crash much less often than programs, even though airplane parts wear out while program bits do not. Aeronautical engineers (like Internet hackers) have learned to use a design process that is top-to-bottom transparent, with all layers of the system's design and implementation open to constant improvement and third-party peer review. Indeed in most parts of the world such transparency is required by law — and where it isn't, insurance companies demand it!

People who actually write code generally warm to the open-source idea very quickly once they understand that they can still have jobs and pay their bills; open-source development is much more productive and more fun than the traditional closed-source mode, and you get to have your beautiful source code known and appreciated by your peers instead of being locked up in a vault somewhere. The defense of closed source doesn't come from programmers, but from managers and investors. Our business culture has traditionally considered collecting secrecy rent a wonderful way to garner large profits from a minimum of actual work, and thus has tended to treat the defense of proprietary intellectual property as an absolute imperative. In this view, publishing your organization's source code would be an irresponsible sacrifice of future gains — and, if Brooks's Law is correct, a pointless one.

Why Closed-Source Development Is In Trouble

The problem with the proprietary, closed-source way of doing software is that, increasingly, the brainwashing isn't working anymore. The costs from normal software error rates are exceeding the tolerance of even the most thoroughly acquiescent customers (one visible symptom of this trend is the exponential increase in email Trojan horses and website cracks). Partly this is because the cost per error rises as the customers come to rely more on their software. Partly it's because the bugginess of closed-source software is getting progressively worse in absolute terms of errors per line of code.

To see why this is, consider how a program's internal complexity and vulnerability to bugs scale as it grows larger. Both measures are driven by the number of possible unanticipated interactions between different portions of the program; thus, they increase roughly as the square of program code volume. But the average code volume of programs is itself increasing geometrically with time (roughly tracking the 18-to-24-month doubling period in hardware capability predicted by Moore's Law); thus the complexity of the debugging task (and the number of skilled programmer-hours required to debug a typical program) is increasing geometrically at an even steeper rate, quadrupling with every doubling of code volume — much faster than any one organization can hire programmers.
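
In rough symbols (a sketch of the argument just given, with L for code volume, D for debugging workload, and τ for the Moore's-Law doubling period of 18 to 24 months):

    D \propto L^2, \qquad L(t) = L_0 \, 2^{t/\tau} \quad\Longrightarrow\quad D(t) \propto L_0^2 \, 4^{t/\tau}

That is, the debugging workload quadruples every time code volume doubles, while any single organization's pool of skilled programmers can grow at best linearly.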

Thus, traditional development managers within closed-source shops are increasingly finding that they just can't muster enough skilled attention on closed, monolithic programs to get them even minimally debugged. A major index of this problem is the known-bug inventory of Microsoft Windows, which has actually gone up in every release since 1995 — in a leaked internal memo, Microsoft admitted to over 63,000 “unresolved issues” in its Windows 2000 release. Over time, closed-source development costs are rocketing for results that get proportionately worse. This is creating economic pressure on development managers to think the previously unthinkable.

In the open-source world, on the other hand, we've found that it's effective to knock down the walls, expose the process, and invite as many volunteers as possible to bring their individual talents to bear on the design, implementation, and debugging tasks. Not only does this bring in many more well-motivated programmers and testers than a closed-source producer can afford to hire, it also turns out to change the behavior of developers in important ways that lower overall error rates (just as the prospect of peer review in other kinds of engineering raises the quality bar for the drawings engineers will let out the door).

The results in software MTBF are dramatic. Returning to our real-world example: Microsoft Windows machines are subject to frequent lockups, generally require rebooting more than once a week, and need periodic re-installation from scratch to eliminate problems such as registry creep and DLL conflicts. Linux systems, on the other hand, are so stable that many only go offline when brought down for hardware fixes and upgrades.

What's going on here is not that Brooks's Law has been repealed, but that given a large peer-reviewer population and cheap communications its effects can be swamped by competing nonlinearities that are not otherwise visible. This resembles the relationship between Newtonian and Einsteinian physics — the older model is still valid at low energies, but if you push energies and velocities high enough you get surprises like nuclear explosions or Linux.

A Brief History of Open Source

But why now? Why didn't these effects become clear ten years ago, or await discovery for another decade? To understand the timing and current impact of open source and some of its larger lessons, it's helpful to know where and how the method evolved. Its history is intertwined with the rise of the Internet.

The practice of open-source development began nearly thirty years before it was named or analyzed. The roots of today's open-source culture go back to the late 1960s and the first steps towards the Internet's predecessor, ARPAnet. From 1969 to 1983 the open-source culture evolved its practice completely without a theory or ideology. I personally became involved exactly halfway through that period, in 1976, and remember those early days well. We exchanged source code to solve problems. We learned how to manage distributed open-source collaborations over the infant Internet without labeling the practice or reflecting much on what we were doing. We (not I, personally, but the culture I was part of) were the hackers who built the Internet — and, later, the World Wide Web.

Note to all Analog authors (and editors): when you refer to a computer criminal or security breaker as a “hacker”, you are committing a vulgar error that annoys real hackers no end. Those people are properly called “crackers” and what they do is “cracking”; it involves little skill and less creativity. The most important difference, though, is that hackers build things — crackers break them. For discussion, see my Web document How To Become A Hacker [HH].

In 1983, the hacker community was galvanized by Richard M. Stallman's GNU Manifesto. Stallman (even then generally known by his initials as “RMS”) was already a guru revered for brilliantly inventive technical work in the late 1970s. RMS's manifesto attacked closed source code on moral grounds; he asserted a right of computer users to access and modify the code they depend upon, declared a crusade against the ownership of software, and proposed a program of building an entire production-quality environment of “free software” modeled on the powerful Unix operating system.

RMS's call to action proved both effective and controversial. His technical reputation and personal charisma were such that during the next decade, thousands of programmers cooperated with his “Free Software Foundation” (FSF) to produce critically needed free-software tools like compilers, programmable editors, file utilities, and Internet communications programs (I was an early and frequent contributor myself). His choice of Unix as a model also proved sound; during the same period Unix became the workhorse operating system of serious computing and the emerging Internet. On the other hand, RMS's general attack on intellectual property and the quasi-Marxist flavor of much of his propaganda turned off many hackers and utterly alienated most software producers and customers outside the hacker culture itself.

By 1990, Internet and Unix hackers really did form a culture in the sense social scientists use that term. We had developed shared values and practices, a common body of folklore and history, a distinctive style of humor, and an elaborated slang described in the well-known Jargon File. This culture (and its sense of in-group identification) had been successfully transmitted between generations. And there were lots of us, scattered like a sort of invisible college or freemasonry throughout universities, research labs, and corporations all over the planet and including many of the best and brightest computer programmers in the world. I had been anthropologically fascinated by this community for fifteen years; when I accepted primary editorial responsibility for the Jargon File around 1991 I became one of the culture's principal historians, a move which was to have unexpectedly large consequences five years later.

The FSF successfully mobilized an astonishing amount of talent, but never fully achieved its goals. Partly this was because RMS's rhetoric was so offputting to most of the businesspeople who buy software and run software companies. But part of it was a technical failure. By 1991 most of the toolkit for RMS's free-software Unix had been written — but the central and all-important kernel (the part of the operating system that remains resident in computer memory all the time, brokering requests by programs to use the physical hardware) was still missing. In fact, development of the FSF's kernel had stagnated for five years, with no release in sight. Into this gap stepped Linus Torvalds.

Torvalds was a university student in Finland, who, frustrated with the high cost of proprietary Unixes, decided to write his own for his personal computer. But not by himself, no — Torvalds, then in his early twenties and a generation younger than the original Internet cadre, had grown up immersed in the hacker culture and half-instinctively turned the project into the largest Internet collaboration in history. In doing so he intensified the existing practices of the hacker culture to a previously unheard-of level, and produced dramatic results. Linux, by providing a single visible focus for open-source development, assimilated to itself the development efforts and momentum of almost the entire hacker culture, perhaps as many as 750,000 developers worldwide. Within two years, by late 1993, Linux became seriously competitive with proprietary Unix operating systems and Microsoft Windows — and, having been developed by the natives of the Internet, actually made a better Internet platform than any of its competitors.

The Author as Accidental Revolutionary

That's when I got involved with Linux. I'd been a happy Unix hacker and FSF contributor since 1982. Since 1990, as maintainer of the Jargon File, I had fallen into the role of the hacker tribe's own observer-participant anthropologist. When the earliest packaged versions of Linux obtruded on my consciousness it was partly because they shipped with quite a few lines of code I had written myself. And Linux presented me with a disturbing puzzle.

Like many hackers, I had gradually and unconsciously learned how to do open-source development over a period of years — without ever confronting how completely its practices contradicted the conventional Brooksian wisdom about how software should be done. I had learned open-source instincts, but had no theory to explain why they led to effective behavior. Linux, by presenting an entire world-class operating system built from the ground up by a huge disconnected mob of semi-amateur volunteers, finally forced me to face the problem. I realized that if Brooks's Law were the whole story, Linux should be impossible.

After three years of coding and thinking and research, in 1996, I wrote a paper called The Cathedral and the Bazaar [CatB] in which I suggested that distributed peer review was the secret of Linux's success, and proposed what I called “Linus's Law”: given a sufficiently large number of eyeballs, all bugs are shallow. In that paper I began a detailed analysis of the social mechanisms that support open-source development in the hacker community. In its sequels, Homesteading the Noosphere and The Magic Cauldron, I extended that analysis, and even proposed business models that would support sustained profits from software development without relying on code secrecy. (The basic insight there is that in order to live without secrecy rents, we need to reconstruct software production as a genuine service industry like medicine or automotive repair.)

The Cathedral and the Bazaar was written as anthropology, but (rather to my astonishment) the hacker community quickly adopted it as a manifesto. As Robert Heinlein famously observed, “it steam engines when it's steam-engine time”, and following the mainstreaming of the Internet in 1993-1994 the time was ripe. I now believe some close equivalent of the analysis in this paper and its sequels would inevitably have been uttered sometime between 1994 and 2000 by someone else, even had I pursued my original vocation as a foundational mathematician. All it took was some ability to disregard preconceptions long enough to see the logically obvious — a skill I was trained in by reading SF. There is probably a nearby timeline, dear reader, on which I am reading this article as written by you!

In our timeline, this work supplied the missing theory to explain existing open-source practice. Unlike RMS's free-software crusade, it offered a justification that people could evaluate and accept without having to change their position on whether intellectual property was a good or an evil. This energized the community like nothing had since the GNU manifesto — and, unlike the GNU manifesto, it was an argument understandable to businesspeople and others outside the hacker community. The effect was to boost open-source hackers and their allies from feeling like marginalized subversives into being armed and motivated revolutionaries, ready to break out of their ghetto and take on the world.

The shot heard around the world in this particular revolution was the source-code release of the ‘Mozilla’ web browser at midnight of April Fool's Day, 1998. Linux had begun to show geometric growth in market share two years earlier, but in the early going only a handful of people in the computer trade press noticed. It was Netscape's unprecedented decision to open the code for a key part of its product line that made Wall Street sit up and take notice. And, incidentally, changed my life — because Netscape's top executives pointed at my work to explain their move.

Since then, open-source development has posed a serious public challenge to the established software industry. The industry's leading edge, electronic-commerce and Internet services companies, has generally embraced the method — as has IBM, the once and perhaps future king of the industry. Other major technology companies such as Sun Microsystems, Intel, Hewlett-Packard, Cisco, and SGI have discovered strategic advantages in backing open-source projects. Linux- and open-source-centered new firms like Red Hat Software have run spectacularly successful IPOs, survived the bursting of the dot-com bubble in early 2001, and are now demonstrating the revenue potential from using open-source development to create product and services businesses.

Adoption trends have been even more dramatic outside the U.S. than within it. We can afford the high costs of closed source, even as we grumble about them; Europeans are less wealthy, and in the Third World pressure to find cheaper alternatives is intense. Increasingly, open source fills those needs. Appropriately enough, the only place it has not succeeded at winning hearts and minds is where it was imposed by government fiat — in communist China the government attempted to mandate a massive nationwide switchover from Windows to ‘Red Flag Linux’ and failed.

Perhaps the ultimate endorsement came when Microsoft tried to use Linux as an antitrust defense. At trial in 1999, Microsoft's attorneys talked up the wonders of Linux in an effort to convince the judge that vigorous competition still existed in the desktop operating systems market. The judge, considering Microsoft's 91% market share there, was unconvinced. But by the Fall of 2000, Linux had passed Windows in market share on Web and Internet server machines, had taken over embedded computing — and looked set to crack Microsoft's monopoly on the desktop, a development that is gathering steam now (though, as of this writing in 2003, still primarily outside the U.S.).

Links To The SF Tradition

And those connections to the SF tradition? The culture of open-source hackers is deeply pervaded by imagery and attitudes derived from SF. I documented this influence a decade ago in the expanded print version of the Jargon File, The New Hacker's Dictionary [NHD]; it shows very clearly in hacker slang, which is replete with SF references.

While the obvious influences from the cyberpunks of the 1980s are present, the Campbellian hard SF of writers like Larry Niven, Vernor Vinge, and Greg Bear has had a more lasting and important influence. Terry Pratchett, author of the Discworld novels enjoyed by many SF fans with a low opinion of the generic fantasy they satirize, is also enormously popular among hackers. I single out these four writers in particular because, unlike the cyberpunks, they have frequently been invited speakers at hacker-run conferences.

The worlds of SF and the hacker community blend perhaps most seamlessly in Neal Stephenson, a writer and programmer whose acclaimed fiction (Snow Crash, The Diamond Age, Cryptonomicon) is complemented by In the Beginning Was The Command Line [CL]. This wide-ranging nonfiction essay on the psychology of computer interfaces was inspired by Stephenson's observations of the Linux community, and is in significant part a meditation on open source.

More personally, SF taught me to think of people and cultures as adaptive machines. SF also taught me that the universe doesn't respect the neat little compartments human beings like to chop their knowledge into. Robert Heinlein, in particular, showed me the value of the encyclopedic-synthesist stance. The cross-disciplinary analysis I did in The Cathedral and the Bazaar and its sequels was the result of a direct, intentional, and conscious execution of Lazarus Long's advice that “Specialization is for insects.” It is understood in that way by many of my peers in the hacker culture.

Lessons For The Larger World

How can giving up on central control, pre-planning and the vertical command organization of software development produce better results? The answer is implicit in the way that cost nonlinearities associated with scaling change the tradeoffs of complex systems.

Ask any architect. Have you ever wondered what the practical limit on the height of skyscrapers is? Turns out it's not strength of materials, nor our ability to design very tall structures that are stable under load. It's elevators!

For a skyscraper to be useful, people have to be able to get in and out of it at least twice a day (four times if they eat lunch). The number of elevators a building needs to get people in and out of it rises with the number of people in it, which is roughly proportional to its floor space, which is proportional to the square of the height. Thus, as buildings get taller a larger and larger percentage of the building core has to become elevators. At some critical height, so much of the building has to be elevators that the activity on the remaining floor space can't pay for any more of them. The communications overhead implied by the system's single choke point (the ground floor) crowds out production. Instead of building a taller vertical skyscraper, you need several shorter buildings connected by a subway.

Or, ask any economist. Today's slow-motion collapse of closed-source software development mirrors the collapse of central economic planning two decades ago, and proceeds from the same underlying problems. Command systems are poor at dealing with complexity and ambiguity; as complexity rises, it inevitably outstrips the coping capacity of planners. As planning deteriorates, accelerating malinvestment pulls down the whole system. In economics, this is the end-stage of collectivism correctly predicted by the economist F.A. Hayek in the 1930s, fifty years before it was acted out in the Soviet Union. In software development, we observe a similar tendency of planned systems to complexify until they collapse of their own weight.

Ecologists, too, have learned to respect the kind of decentralized self-organization that occurs at every level of living systems. The tremendous interwoven complexity of an ecology isn't designed — it doesn't happen because any central organizer planned a preconceived set of interactions between the different species that make it up. We know this because those interactions aren't even stable over historical let alone evolutionary time — climate fluctuations, predator-prey cycles, and sporadic events such as major fires or disease epidemics can and do change the rules at any time. Nevertheless, ecologies develop and sustain extremely rich interactions from the unscripted behavior of the selfish adaptive machines that compose them.

Ecologies, market economies and open-source development all have crucial patterns in common; they are all examples of what computer scientist John Holland has called a “Complex Adaptive System” (CAS). CASs are composed of selfish adaptive agents which have only limited, local information about the state of the system. Their complexity arises not from global planning but as an unintended result of each agent's search for better, more competitive adaptive strategies. Global equilibrium and order at each level of a CAS emerges as what systems theorists call an “epiphenomenon” — organization that is not predictable from knowing only the rules of the next lower level. The information that sustains that organization is distributed and largely implicit in the evolved structure of the CAS itself, not explicit and centralized in the knowledge of any one agent.
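
To make “epiphenomenon” concrete, here is a minimal sketch in C (my toy illustration, not Holland's formalism) of Thomas Schelling's classic segregation model, a textbook CAS in which agents with only local knowledge and mildly selfish preferences generate large-scale clustering that no agent planned:

    /* schelling.c -- a toy Complex Adaptive System.  Each agent sees
     * only its eight immediate neighbors and moves when discontented;
     * no agent (and no planner) aims at the global pattern.
     */
    #include <stdio.h>
    #include <stdlib.h>

    #define N 40                 /* the world is an N x N torus */
    #define EMPTY 0
    enum { A = 1, B = 2 };       /* two kinds of agent */

    static int grid[N][N];

    /* An agent is content if at least a third of its neighbors are
     * its own kind -- a mild preference, not an extreme one. */
    static int content(int r, int c)
    {
        int like = 0, total = 0;
        for (int dr = -1; dr <= 1; dr++)
            for (int dc = -1; dc <= 1; dc++) {
                if (dr == 0 && dc == 0)
                    continue;
                int rr = (r + dr + N) % N, cc = (c + dc + N) % N;
                if (grid[rr][cc] != EMPTY) {
                    total++;
                    if (grid[rr][cc] == grid[r][c])
                        like++;
                }
            }
        return total == 0 || (double)like / total >= 1.0 / 3.0;
    }

    int main(void)
    {
        srand(42);
        /* Fill ~90% of the world with a random mix of the two kinds. */
        for (int r = 0; r < N; r++)
            for (int c = 0; c < N; c++) {
                int roll = rand() % 10;
                grid[r][c] = (roll == 0) ? EMPTY : (roll % 2 ? A : B);
            }

        /* Each round, every discontented agent tries one random hop
         * to an empty cell: purely local, selfish decisions. */
        for (int round = 0; round < 200; round++)
            for (int r = 0; r < N; r++)
                for (int c = 0; c < N; c++) {
                    if (grid[r][c] == EMPTY || content(r, c))
                        continue;
                    int rr = rand() % N, cc = rand() % N;
                    if (grid[rr][cc] == EMPTY) {
                        grid[rr][cc] = grid[r][c];
                        grid[r][c] = EMPTY;
                    }
                }

        /* The emergent order: large single-kind neighborhoods. */
        for (int r = 0; r < N; r++) {
            for (int c = 0; c < N; c++)
                putchar(grid[r][c] == EMPTY ? '.' :
                        grid[r][c] == A ? '#' : 'o');
            putchar('\n');
        }
        return 0;
    }

Run it and the initially random grid sorts itself into large single-kind blocks: global order you will not find written down in any agent's rule, exactly the kind of organization that is not predictable from the rules of the level below.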

The distributed intelligence of CASs, in fact, is precisely why they sustain higher complexity, and cope with complexity far more capably, than planned centralized systems can. Distribution means there is no critical node, no single point of failure to be overwhelmed as the system scales up. Because the agents in such systems are constantly varying their adaptive behaviors in search of an edge on the competition, unpredictable stresses are far less likely to disrupt the CAS as a whole than they would be to blindside a planned system with planners looking in the wrong direction.

The flip side of this is simple, and it's the same lesson we learn from the elevator effect: centralization doesn't scale. Even if the environment of a growing technological or social or ecological system is miraculously simple and stable, the escalating internal complexity and communication costs of the system itself will eventually boost it into a regime where centralization fails.

In the end, nothing less will do for dealing with environments of high complexity than the distributed implicit knowledge and self-organizing chaos of markets, of ecologies — or of open-source development.

The history of the open-source revolution also reminds us that technological change does not happen in isolation. New technologies and innovations can go unrecognized and under-utilized for years if the social machinery to exploit them doesn't exist. Business models are important too. I've devoted a lot more words to what sounds like business reporting in this article than you'll see in a typical Analog fact piece, and that was on purpose — the lesson is that open-source development couldn't be practiced at a level significant to the general economy until somebody figured out how to explain its effects in market terms and other people figured out how to turn those efficiency gains into profits.

The subtler lesson is that the full use of a new technology may demand new narratives, new ways of seeing the world — and the technology itself doesn't automatically generate the narrative to go with it. Without the right enabling theory or generative myth to organize people's perceptions of otherwise isolated facts, even the most powerful set of innovations may languish in the margins of the economy for a long time. The Mayans had the wheel, but only used it for children's toys; they did real cargo hauling with drag sledges.

Hackers did open-source development as a folk practice for fifteen years before RMS tried to create a new way of seeing the world around it. The wrong explanatory myth (as, arguably, with RMS's moral crusade against intellectual property) may actually retard acceptance.

Therefore, finding the right narrative to help people understand a technology can actually be a critical factor in promoting it and shaping the future. John Campbell knew this when he encouraged the SF writers of the Forties, Fifties and Sixties to celebrate the exploration of space. I, a frustrated would-be SF author and aspiring Heinleinian generalist, rediscovered it rather by accident when I wrote a simple little anthropology paper that rocked the software industry to its foundations. I have no doubt there are other technologies out there waiting for their moment, waiting for the imaginative spark that the SF tradition can provide to liberate their full potential.

Truly it has been written that the best way to predict the future is to invent it. As science-fiction readers and writers, and especially as the proud upholders of the Astounding/Analog tradition of hard SF, it's our job to create the generative myths of tomorrow.

References

[MMM]

Brooks, Fred; The Mythical Man-Month; Addison-Wesley; ISBN 0-201-83595-9.

[CatB]

Raymond, Eric S.; The Cathedral and the Bazaar. Available on the Web at http://www.catb.org/~esr/writings/cathedral-bazaar/. Published by O'Reilly & Associates in 1999; a second edition was released in January 2001.

[NHD]

Raymond, Eric S. (ed.); The New Hacker's Dictionary (3rd Edition); MIT Press, 1996; ISBN 0-262-68092-0. Available on the Web at http://www.catb.org/~esr/jargon/.

[HH]

Raymond, Eric S.; How To Become A Hacker. Available on the Web at http://www.catb.org/~esr/faqs/hacker-howto.html.

[CL]

Stephenson, Neal; In The Beginning Was The Command Line. Available for download on the Web at http://www.cryptonomicon.com/beginning.html; there is a text version at http://www.spack.org/essays/commandline.html.