While BSD-style sockets over TCP/IP have become the dominant IPC method under Unix, there are still live controversies over the right way to partition by multiprogramming. Some obsolete methods have not yet completely died, and some techniques of questionable utility have been imported from other operating systems (often in association with graphics or GUI programming). We'll be touring some dangerous swamps here; beware the crocodiles.
Unix (born 1969) long predates TCP/IP (born 1980) and the ubiquitous networking of the 1990s and later. Anonymous pipes, redirection, and shellout have been in Unix since very early days, but the history of Unix is littered with the corpses of APIs tied to obsolescent IPC and networking models, beginning with the mx() facility that appeared in Version 6 (1976) and was dropped before Version 7 (1979).
Eventually BSD sockets won out as IPC was unified with networking. But this didn't happen until after fifteen years of experimentation that left a number of relics behind. It's useful to know about these because there are likely to be references to them in your Unix documentation that might give the misleading impression that they're still in use. These obsolete methods are described in more detail in Unix Network Programming [Stevens90].
The System V IPC facilities are message-passing facilities based on the System V shared memory facility we described earlier.
Programs that cooperate using System V IPC usually define shared protocols based on exchanging short (up to 8K) binary messages. The relevant manual pages are msgctl(2) and friends. As this style has been largely superseded by text protocols passed between sockets, we give only a minimal sketch below.
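Here is that sketch, just to show the shape of the API. The key value and message text are invented for illustration; real programs would normally derive the key with ftok(3) and handle errors less brusquely.

    /* Minimal sketch of System V message-queue IPC.  The key and
       payload are illustrative only. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/ipc.h>
    #include <sys/msg.h>

    struct demo_msg {
        long mtype;          /* message type; must be > 0 */
        char mtext[128];     /* payload; SysV caps messages at a few KB */
    };

    int main(void)
    {
        int qid = msgget((key_t)0x5eed, IPC_CREAT | 0600);
        if (qid == -1) { perror("msgget"); exit(1); }

        struct demo_msg out = { 1, "hello via SysV IPC" };
        if (msgsnd(qid, &out, strlen(out.mtext) + 1, 0) == -1)
            { perror("msgsnd"); exit(1); }

        struct demo_msg in;
        if (msgrcv(qid, &in, sizeof(in.mtext), 1, 0) == -1)
            { perror("msgrcv"); exit(1); }
        printf("received: %s\n", in.mtext);

        msgctl(qid, IPC_RMID, NULL);   /* queues persist until removed */
        return 0;
    }

Note the detail that bites many first-time users: the queue outlives the processes that use it, so it must be explicitly removed with IPC_RMID.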
The System V IPC facilities are present in Linux and other modern Unixes. However, as they are a legacy feature, they are not exercised very often. The Linux version is still known to have bugs as of mid-2003. Nobody seems to care enough to fix them.
Streams networking was invented for Unix Version 8 (1985) by Dennis Ritchie. A re-implementation called STREAMS (yes, it is all-capitals in the documentation) first became available in the 3.0 release of System V Unix (1986). The STREAMS facility provided a full-duplex interface (functionally not unlike a BSD socket, and like sockets, accessible through normal read(2) and write(2) operations after initial setup) between a user process and a specified device driver in the kernel. The device driver might be hardware such as a serial or network card, or it might be a software-only pseudodevice set up to pass data between user processes.
An interesting feature of both streams and STREAMS[76] is that it is possible to push protocol-translation modules into the kernel's processing path, so that the device the user process ‘sees’ through the full-duplex channel is actually filtered. This capability could be used, for example, to implement a line-editing protocol for a terminal device. Or one could implement protocols such as IP or TCP without wiring them directly into the kernel.
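From user space, pushing a module is a single ioctl. The sketch below assumes a System V platform that provides <stropts.h>; the device path is an example, and the classic ldterm line-discipline module may or may not be present on any given system.

    /* Sketch: pushing a STREAMS module onto an open stream.
       Assumes a System V platform with <stropts.h>. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stropts.h>

    int main(void)
    {
        int fd = open("/dev/term/a", O_RDWR);   /* example device path */
        if (fd == -1) { perror("open"); return 1; }

        /* Insert the ldterm line-editing module into the stream. */
        if (ioctl(fd, I_PUSH, "ldterm") == -1) {
            perror("I_PUSH");
            return 1;
        }

        /* Subsequent read(2)/write(2) traffic on fd now passes
           through ldterm on its way to and from the device. */
        return 0;
    }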
Streams originated as an attempt to clean up a messy feature of the kernel called ‘line disciplines’ — alternative modes of processing character streams coming from serial terminals and early local-area networks. But as serial terminals faded from view, Ethernet LANs became ubiquitous, and TCP/IP drove out other protocol stacks and migrated into Unix kernels, the extra flexibility provided by STREAMS had less and less utility. In 2003, System V Unix still supports STREAMS, as do some System V/BSD hybrids such as Digital Unix and Sun Microsystems' Solaris.
Linux and other open-source Unixes have effectively discarded STREAMS. Linux kernel modules and libraries are available from the LiS project, but (as of mid-2003) are not integrated into the stock Linux kernel.
Despite occasional exceptions such as NFS (Network File System) and the GNOME project, attempts to import CORBA, ASN.1, and other forms of remote-procedure-call interface have largely failed — these technologies have not been naturalized into the Unix culture.
There seem to be several underlying reasons for this. One is that RPC interfaces are not readily discoverable; that is, it is difficult to query these interfaces for their capabilities, and difficult to monitor them in action without building single-use tools as complex as the programs being monitored (we examined some of the reasons for this in Chapter 6). They have the same version skew problems as libraries, but those problems are harder to track because they're distributed and not generally obvious at link time.
As a related issue, interfaces that have richer type signatures also tend to be more complex, therefore more brittle. Over time, they tend to succumb to ontology creep as the inventory of types that get passed across interfaces grows steadily larger and the individual types more elaborate. Ontology creep is a problem because structs are more likely to mismatch than strings; if the ontologies of the programs on each side don't exactly match, it can be very hard to teach them to communicate at all, and fiendishly difficult to resolve bugs. The most successful RPC applications, such as the Network File System, are those in which the application domain naturally has only a few simple data types.
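A tiny illustration of why structs mismatch where strings degrade gracefully (the job_v1/job_v2 structs are invented for this example): add one field on the sender's side and every later field shifts out from under an older receiver.

    /* Hypothetical structs illustrating binary version skew.  A v1
       receiver decoding a v2 message reads 'deadline' bytes where it
       expects 'priority'.  A textual encoding such as
       "id=42 priority=3\n" lets an old reader skip unknown tokens. */
    #include <stdio.h>
    #include <stddef.h>

    struct job_v1 { int id; short priority; };
    struct job_v2 { int id; int deadline; short priority; };

    int main(void)
    {
        printf("v1 priority offset: %zu\n", offsetof(struct job_v1, priority));
        printf("v2 priority offset: %zu\n", offsetof(struct job_v2, priority));
        return 0;
    }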
The usual argument for RPC is that it permits “richer” interfaces than methods like text streams — that is, interfaces with a more elaborate and application-specific ontology of data types. But the Rule of Simplicity applies! We observed in Chapter 4 that one of the functions of interfaces is as choke points that prevent the implementation details of modules from leaking into each other. Therefore, the main argument in favor of RPC is also an argument that it increases global complexity rather than minimizing it.
With classical RPC, it's too easy to do things in a complicated and obscure way instead of keeping them simple. RPC seems to encourage the production of large, baroque, over-engineered systems with obfuscated interfaces, high global complexity, and serious version-skew and reliability problems — a perfect example of thick glue layers run amok.
Windows COM and DCOM are perhaps the archetypal examples of how bad this can get, but there are plenty of others. Apple abandoned OpenDoc, and both CORBA and the once wildly hyped Java RMI have receded from view in the Unix world as people have gained field experience with them. This may well be because these methods don't actually solve more problems than they cause.
Andrew S. Tanenbaum and Robbert van Renesse have given us a detailed analysis of the general problem in A Critique of the Remote Procedure Call Paradigm [Tanenbaum-VanRenesse], a paper which should serve as a strong cautionary note to anyone considering an architecture based on RPC.
All these problems may predict long-term difficulties for the relatively few Unix projects that use RPC. Of these projects, perhaps the best known is the GNOME desktop effort.[77] These problems also contribute to the notorious security vulnerabilities of exposing NFS servers.
Unix tradition, on the other hand, strongly favors transparent and discoverable interfaces. This is one of the forces behind the Unix culture's continuing attachment to IPC through textual protocols. It is often argued that the parsing overhead of textual protocols is a performance problem relative to binary RPCs — but RPC interfaces tend to have latency problems that are far worse, because (a) you can't readily anticipate how much data marshaling and unmarshaling a given call will involve, and (b) the RPC model tends to encourage programmers to treat network transactions as cost-free. Adding even one additional round trip to a transaction interface tends to add enough network latency to swamp any overhead from parsing or marshaling.
Even if text streams were less efficient than RPC, the performance loss would be marginal and linear, the kind better addressed by upgrading your hardware than by expending development time or adding architectural complexity. Anything you might lose in performance by using text streams, you gain back in the ability to design systems that are simpler — easier to monitor, to model, and to understand.
Today, RPC and the Unix attachment to text streams are converging in an interesting way, through protocols like XML-RPC and SOAP. These, being textual and transparent, are more palatable to Unix programmers than the ugly and heavyweight binary serialization formats they replace. While they don't solve all the more general problems pointed out by Tanenbaum and van Renesse, they do in some ways combine the advantages of both text-stream and RPC worlds.
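To see what that transparency buys, here is what an XML-RPC call looks like on the wire (the method name follows the well-known example from the XML-RPC specification):

    <?xml version="1.0"?>
    <methodCall>
      <methodName>examples.getStateName</methodName>
      <params>
        <param><value><i4>41</i4></value></param>
      </params>
    </methodCall>

A human can read this off the wire, log it, or type it into a test harness by hand — exactly the discoverability property that binary serializations lack.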
Though Unix developers have long been comfortable with computation by multiple cooperating processes, they do not have a native tradition of using threads (processes that share their entire address spaces). These are a recent import from elsewhere, and the fact that Unix programmers generally dislike them is not merely accident or historical contingency.
From a complexity-control point of view, threads are a bad substitute for lightweight processes with their own address spaces; the idea of threads is native to operating systems with expensive process-spawning and weak IPC facilities.
By definition, though daughter threads of a process typically have separate local-variable stacks, they share the same global memory. The task of managing contentions and critical regions in this shared address space is quite difficult and a fertile source of global complexity and bugs. It can be done, but as the complexity of one's locking regime rises, the chance of races and deadlocks due to unanticipated interactions rises correspondingly.
Threads are a fertile source of bugs because they can too easily know too much about each other's internal states. There is no automatic encapsulation, as there would be between processes with separate address spaces that must do explicit IPC to communicate. Thus, threaded programs suffer not just from ordinary contention problems, but from entire new categories of timing-dependent bugs that are excruciatingly difficult to even reproduce, let alone fix.
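The classic demonstration, sketched below with invented names, is two threads incrementing an unprotected global counter. The program usually prints a total well short of the expected value, because counter++ is a non-atomic read-modify-write — and whether and how badly it loses updates varies from run to run.

    /* Sketch of the shared-global hazard: two threads bump one counter
       with no lock.  Build with: cc -pthread race.c
       The result is usually less than 2 * NITER. */
    #include <pthread.h>
    #include <stdio.h>

    #define NITER 1000000
    static long counter;                 /* shared mutable global */

    static void *bump(void *arg)
    {
        (void)arg;
        for (int i = 0; i < NITER; i++)
            counter++;                   /* racy read-modify-write */
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, bump, NULL);
        pthread_create(&b, NULL, bump, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("expected %d, got %ld\n", 2 * NITER, counter);
        return 0;
    }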
Thread developers have been waking up to this problem. Recent thread implementations and standards show an increasing concern with providing thread-local storage, which is intended to limit problems arising from the shared global address space. As threading APIs move in this direction, thread programming starts to look more and more like a controlled use of shared memory.
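A minimal sketch of the POSIX thread-local-storage interface (the thread payloads here are invented for illustration): each thread stores through the same key but sees only its own value, so no locking is needed.

    /* Sketch of POSIX thread-local storage: one key, per-thread values. */
    #include <pthread.h>
    #include <stdio.h>

    static pthread_key_t key;

    static void *worker(void *name)
    {
        pthread_setspecific(key, name);  /* visible only to this thread */
        printf("this thread sees: %s\n",
               (const char *)pthread_getspecific(key));
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_key_create(&key, NULL);  /* no destructor needed here */
        pthread_create(&a, NULL, worker, "alpha");
        pthread_create(&b, NULL, worker, "beta");
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        pthread_key_delete(key);
        return 0;
    }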
To add insult to injury, threading has performance costs that erode its advantages over conventional process partitioning. While threading can get rid of some of the overhead of rapidly switching process contexts, locking shared data structures so threads won't step on each other can be just as expensive.
This problem is fundamental, and has also been a continuing issue in the design of Unix kernels for symmetric multiprocessing. As your resource-locking gets finer-grained, latency due to locking overhead can increase fast enough to swamp the gains from locking less core memory.
One final difficulty with threads is that threading standards still tend to be weak and underspecified as of mid-2003. Theoretically conforming libraries for Unix standards such as POSIX threads (1003.1c) can nevertheless exhibit alarming differences in behavior across platforms, especially with respect to signals, interactions with other IPC methods, and resource cleanup times. Windows and classic MacOS have native threading models and interrupt facilities quite different from those of Unix and will often require considerable porting effort even for simple threading cases. The upshot is that you cannot count on threaded programs to be portable.
For more discussion and a lucid contrast with event-driven programming, see Why Threads Are a Bad Idea [Ousterhout96].