Each of the three different major interface styles we have described (batch, command-line, and GUI) implies a characteristic kind of control flow in the applications that use them.
Batch programs, for example, live in a timeless world: they read from input sources and write to output sinks without having to worry about timing, synchronization, or concurrency; all those problems are the responsibility of human operators.
The basic control flow of command-line programs, on the other hand, is a request-response loop on a single device. When a Unix CLI is running on a terminal or terminal emulator, it can assume it has undisputed control of that device. Furthermore, there is only one kind of input event: an incoming keystroke. So the program can enter a loop that repeatedly waits for input of a single kind on a single device, processes it, and writes output to a single device without concerns about whether that device is available. The fact that such programs sometimes have to poll storage or network devices at odd times does not change this basic picture.
Programs with GUIs live in a more complex world. To start with, there are more kinds of input events. The obvious ones are key presses and releases, mouse-button presses and releases, and mouse movement notifications. But the context of a window system implies other kinds as well: expose events, for example, notify a program when a window needs to be drawn or redrawn because that window (or some part of it) has gone from being obscured to being visible. Further, GUIs may have more than one window open at a time, so events tied to a window cannot be simple atoms but have to include a detail field containing a window index.
What follows, then, is an overview of the X programming model. Toolkits may simplify it in various ways (say, by merging event types or abstracting away the event-reading loop), but knowing what is going on behind the toolkit calls will help you understand any toolkit API better.