Chapter 6. Reality: Debugging and Testing User Interface Designs

We come now to the part of getting good at user interface design that we predict most programmers will find hardest to accept — the debugging and testing techniques.

Sadly, most programmers have no history of debugging user interfaces with the amount of attention and respect they would give (say) network interfaces. With network protocols, you begin the job by finding out what the requirements of the other side are. If you have to deal with byte-order issues, or ASCII-EBCDIC translation, or the application has strange latency requirements, you cope. But you would never assume that a protocol you simply dreamed up in your own head after reading the requirements will work the first time; you have to actually try it out and see how it works on a real system.

In UI design the users have elaborate requirements, which we've explored in earlier chapters. To write a good UI, mere theoretical knowledge of those requirements is not enough. You cannot deploy an untested prototype and expect life to be good without tweaking in response to real-world feedback. You need to try your design out with actual users resembling your target population, and iteratively adjust it to their needs, just as you would when communicating with any other piece of quirky hardware.

In this chapter, we'll explain two methods for collecting that real-world feedback. The first — heuristic evaluation — is useful for the early stages of debugging an interface. It's good for catching single-point usability problems, places where the interface breaks the design rules presented in this book. The second — end-user testing — is the true reality check, where you will find out if you have problems that go beyond single-point failures into the entire design metaphor of the interface.

You can integrate these methods with story-based design into a four-step recipe that can be performed even by the relatively lightweight, low-budget, non-hierarchical development organizations characteristic of open-source projects.

The four steps of this recipe, which you can repeatedly cycle through as your project evolves, are:

  1. Persona-based design
  2. Heuristic evaluation by the developers
  3. Heuristic evaluation by non-developers
  4. End-user testing

We've already discussed persona-based design at the end of the Wetware chapter. By keeping the requirements of fictive but emotionally real users foregrounded during design, this technique naturally leads into involving real users later in the cycle. By keeping developers identified with real users, it helps them avoid the kinds of errors that can make end-user testing a punishing, scarifying experience for all parties involved.