If you create only one view of the requirements, you must believe it. You have no other choice. If you develop multiple views, though, you can compare them to look for disconnects that reveal errors and different interpretations. There’s an old saying, variously attributed to the Swedish Army, the Swiss Army, the Norwegian Boy Scouts, a Scottish prayer, and a Scandinavian proverb: “When the map and the terrain disagree, believe the terrain.” Unfortunately, we have no absolute “terrain” for requirements: every representation is a map! Even though you can’t tell which representation is correct, differences between them indicate problems. In this article, adapted from my book, More About Software Requirements (Microsoft Press, 2006), I’ll explain the value of creating more than one view of your requirements.
Consider the figure below. A use case presents a high-level view of requirements from the user’s perspective. It describes some goal the user needs to accomplish with the help of the system, expressed as a sequence of interactions between a user and the system that leads to an outcome of value. A primary purpose of the use case technique is to help the BA derive the functional requirements that developers must implement to let users perform a use case. These functional requirements represent a second, more detailed view. The BA might also draw graphical analysis models, diagrams that represent additional views of the requirements. The BA should be able to relate the functional requirements to elements shown in the models to make sure that these complementary views agree.
Suppose a tester were to write some test cases—thereby creating yet another view of the requirements—based on the use case. Now the BA can compare the test cases with the functional requirements and the analysis models to look for mismatches. You might find that a particular test case cannot be “executed” with the current set of functional requirements. (Note that you don’t even have to have any executable software to benefit from this sort of conceptual testing.) Or you might discover that a specific functional requirement is not covered by any test case. Such disconnects can reveal missing and unnecessary requirements, as well as missing and incorrect test cases.
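The cross-view comparison described above can be sketched as a simple traceability check. This is a minimal illustration, not a prescribed tool: the requirement and test-case IDs are invented, and each test case is assumed to list the functional requirements it exercises.

```python
# Hypothetical cross-view comparison: functional requirements vs. test
# cases. All IDs below are invented for illustration.
requirements = {"FR-1", "FR-2", "FR-3", "FR-4"}

test_cases = {
    "TC-1": {"FR-1", "FR-2"},
    "TC-2": {"FR-2", "FR-5"},   # FR-5 appears in no requirement spec
    "TC-3": {"FR-3"},
}

# Requirements no test case touches: candidates for missing test cases
# (or for unnecessary requirements).
covered = set().union(*test_cases.values())
untested = requirements - covered

# Test cases that reference requirements the spec doesn't contain:
# candidates for missing requirements (or for incorrect test cases).
unmatched = {tc: refs - requirements
             for tc, refs in test_cases.items()
             if refs - requirements}

print(untested)    # {'FR-4'}
print(unmatched)   # {'TC-2': {'FR-5'}}
```

Note that neither list says which view is wrong; each mismatch is simply a question the BA and tester must resolve together.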
Ambiguities, assumptions, or missing information in the use case description can lead the BA and the tester to different conclusions, which they can find by comparing the various views derived from the use case. Every time I’ve performed this kind of comparison between test cases, functional requirements, and other views, I have discovered errors. Fixing those errors at this stage is a whole lot easier than fixing them much later in the project.
Pictures, such as graphical analysis models, represent information at a higher level of abstraction than detailed text. It’s often helpful to step back from the trees and see the forest as a whole—the big picture—which is a valuable way to find missing and incorrect requirements. A reviewer could examine two pages of detailed functional requirements that seem to make sense, yet never detect the one requirement that isn’t there or the one that is simply wrong.
Suppose, however, you draw a picture—a model—based on your mental image of how that part of the system ought to function. Then you compare the model to the list of requirements, and you find that a line from one object in the diagram goes to a different destination than the corresponding requirement says it should. Which is correct? The BA might not be able to determine which is right, but the difference between the two views points out a problem you need to resolve.
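One way to picture that comparison: reduce the diagram to a set of labeled connections and derive the same set from the textual requirements, then diff the two. This is a hedged sketch under invented state names and an invented workflow; the point is only that the difference surfaces a question, not an answer.

```python
# Hypothetical example: a state diagram's transitions vs. the
# transitions implied by the written requirements. States and the
# workflow itself are invented for illustration.
model_edges = {
    ("Draft", "Submitted"),
    ("Submitted", "Approved"),
    ("Submitted", "Archived"),   # the diagram routes rejects to Archived...
}

spec_edges = {
    ("Draft", "Submitted"),
    ("Submitted", "Approved"),
    ("Submitted", "Rejected"),   # ...but the spec says Rejected
}

# Neither view is "the terrain" — each difference is a defect report
# against one representation or the other.
only_in_model = model_edges - spec_edges
only_in_spec = spec_edges - model_edges

print(only_in_model)  # {('Submitted', 'Archived')}
print(only_in_spec)   # {('Submitted', 'Rejected')}
```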
This high level of abstraction allows the reader to see how pieces of information fit together without getting mired in the gory details immediately. Of course, developers and testers eventually will need the details so they can do their jobs. Early in the days of structured analysis, the philosophy was that diagrams could completely replace the need for a detailed requirements specification. This simply doesn’t work. Models, prototypes, and the like are highly valuable, but they don’t contain all the information developers and testers need. Use high-abstraction models to supplement—not replace—textual specifications with an alternative communication vehicle.
Different people learn and comprehend information in different ways. Some people like to read, others prefer pictures, while still others learn best by listening or while manipulating something with their hands. To accommodate these different learning styles, try to depict information in a variety of ways. It’s not obvious how to help tactile learners examine a requirements specification, but multimedia can provide a rich communication experience. You can embed hyperlinks in a word-processing document to other types of objects:
- Sound clips that explain a particular concept.
- Video clips that illustrate how something works or how a user performs a task.
- Photographs that depict views of an object from various angles.
I see little use of hypertext in the requirements documents I review, yet hypertext is an excellent way to provide your readers with easy access to other related information. Consider incorporating links to reusable sources of common data definitions, user class and actor descriptions, glossary entries, business rules, and the like.
As with every other requirements technique, creating multiple views incurs a cost. In addition to expending effort to create the different views, the BA must keep them all current as changes are made. If the views get out of sync with each other, readers won’t know which one to believe (if any). This reduces the value of the multiple views. You might not need to update a high-level model every time you change some detailed functionality description, but you’ll probably need to revise the corresponding test cases. Don’t become a slave to the modeling, caught in the analysis paralysis trap of endlessly perfecting the pictures. In many cases, though, the cost of creating and maintaining multiple representations is more than outweighed by the insights into the requirements that different views offer and the errors they reveal early in the game.