Presenting Visual Information Responsibly

Vol. 33 No. 3 August 1999
ACM SIGGRAPH


Bad Graphs, Good Lessons



Alan J. Davis
PricewaterhouseCoopers

“They come by it honestly,” we say of progeny who seem to have inherited a distinctive talent or characteristic. So, when computer visualizations lie and confuse, at least they come by it honestly. That’s because data graphs constitute an important branch in their lineage and yet are, surprisingly often, not trustworthy.

Graphs have spawned useful design guidelines (e.g., avoid 3D for 2D data; Tufte’s shrink principle: graphs can be shrunk way down) [10]. Some of these relate to the psychology of perception (e.g., be careful when red meets blue [7]). This article, however, sets out a selection of broader lessons – observations about displays and how difficulties can arise – that can be extrapolated to various forms of visualization.

The illustrations in this article are based on real examples published in annual reports, magazines and newspapers. References to “the reader” are intended to apply to users of visualizations, regardless of the display medium.

1. Rules for honest and clear presentation will be broken, even if well known.

Deception and confusion have been long-standing concerns in the discussion of graphs. Misleading graphs were the focus of a significant portion of Darrell Huff’s 1954 classic, How to Lie with Statistics (Editor’s note: Highly recommended book!). The theme was reprised and expanded upon by Howard Wainer in “How to Display Data Badly” [11]. The best known of all books on visual display, Tufte’s Visual Display of Quantitative Information [10], also used no-holds-barred terminology in introducing the “Lie Factor” as a simple measure of graph distortion.
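Tufte’s Lie Factor reduces to simple arithmetic: the size of the effect shown in the graphic divided by the size of the effect in the data, with each effect measured as a relative change. A minimal sketch (the data values and bar heights below are hypothetical):

```python
def effect(first, last):
    """Size of an effect, measured as relative change."""
    return (last - first) / first

def lie_factor(data_first, data_last, drawn_first, drawn_last):
    """Tufte's Lie Factor: the effect shown in the graphic
    divided by the effect present in the data.  A value near
    1.0 means the drawing is faithful to the numbers."""
    return effect(drawn_first, drawn_last) / effect(data_first, data_last)

# Hypothetical bars: the data grow from 100 to 114 (14 percent),
# but the bars are drawn 10 mm and 38 mm tall (280 percent).
print(round(lie_factor(100, 114, 10, 38), 1))   # 20.0
```

Tufte treats values much outside 1.0 (beyond roughly 0.95 to 1.05) as substantial distortion.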

Sadly, long after the accessible critical scrutiny of Huff, Wainer, Tufte, et al., problems and abuses still persist. The erratic quality of graphs has been studied most systematically in the context of annual reports of large corporations. The results have been remarkably consistent, across different countries and spanning a decade. In 1992, Beattie and Jones published a study of 1989 annual reports from 240 of the 500 largest U.K. companies. They reported that 30 percent of key financial variables graphed suffered from substantial measurement distortion [1]. Earlier, Steinbart reached similar conclusions about 1986 annual reports of America’s largest companies [9]. The Canadian Institute of Chartered Accountants expressed like concerns in a less quantitative 1993 research report [5]. A second study by Beattie and Jones found similar problems in the 1991 and 1992 reports of major corporations based in the U.S., the U.K., France, Germany, the Netherlands and Australia [2]. Courtis found that although the annual reports of major Hong Kong corporations used graphs less frequently than in the countries just mentioned, problems were actually more common [6].

Distortions tend to favour the reporting entity. A comparison of U.S. annual reports from 1991-93 and from 1996-97 showed that 33 percent and 17 percent, respectively, “distorted the graphs in some way to make them appear more favorable” [8]. A third, far more extensive study, also by Beattie and Jones, told a similar story for top U.K. companies: from 1988 to 1992, between 7 percent and 13 percent of graphs materially understated quantities (visually), while 17 percent to 24 percent materially overstated them [3]. The same study also demonstrated that when corporate results are good, graphs are used more frequently and placed more prominently.

Good advice on graph design has long been readily available. Much is simply common sense. The distressing conclusion is that many simple, powerful and familiar visualizations are used with inadequate care or thought – or with an intention to deceive. Is there any reason to believe that more sophisticated computer graphics are any better?

2. Things go wrong.

From conception and design to final form, whether printed or on screen, visualizations go through many stages, and things can go wrong along the way. A case in point: my very first article about graphs critiqued a graph that had appeared in an earlier issue of the same publication. Readers must have been confused when I drew attention to a problem with the “yellow” bars, because the printer had changed them to green.

Careful checking and testing can reduce the incidence of error, but problems will persist. Friendly redundancy, such as making all square markers red and all triangular markers blue, can sometimes provide enough additional clues so that an error is not fatal, especially when the visualization is accompanied by discussion or instructions.

3. Bad visualizations reflect badly on the source.

It is easy to think that a display is just a display. But from the reader’s point of view, everything – including text and visualizations – says something about its source. In an annual report, for example, a distorted graph undercuts the credibility of the organization, and perhaps even of its auditors, who have little or nothing to do with the graph’s design. Visualizations that confuse or that are discovered to mislead make a strong statement about the presenter’s level of care, knowledge or veracity. To make matters worse, even if the distortion is apparent, the reader may be left with a false impression of the data.

To protect credibility, creators and publishers should give visualizations thorough, skeptical scrutiny.

4. A design failure that does not come to the reader’s attention may be worse than one that does.

If the reader fails to detect a deception, the presenter’s reputation may remain unscathed, but the reader suffers. Unaware of the distortion, he or she is at risk of making ill-founded decisions or forming false (and lasting) impressions. Being unaware of the problem, the reader cannot even attempt to correct the distortion. There is no moral dilemma here; clearly, the designers of the visualization must do their best to avoid hidden distortions.

5. Human intervention in computer-generated displays may be necessary.

Computer-generated graphs can usually be relied upon for accuracy, but often fail in other respects. Many problems involve labels. In a graph created by a spreadsheet, for example, labels can easily pile up on top of each other and become illegible. In any 3D visualization, one element can obscure another; important results can disappear into oblivion. Human intervention can avoid confusion.

6. Human intervention can create problems.

Graphs often start out as data-driven computer drawings, which subsequently are modified to refine particular features, such as textures, labels, shadows and so on. In the process, errors can be introduced. For example, labels can be switched or connected to the wrong element, so that a slice of a pie labeled 31 percent is thinner than one labeled 17 percent. Any intervention, even copying a graph from a drawing program into a desktop publishing program, creates the opportunity for error. The lesson is an old one worth repeating: be careful.

7. Bad visualizations can be cruel.

Correct information can be extracted from some bad visualizations, but only with much extra effort.

Figure 1 shows three cruel graphs. The first (Figure 1A) shows a steady decline in net income per share, even though it seems to be shouting “increase.” The convention that time flows towards the right is so strong that reinterpreting the graph would be a struggle, assuming the reader even takes note of the reversal of the time scale.

Labels can be an effective instrument of torture, as illustrated in Figures 1B and 1C. In the pie chart (Figure 1B), the lines that connect the names of the cities to the corresponding slices have been drawn tidily, governed by the spacing of the vertical and horizontal elements. This might look nice, but it comes at the expense of the reader, who must work hard to connect the cities with the right slices.

The third graph (Figure 1C) has silly intervals in the scale that make the data unnecessarily difficult to extract. Sometimes this sort of problem can be attributed to software that selects a preprogrammed number of intervals and labels the data accordingly, without regard for the human preference for round numbers. Even more disturbing is another common cause: designers (i.e., people) who decide that all the graphs in a document should have a fixed number of horizontal lines, regardless of what suits the numbers.
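The software failing described above is avoidable, because choosing round intervals is a solved problem. A minimal sketch of the usual approach (all function names hypothetical) rounds the raw interval up to 1, 2, or 5 times a power of ten:

```python
import math

def nice_step(raw):
    """Round a raw interval up to 1, 2, or 5 times a power of ten,
    matching the human preference for round numbers."""
    exp = math.floor(math.log10(raw))
    frac = raw / 10 ** exp          # mantissa in [1, 10)
    if frac <= 1:
        nice = 1
    elif frac <= 2:
        nice = 2
    elif frac <= 5:
        nice = 5
    else:
        nice = 10
    return nice * 10 ** exp

def round_ticks(lo, hi, max_ticks=6):
    """Tick positions at round intervals covering [lo, hi]."""
    step = nice_step((hi - lo) / (max_ticks - 1))
    t = math.floor(lo / step) * step
    ticks = []
    while t <= hi + step / 2:
        ticks.append(t)
        t += step
    return ticks

print(round_ticks(0, 37))   # [0, 10, 20, 30, 40]
```

Data spanning 0 to 37 get ticks at multiples of 10, not at sevenths of the range.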

An apt punishment for creators of cruel graphs would be to read a book with its pages bound in random order, but properly numbered. That exercise might drive home the point that readers should not be forced to exert themselves because of inconsiderate design.

8. Recording data is not the same as revealing information.

On several occasions, when I have shown a slide of a distorted bar graph, somebody in the audience will argue that as long as the numbers that each bar represents are displayed clearly, the graph is doing its job. Such comments miss the point: the job of a graph is not merely to record data (something a table often can do more efficiently). A graph should expose patterns (or lack of patterns), anomalies, relative sizes and other important characteristics at a glance. In other words, graphs should reveal information. The three cruel graphs above may do a passable job of recording data, but not of revealing information.

Revealing information is essentially “visual arithmetic” – the mental processing a reader does when looking at a graph – such as making estimates and comparisons and seeing trends, patterns and anomalies. For every graph, two key questions are: (1) does it reveal or merely record? and (2) does it help or hinder visual arithmetic? Designers of other types of visualizations should ask comparable questions to test the efficacy of their representations.

9. Visualizations should be as accurate as they appear, and appear as accurate as they are.

A thin line drawn through points that are known only roughly gives a false impression of precision. A fat line drawn through points that are known precisely gives the mistaken impression of approximation. While neither lies about the data, per se, both mislead the reader about an important characteristic: its accuracy. The form and style of a display should be consistent with the nature of the data.

10. Hiding complexity can be risky.

Although complexity can create confusion, the easy solution of hiding the complexity can be counter-productive, because it may obscure important clues. That said, messy data may provide little more than a distraction. The challenge is to walk the line between gross oversimplification and unproductive chaos. One good model for achieving this elusive balance is a regression line drawn through scattered data: the best of both worlds, allowing the reader to shift focus at will between the complex, precise data and the simplified approximation. Good visualizations often allow readers to jump easily between different levels of detail.

11. Features that appear meaningful should have meaning.

Features such as length, colour and shape can add meaning to a display [4]. Some of these same elements have a valid but different role as design features that have no information content, such as a thin green border around every page. The challenge is to ensure that meaning is not attributed to features that are purely aesthetic or decorative.

Consider the three variations on a pie chart in Figure 2, all of which attempt to display the same data. The towering slices in the multi-height design (Figure 2A) are the chart’s dominant features, but in many published examples have no meaning. The less exotic 3D pie chart (Figure 2B) has problems of its own. The thickness of the disc has no meaning, but at least is consistent for all slices. Nevertheless, the edges are invisible for the slices at the back, robbing those slices of visual weight. Other characteristics are problematic as well. Although the areas of the slices maintain their proportionality, the angles and the length of their outer edges do not. A reader could reasonably take the angles to be meaningful, but they are not.

True perspective (not illustrated) is problematic. Perspective would put the common point of the slices behind the geometric center of the ellipse. In a 3D column graph, columns near the back are shorter than those representing an equal quantity near the front. Whether the reader’s familiarity with perspective from ordinary experience is sufficient to compensate for such variations, and indeed whether isometric 3D (i.e., lacking perspective) itself creates problems is an interesting question for designers.

Flat, circular pie charts should avoid all these problems, but may suffer from others. Some, like Figure 2C, are inexplicably drawn with the slices meeting off-center. That distorts the areas and the angles. The only remaining meaning resides in the labels – not a happy fate for a graph.

Readers must be able to distinguish easily between features that have meaning and those that do not.

12. A set of optimal displays may not be an optimal set of displays.

Putting one good graph near another can create problems for the reader.

Each of the graphs in Figure 3 provides a clear picture of the unit value of a mutual fund over time. However, the scales in both dimensions are substantially different. As a result, comparisons of growth are difficult to make, because slopes are not comparable. Readers should be able to make comparisons among clusters of graphs. In visualizations, the way different displays relate to each other should be an important consideration.
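The reason slopes stop being comparable is mechanical: the slope the eye sees is the data slope filtered through each axis’s scale. A small sketch with hypothetical scale factors:

```python
def visual_slope(dy_data, dx_data, y_per_mm, x_per_mm):
    """Slope as actually drawn: rise and run in millimetres,
    after applying each axis's scale (data units per mm)."""
    rise_mm = dy_data / y_per_mm
    run_mm = dx_data / x_per_mm
    return rise_mm / run_mm

# The same growth -- 10 units over 1 year -- on two charts whose
# (hypothetical) axis scales differ:
steep = visual_slope(10, 1, y_per_mm=1, x_per_mm=0.1)
flat = visual_slope(10, 1, y_per_mm=5, x_per_mm=0.05)
print(steep, flat)   # the identical trend, drawn about ten times steeper
```

Identical data produce visibly different slopes whenever the two charts do not share both scales, which is exactly the trap in Figure 3.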

13. Unexpected changes cause chaos unless clearly signaled.

A reliable frame of reference is essential for understanding graphs, just as it is for functioning in the real world. (Consider the dizziness and nausea a person might experience in a room with walls that appear to move erratically – a challenge for virtual reality.) Deceit and confusion in graphs often can be traced to unexpected changes in scale that are not clearly signaled.

Many of the difficulties illustrated earlier stem from unexpected, unsignaled change. In Figure 1A, it is the reversal of the time axis relative to a strong convention, signaled only by the numerals in the time axis. The reader can detect the change only by more careful observation than ought to be necessary. If, for some reason, a reverse time scale were essential, nothing less than a big arrow marked “Time” and pointing to the left would be an adequate signal. In Figure 3 the surprise change is in the scale of both axes, as well as in the meaning of slopes.

Unsignaled changes are especially pernicious in two common types of graph problems: non-zero baselines and distorted scales. Figure 4 illustrates a typical non-zero baseline graph, along with the same data recast with a zero baseline. The unexpected change in the non-zero baseline graph is that proportions are not what they appear to be: 14 percent growth over two years looks more like 200 percent. With a zero baseline (right), 14 percent looks like 14 percent.
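The arithmetic behind that illusion is easy to state: the eye judges bar heights from the baseline, not from zero. A sketch with hypothetical figures (the baseline of 93 is chosen so that 14 percent growth appears as 200 percent):

```python
def apparent_growth(first, last, baseline):
    """Growth as the eye reads it when bar heights are measured
    from a non-zero baseline instead of from zero."""
    return (last - baseline) / (first - baseline) - 1

# Hypothetical figures: the data grow from 100 to 114.
first, last = 100, 114
print((last - first) / first)             # true growth: 0.14
print(apparent_growth(first, last, 93))   # with the axis cut at 93: 2.0
```

With the baseline at zero, the function collapses to the true growth rate; the higher the cut, the larger the exaggeration.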

Reasonable differences of opinion exist about whether non-zero baselines are a bad thing. Certainly, non-zero baselines can offer a clearer picture of small changes. In some cases, however, it is the absence of change that is most important. (My rule of thumb is that the higher resolution that non-zero baselines offer is not worth the loss of proportionality unless at least half the graph can be thrown away.) In any case, non-zero baselines must be clearly indicated as such, ideally not just by numerals on an axis but by more graphic means, such as altering the spacing of the bottom grid lines.

An annoying feature of many non-zero baseline graphs is that the baseline may be labeled as zero, making one important clue to the graph’s true nature explicitly misleading (as in Figure 4A). In the same vein, a peculiar fashion seems to have crept into graph design: not labelling the zero in otherwise unobjectionable zero-baseline graphs (Figure 4B). This forces the wary reader to determine whether the graph really does begin at zero after all. (The significance of zero seems to be lost on many graph designers.)

Unlike non-zero baseline graphs, which offer some benefits in certain circumstances, distorted graphs are simply misleading and confusing. The distortion in the example in Figure 5 can be detected by comparing the differences between pairs of the bars. In particular, the numeric difference between the first pair is $5, as it is between the last pair. However, the visible differences are not the same, so the graph must be distorted. In this case, the first three bars are drawn consistently, and with a zero baseline. Only the fourth bar is distorted. This distortion is an unexpected change of scale, which is entirely unsignaled. The reader must work hard, first to notice the problem, then to ignore the graph or correct it mentally.
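That pairwise check can be mechanized: on an undistorted zero-baseline graph, every bar’s drawn height is the same constant multiple of its value. A sketch with hypothetical measurements:

```python
def consistent(values, heights, tol=0.02):
    """True when drawn heights are proportional to the data,
    i.e. every bar shares (nearly) the same mm-per-unit scale."""
    ratios = [h / v for v, h in zip(values, heights)]
    return max(ratios) - min(ratios) <= tol * min(ratios)

# Hypothetical measurements: three bars share a zero-baseline
# scale of 2 mm per unit; the fourth is exaggerated.
values = [20, 25, 30, 35]
heights = [40, 50, 60, 90]          # a faithful fourth bar would be 70 mm
print(consistent(values, heights))          # False
print(consistent(values[:3], heights[:3]))  # True
```

A ruler and a calculator suffice in print; the point is that proportionality is checkable, not a matter of impression.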

In some graphs, the actual scales are distorted. This happens occasionally in the vertical axis, where close examination may show that the scale marked on the axis is not linear (and not logarithmic). More often, distortions are in horizontal time scales. As in Figure 5, the data may be plotted equally spaced, even though the points correspond to varying intervals. This makes slopes difficult to compare, and can introduce meaningless kinks into lines that really have a constant slope.

Closely allied to unexpected and unsignaled change is meaningless identity: things that look the same but represent something different. For example, the spacing of the horizontal scale in Figure 5 exhibits meaningless identity.

Visualizations should avoid unexpected changes in frames of reference. At the very least, such sudden changes should be clearly signaled.

Conclusion

At their best, familiar data displays such as bar graphs, line graphs and pie charts epitomize clarity and economy, not merely displaying data but revealing patterns and anomalies that would otherwise be difficult to detect. At their worst, graphs show how visualizations can lie and confuse. Understanding how and why can help to illuminate the problems of more sophisticated computer-based displays.

Graphs remain an important form of data visualization, having flourished for close to two centuries. If after all these years graphs are so often distorted and difficult to interpret, why should the current generation of computer graphics be any better? Indeed, with motion and other variables to contend with, it would be surprising if many did not turn out to be much worse. Fortunately, bad graphs are good teachers, and many of their lessons apply to other members of the ever-evolving family of visualizations.

References

  1.  Beattie, V. and M.J. Jones. “The Communication of Information Using Graphs in Corporate Annual Reports,” ACCA Certified Research Report 31, ACCA: 29 Lincoln’s Inn Fields, London, U.K. WC2A 3EE (executive summary), 1992.
  2.  Beattie, V. and M.J. Jones. Financial Graphs in Corporate Annual Reports - A Review of Practice in Six Countries, Institute of Chartered Accountants in England and Wales: Chartered Accountants’ Hall, P. O. Box 433, Moorgate Place, London, EC2P 2BJ, 1996.
  3.  Beattie, V. and M. J. Jones. “Graphical Reporting Choices: Communication or Manipulation,” ACCA Certified Research Report 56, ACCA: 29 Lincoln’s Inn Fields, London, U.K., WC2A 3EE, page 33, 1998.
  4.  Bertin, J. (translated by William J. Berg). Semiology of Graphics: Diagrams, Networks, Maps, Madison, WI, University of Wisconsin Press, 1983.
  5.  The Canadian Institute of Chartered Accountants, Using Ratios and Graphics in Financial Reporting, Toronto, 1993.
  6.  Courtis, J. K. “Corporate Annual Report Graphical Communication in Hong Kong: Effective or Misleading,” The Journal of Business Communication, July, v.34 No. 3, pp. 269-288, 1997.
  7.  Kosslyn, Stephen M. Elements of Graph Design, W.H. Freeman and Company, New York, NY, U.S.A., 1994.
  8.  Louwers, T.J., M.K. Pitman and R.R. Radtke. “Please Pass the Salt: A Look at Creative Reporting in Annual Reports,” Today’s CPA, May 1, 1999, pp. 20-23.
  9.  Steinbart, P.J. “The Auditor’s Responsibility for the Accuracy of Graphs in Annual Reports: Some Evidence of the Need for Additional Guidelines,” Accounting Horizons, September, pp. 60-70, 1989.
  10.  Tufte, Edward R. The Visual Display of Quantitative Information, Cheshire, CT, Graphics Press, 1983.
  11.  Wainer, H. “How to Display Data Badly,” The American Statistician, v. 38, pp. 137-147, 1984. The article (slightly updated) appears in Visual Revelations — Graphical Tales of Fate and Deception from Napoleon Bonaparte to Ross Perot, Copernicus (Springer-Verlag), New York, NY, 1997.



Alan J. Davis is in charge of tax communications for PricewaterhouseCoopers LLP in Toronto, Canada. He has written and spoken extensively about the design and use of graphs, and is an Associate Editor of Information Design Journal. Davis has a law degree and an M.B.A.

Alan J. Davis
PricewaterhouseCoopers LLP
Toronto, Canada


The copyright of articles and images printed remains with the author unless otherwise indicated.