VizFix Critique Approach and Rubric

Through case studies, this site will examine where visualizations went astray, what it takes to correct them, and how to avoid those mistakes.  Each case will be captured in an exhaustive long-form blog post.  Each post will begin with a critique.  The rubric for the critique is based on the critique framework laid out in the course materials.

  1. Overview — What the visualization is about: a brief overview of the data, objective, and techniques of the visualization
  2. Explanation of data: What kinds of datasets are used in the visualization? Is it a time series, categorical, geospatial, network, or something else? How many dimensions does the visualization use? How credible is the dataset?
  3. Explanation of visualization techniques: What kinds of visualization methods are used? Does it use a histogram? A scatterplot?
  4. Effectiveness of the visualization: Does the visualization achieve its objective well? Which methods do or do not work? Why? Are there better ways to visualize the same information?
  5. Integrity of the visualization: Does it distort data or exploit perceptual biases to give a wrong impression? Are there any biases that could be corrected by visualizing the data in other ways?
  6. Design: How well or badly is it designed? Why? Is it engaging? Can it be improved?
  7. Interest: Is it interesting? Does it draw the attention of the reader? Is it only interesting to a specific audience, or will mass media pick up on it quickly?

Those topics are a bit broad, so I will ask the following questions (and perhaps new ones as they arise) in order to pin down specific points.

  • Are the axes properly marked?
  • Are things arranged such that they mislead?
  • How does the positioning compare to Gestalt principles?
  • Does the choice of colors cause perception issues?
  • Does the use of light make the colors harder to perceive properly?
  • If there is a gradient, does it lend itself to being perceived properly?
  • Are labels visible?
  • Are labels consistently applied?
  • Would color-blindness lead to misperceptions or difficulty understanding it?
  • Does the visualization print well in grayscale?
  • Can things be removed to simplify the graphic?
  • How many dimensions are used? How many are actually needed?
  • For histograms, are bins used properly?
  • Is the appropriate scale used? (linear v. log)
  • Is the data properly interpolated, extrapolated, averaged, etc.?
  • If the data are high-dimensional, are they properly represented?
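To make the bin and scale questions above concrete, here is a minimal sketch (assuming NumPy and an invented skewed dataset, not data from any actual visualization) of how linear versus log-spaced binning changes what a histogram shows:

```python
import numpy as np

# Hypothetical skewed sample standing in for real-world data
# (e.g. incomes or response times).
rng = np.random.default_rng(42)
data = rng.lognormal(mean=0.0, sigma=1.0, size=1000)

# Linear bins over a skewed distribution pile most observations
# into the first bin or two, hiding the shape of the data.
counts_linear, _ = np.histogram(data, bins=10)

# Log-spaced bins give each order of magnitude equal visual weight,
# spreading the same data across the axis.
# (The small padding keeps the min/max safely inside the edges.)
log_edges = np.logspace(np.log10(data.min() * 0.99),
                        np.log10(data.max() * 1.01), 11)
counts_log, _ = np.histogram(data, bins=log_edges)

# The tallest linear bin dwarfs the tallest log bin -- exactly the
# kind of distortion the scale question probes for.
print(counts_linear.max(), counts_log.max())
```

The same kind of check works for the bin-count question: compare a coarse `bins=3` against a rule of thumb like `bins="fd"` (Freedman–Diaconis) and see how much structure the coarse bins erase.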

Note that many of these questions cut across the seven general categories noted above.  This list is not meant to be exhaustive, and it will be updated as new questions come up that should be used.

Once the image is critiqued, I will create a new version.  If done properly, the systematic approach to the critique should yield actionable items to address in the revised version of the image.  This will be an iterative process, with multiple versions shared in each blog post.  A ‘best’ improvement will be selected and compared side by side with the original image.  Finally, each article will contain a summary paragraph on the lessons learned from rehabbing that visualization, along with advice on avoiding the same trap in the future.

Please feel free to comment below on how you think I might improve my rubric for the critiques.

Okay, let’s get to work.



Featured image courtesy of Bruno Boutot.  Used under a CC BY-NC-SA 2.0 license.