Updating a Visualization of Likert Scale Results

Nick Walsh
Insightful Software
Nov 13, 2018


Last month, we launched the 2018 edition of The State of Theology alongside Ligonier Ministries. The microsite, updated every two years, allows users to examine survey data centered around religious beliefs. Each response is paired with demographic data, opening the door for an exploratory visualization (aptly named Data Explorer).

Having also built the 2016 edition, this year’s run offered a chance to iterate, introduce features, and revisit some assumptions. Visualizing data requires a series of choices (tradeoffs may be the better word) to present the results appropriately, which we’ll explore here in a bit more detail.

2016 Approach

Each version of the survey uses a five-point Likert scale — participants are given a statement, then asked to indicate whether they:

  • Strongly Agree
  • Somewhat Agree
  • Neither Agree nor Disagree
  • Somewhat Disagree
  • Strongly Disagree

Following the lead of the accompanying whitepaper, our first take emphasized the relative areas of the two extremes: agree and disagree. A tweaked area chart (extending out from a vertically centered axis) was picked as the clearest representation.
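
For reference, a centered layout like this can be sketched with d3.area by mirroring the two extremes around a midline. This is a minimal illustration rather than the production chart: the data shape (one agree/disagree pair per point), scales, smoothing, and colors are all assumptions.

```js
import * as d3 from "d3";

const width = 600;
const height = 200;
const center = height / 2;

// Assumed shape: one agree/disagree pair (in percent) per point along the axis.
const data = [
  { agree: 52, disagree: 32 },
  { agree: 61, disagree: 24 },
  { agree: 35, disagree: 48 },
];

const x = d3.scalePoint().domain(d3.range(data.length)).range([0, width]);
const y = d3.scaleLinear().domain([0, 100]).range([0, center]);

// Agreement extends upward from the centered axis...
const agreeArea = d3.area()
  .x((d, i) => x(i))
  .y0(center)
  .y1(d => center - y(d.agree))
  .curve(d3.curveBasis); // the smoothing mentioned in the pain points below

// ...and disagreement extends downward.
const disagreeArea = d3.area()
  .x((d, i) => x(i))
  .y0(center)
  .y1(d => center + y(d.disagree))
  .curve(d3.curveBasis);

// Assumes an <svg> element already exists on the page.
const svg = d3.select("svg").attr("width", width).attr("height", height);
svg.append("path").datum(data).attr("d", agreeArea).attr("fill", "#7fb3d5");
svg.append("path").datum(data).attr("d", disagreeArea).attr("fill", "#f1948a");
```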

The Good

The approach passed the at-a-glance test: relative weightings of each response were clear, and the design itself was attractive. Demographic filters were simple to mix-and-match, and the resulting subset was easy to compare to the full population.

Pain Points

We sought to avoid "different for different's sake," but there were a few areas to improve on:

  • The actual percentage for each response required an interaction to reveal, which felt like overkill for 5–10 numbers.
  • Each area was smoothed via D3 interpolation, reducing the accuracy a bit.
  • Filtering demographic data didn’t clearly visualize the subset size compared to the full population.
  • The Data Explorer was mostly unusable at narrow screen widths.

Things to Explore

In addition to iterating on previous decisions, we added a new goal: using animation as a source of information. Individual responses are great on their own, but we realized that a filtered group’s movement from question to question could be surfaced as well.

Plots, Sans Scatter

We tried a few options, new and old — revisiting the cutting room floor from 2016 didn’t yield any additional insights. Try as we might, radar charts are pretty much never the answer.

Attempting to bin D3 force-directed bars

Bar charts did the trick (and made it into other portions of the microsite), but lacked some intrigue in showing movement of groups between statements.

Binned Beeswarm

After some exploration via CodePen and D3, we landed on a beeswarm-esque visual that bins individuals into a bar for each answer (a rough sketch of the layout follows the list below). Benefits include:

  • Bar chart-style simplicity in comparing columns.
  • The ability to see how filtered demographics compare to the total population (both in proportion and size).
  • The means to follow respondents from question to question.
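
At its core, the layout just assigns each respondent a slot in the column for their answer, stacking dots into a bar. The sketch below is illustrative only; the data shape, sizes, and names are assumptions rather than the production code.

```js
const answers = [
  "Strongly Agree",
  "Somewhat Agree",
  "Neither Agree nor Disagree",
  "Somewhat Disagree",
  "Strongly Disagree",
];

const dotRadius = 3;
const dotsPerRow = 20;   // how many dots wide each bar is
const columnWidth = 140; // horizontal space reserved per answer
const baseline = 400;    // bars grow upward from this y position

// Assumed input: [{ id, answer, ...demographics }, ...]
function layout(respondents) {
  const counts = new Map(answers.map(a => [a, 0]));

  return respondents.map(r => {
    const column = answers.indexOf(r.answer);
    const index = counts.get(r.answer); // this respondent's slot within the bar
    counts.set(r.answer, index + 1);

    const row = Math.floor(index / dotsPerRow);
    const slot = index % dotsPerRow;

    return {
      ...r,
      x: column * columnWidth + slot * dotRadius * 2,
      y: baseline - row * dotRadius * 2,
    };
  });
}
```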

Implementation

Putting our choice into production yielded another set of choices. 3,000 data points in motion was just weighty enough to need performance tuning.

The first build continued with the D3 and SVG stack of 2016. It worked, but switching between questions introduced a lot of jank. Paired with the usual headaches of making D3 and React coexist, that was reason enough to scrap this approach early.
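
For context, the usual truce is to let React own a wrapper node and hand everything inside it to D3 through a ref, along the lines of the sketch below. The component name, props, and transition are illustrative only, not our production code.

```js
import React, { useEffect, useRef } from "react";
import * as d3 from "d3";

function Swarm({ respondents }) {
  const ref = useRef(null);

  useEffect(() => {
    // React renders the <svg>; D3 owns everything inside it.
    const svg = d3.select(ref.current);
    const circles = svg.selectAll("circle").data(respondents, d => d.id);

    circles.exit().remove();
    circles.enter().append("circle")
      .attr("r", 3)
      .merge(circles)
      .transition()
      .attr("cx", d => d.x)
      .attr("cy", d => d.y);
  }, [respondents]);

  return <svg ref={ref} width={600} height={400} />;
}
```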

Using D3 to handle data joins in memory and render the result to canvas worked much better… until we tried handling the positioning ourselves and saw 4–5x improvements in JavaScript processing time. Since the math was relatively simple, we stuck with the from-scratch approach.

Take two, manually handling the data and math
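
A stripped-down version of that approach (names, easing, and sizes are illustrative, not the production code): keep each point's start and target coordinates in plain objects and tween them straight to canvas on every frame.

```js
const canvas = document.querySelector("canvas");
const ctx = canvas.getContext("2d");

// Assumed input: [{ fromX, fromY, toX, toY, color }, ...]
function animate(points, duration = 800) {
  const start = performance.now();

  function frame(now) {
    const t = Math.min((now - start) / duration, 1);
    const ease = t * (2 - t); // simple ease-out

    ctx.clearRect(0, 0, canvas.width, canvas.height);
    for (const p of points) {
      const x = p.fromX + (p.toX - p.fromX) * ease;
      const y = p.fromY + (p.toY - p.fromY) * ease;
      ctx.beginPath();
      ctx.arc(x, y, 3, 0, Math.PI * 2);
      ctx.fillStyle = p.color;
      ctx.fill();
    }

    if (t < 1) requestAnimationFrame(frame);
  }

  requestAnimationFrame(frame);
}
```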

Lessons

As mentioned at the outset, choices in visualization don’t really boil down to right or wrong. The exploration phase serves as an opportunity to figure out which features of your data should be highlighted.

With a new set of goals in mind, we were able to revisit past decisions around rating scales and shift what held the spotlight.

The same sorts of tradeoffs are in play during implementation, too: we spent some extra time on manual positioning math, but it offered a solid performance boost.

The most important lesson, though, is that it’s really tricky to make decently-sized GIFs for demonstrating moving particles in an article.

To see it in action, visit The State of Theology Data Explorer.
