
How Does This Data Make Me Feel?

Written by John Heintz | Mar 5, 2024 2:48:10 AM


Usability in AI: Psychology Matters

The related blog post “Video: How Does This Data Make Me Feel?” has a video presentation of this content, along with the slides.

This is the story of how our AI startup unblocked product user adoption by incorporating usability and psychology principles.  Users went from always needing our help to independently using the product.  This allowed us to focus on innovation as opposed to training.  Along the way, we learned two important questions that can help everyone building visualizations that help people take action.

Our users were stuck.  They were interested in the insights our product offered, but they couldn’t understand them without our help.  We tried training and explaining how to use the results to make decisions.  It didn’t work–no amount of our time would enable them to use the product independently.  Back to the drawing board!  

We started by researching several psychology references and principles.  We then asked two key questions relevant for data, AI, and dashboards:

  1. How does this data make me feel–good or bad?  
  2. What default action are we nudging users to choose?

These two questions led to simple changes that allowed users to independently understand and manage their risks and focus on removing them.

Context

We were building a product that forecast when projects, and their tasks and milestones, would really finish.  Our AI algorithm generated large amounts of probabilistic data and visualizations.  We had cool math that transformed small updates into sophisticated data, but it needed an expert to interpret it.  Our users needed to make better and faster decisions from these project insights without our help.
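The post doesn’t show the forecasting math, but one common way to produce this kind of probabilistic completion forecast is Monte Carlo simulation over task-level estimates.  A minimal Python sketch, assuming three sequential tasks with hypothetical best/likely/worst estimates (this is an illustration, not the product’s actual algorithm):

```python
# Sketch only: simulate project completion times from per-task estimates.
# The task numbers are hypothetical, not the product's real data.
import numpy as np

rng = np.random.default_rng(42)

# (best, most likely, worst) duration estimates in weeks for sequential tasks
tasks = [(1, 2, 4), (2, 3, 6), (2, 4, 8)]

n_sims = 10_000
completion = np.zeros(n_sims)
for best, likely, worst in tasks:
    completion += rng.triangular(best, likely, worst, size=n_sims)

# Each sample is one possible project finish time; a histogram of
# `completion` is a likelihood distribution like the one in Figure 1.
```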

Our Initial Visualizations

Figure 1: Likelihood Distribution for Project Completion

Our first attempt to visualize the data had value:

  • We can show that, right now, the project is most likely to finish in 8 or 9 weeks.
  • There is 68% confidence that the project will finish before the 14-week deadline.
  • There is a 32% risk that the project will be late and miss the deadline.

Is this good or bad?  It depends on the context:

  • If this is an agile feature being deployed in a SaaS company, then 68% is pretty good confidence.
  • If this is a deadline for a regulatory submission, a trade show, or a holiday, then 32% risk is terrible.
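Either way, those numbers come straight off the forecast distribution.  Here is a sketch of reading them, using stand-in samples tuned to roughly echo the 68/32 split above (not the product’s real data):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for the product's forecast: simulated completion
# times in weeks, roughly centered on weeks 8-9 as in Figure 1.
completion = rng.triangular(5, 8.5, 23.5, size=100_000)

deadline = 14  # weeks
confidence = (completion <= deadline).mean()  # ~0.68: finish by the deadline
risk = 1 - confidence                         # ~0.32: miss the deadline
mode_week = np.bincount(completion.astype(int)).argmax()  # most likely week

print(f"most likely finish: week {mode_week}; "
      f"confidence by week {deadline}: {confidence:.0%}; risk: {risk:.0%}")
```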

We didn’t just have one of these visualizations, but many.  Each task, each milestone, and the overall project had a likelihood diagram for both its starting forecast and its finishing forecast.  We regenerated them all at least once a week.

Figure 2: Likelihood distributions

This was information overload for our customers.  They needed easy decision making at a glance, powered by our complex data and models.

We considered ideas from “Thinking, Fast and Slow” by Daniel Kahneman and the psychology of fast, automatic responses versus slow, higher-order cognitive functioning.  We realized that we were asking our users to always apply slow thinking and re-evaluate the context in order to draw any conclusions from our data.  We needed to be able to tap into their fast and intuitive thinking capabilities.  This would unlock the benefits of machines and humans working together.

We began our journey with an initial Risk Burndown visualization.  

Our First Risk Burndown

Figure 3: Risk starts high and goes down.  From Figure 1, we plot the 32% risk at Iteration 3, above the target line (the risk threshold for right now).  This is an improvement over a series of likelihood distributions: it illustrates what the risk ought to be and builds a timeline showing the entire context.

This pulled all the context and timeline into a single visualization that was much more useful for our users.  However, there was still a problem.  When risk decreases, there is a “green in green” effect that is confusing:

Figure 4: When risk is lower than burndown line then “green in green”

How does this visualization make us feel? (Not good!)

We knew there must be a better way, but we couldn’t get there alone. We contracted The Graphic Standard to help.  Here’s how they solved it:

Figure 5: The UX recommendation showing only deviations from the target line

Like many innovations, in hindsight it’s obvious!  The best result was our users’ feedback.  Not only did training now take less than half an hour, users also said the visualization was immediately intuitive and sparked the right emotional response at a glance:

  • Red is bad. More red is worse.
  • Green is good. More green is better.
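To make the red/green idea concrete, here is a minimal matplotlib sketch in the spirit of Figure 5.  The weekly risk values and straight-line target are made up; the product’s actual rendering surely differed:

```python
import numpy as np
import matplotlib.pyplot as plt

weeks = np.arange(1, 13)
target = np.linspace(60, 0, len(weeks))  # risk should burn down to zero
risk = np.array([62, 55, 58, 40, 30, 35, 20, 24, 12, 8, 6, 0])  # hypothetical

fig, ax = plt.subplots()
ax.plot(weeks, target, color="gray", label="target burndown")
# Red where actual risk is above the target (bad), green where below (good):
ax.fill_between(weeks, target, risk, where=risk > target, interpolate=True,
                color="red", alpha=0.6, label="above target")
ax.fill_between(weeks, target, risk, where=risk <= target, interpolate=True,
                color="green", alpha=0.6, label="below target")
ax.set_xlabel("Week")
ax.set_ylabel("Risk of missing the deadline (%)")
ax.legend()
plt.show()
```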

This is what the product evolved to look like, here with a few helpful annotations. Even without those words, the meaning is still clear.

Figure 6: The usable Risk Burndown

What should our users do? The “nudge”

Users still needed our help to decide what to do with these insights.  So, we implemented a “nudge.”  The product suggested the most likely action by default and gave users the freedom to use their judgment to choose another.  This allowed humans to be in the loop with AI. 

The book “Nudge: Improving Decisions About Health, Wealth, and Happiness” by Richard Thaler and Cass Sunstein inspired this change.  The psychology: people have a strong tendency to accept the default option, so a well-chosen default, the nudge, guides behavior without forbidding anyone from choosing another option.

The bottom of Figure 5 shows the initial default actions:

Figure 7: The recommended actions to take, the “nudge”

This evolved to become a sorted list of “what’s causing the most risk”. When the risk spiked red above the line, the algorithm would prioritize the most important issues to address.

Risk of Missing Milestone 3 Delivery:

  1. (9 days risk) Task 3 Handoff from Team Y
  2. (4 days risk) Team L Velocity Slower Than Expected
  3. (2 days risk) Task 5 Estimate Uncertainty

The default next action was for users to talk to Team Y about Task 3.  The product handled the slow thinking: running the numbers and predicting the best course of action.  That didn’t guarantee the risks would materialize, though.  Users might have additional information, could choose a different action, and could still use their own judgment.
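Mechanically, a nudge like this can be as simple as ranking risk drivers by impact and surfacing the biggest one as the suggested default.  A sketch using the example numbers above; the RiskDriver type and default_action helper are our own illustration, not the product’s API:

```python
from dataclasses import dataclass

@dataclass
class RiskDriver:
    description: str
    days_at_risk: float  # expected schedule slip attributable to this driver

def default_action(drivers: list[RiskDriver]) -> str:
    """Suggest tackling the highest-impact driver first (the default option)."""
    top = max(drivers, key=lambda d: d.days_at_risk)
    return f"Suggested next step: address '{top.description}' ({top.days_at_risk:g} days risk)"

drivers = [
    RiskDriver("Task 3 Handoff from Team Y", 9),
    RiskDriver("Team L Velocity Slower Than Expected", 4),
    RiskDriver("Task 5 Estimate Uncertainty", 2),
]
print(default_action(drivers))
# -> Suggested next step: address 'Task 3 Handoff from Team Y' (9 days risk)
```

The default is the first option, not the only one; the human always keeps the final say.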

We had set the defaults and visualizations so that their fast, intuitive thinking could take the action leading to the best outcome.  Just as important, they felt confident and did it without our help.

Conclusion

Providing intuitive visualizations and actionable insights from comprehensive data is challenging.  Our success came from combining the power of an AI algorithm, psychological principles, and intuitive visual design, while embracing human decision making.

We love creating insightful and intuitive AI and data systems.  If you’d like help, just ask us.