Building Trust in AI: How to Get Buy-In for the Black Box


Maybe it’s something innate to human nature, or maybe we’ve all seen one too many sci-fi movies (I’m looking at you, HAL from 2001: A Space Odyssey), but people tend to view new technology skeptically. This is especially true when it comes to technology that makes recommendations or tells us how to do something. A prime example is GPS: we all depend on it to get us to and from our destinations now, but when it first came on the market, many people still preferred hard-copy maps to digital ones.

We’re now entering a world where AI will play an increasingly large and important role in our lives. Yet when it comes to the enterprise, many people at all levels of organizations do not trust AI-generated insights. If you’re someone tasked with getting buy-in across a company to motivate adoption of AI-based tools, the fear of the unknown and mistrust of AI can be a huge impediment.

Based on my experience, overcoming skepticism of AI is a two-step process: users first need to understand how AI works so that they can then learn to trust it. Mistrust arises when the trip from the data to the AI’s recommendation is more complex than people can immediately follow on their own. Helping people comprehend how the AI made that trip is how you establish trust in the tool.

Cultivate Understanding …

Helping users understand how AI works is the crucial first step in the process of adoption. The best way to increase understanding is to give people insight into the important variables and trends behind the outputs your AI tool is targeting.

We all remember our elementary school teachers telling us to show our work when solving algebra problems. Well, the same holds when trying to develop an understanding of AI: showing is more powerful than telling.

There are four main ways you can do this:

  • Change variables used by the algorithm: No human is going to understand an AI algorithm completely. But to improve understanding, you can show how the algorithm’s outputs are sensitive to changes in certain variables. For instance, if you remove customer income as a factor in detecting fraudulent account activity, the difference in results helps users see which variables the AI tool is using to make its recommendations (a minimal sensitivity sketch follows this list).
  • Change the algorithm itself: Within an algorithm, you have a complex network of many nodes. If you remove a layer of nodes and then assess the impact on the output, you gain insight into how it works. You’re essentially performing a sensitivity analysis. For instance, if you change the threshold for a certain variable from 4.5 to 7.5 and the output changes significantly, you know that variable played a big role in the outcome. Or, to put it more metaphorically, think of an algorithm as a machine with a bunch of knobs. If each knob has a range of intensity, you can alter those parts of the algorithm to show users how this changes the outcomes. If you turn one knob up to 11 and another down to 1, you can tell from the results which variables mattered most to the AI tool in making its determinations.
  • Build global surrogate models: Surrogate models are built in parallel to an AI algorithm, but are simpler and easier to understand. These could be a decision tree or a linear regression that mimics the more complex AI network. The results will never align 100%, but if the results from the surrogate model strongly echo those from the AI tool, users will understand some of the steps involved in the AI process (see the surrogate sketch after this list).
  • Build LIME models: LIME, or Local Interpretable Model-agnostic Explanations, produces surrogate models that are localized. Instead of trying to replicate the entire model, with LIME you generate synthetic samples around a particular event and then fit a simple linear model just for that event’s neighborhood. From this, you get an understanding of which features drive the classification around that one event (see the LIME sketch after this list).
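To make the first two methods concrete, here is a minimal sketch in Python using scikit-learn on synthetic data. The feature names, the fraud labels, and the choice of neutralizing “income” by replacing it with its mean are illustrative assumptions, not a prescription for any particular tool.

```python
# A minimal "change a variable" sketch on synthetic data.
# The "income" feature and fraud labels are invented for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
feature_names = ["income", "tx_amount", "tx_hour",      # hypothetical names
                 "account_age", "num_devices", "avg_balance"]

model = GradientBoostingClassifier(random_state=0).fit(X, y)
baseline = model.predict_proba(X)[:, 1]

# "Remove" the income feature by replacing it with its mean, then measure
# how far the fraud scores move. A large shift tells users the model leans
# heavily on that variable.
idx = feature_names.index("income")
X_no_income = X.copy()
X_no_income[:, idx] = X[:, idx].mean()
shifted = model.predict_proba(X_no_income)[:, 1]

print("Mean absolute change in fraud score:",
      np.abs(baseline - shifted).mean())
```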
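A global surrogate can be sketched the same way: train a shallow decision tree on the black-box model’s own predictions and report how faithfully it mimics them. The data and model choices below are, again, stand-ins for illustration.

```python
# A minimal global-surrogate sketch: a shallow tree mimics a complex model.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
black_box_preds = black_box.predict(X)  # surrogate learns these, not the true labels

surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, black_box_preds)

# Fidelity: how often the simple tree reproduces the black box's decision.
fidelity = accuracy_score(black_box_preds, surrogate.predict(X))
print(f"Surrogate fidelity: {fidelity:.2%}")
print(export_text(surrogate))  # human-readable rules users can inspect
```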
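And a local explanation along the lines of the fourth method might look like the following, assuming the open-source `lime` package (pip install lime) and the same kind of synthetic stand-in data; in practice you would point the explainer at your own fraud model and a flagged transaction.

```python
# A minimal LIME sketch using the open-source `lime` package.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["legit", "fraud"],
    mode="classification")

# Explain a single flagged event: LIME perturbs samples around this row and
# fits a local linear model to show which features drove this one decision.
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
for feature, weight in explanation.as_list():
    print(feature, round(weight, 3))
```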

…and then Establish Trust

After using one, or a combination, of these four methods to help users better understand how the AI model works, you have the foundation to establish the trust that’s crucial to getting people to actually use the tool. There are three ways I’ve found that enable this trust to be built most effectively:

  • Detect events and trends that conform to human expectations: The old adage of “I don’t believe it until I see it with my own eyes” applies to AI. If you can run the AI on events and trends that are part of users’ domain knowledge and context, and it produces results that confirm their expectations, you show that the AI model can be trusted. For example, if the AI model flags a transaction as fraud and, when asked, a fraud detection expert confirms that it’s a plausible fraud event, you help to promote buy-in.
  • Event and non-event cases should use different criteria: When a human is trying to detect fraud, their brain goes through different processes when examining a case that looks like fraud and one that doesn’t. Taking the same intuitive approach with AI, in which the tool can show why one event triggers a fraud alert and another doesn’t, helps users see that the AI operates in a way that is familiar and trustworthy.
  • Detected outcomes remain consistent: Scientific inquiry is predicated on the idea that for results to be meaningful, they must remain consistent over time and be replicable. The same is true for AI. When a possible fraud event is run through an AI model, it should be flagged consistently each time; stability is key to establishing trust. Companies can build user interfaces that help bring the backend of the AI tool into the daylight and illustrate to users what is occurring (a simple consistency check is sketched below).
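As a minimal sketch of that last point, you can score the same flagged transaction repeatedly and confirm the result does not drift. The model and data here are hypothetical stand-ins for a deployed fraud model, used only to show the shape of the check.

```python
# A minimal consistency check: score one suspect transaction several times
# and confirm the fraud score does not drift between runs.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

suspect_transaction = X[:1]  # a single event users care about
scores = [model.predict_proba(suspect_transaction)[0, 1] for _ in range(10)]

# A trained model should return the identical score every time; surfacing
# this stability in the UI is part of building trust.
print("Scores:", np.round(scores, 4))
assert max(scores) - min(scores) < 1e-9, "Fraud score is not stable across runs"
```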

At the end of the day, if you want to make AI less of a black box, you must first help users understand how the AI tool works so that they can then trust the results. It’s natural to view any new technology with a skeptical eye, but because the insights generated by AI can so dramatically reshape how a company operates, companies need to do everything they can to ensure buy-in.


Chanchal Chatterjee, Ph.D., has had several leadership roles focusing on machine learning, deep learning and real-time analytics. As Senior Director of AI Solutions at Teradata, he is leading AI solutions for several market verticals. His AI focus is on financial fraud, preventive maintenance, recommender systems, smart manufacturing, and financial credits. Previously, he was the Chief Architect of Dell EMC at the Office of the CTO, where he helped design end-to-end deep learning and machine learning solutions for smart buildings and smart manufacturing for leading customers. He has also worked on service assurance solutions for NFV data centers and spearheaded deployments with large service providers in Europe and the Middle East. He has also been instrumental in the Industrial Internet Consortium, where he published a smart manufacturing analytics framework for large enterprise customers.
