To Build Trust in AI, Focus on These Three Questions
December 4, 2020
Before the term “COVID-19” entered our lexicon, I was speaking with the Chief Information Officer of a large financial institution about the future of artificial intelligence. “I have hundreds of AI experiments underway,” he lamented, “but only a handful of models in production.”
Question 1: Is My AI Fair?
To ensure fair AI, we must make certain that the data the models are trained on is fair, and that the models themselves are designed to detect and mitigate bias as new data is introduced.
Historical data can carry the imprint of past discrimination; for example, hiring records may favor one group over another, and a model trained on them will learn that preference. So when preparing their historical hiring data, companies will need to remove this bias or adjust the model parameters, then continually watch for "model drift," the point at which new biases may be introduced as incoming data shifts.
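One common fairness screen is the "four-fifths rule": no group's selection rate should fall below 80 percent of the highest group's rate. Here is a minimal sketch of that check on hiring data; the record fields (`group`, `hired`) and the 0.8 threshold are illustrative assumptions, not a standard API.

```python
def selection_rates(records):
    """Compute the hire rate for each group in the data."""
    totals, hires = {}, {}
    for r in records:
        g = r["group"]
        totals[g] = totals.get(g, 0) + 1
        hires[g] = hires.get(g, 0) + (1 if r["hired"] else 0)
    return {g: hires[g] / totals[g] for g in totals}

def four_fifths_check(records, threshold=0.8):
    """Flag any group whose hire rate falls below `threshold`
    times the highest group's rate."""
    rates = selection_rates(records)
    best = max(rates.values())
    return {g: rate / best >= threshold for g, rate in rates.items()}

# Illustrative data: group A hires 2 of 3, group B hires 1 of 3.
data = [
    {"group": "A", "hired": True}, {"group": "A", "hired": True},
    {"group": "A", "hired": False}, {"group": "B", "hired": True},
    {"group": "B", "hired": False}, {"group": "B", "hired": False},
]
```

Running the same check periodically against newly collected data is one simple way to watch for the model drift described above: a group that passed at launch but fails months later signals that bias is creeping back in.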
Question 2: Is My AI Explainable?
If we can’t explain why AI is making certain decisions, fears of a “black box” of mysterious algorithms can make it impossible to engender trust. The stakes are highest when AI uses highly sensitive data to make decisions that significantly impact people’s lives. In highly regulated industries, explainability is also important for auditing and regulatory compliance.
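The simplest route to explainability is a model whose decisions decompose into per-feature contributions that an auditor can read directly. The sketch below assumes a hypothetical linear credit-scoring model; the feature names and weights are invented for illustration.

```python
# Illustrative weights for a transparent linear scoring model.
WEIGHTS = {"income": 0.4, "debt_ratio": -0.6, "years_employed": 0.2}
BIAS = 0.1

def score(applicant):
    """Score an applicant as a weighted sum of their features."""
    return BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant):
    """Return each feature's signed contribution to the score,
    sorted by absolute impact: an audit-friendly decision trace."""
    contribs = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sorted(contribs.items(), key=lambda kv: abs(kv[1]), reverse=True)
```

For complex models that cannot be decomposed this directly, post-hoc techniques such as permutation importance or SHAP values play a similar role, producing the kind of trace regulators and auditors can review.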
Question 3: Is My AI Protected?
Defending AI systems from malicious attacks is more complicated now than ever—and crucial to ensuring trust. Companies must develop their AI with security built in from the start, then remain vigilant about ensuring their systems are protected during production and implementation.
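One lightweight pre-deployment probe is to check whether a model's decision flips under small perturbations of its input, since instability of that kind is what adversarial attacks exploit. This is a minimal sketch under stated assumptions: the `predict` function, the epsilon budget, and the trial count are all illustrative, not a production defense.

```python
import random

def is_stable(predict, x, epsilon=0.01, trials=100, seed=0):
    """Return True if `predict` gives the same label for `x` and for
    `trials` randomly perturbed copies of it, each feature shifted
    by at most +/- epsilon."""
    rng = random.Random(seed)
    baseline = predict(x)
    for _ in range(trials):
        perturbed = [v + rng.uniform(-epsilon, epsilon) for v in x]
        if predict(perturbed) != baseline:
            return False
    return True

# A toy classifier that thresholds its first feature.
toy_predict = lambda x: x[0] > 0.5
```

An input far from the decision boundary (such as `[0.9]`) passes the probe, while one sitting exactly on the boundary (such as `[0.5]`) fails, flagging inputs where a tiny, possibly malicious nudge changes the outcome.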
Moving AI From Experimentation to Transformation
Ensuring that AI is fair, explainable, and protected is not a matter of checking boxes; it requires answering all three questions continuously, throughout a model's life. When we get there, CIOs and CEOs worldwide will be able to move from AI experimentation to AI-driven transformation.