Model Explainability in a Regulated World

Today’s Artificial Intelligence, often called “the new electricity”, still under-delivers and underwhelms. The most commonly cited issues are familiar: "Not enough talent," "Too expensive," "Too hard," and "Just not there yet."

Even if an AI company were to solve all of these complexity issues, however, other significant factors would still hold back the benefits of AI. One major issue is that many industries are heavily regulated, which places a burden on the AI solutions they use. In the past, I worked in the healthcare space, and I can recall engaging with doctors at hospitals who were more concerned with understanding the models than with the quality of the models. At the time, I had built models that could predict patient rehospitalization with significantly higher accuracy than the state of the art. However, the algorithms used were what some call "black boxes." As a result, doctors were not comfortable with these models and preferred more transparent models at the price of accuracy.

Other industries face similar regulatory challenges. In the finance industry, for instance, you cannot decide what interest rate to offer someone based on their age, race, gender, and so on. Similarly, in the insurance space, you cannot set someone's premium based on those factors either. These issues arise not only from professionals in these fields wanting a better understanding of the models they put in place, but also from regulations that are necessary to protect the public from potentially discriminatory decisions.

Regulations, therefore, create a major roadblock for true AI innovation across many businesses. When audited, companies in these industries must explain how they arrived at certain decisions, which requires transparency in the decision-making process. While some companies are working their way toward AI solutions, most still rely on time-tested statistical techniques (most of which are linear in nature). These approaches can significantly limit the accuracy of AI models. I will be writing an article about some of these simpler alternatives and their potential pitfalls.
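
To make that trade-off concrete, here is a minimal sketch in Python with scikit-learn. It uses synthetic data with placeholder feature indices (not any real patient or customer data) and compares a transparent linear model, whose coefficients can be read and audited directly, with a black-box ensemble that is typically more accurate but much harder to explain.

```python
# A minimal sketch of the transparency/accuracy trade-off described above.
# The dataset is synthetic and the feature names are placeholders; this is
# an illustration, not a claim about any particular production model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=10, n_informative=6,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Transparent model: every prediction is a weighted sum of inputs,
# so each coefficient can be read (and audited) directly.
linear = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Linear AUC:", roc_auc_score(y_test, linear.predict_proba(X_test)[:, 1]))
print("Coefficients:", linear.coef_.round(2))

# Black-box model: often more accurate, but its decision logic is spread
# across hundreds of trees and is much harder to explain to an auditor.
boosted = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("Boosted AUC:", roc_auc_score(y_test, boosted.predict_proba(X_test)[:, 1]))
```

On most datasets of any real complexity, the second model wins on accuracy, which is exactly why giving it up for the sake of auditability is such a costly compromise.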

The promise of an AI model that offers the best of both worlds, maximum accuracy on one hand and complete model explainability on the other, remains a challenge for the industry, although we might be closer to it than most think (see the sketch below). Transparency may still be the single greatest barrier to the adoption of AI in regulated industries.
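
As one illustration of why that gap may be closing, here is a minimal, self-contained sketch of attaching a post-hoc explanation to a black-box model. Permutation importance is used purely as an example technique on synthetic data; it is not the specific approach hinted at above.

```python
# A minimal sketch of one possible "best of both worlds" direction: keep the
# more accurate black-box model, then attach a post-hoc explanation to it.
# Permutation importance is used here purely as an example technique.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=10, n_informative=6,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Accurate but opaque model.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Post-hoc explanation: measure how much the test score drops when each
# feature is shuffled, giving a global ranking of what the model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for idx in result.importances_mean.argsort()[::-1]:
    print(f"feature_{idx}: importance {result.importances_mean[idx]:.3f}")
```

Whether rankings like this satisfy a given regulator is a separate question, but they show how explanation can be layered onto an accurate model rather than traded away for it.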
