Explainability in Operational Decisioning: Simple Is Enough

Hassan Lâasri
6 min read · Jan 9, 2023


Illustration: Courtesy of WallPaperAccess

Executive Summary

The use of data and machine learning models in AI can reproduce and amplify human biases, particularly those that affect protected groups. To prevent bias and discrimination in the data and in the models that use it, we recommend simple decision models for operational decisions: decision tables, decision trees, and business rules. These self-contained models not only make the decision-making process explainable but also allow designers to identify and address biases early on, avoiding negative legal, financial, and reputational consequences.

Introduction

Every day, we read about the impressive results of machine learning applied in various fields. While we are fascinated by these results, we are also concerned about errors and biases in artificial systems that even their designers may not be aware of. For AI to continue advancing and automating our personal, professional, and social activities, it needs to earn our trust. Without trust, there is a risk that AI will not be used to solve problems where errors and biases could have disastrous consequences. One way to address this issue is through explainability.

Looking back

Explainability, or the ability to provide clear explanations for AI decision-making processes, has been a key concern in the field of AI for many years. In the past, explainability was often a key component of expert systems, which consisted of a knowledge base, an inference engine, and an explanation module. However, the focus was largely on developing models for knowledge representation and inference, leaving the explanation module as an afterthought. As a result, these modules were often not useful for end-users, and served more as a debugging tool for designers.

What has changed since?

Two significant developments have paved the way for the improved explainability of AI systems.

First, advances in user interface technology in the 1990s allowed for the development of intuitive graphical interfaces, such as windows, dashboards, and charts. These advancements made it possible to develop explanatory modules for AI systems. Second, the explosion of digitized data in recent years has led to the development of new data-related laws designed to protect consumers from bias and discrimination in automated processes such as segmentation, profiling, targeting, and serving. These laws, which have been implemented in Europe, China, and the United States, require models to not only protect the privacy of consumer data, but also to provide transparency and explainability in their decision-making processes.

Without transparent and explainable decision-making processes, AI will face increasing resentment from the public and stricter laws from governments. This, in turn, may lead to resistance from companies to use AI in consumer applications such as lending, insurance, and hiring. Fortunately, a new branch of AI has emerged under the name of Explainable AI (XAI). By enabling human users to understand, trust, and act on artificially generated predictions and decisions, XAI can help to build confidence in AI and its applications.

The case of operational decisions

In this article, we will focus on operational decisions, which are the decisions made by organizations daily. Examples of operational decisions include credit, insurance, and hiring decisions. For each new applicant, there are various factors to consider, such as terms and conditions, legal constraints, eligibility criteria, and risk levels. It is crucial for these decisions to be transparent, explainable, and fair to maintain trust and confidence in the decision-making process.

Operational decisions are normative in that they are based on industry regulations, internal policies, or business strategies. For example, a branch manager at a bank may use a borrower’s repayment history to decide whether to lend to them, while an insurance agent may use a policyholder’s car brand, annual mileage, and residence information to calculate their premium. These decisions are an important part of daily business operations, and it is crucial for them to be made in a transparent, explainable, and fair manner.
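
To make this concrete, here is a minimal sketch in Python of such a normative decision expressed as business rules. The fields, thresholds, and rule wording are hypothetical, chosen only to illustrate the idea; real eligibility criteria would come from the bank's policies and the applicable regulations.

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    missed_payments_12m: int   # repayment history over the last 12 months
    debt_to_income: float      # monthly debt obligations / monthly income
    requested_amount: float    # loan amount requested

def lending_decision(a: Applicant) -> tuple[str, str]:
    """Return (decision, reason) so every outcome carries its explanation."""
    if a.missed_payments_12m > 2:
        return "decline", "more than 2 missed payments in the last 12 months"
    if a.debt_to_income > 0.40:
        return "decline", "debt-to-income ratio above 40%"
    if a.requested_amount > 50_000:
        return "refer", "amount above automatic approval limit; manual review"
    return "approve", "all eligibility rules satisfied"

decision, reason = lending_decision(
    Applicant(missed_payments_12m=0, debt_to_income=0.25, requested_amount=20_000))
print(decision, "-", reason)  # approve - all eligibility rules satisfied
```

Because every branch returns a reason alongside the decision, the explanation is produced by the same logic that makes the decision rather than reconstructed after the fact.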

Simple models are enough

Unlike strategic, tactical, and technical decisions, which are more complex to model, operational decisions are easier to automate because they are codified like rules of law: they come in the form of documents that one can translate into automated decisions. This can be done using decision representations such as decision tables, decision trees, and business rules, which allow designers to test decisions individually and collectively. This makes it possible to detect potential biases and errors before they are put into production and allows for easy adjustments to the decision-making process to correct any biases. By representing decisions in a simple, symbolic way, simple decision models provide transparency and allow for easy explanation to those who will use the model. This can help prevent negative consequences for candidates or organizations, such as violating legal constraints or consumer rights.
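
As an illustration, here is a minimal sketch of a decision table in Python for the insurance premium example above. The brands, mileage bands, and multipliers are hypothetical; the point is that each row is a self-contained rule that can be tested and audited on its own.

```python
# Each row of the decision table is one testable rule:
# (condition, multiplier, explanation).
PREMIUM_TABLE = [
    (lambda p: p["annual_mileage_km"] > 30_000, 1.30, "high annual mileage"),
    (lambda p: p["car_brand"] in {"BrandX"},    1.20, "brand with high repair costs"),
    (lambda p: p["urban_residence"],            1.10, "urban residence"),
]

def premium(base: float, policyholder: dict) -> tuple[float, list[str]]:
    """Apply every matching row; return the premium and the list of reasons."""
    reasons = []
    for condition, multiplier, explanation in PREMIUM_TABLE:
        if condition(policyholder):
            base *= multiplier
            reasons.append(explanation)
    return round(base, 2), reasons

amount, why = premium(500.0, {"annual_mileage_km": 35_000,
                              "car_brand": "BrandY",
                              "urban_residence": True})
print(amount, why)  # 715.0 ['high annual mileage', 'urban residence']
```

Testing a rule individually means checking one row; testing rules collectively means running sample cases through the whole table and inspecting the returned reasons.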

Simple models have the added advantage of being easily understood and debugged by humans: because they use explicit symbols and simple operations, each rule can be read, traced, and corrected directly.

Recommendations

If you are considering developing an operational decision-making system that prioritizes explainability, these ten recommendations can help:

1. Separate the decision flow from the rest of the application, making it easier to explain decisions to stakeholders, including customers.

2. Choose a decision management solution that supports simple decision models, making it easy to detect bias and discrimination during the development phase. The solution should also be able to verify the absence of bias and discrimination in imported data or models.

3. Consider a solution designed for those who will be building and managing decisions on a daily basis, such as business analysts, credit analysts, and HR managers. These individuals will have the most knowledge about the company’s implementation of regulations and policies, as well as potential sources of bias and discrimination.

4. The chosen solution should not only make it easy to write decisions, but also easy to modify them, as most decisions will need to be changed at some point.

5. Choose a solution that integrates existing data, prior knowledge, and predictive models in the same environment, allowing you to leverage all three sources for decision-making. This can make it easier to spot and correct errors, biases, and discrimination.

6. The solution should support different decision representations, such as decision tables, decision trees, and business rules, allowing the designer to use the representation that best fits the situation.

7. For complex decisions, the solution should support micro-calculations and data transformations, such as scoring, rating, and pricing calculations, as well as data movement and transformation between databases (a small scoring sketch follows this list).

8. The designer should be able to choose and switch between representations on the fly, without needing to re-code anything.

9. The solution should include dashboards and data analytics, allowing the designer to monitor decisions and quickly identify errors, biases, and discrimination.

10. Your operational decision-making application will likely be part of a larger system, such as credit decisioning, candidate scoring, or insurance underwriting. Choose an open solution that can integrate with legacy applications and can be hosted on-premises or in the cloud to ensure compatibility and flexibility.
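
As a concrete example of recommendation 7, here is a minimal sketch of a scoring micro-calculation in Python. The factors, weights, and point values are hypothetical; a real scorecard would come from the organization’s approved rating or pricing model.

```python
# Hypothetical scorecard: each factor maps a raw value to points.
SCORECARD = {
    "years_at_employer": lambda v: min(v, 10) * 5,   # capped at 50 points
    "missed_payments":   lambda v: -25 * v,          # 25-point penalty each
    "savings_ratio":     lambda v: round(100 * v),   # share of income saved
}

def score(applicant: dict) -> tuple[int, dict]:
    """Return the total score plus a per-factor breakdown for auditing."""
    breakdown = {name: f(applicant[name]) for name, f in SCORECARD.items()}
    return sum(breakdown.values()), breakdown

total, detail = score({"years_at_employer": 4,
                       "missed_payments": 1,
                       "savings_ratio": 0.15})
print(total, detail)
# 10 {'years_at_employer': 20, 'missed_payments': -25, 'savings_ratio': 15}
```

The per-factor breakdown is what makes the score explainable and monitorable: it is exactly the kind of detail the dashboards of recommendation 9 should surface.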

Wrap-up

Explainability in decision-making is crucial for AI to continue to deliver transparent, fair, secure, human-centered, and socially beneficial outcomes. Without explainability, AI risks alienating the public and facing increased regulations, hindering its potential to automate and improve our daily lives. By using simple models to automate regulations, strategies, and policies, we can ensure that AI systems are free from bias and discrimination and can be trusted by all stakeholders. This will pave the way for the continued development and adoption of AI, leading to a world where automation is inclusive and beneficial to all.

For operational decisioning, we recommend simple models because they make each step transparent to the designer and the end user. By using symbols, graphics, and dashboards, these models allow designers to visualize, test, measure, and change decisions before they result in bias, discrimination, or violations of regulations. This can help improve adoption, understandability, and confidence in AI operational decisioning systems. Additionally, simple models provide a way for designers to detect and address potential biases early on, reducing the risk of legal, financial, and reputational damage.

About the author

Hassan Lâasri is a consultant and interim executive in data strategy, data governance, and data platforms. Although this article draws on research, consulting, and implementation work, the views, thoughts, and opinions expressed here belong solely to the author, and not to his current or previous clients or employers.
