What is explainable AI (XAI) — and why is it more necessary than ever?

Netflix has transformed the way we watch TV: Whether you like Oscar-nominated dramas, stand-up comedy specials, or binge-watchable series, Netflix offers thousands of options for you, all at the touch of your remote. 

It offers these options based on recommendation algorithms fed by millions of users watching and rating content on the platform. These artificial intelligence algorithms combine metadata (including genre, categories, cast, crew, and release date) with user behavior data (such as browsing, playing, searching, rating, time of day, and device type), based on the assumption that similar viewing patterns reflect similar user preferences or tastes.
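To get an intuition for that assumption, here is a minimal, hypothetical sketch in Python. It is not Netflix's algorithm (which remains proprietary); it is a generic user-based collaborative filter that estimates one viewer's rating for an unseen title by weighting other viewers' ratings according to how similar their viewing patterns are.

```python
# A toy illustration of "similar viewing patterns imply similar tastes".
# Not Netflix's system: just a generic user-based collaborative filter.
import numpy as np

# Rows are users, columns are titles; 0 means "not rated yet".
ratings = np.array([
    [5, 4, 0, 1],   # user A
    [4, 5, 1, 0],   # user B (tastes similar to A)
    [1, 0, 5, 4],   # user C (very different tastes)
], dtype=float)

def cosine_similarity(u, v):
    """How alike two users' rating vectors are (0 = unrelated, 1 = same direction)."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

def predict_rating(user, title, ratings):
    """Estimate a missing rating as a similarity-weighted average of other users' ratings."""
    weights, values = [], []
    for other in range(ratings.shape[0]):
        if other == user or ratings[other, title] == 0:
            continue
        weights.append(cosine_similarity(ratings[user], ratings[other]))
        values.append(ratings[other, title])
    return np.dot(weights, values) / (sum(weights) + 1e-9)

# How would user A probably rate the third title, which they haven't watched?
print(round(predict_rating(0, 2, ratings), 2))
```

Because user C (who loved that title) watches very differently from user A, C's opinion carries little weight, and the predicted rating stays low.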

Outside the company, nobody knows exactly how these recommendations work. On its website, Netflix provides a high-level description of its proprietary, complex recommendations system in plain language. It states that the system does not factor demographic information (such as age or gender) into the algorithm's decision-making process, but it won't reveal the "secret sauce" behind its AI algorithms.

But do we really care how the system is making its recommendations as long as we get to relax in front of some enjoyable television after a tiring work day?

In general, AI algorithms are “black boxes” in the sense that we know little to nothing about their inner workings and therefore can’t explain how they arrive at their decisions. And while that may not matter much in terms of what’s on our TVs, algorithms like these are increasingly making decisions that profoundly affect our daily lives, and we should definitely care about those.

At BBVA, we believe AI algorithms can bring lasting benefits to society by unlocking new possibilities, provided we can understand how and why the algorithms make decisions. Here’s a look at the challenges we need to overcome and the applications that will allow all of us to benefit from AI capabilities.

The problem with traditional models — and machine learning

In general, modern machine-learning models (a special group of AI algorithms that learn by example) outperform more traditional models like logistic regression because they can identify complex nonlinear patterns and interactions in large datasets with a wide variety of data.
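The following sketch, built on synthetic data with scikit-learn, illustrates the point (it is an illustration only, not a claim about any particular production system): a tree-based ensemble picks up a curved decision boundary that a plain logistic regression cannot.

```python
# Illustrative only: on a deliberately nonlinear synthetic dataset,
# a tree ensemble usually beats plain logistic regression because it
# can model interactions and curved decision boundaries.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_moons(n_samples=2000, noise=0.25, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

linear = LogisticRegression().fit(X_train, y_train)
boosted = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("logistic regression accuracy:", round(linear.score(X_test, y_test), 3))
print("gradient boosting accuracy:  ", round(boosted.score(X_test, y_test), 3))
```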

On Netflix, this means the percentage match score shown next to “Star Wars Episode V: The Empire Strikes Back” would likely be estimated better by modern machine-learning models than by traditional models, improving the odds that you actually will like that title and, consequently, improving your experience on Netflix.

Traditional models rely on human programmers to find interactions among the different variables and, as a result, to decide which variables should drive a particular outcome. When designing an algorithm this way, humans can easily write their own biases into it (a Star Wars fan might believe everyone should watch those films, for example).

When applying modern machine-learning models, decisions or predictions are based on what is learned through an automated process with little human intervention. But this is not without its issues, as the algorithm can be trained on an initial data set that does not represent the population it was designed to measure (such as using only Star Trek fans’ ratings for Star Wars films).

Understanding the training dataset and analyzing how the data affects results for different populations is key to identifying bias.
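A simple starting point is to compare a model’s outcomes across groups. The sketch below (with hypothetical column names and made-up decisions) computes approval rates per group and the ratio between the lowest and the highest rate, a rough “four-fifths rule” style check: a ratio far below roughly 0.8 is a signal to look much more closely at the training data and the model.

```python
# A minimal bias check: compare approval rates across groups.
# The column names and decisions below are hypothetical.
import pandas as pd

scored = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [ 1,   1,   0,   1,   0,   0,   1,   0 ],
})

approval_rates = scored.groupby("group")["approved"].mean()
print(approval_rates)

# Ratio of the lowest to the highest approval rate across groups.
ratio = approval_rates.min() / approval_rates.max()
print("approval-rate ratio:", round(ratio, 2))
```

This kind of aggregate check is only a first pass, but it flags where deeper analysis is needed.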

For financial services, this is of paramount importance. During the lending process, when an algorithm is analyzing loan applicants’ forecasted ability to repay a loan, we must ensure fairness and transparency. If bias is not properly addressed, we could end up with an algorithm whose outcomes amount to discrimination based on gender, racial or ethnic origin, or age, or even on proxies for these, such as living in a specific area (see reference [1] at the bottom).

What is explainable AI?

AI explainability means understanding how and why the algorithms make decisions or predictions, and being able to explain the outcomes they generate. Explanations can be global or local.

Global explanations describe how the algorithm behaves in general. In Netflix’s case, the recommendation engine wouldn’t suggest a three-hour movie just before midnight on a Tuesday. In the case of a loan, the algorithm might assume that a consumer with a very high level of indebtedness would have difficulty taking on even more debt.

Local explanations describe how the algorithm behaves for a specific individual. For Netflix, if a person starts work at 1 p.m. and has a lifestyle that allows him to watch a three-hour movie at 11 p.m., the algorithm could infer this from his historical viewing behavior. In the case of a loan applicant, the consumer could bring additional collateral or be prepared to consolidate her debt in order to pay lower interest rates.
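To make the distinction concrete, here is a hedged sketch on a hypothetical credit-scoring model (the feature names and data are made up, and real systems would rely on far richer data and dedicated XAI tooling). The global explanation asks which features matter on average for everyone; the local explanation asks why this one applicant received the score she did, by changing a single input and observing the effect.

```python
# Global vs. local explanations on a hypothetical credit-scoring model.
# Feature names and data are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 5000
debt_ratio = rng.uniform(0, 1, n)        # expenses-to-income ratio
payment_history = rng.uniform(0, 1, n)   # 1.0 = spotless payment history
X = np.column_stack([debt_ratio, payment_history])
# Synthetic "repaid the loan" label, driven mostly by the debt ratio.
y = ((1 - debt_ratio) * 0.7 + payment_history * 0.3 + rng.normal(0, 0.1, n)) > 0.5

model = RandomForestClassifier(random_state=0).fit(X, y)

# Global explanation: which features matter on average, across everyone?
global_imp = permutation_importance(model, X, y, n_repeats=5, random_state=0)
for name, imp in zip(["debt_ratio", "payment_history"], global_imp.importances_mean):
    print(f"global importance of {name}: {imp:.3f}")

# Local explanation: why did the model score *this* applicant as it did?
applicant = np.array([[0.9, 0.8]])   # high debt ratio, good payment history
score = model.predict_proba(applicant)[0, 1]
lower_debt = applicant.copy()
lower_debt[0, 0] = 0.4               # only the debt ratio changes
print("score as-is:          ", round(score, 2))
print("score with lower debt:", round(model.predict_proba(lower_debt)[0, 1], 2))
```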

When evaluating an applicant’s profile, multiple factors come into play: job status, economic sector, income, expenses, wealth, payment history, loan amount, purpose of the loan, macroeconomic environment and expectations, and so on. The interdependencies among these factors make explanation harder even when using traditional algorithms.

To bring the age of opportunity to everyone, we must be able to explain why algorithms make a decision and, even further, gain consumers’ trust by giving unbiased advice that will help them overcome their hardships and improve their financial situations.

Counterfactual predictive modeling for XAI

By applying counterfactual predictive modeling, one of the XAI approaches we’re developing within BBVA’s New Digital Businesses unit, we are able to explain why a consumer’s loan application is rejected by a model. Here’s how it works:

  • We create digital “quasi-twins” of a consumer’s profile that includes age, transaction patterns, etc., then make tiny alterations to them (decreasing the ratio of expenses to income, for example)
  • We test the twins until the loan is approved based on one of those alterations

Based on our data, we then identify the variables that can explain the model’s decision and can be validated by human experts (these variables might not even be self-evident to the human programmer who designed the model!).
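As a rough illustration of the idea (and not BBVA’s actual implementation), the sketch below nudges one feature of a declined profile at a time, in small steps, until a stand-in model flips to “approved”, then reports what had to change. All names, step sizes, and the toy model are hypothetical.

```python
# A simplified sketch of the "quasi-twin" counterfactual search described above.
# Not BBVA's implementation: names, step sizes and the toy model are hypothetical.
import numpy as np

def find_quasi_twin(model, profile, feature_names, step=0.05, max_steps=20):
    """Search, one feature at a time, for a minimally altered profile the model
    would approve. Returns the quasi-twin and the change that made the difference."""
    for i, name in enumerate(feature_names):
        for direction in (-1, +1):
            twin = profile.copy()
            for _ in range(max_steps):
                twin[i] += direction * step
                if model.predict(twin.reshape(1, -1))[0] == 1:   # 1 = approved
                    return twin, {name: (round(float(profile[i]), 2),
                                         round(float(twin[i]), 2))}
    return None, {}

class ToyModel:
    """Stand-in credit model: approves when the expenses-to-income ratio is below 0.5."""
    def predict(self, X):
        return (X[:, 0] < 0.5).astype(int)

profile = np.array([0.8, 0.3])   # [expense_ratio, savings_rate]; this profile is declined
twin, change = find_quasi_twin(ToyModel(), profile, ["expense_ratio", "savings_rate"])
print("what would need to change:", change)
```

The output points to the expense ratio as the decisive variable and shows how far it would have to move, which is exactly the kind of difference between the declined consumer and her approved quasi-twin that human experts can then review.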

There’s significant value in understanding the differences between the real consumer who gets declined and her “quasi-twin” that gets approved: It allows us to advise her on behaviors that could improve her financial health and offer her a better chance of being approved for a loan in the future.

Advisory services: How AI can help

AI algorithms could help improve predictions for income and expenses, suggest products better suited to those situations, and even make recommendations on how to increase financial resilience by mitigating risk based on those predictions. All technology aside, advice should be based on unbiased recommendations and a trusted relationship between BBVA and its customers. 

If Netflix gives you a bad movie recommendation, you lose two hours of your time; the stakes are far higher when you’re looking for a 30-year mortgage.

This is why it’s so important to understand the logic behind the algorithms and explain it in a clear, user-friendly way. It is also a regulatory requirement: the General Data Protection Regulation limits the application of automated individual decision-making by establishing the data subject’s right to request information about the logic involved in any automated processing of personal data.

Explainability allows us to adapt the message to the receiver: more concise for the general public, and more technical for internal use and regulatory bodies.

Can XAI create a more resilient and sustainable economy?

When considering how BBVA can help society with XAI, myriad applications arise.

For instance, when looking to improve living standards via sustainable investments, AI may be able to help us discover insights for improving disadvantaged areas by identifying and learning from behavioral patterns (migration, access to education, retail development, and so on) that occurred in other areas that flourished over time.

Financial institutions, nonprofit organizations and government agencies could all use these insights to create opportunities through lending or investment in sustainable city planning.

To make this a reality, we need to foster a culture of continuous learning by disseminating knowledge about data science and its business applications, as well as by supporting the development of widely accessible tools and standards for AI.

XAI is not only a great tool for opening the black box of AI algorithms; it is also a promising opportunity to increase transparency and fairness and to unlock a new world of possibilities.

In that new world, AI will not only help us improve how we spend our leisure time watching what we really like; it will also help financial consumers understand how to achieve their life goals — with our help.

[1] Ruling institutions, such as the European Parliament and the Council of the European Union, are aware of the potential negative impact of both biases and limited explainability when applying automated processes. For instance, the General Data Protection Regulation (GDPR) limits the application of automated individual decision-making by stating the right of the data subject to request information about the logic involved in any automatic personal data processing. Also refer to the following articles for undesirable AI outcomes in big tech companies: How to keep your AI from turning into a racist monster and Amazon scraps secret AI recruiting tool that showed bias against women.
