Opening the black box of artificial intelligence in financial services

 

José Fernández da Ponte, Head of Beyond Core, New Digital Businesses

On Feb. 8, 2019, BBVA New Digital Businesses took the stage at Stanford University’s AI in Fintech Forum, part of the Advanced Financial Technologies Laboratory.

José Fernández da Ponte, head of NDB’s Beyond Core pillar, joined a panel on explainable AI to discuss the future of artificial intelligence and machine learning in the financial services industry. Ben Saul of White & Case LLP, a global law firm, and Michael Warner of the Federal Reserve Bank of San Francisco rounded out the panel.

On the heels of that panel discussion, we asked José to share more about BBVA NDB’s research into the future of explainable AI — including how it will benefit BBVA’s core and portfolio companies now and in years to come.

 

What is explainable AI?

In short, explainable AI (xAI) is a set of methodologies for ensuring that decisions made by artificial intelligence agents can be understood and trusted by humans.

“In AI and specifically machine learning, one of the problems we have is you build this model or algorithm, and it gets better as it keeps getting trained — but it’s a black box,” said Fernández da Ponte. “You can’t explain how it’s coming to decisions.”

You can’t develop a model and then go back to your team, internal control functions, clients, or regulators (the Federal Reserve or the European Central Bank, for example) and simply tell them you don’t know how it works, he said.

Why is xAI important for financial services?

As financial institutions increasingly want to deploy emerging technologies like machine learning in their business, they need to be able to establish that their models are accurate and free of algorithmic bias, Fernández da Ponte said.

This bias most frequently enters at three stages in the process:

  • During the design of the algorithm: Humans can easily write their own biases into an algorithm.
  • During the initial training of the model: An algorithm can be trained on an initial data set that might not represent the population it was designed to measure.
  • Over the life of the model: As a model goes live and continues to learn, the new data it receives is influenced by its own decisions, and it can naturally drift toward bias over time (a common drift check is sketched below).
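
That last failure mode can be monitored in production. As a rough illustration, here is a minimal Python sketch using the population stability index (PSI), a drift measure credit modelers commonly use; the synthetic scores, decile bins, and 0.25 alert threshold are conventions assumed for this example, not a description of BBVA’s tooling.

import numpy as np

def psi(expected, actual, bins=10):
    # Compare the score distribution at training time ("expected") with
    # the live distribution ("actual"); a larger PSI means more drift.
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf   # catch out-of-range live scores
    e = np.histogram(expected, edges)[0] / len(expected)
    a = np.histogram(actual, edges)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)  # avoid log(0)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(2)
launch_scores = rng.normal(0.50, 0.10, 5000)  # scores when the model went live
live_scores = rng.normal(0.58, 0.12, 5000)    # scores some months later
print(f"PSI = {psi(launch_scores, live_scores):.3f}")  # > 0.25 is a common alarm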

The need to eliminate bias is twofold: for accuracy, to ensure a model is doing what it should be doing; and for transparency and fairness to consumers and other stakeholders.

“Take credit, for example — we want to make sure that the models used to make decisions about credit are unbiased,” Fernández da Ponte said.
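
One concrete check in that spirit is to compare a model’s approval rates across a protected attribute. The minimal Python sketch below uses synthetic decisions and the “four-fifths” rule of thumb as its alert threshold; both are assumptions for illustration, not a BBVA tool.

import numpy as np

# Synthetic decisions: this toy "model" approves group 0 at 70% and group 1
# at 50%, so the check below should fire.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, 1000)                  # a protected attribute
decisions = (rng.random(1000) < np.where(group == 0, 0.7, 0.5)).astype(int)

rate_0 = decisions[group == 0].mean()
rate_1 = decisions[group == 1].mean()
ratio = min(rate_0, rate_1) / max(rate_0, rate_1)

# The "four-fifths" rule of thumb: flag the model when one group's approval
# rate falls below 80% of the other's.
print(f"approval rates: {rate_0:.2f} vs {rate_1:.2f} (ratio {ratio:.2f})")
if ratio < 0.8:
    print("potential disparate impact - review the model")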

How is New Digital Businesses looking at xAI?

The Beyond Core pillar is devoted to researching and applying science and technologies that could impact finance three to five years down the road, Fernández da Ponte said. Its approach borrows a page from DARPA, the research agency of the U.S. Department of Defense credited with fundamental advances such as GPS, the modern Internet, and speech recognition.

“We don’t think about the technology itself,” Fernández da Ponte said. “Instead, we think about a problem and how we can apply technology to solve it. Then we assemble a team of internal and external experts for a limited amount of time, and we give them autonomy to find the best way to address the problem.”

The Beyond Core pillar is prioritizing financial-industry problems that are difficult to solve or intractable with the technology in place today, working with colleagues on topics such as modeling capabilities, decentralized protocols, fraud protection, and limits on classical processing capacity.

Counterfactual predictive models: a new way to explain AI?

Fernández da Ponte sees promise in an approach known as counterfactual predictive modeling — one of several ways xAI can be used — for BBVA and its portfolio companies in the future.

Suppose a consumer applies for a loan and is declined by a machine learning decision model. Counterfactual models work by designing a “digital quasi-twin” that is as close as possible to that user’s profile but would receive a different decision. This helps identify the variables that explain the model’s decision, which human experts can then validate and use to refine both new and classical models.

“It’s a way of identifying variables that could be predictive and relevant for decision making inside the model, but might not be self-evident for a human designing the model in the first place,” Fernández da Ponte said.
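
To make the idea concrete, here is a minimal Python sketch of one naive way to search for such a quasi-twin: greedily nudge a declined applicant’s features, one small step at a time, toward the opposite decision. The toy loan data, feature names, and greedy search are all invented for this illustration and are not BBVA’s production method.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy loan data: features = [annual income in $k, debt-to-income ratio];
# label 1 = approved. Entirely synthetic, for illustration only.
rng = np.random.default_rng(0)
X = np.column_stack([rng.normal(60, 15, 500), rng.normal(0.4, 0.1, 500)])
y = ((X[:, 0] > 55) & (X[:, 1] < 0.45)).astype(int)
model = LogisticRegression().fit(X, y)

def quasi_twin(model, x, scales, rel_step=0.1, max_iter=500):
    """Greedily move one feature at a time toward the opposite decision,
    always taking the single move that most raises its probability."""
    x_cf = x.astype(float).copy()
    target = 1 - int(model.predict([x])[0])   # the flipped decision
    for _ in range(max_iter):
        if model.predict([x_cf])[0] == target:
            return x_cf                       # found the quasi-twin
        best, best_p = None, model.predict_proba([x_cf])[0][target]
        for j in range(len(x_cf)):
            for delta in (-rel_step * scales[j], rel_step * scales[j]):
                cand = x_cf.copy()
                cand[j] += delta
                p = model.predict_proba([cand])[0][target]
                if p > best_p:
                    best, best_p = cand, p
        if best is None:
            return None                       # no single move helps; stall
        x_cf = best
    return None

applicant = np.array([48.0, 0.55])            # a declined profile
twin = quasi_twin(model, applicant, X.std(axis=0))  # found in this toy setup
print("decision flips:", model.predict([applicant])[0], "->",
      model.predict([twin])[0])
print("what changed:", twin - applicant)      # the explanatory variables

The differences between the applicant and the twin are exactly the candidate explanations: the variables the model weighted most heavily in its decision.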

Beyond Core currently has three projects underway, involving internal engineers and scientists from NDB R&D, academic collaborations with leading universities, and work with external startups. All of these teams are exploring new ways to deploy advanced artificial intelligence models that are unbiased, accurate, explainable, and respectful of privacy.

Research for a better future

Beyond Core’s work in xAI strives to find a balance between bold hypotheses and actionable findings, Fernández da Ponte said. The goal: to find ways to deploy and explain new models, but also to use xAI to improve existing ones.

“These incremental changes can pave the way for us to do AI at scale,” he said. “If this were purely ivory-tower academia, there would be no point in us doing it.”

 

 
