Kay Firth-Butterfield: “We need to ask ourselves what kind of relationship we want to have with artificial intelligence”
Kay Firth-Butterfield, CEO of Good Tech Advisory and former Head of Artificial Intelligence at the World Economic Forum, reflected at BBVA’s FinAI Summit on the challenges of responsible artificial intelligence and on the kind of relationship society wants to forge with this technology. AI is not our best buddy or our partner, she stated, even though 20 percent of men in the United States have used it at some point “as a romantic partner, the person they talk to, the person they love. Is that the future we want with this technology?”

In the midst of the generative artificial intelligence (GenAI) era, the biases in the data used to train these systems are becoming increasingly evident. Kay Firth-Butterfield, CEO of Good Tech Advisory and former Head of AI at the World Economic Forum, took to the stage at BBVA’s FinAI Summit to reflect on how AI is becoming part of our everyday lives, how we coexist with it, and the importance of ensuring that decisions about its use are not left in the hands of a select few.
The speaker noted that while GenAI has driven a genuine revolution in recent years, its benefits are not evenly distributed. A significant portion of the population is still unaware of these tools or does not understand how they work: “47 percent of people in the United States and 42 percent in the United Kingdom had not even heard of ChatGPT as of May 2024.” With this in mind, she stressed the need to “keep in mind those around us who still do not understand the impact of these systems.”
Biases in language models
British novelist Jane Austen wrote ‘Persuasion’ back in 1817. In one scene, the protagonist and a male character debate who loves with greater constancy — men or women. He claims that men do, because all the books say so. “Where’s the mistake?” asked Kay Firth-Butterfield during her talk. “The mistake is that all those books were written by men.” In her view, this goes to the heart of the current problem with large language models: “We’re using data with historical biases that humans have introduced.”
The expert cautioned that one of the reasons behind these biases lies in the very origin and design of large language models, which have been shaped largely from a male, white, Western perspective: “Although ChatGPT has 300 million users, 46 percent are men living in the United States.” This stands in stark contrast to the more than 3 billion people around the world who still lack access to the internet: “We can’t progress as humanity if half the population remains disconnected from the tool we all rely on.”
She also addressed the key concern of AI hallucinations, a phenomenon that undermines the reliability of these systems’ outputs. According to the speaker, these fabrications not only cause confusion, but also feed back into the models that produced them, perpetuating errors that ultimately become embedded in the systems.
In some cases, AI is used to deliberately manipulate information. One example is ‘deepfakes’ (AI-generated fake images, video or audio), which the expert described as “a plague for those of us who live in the real world.” While many sectors are vulnerable to deepfakes, the business world is especially exposed. The speaker recalled a case in Hong Kong, where a woman transferred 20 million pounds (about 23 million euros) after being tricked by a fake AI video call, laying bare the real dangers these technologies pose when no proper control mechanisms are in place.

Pressing forward, but doing so responsibly
One of the most common arguments cited in support of AI is its ability to drive innovation. However, the speaker stressed that this progress must be accompanied by responsibility and safety measures. To illustrate this, she shared a personal anecdote: “I drive a Porsche. I chose that car because it’s fast, but also because it has good brakes and airbags. If something goes wrong, I know I’ll be protected and so will others on the road. That’s what I call responsible innovation.”
While some companies have restricted the use of ChatGPT, many employees continue to use it on their own initiative. Firth-Butterfield described a situation in which a worker at a cybersecurity company made a mistake using this tool. The issue, she explained, wasn’t just human error, but the fact that the company had no protocol or guidelines on how to use AI: “It’s essential to think through these processes. Using AI wisely allows us to make more effective use of it.”
Governance is another key concern when addressing the challenges posed by AI. In this respect, the speaker noted that while the United States has numerous state-level AI laws, there is still no unified federal regulation. This contrasts starkly with the prevailing approach in Europe, where a single law—the AI Act—applies across all Member States: “Imagine you’re a company and you have to comply with 150 different laws,” she remarked.
AI enters the job market
AI has also burst into the job market, raising widespread concerns about potential job losses. However, Firth-Butterfield invited the audience to rethink that fear and to focus instead on the productivity gains this technology is already delivering: “Thanks to AI, perhaps younger people will no longer have to shoulder the responsibility for our impending retirement,” she said, adding, “In the coming years, robots powered by AI will become available. Once that happens, you’ll be able to get support and assign tasks to these machines.”
The rapid spread of AI within the labor market will affect not only young people, but also those in physical or repetitive jobs, especially workers nearing retirement. Here, Firth-Butterfield cited the construction sector, where the average worker is 46 years old and many will need physical support in the years to come. Examples such as the robot dog Spot show how technology can help, for instance by transporting materials on site: “AI should not be seen as a threat to employment, but as an ally that can fill the gaps left by those exiting the workforce, just when it is most necessary to sustain the pension system.”
AI can certainly enhance human capabilities, but even so, some tasks will still require human involvement. “Are there jobs that only human beings should carry out?” the speaker asked. On this point, she stressed the importance of preserving human interaction in emotionally vulnerable situations, such as a conversation between a cancer patient and their oncologist: “That’s what defines us as human beings: the ability to decide for ourselves how much artificial intelligence we want in our lives and how much we would rather avoid, without anyone imposing it on us.”
That ability to decide for ourselves was the key message of her talk. It is a right that goes beyond individual decisions, and it can only be safeguarded if control of AI is not concentrated in too few hands: “Right now, artificial intelligence is in the hands of a select few,” Firth-Butterfield warned, adding that unless we start talking about the issue, with friends and across all realms of society, it will remain that way: “A tool controlled by a small group of people, mostly concentrated in Silicon Valley.”