Updated: 19 Aug 2020

Freeing bots from gender bias and moving towards a fairer society

As long as bots harbor gender bias, they will continue to perpetuate gender inequality. This is why BBVA Next Technologies, BBVA’s advanced software company, is working to reduce this kind of bias, leveraging technology to bridge the gender gap.

BBVA Next Technologies believes technology should be used for the good of humankind. To this end, the company’s Innovation Laboratory team organized an event on March 5th — as part of its activities to celebrate International Women’s Day — to explain how gender bias in artificial intelligence can be reduced and bots can be designed to be more impartial.

“We need feminist technology that acts as a catalyst for equality: to question prevailing trends and biases that feed inequality and perpetuate stereotypes; to work to balance the scales and build a fairer world; to defend diversity and equal opportunity,” Marta Salas Cerezo, a designer at BBVA Next Technologies explained.

This is the objective behind the design process presented at the event, the Feminist Bot Design Canvas. Conceived by the Innovation Laboratory, it encourages teams to consider the following elements when creating bots:

  • The user: One has to bear in mind the strengths and challenges of the people for whom the experience is being designed so “the design will then be much more powerful than if it is addressing overly generic, universal needs,” Salas Cerezo asserted.
  • Team bias: The team should take time to reflect on the vision it brings to the bot and how that vision shapes the design. In other words, team members need to be aware of their beliefs, social background, and status, and guard against letting these factors bias their decisions.
  • The purpose: To establish a connection between the human user and the bot, with the team asking itself what the final aim is.
  • The bot: First, its personality needs to be defined so that it supports the purpose, and at this stage the team should decide whether it is appropriate to assign the bot a gender at all. The team then needs to determine how the bot will respond, making clear for ethical reasons that it is not a person and paying particular attention to how it reacts to inappropriate comments (a minimal sketch of this kind of response handling follows the figure below). Pillow, a bot designed to help people maintain positive mental well-being, is one such example. If, after giving advice about how to sleep better, Pillow receives an “inappropriate” response from the user, it replies: “Don’t get like that; I’m only trying to help. I understand you're tired but you shouldn’t take it out on me or anyone else.”

‘Feminist Bot Design Canvas’, a design process created by the Innovation Laboratory of BBVA Next Technologies.
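The point about the bot’s responses can be made concrete with a small sketch. The Python snippet below is a hypothetical illustration, not BBVA Next Technologies’ actual implementation: it shows one way a deliberately ungendered bot might detect an abusive message and set a boundary, in the spirit of the Pillow example. The keyword list and replies are invented for the example.

```python
# Hypothetical sketch: a gender-neutral bot persona that sets a boundary
# when it receives an abusive message, instead of replying submissively.

ABUSIVE_KEYWORDS = {"stupid", "shut up", "useless"}  # illustrative only

BOUNDARY_REPLY = (
    "I'm only trying to help. I understand you're tired, "
    "but please don't take it out on me or anyone else."
)


def is_abusive(message: str) -> bool:
    """Very naive keyword check; a real system would use a trained classifier."""
    text = message.lower()
    return any(keyword in text for keyword in ABUSIVE_KEYWORDS)


def respond(message: str) -> str:
    """Return the bot's reply, holding a neutral, ungendered persona."""
    if is_abusive(message):
        return BOUNDARY_REPLY
    return "Here's a tip for tonight: try putting your phone away an hour before bed."


if __name__ == "__main__":
    print(respond("This advice is useless, shut up"))  # boundary-setting reply
    print(respond("Thanks, I'll try that"))            # normal advice
```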

This design process is intended to counter “gender bias, which is internalized within technology from its very beginning, making it inevitable that responses are predisposed to gender prejudice,” Ira Manzano, a BBVA Next Technologies product designer, explained. She added that this situation “mainly occurs because the data we work with is biased from the outset.”

Yolanda de la Hoz, an AI researcher at BBVA Next Technologies, has witnessed the source of the problem firsthand. She explained that it stems from the fact that “human beings have a multitude of cognitive biases of which we are unaware and they are easily transferred to intelligent systems that learn from us via data,” which means the algorithms are inevitably affected. The situation is exacerbated by the fact that the technology sector has historically been dominated by men.
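To see how such a transfer can happen, consider the toy example below. It is not taken from the event, only a minimal illustration: a simple co-occurrence count over a handful of biased sentences already “learns” that nurses are women and engineers are men.

```python
from collections import Counter

# Toy corpus whose sentences mirror a biased source text: "nurse"
# co-occurs mostly with "she", "engineer" mostly with "he".
corpus = [
    "she is a nurse", "she is a nurse", "he is a nurse",
    "he is an engineer", "he is an engineer", "she is an engineer",
    "he is an engineer",
]


def pronoun_association(sentences, profession):
    """Count which pronoun a profession co-occurs with in the corpus."""
    counts = Counter()
    for sentence in sentences:
        words = sentence.split()
        if profession in words:
            for pronoun in ("she", "he"):
                if pronoun in words:
                    counts[pronoun] += 1
    return counts


for job in ("nurse", "engineer"):
    print(job, dict(pronoun_association(corpus, job)))
# nurse {'she': 2, 'he': 1}  /  engineer {'he': 3, 'she': 1}
# A model trained on these counts reproduces the association simply
# because the data said so, not because it is true.
```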

These biases are currently obvious in the most popular virtual assistants (Siri, Alexa, and Cortana), all of which have female names and voices and are programmed to respond to commands. This can easily go unnoticed, yet it can have serious consequences, as UNESCO, a UN agency, points out: “making these assistants female helps entrench gender stereotypes.”

Reducing bias in artificial intelligence

De la Hoz explained that to reduce bias in this technology, “it is essential to pay close attention to the data,” as mentioned previously. She also stressed the importance of training models so that they can identify and correct their own biases, given that they are normally designed to generalize across the data.
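As a rough illustration of what “paying close attention to the data” can mean in practice, the sketch below audits a labelled training set for gender imbalance and reweights examples by inverse group frequency. The dataset schema, field names, and weighting scheme are assumptions made for the example, not a description of BBVA Next Technologies’ tooling.

```python
from collections import Counter


def audit_gender_balance(examples, gender_key="gender"):
    """Report each gender label's share of the training data.

    `examples` is assumed to be a list of dicts with a `gender` field;
    this schema is hypothetical and only for illustration.
    """
    counts = Counter(ex[gender_key] for ex in examples if gender_key in ex)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}


def reweight_examples(examples, gender_key="gender"):
    """Give under-represented groups a larger sampling weight so the
    model does not simply learn the majority group's patterns."""
    shares = audit_gender_balance(examples, gender_key)
    n_groups = len(shares)
    weights = []
    for ex in examples:
        share = shares.get(ex.get(gender_key), 1.0 / n_groups)
        # Inverse-frequency weighting: rarer groups get weight > 1.
        weights.append(1.0 / (n_groups * share))
    return weights


if __name__ == "__main__":
    data = [
        {"text": "…", "gender": "female"},
        {"text": "…", "gender": "male"},
        {"text": "…", "gender": "male"},
        {"text": "…", "gender": "male"},
    ]
    print(audit_gender_balance(data))  # {'female': 0.25, 'male': 0.75}
    print(reweight_examples(data))     # female examples weighted higher
```

Inverse-frequency weighting is only one of many possible mitigations; the broader point from the talk is that someone has to look at the data before the model does.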

According to the BBVA Next Technologies researcher, the key is “to involve data scientists in the process in order to mitigate risks and also to invest in research and multidisciplinary development teams that include profiles such as psychologists and linguists.”

To close the event, Sandra Juárez Puerta, a BBVA Next Technologies researcher in Human-Computer Interaction, ran a workshop on IWomen (Important Women), a feminist chatbot the team developed specifically for the event with the goal of eliminating biases in different professional settings.

The project aims to serve as a benchmark and to inspire developers to create more gender-neutral virtual assistants that do not reinforce gender bias through new technologies.