Ethics and artificial intelligence are not difficult words

We need to talk about artificial intelligence (AI), and not because of the hype surrounding it, but because it is all around us. Without our noticing, in one way or another, AI has become part of our daily lives, and this may be just a hint of what’s to come. Alan Turing predicted that machines would one day be able to simulate human thought, and that is where we should start.

In that sense, we should not allow ourselves to give in to mistrust when faced with these technical or grandiloquent concepts. For some generations, the first thing that springs to mind when they hear the term AI is HAL 9000, the evil computer in 2001: A Space Odyssey. For others, it’s the Architect, the program that designed the Matrix in the Wachowski sisters’ trilogy. But reality is less complex than fiction, at least for the time being.

AI has become an integral part of our households and workplaces, because it is essentially the technology that powers the applications and devices that make our lives easier: smartphones, virtual assistants, toys that can communicate with children… even vacuum cleaners. There are algorithms that analyze our digital actions without us noticing. Recommendation engines, for example, try to figure out our interests based on our search or shopping histories, building our user profiles on Amazon, Google or Netflix. Whether they are better or worse at emulating human thought is a whole different story.

All it takes for a technology to qualify as AI is to emulate, in one way or another, some sort of reasoning process. That is also the idea proposed by John McCarthy, who coined the term and whose definition rests on the concept of imitation. The organizers of the Dartmouth Conference in 1956 wrote that “this study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

The AI applications we have at hand today are rather basic, what Ramón López de Mántaras calls in an interview “specific intelligences”: chess-playing programs capable of beating the best human players, applications that predict our tastes, or systems that can diagnose diseases faster than a doctor, “but without a general knowledge of medicine.” We could say that we have small representations of intelligence on different platforms.

The concept of intelligence is tremendously broad. To say today that someone is intelligent does not mean the same as it did in the nineteenth century. We have moved away from the primacy of reason. Howard Gardner’s theory of multiple intelligences provides a broad and complex framework for analyzing human understanding. Therefore, and following the Dartmouth definition, to understand what AI is, we must first know what human intelligence is. Maybe that’s why the worlds of psychology, neuroscience and computer science need to go hand in hand. Recently, in the North American edition of Wired magazine, Lee Simmons discussed the paradoxes of the relationship between the brain and AI. The article noted that artificial neural networks have a few million nodes, very few compared with the roughly one hundred billion neurons of the human brain. In other words, Simmons suggests, we model AI on something we only partially understand.
The AI platforms we have today, still in their infancy, should also be linked to human values. If ethics is thinking about what is right or wrong, one should question actions in every area. For example, an AI ethics for finance should understand how virtual traders operate. Or worry about how to strengthen bank employees’ cybersecurity skills. Or be aware that corporate values must be embedded in the AI platforms companies offer.

Questions are the beginning of the path toward moral development. Should, for example, Big Data-powered AI that lets us pay for public services make those services more expensive? To what extent is the data collected in medical records private? What limits should be set in the case of video games for children or minors? Even if regulation is complex, we need to start without being afraid of asking questions.

The approach to AI ethics should be pedagogical. As an instrument for solving problems, we must direct it, knowing that its results should empower human capabilities. AI technologies are capable of analyzing personal data and extracting valuable statistics, but we should not stop at the mere numbers; we need to know more. The European Union has just announced the formation of an expert group to assess the impact of AI on society. A commission of this kind will need to devote part of its time to clarifying AI’s meaning in layman’s terms.

We should start programming by asking moral questions. Like Ben Goertzel, creator of SingularityNET, who builds the value of understanding into his projects. Or Microsoft researcher Timnit Gebru, whose work focuses on diversity and the underrepresentation of women in AI.

We are still far from an AI capable of engaging in a profound conversation, applying common sense to make decisions, or handling irony. But it would be naive to wait until we reach a certain level of AI development to subject these matters to scrutiny. Now is the right time to start making morality part of these developments. In this way, these simulations will offer us a greater understanding of our human nature.
