Technology · Updated: 21 Aug 2017

Can artificial intelligence be regulated?

US technology giants are coming together to examine the ethics of progress in artificial intelligence.

The robot boom has pushed people's fear of losing their jobs to a new peak, and many are also asking whether machines could become uncontrollable and turn against humans. It is this fear that, as The New York Times has reported, has led five technology giants to work on an ethics standard for artificial intelligence (AI). Researchers from Alphabet (Google's parent company), Amazon, Facebook, IBM and Microsoft have met to discuss the most tangible issues, such as the impact of AI on employment, transport and even warfare.

For a long time, technology companies made exaggerated promises about what artificially intelligent machines could do, and movies reinforced those dreams. But it is no longer science fiction. In recent years, AI has evolved rapidly in many areas, from self-driving cars and machines that respond to the human voice, such as Amazon's Echo, to new weapons that threaten to automate combat.

In July 2015, a thousand experts, including physicist Stephen Hawking, Apple co-founder Steve Wozniak, Elon Musk (founder of Tesla and co-creator of PayPal), linguist Noam Chomsky and Demis Hassabis, chief executive of Google's artificial intelligence company DeepMind, signed a petition warning of the dangers of artificial intelligence and demanding that it be regulated.

The Stanford project

A year later, although the details of what the industry will do or say are still not clear, it is obvious that the large technology companies want to ensure that research into AI focuses on benefiting people, not harming them.

The importance of the industry's effort is underscored by a report published by a group of experts from Stanford University led by Eric Horvitz, a Microsoft researcher. The Stanford project, the One Hundred Year Study on Artificial Intelligence, lays out a plan to produce a detailed report on the impact of AI on society every five years for the next century.

"We're not saying that there shouldn't be any regulation" says Peter Stone, IT expert at the University of Texas in Austin, and one of the authors of the Stanford report. "We're saying that there's a right way and a wrong way to do things".

As the US newspaper pointed out, it is not the first time that the technology giants, normally fierce competitors, have come to an agreement: in the 1990s, for example, technology companies created a standard method for encrypting e-commerce transactions, laying the foundation for decades of growth in Internet business.

The authors of the Stanford report, Artificial Intelligence and Life in 2030, argue that it would be a mistake to try to regulate AI in general. "The consensus of the study is that attempts to regulate AI in general would be misguided, since there is no clear definition of what AI is (it is not just one thing), and the risks and considerations to bear in mind are very different in different domains," says the report.

Increasing awareness

One of the recommendations in the report is to increase awareness of and expertise in AI at all levels of government, Stone explains in the US newspaper. It also calls for increased public and private spending on AI.

"The Government has its role as the government and we respect that", says David Kenny, general manager of the Watson artificial intelligence division at IBM "The challenge is that politics often sets technology back".

A memorandum has been circulated among the five companies with the aim of announcing the creation of the new organization in mid-September. One unresolved issue is that Google DeepMind, an Alphabet subsidiary, does not want to participate.

Reid Hoffman, the founder of LinkedIn, who has a background in artificial intelligence, is in talks with the Media Lab at the Massachusetts Institute of Technology (MIT) to finance a project exploring the social and economic effects of artificial intelligence. Both the MIT initiative and the industry association are trying to link technological advances closely with political, social and economic issues. The MIT group has been discussing the idea of designing new AI and robotic systems with "society in the loop".

The phrase refers to a long-running debate about designing computer and robotic systems that still require human interaction. For example, the Pentagon recently outlined a military strategy that calls for the use of AI, but in which humans retain control over decisions to kill rather than delegating that responsibility to machines.

"The key that I would like to point out is that IT scientists are not good at interacting with social scientists and philosophers" says Joichi Ito, director at the MIT Media Lab and member of the board of directors of The New York Times. The future will tell whether ethics is imposed on artificial intelligence.

Read the Stanford report here

Source: The New York Times