The interest in artificial intelligence ultimately amounts to the search for self-aware systems that can act in a "human" way (a milestone known as the "Singularity").
It is a search met with skepticism: many people think we will never manage to duplicate human intelligence, particularly since we do not yet fully understand how the brain works, as the computer scientist Jaron Lanier argued in The New York Times in July. Others, such as the theoretical physicist Stephen Hawking, are less skeptical about the possibility of achieving the Singularity than concerned about its possible effects, as successfully creating AI "would be the biggest event in human history. Unfortunately, it might also be the last."
However, just because we are a long way from achieving this goal does not mean that artificial intelligence cannot make a significant contribution to our daily lives, long before any computer can pass the Turing Test. Setting science fiction aside and returning to the practical, the emergence of the Internet of Things, and the avalanche of data it creates, will require artificial-intelligence technologies capable of processing and making sense of such information in real time.
This was discussed by researchers at Carlos III University's SCALAB Group back in 2011: "The possibilities for practical application will mushroom with the explosion of devices capable of capturing and processing information, growth in computing power and advances in algorithms. These could include computer programs that make our lives easier, that take decisions in complex environments or that enable us to solve problems in environments that are difficult for people."
Or as Om Malik, the founder of GigaOM, puts it: "instead of waiting for AI's Godot (a machine we can converse with), what we really need are ways to use machine intelligence to augment our ability to understand our increasingly data-rich and complex environment."
Gradual changes and practical applications
As with many other innovations, Artificial Intelligence is cumulative, and success will consist of gradually optimizing it, one day at a time, rather than placing all our bets on achieving the Singularity. As users, we should be aware that we interact with Artificial Intelligence every day, and that it may already have beaten us thousands of times (in videogames, for example).
Raymond Kurzweil, a director of engineering at Google, argues that whilst Watson [the IBM supercomputer] is not capable of understanding all levels of human language (if it were, we would be at the level of the Turing Test), it still managed to beat human champions in the TV quiz Jeopardy!
One of the advantages of Artificial Intelligence is that its decisions are emotionally neutral: they are based on facts. AI's "intuition" rests not on hunches but on detecting patterns in the historical data loaded into the system. There is also the precision that comes from an analyst that never gets sleepy or hungry, and that avoids the communication breakdowns that occur between people. Companies have much to gain by adopting AI for decisions involving large volumes of data, and they can do this with technology available now.
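The idea of a decision driven purely by patterns in historical data can be sketched in a few lines. The records, feature scaling and loan-approval scenario below are entirely hypothetical, and the nearest-neighbour vote stands in for whatever model a real system would use:

```python
from collections import Counter
import math

# Hypothetical historical records: (income in $k, debt ratio) -> approved?
HISTORY = [
    ((65, 0.20), True),
    ((48, 0.45), False),
    ((90, 0.30), True),
    ((30, 0.60), False),
    ((72, 0.25), True),
    ((40, 0.55), False),
]

def decide(income_k, debt_ratio, k=3):
    """An emotionally neutral decision: a majority vote among the k most
    similar historical cases, i.e. a simple nearest-neighbour pattern."""
    # Scale the debt ratio so both features weigh comparably (an assumption).
    def dist(features):
        return math.hypot(features[0] - income_k,
                          (features[1] - debt_ratio) * 100)
    nearest = sorted(HISTORY, key=lambda rec: dist(rec[0]))[:k]
    votes = Counter(outcome for _, outcome in nearest)
    return votes.most_common(1)[0][0]

print(decide(70, 0.22))  # a profile close to past approvals -> True
```

The decision is deterministic and auditable: the same inputs and the same history always yield the same answer, regardless of who asks or when.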
However, focusing exclusively on the practical applications of Artificial Intelligence could limit its development. As Marvin Minsky, one of the founders of the discipline, recently noted, the major advances in Artificial Intelligence took place "between the 1960s and 1980s. I haven't seen many advances in recent years, because the money is going more to short-term applications rather than basic research".