How should machines learn to make the most of the data they handle? Depending on how scientists answer this question, they can be grouped into five tribes. All five believe they can find a master algorithm capable of discovering knowledge from data, with huge implications for everything we do.

Humanity has tapped into a fourth source of knowledge. For thousands of years, we learned everything we know from three basic sources: evolution, experience and culture (which includes everything we learn through family and social interaction). Today, thanks to computers and Artificial Intelligence (AI), and specifically to machine learning – a subfield of computer science that gives computers the ability to learn without being explicitly programmed – we have gained access to a new source of knowledge.

Traditionally, computers were programmed with algorithms that gave them detailed instructions determining what to do – to play a game of chess, for example, or issue a parking ticket. Today, computers are increasingly capable of functioning without the same level of detailed external programming, generating knowledge on their own.

This is a quantum leap for humanity, and its impact is already rippling across the corporate world. According to computer scientist Paco Nathan, speaking at Big Data Spain – an event held in Pozuelo de Alarcón (Madrid) that drew thousands of professionals from this emerging industry – “it is impossible to name ten successful tech startups that don’t consider machine learning a part of their strategy.”

Paco Nathan, during his presentation at Big Data Spain

The quest for the master algorithm

Algorithms already know which movies we enjoy, filter unwanted emails and let our smartphones organize our lives. But they are still far from realizing their full potential. The so-called master algorithm is a single algorithm capable of discovering all knowledge from data – and this is precisely the title of Pedro Domingos’s latest book. In The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World, the Portuguese author, one of the world’s leading machine learning experts and a professor at the University of Washington, defines the five tribes of machine learning: Symbolists, Connectionists, Evolutionaries, Bayesians and Analogizers.

What distinguishes these tribes? Essentially, each approaches machine learning from a different discipline and therefore applies different algorithmic solutions.

  • Symbolists draw on logic and philosophy, and their master algorithm is inverse deduction: they believe the key is teaching machines to fill gaps in their knowledge through inverse deduction.
  • Connectionists consider this approach too narrow and formal: logic does not dictate what happens in life. Their theories are based on how the brain works; their foundational discipline is therefore neuroscience, and their master algorithm is backpropagation. One of the most authoritative voices in connectionism is Yann LeCun, professor at New York University and, since 2013, Facebook’s head of AI research.

Through backpropagation, connectionist scientists are already making progress toward developing small-scale ‘newborn’ artificial brains. After developing mathematical models that loosely simulate neuronal learning processes, they build artificial neural networks and train them through backpropagation. One of the most talked-about advances was the creation of an artificial brain capable of singling out images in videos. When researchers tested it on YouTube, they discovered, much to their surprise, that it responded strongly to pictures of cats, an animal that appears frequently in the platform’s videos.
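To make the idea concrete, here is a minimal sketch of backpropagation: a tiny neural network learning the XOR function. The network size, seed, learning rate and iteration count are all illustrative choices, not anything taken from Domingos's book or the systems described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# The XOR problem: four input pairs and their target outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
W2 = rng.normal(size=(4, 1))   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    losses.append(float(np.mean((out - y) ** 2)))

    # Backward pass: propagate the error back through the layers,
    # using the chain rule to get each weight matrix's gradient.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent update: nudge weights against the gradient.
    W2 -= 0.5 * (h.T @ d_out)
    W1 -= 0.5 * (X.T @ d_h)

print(f"error before training: {losses[0]:.3f}, after: {losses[-1]:.3f}")
```

The backward pass is the whole trick: each layer's error signal is computed from the layer above it, which is what lets networks with many layers learn from a single scalar loss.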

  • Evolutionaries draw on the principles of evolutionary biology. Their tools are evolutionary algorithms, based on Darwin’s principles. In simple terms, evolutionaries mix possible solutions to a problem; these alternatives compete against each other and blend, and only the fittest survive, allowing the machine to work out progressively better solutions.
  • Bayesians owe their name to Thomas Bayes, an 18th-century British mathematician best known for the theorem he set forth. For Bayesians, statistics are essential, and machine learning should be a form of probabilistic inference. Their basic idea is that everything we learn, everything we know, is uncertain; we therefore need to calculate, for every piece of information, the probability of it being incorrect, and keep these probabilities updated in light of the evidence, following Bayes’ theorem. The first anti-spam filters were built on these principles.
  • Analogizers hold that analogy is the cornerstone of machine learning, and many studies have shown it is basic to human reasoning: when facing a dilemma, we look to our past, remember what happened and act accordingly. Their algorithms power today’s online recommendation engines, so profitable for companies. The machine learns that if A and B share similar interests, and B likes something that A is not familiar with, A will probably like it too. Amazon’s recommendation system is responsible for one third of its sales.
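The evolutionaries' recipe — candidate solutions competing, blending and surviving by fitness — can be sketched as a toy genetic algorithm. The task here (evolving an all-ones bit string) and every parameter are invented for illustration:

```python
import random

random.seed(1)

LENGTH, POP, GENERATIONS = 20, 30, 100

def fitness(candidate):
    # How close the candidate is to the all-ones target string.
    return sum(candidate)

def crossover(a, b):
    # Blend two parent solutions at a random cut point.
    cut = random.randrange(1, LENGTH)
    return a[:cut] + b[cut:]

def mutate(candidate, rate=0.02):
    # Flip each bit with a small probability.
    return [bit ^ 1 if random.random() < rate else bit for bit in candidate]

# Start from a random population of candidate solutions.
population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]

for _ in range(GENERATIONS):
    # Selection: discard the least fit half.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP // 2]
    # Reproduction: survivors blend and mutate to refill the population.
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(POP - len(survivors))]
    population = survivors + children

best = max(population, key=fitness)
print(f"best fitness: {fitness(best)}/{LENGTH}")
```

Because the fittest half is carried over unchanged each generation, the best score never regresses — each generation's solutions are at least as good as the last.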
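The Bayesian updating rule behind those early spam filters is just Bayes' theorem: P(spam | word) = P(word | spam) · P(spam) / P(word). A toy update, with all probabilities invented for illustration:

```python
# Given that a message contains the word "free", how likely is it spam?
p_spam = 0.4                 # prior: fraction of mail that is spam
p_word_given_spam = 0.6      # "free" appears in 60% of spam messages
p_word_given_ham = 0.05      # ...and in 5% of legitimate messages

# Total probability of seeing the word at all.
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)

# Bayes' theorem: update the prior in light of the evidence.
p_spam_given_word = p_word_given_spam * p_spam / p_word

print(round(p_spam_given_word, 3))  # -> 0.889
```

One word raises the estimate from 40% to roughly 89%; a real filter repeats this update for every word in the message and flags mail whose posterior probability crosses a threshold.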
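The analogizers' "if A and B share similar interests" logic can be sketched as a tiny user-based recommender. The users, items and ratings below are invented, and the similarity measure is just one simple choice among many:

```python
ratings = {
    "ana":   {"film_1": 5, "film_2": 4, "film_3": 1},
    "bob":   {"film_1": 5, "film_2": 5, "film_3": 1, "film_4": 4},
    "carol": {"film_1": 1, "film_2": 2, "film_3": 5, "film_4": 1},
}

def similarity(u, v):
    # Negative squared distance over commonly rated items:
    # higher (closer to zero) means more similar tastes.
    common = ratings[u].keys() & ratings[v].keys()
    return -sum((ratings[u][i] - ratings[v][i]) ** 2 for i in common)

def recommend(user):
    # The analogy step: find the most similar other user...
    neighbour = max((u for u in ratings if u != user),
                    key=lambda v: similarity(user, v))
    # ...and suggest their best-rated item the user hasn't seen yet.
    unseen = set(ratings[neighbour]) - set(ratings[user])
    return max(unseen, key=lambda i: ratings[neighbour][i])

print(recommend("ana"))  # bob's tastes match ana's, so -> film_4
```

Production recommenders use far more sophisticated similarity measures and factorized models, but the core reasoning by analogy is the same.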

Ramkumar Ravichandran, Director, Analytics & A/B Testing at Visa

Domingos considers that all these theories are useful and solve fundamental problems in machine learning, but at the same time they are incomplete. In any case, as Ramkumar Ravichandran, Director of Analytics & A/B Testing at Visa, said at Big Data Spain: “AI is much closer than we think. It is a systemic change that is going to make our role evolve. This does not necessarily mean that we will be rendered jobless, not only because not all jobs need AI, but because AI isn’t fit for all jobs.” So, where does the key to machine learning lie? “The important thing is to clearly distinguish between what humans do better and what machines do better,” said Nathan, whose prognosis is conclusive: “Humans will not be left without a job, because as long as there are problems, there will be jobs.” There are things that not even the master algorithm will be able to change.

If you liked this article and find content related to Big Data useful, visit BBVA Careers and connect with us on LinkedIn.
