We are often told that Big Data analysis uncovers contingent truths: truths that help companies make decisions, even in real time, to drive efficiency and act in line with their objectives. However, any algorithm "worth its salt", in the words of Álvaro Bedoya, Director of the Center on Privacy and Technology at Georgetown University, will harness machine learning to absorb the intangible standards that underpin many world views, that is, prejudices.
The example Bedoya uses is an algorithm designed to identify the strongest applicants for a job. If the company deploying it has historically favoured young candidates, the algorithm will learn that pattern from the hiring data and quietly eliminate older candidates from later selection rounds.
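Bedoya's point can be sketched with a deliberately simplistic, entirely hypothetical model (the candidate data and the "learner" below are invented for illustration): a system asked to reproduce past hiring decisions will learn whatever pattern those decisions contain, including an age cutoff, even if age was never an explicit criterion.

```python
# Hypothetical illustration: a trivial "hiring model" that learns whatever
# pattern is present in historical decisions -- including age bias.
past_candidates = [
    # (age, years_experience, was_hired) -- past hires skew young
    (24, 2, True), (28, 4, True), (31, 6, True), (26, 3, True),
    (45, 15, False), (52, 20, False), (48, 18, False), (39, 12, False),
]

def learn_age_cutoff(history):
    """Find the age threshold that best reproduces past hiring decisions."""
    best_cutoff, best_score = None, -1
    for cutoff in sorted(age for age, _, _ in history):
        # Score: how many past decisions does "hire if age <= cutoff" match?
        score = sum((age <= cutoff) == hired for age, _, hired in history)
        if score > best_score:
            best_cutoff, best_score = cutoff, score
    return best_cutoff

cutoff = learn_age_cutoff(past_candidates)

def screen(age):
    # The model now rejects anyone older than the learned cutoff,
    # regardless of experience: it has absorbed the historical bias.
    return age <= cutoff
```

No one programmed an age rule here; the rule emerged from the training data, which is exactly how such a bias can slip into a system unnoticed.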
And there are further ways in which we instill into data processing tools, often unconsciously but not always, ideas that are as subjective as they are irrelevant to the pure analysis of mass data. Professor Latanya Sweeney, Director of the Data Privacy Lab at Harvard, discovered a correlation between Google AdWords ads bought by companies offering criminal background checks and names sociologically associated with African-American communities in the USA.
A search for any socially "racialized" name was more likely to bring up an ad suggesting that the person in question had a criminal record. It goes without saying that such a suggestion could have significant repercussions, for example in the hiring scenario described above.
Other voices have also warned that such practices could perpetuate discriminatory attitudes, such as sexism and racism, squandering the opportunity to harness new technologies to correct them. The Human Rights Data Analysis Group warns that if the police, driven by social prejudice, concentrate their searches for drugs and weapons on certain districts, and then feed the resulting arrest and seizure data into an algorithm, that algorithm will recommend that the police keep concentrating on those same districts. Such a feedback loop hinders the technology's potential to uncover genuine trends.
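The feedback loop the Group describes can be illustrated with a minimal, entirely hypothetical simulation (the district names, rates, and patrol counts below are invented, not drawn from any real system): two districts with identical underlying crime rates, where recorded arrests depend only on how many patrols are sent.

```python
import random

random.seed(0)  # reproducible illustration

# Hypothetical setup: two districts with the SAME underlying crime rate.
TRUE_CRIME_RATE = {"district_a": 0.10, "district_b": 0.10}

# The initial patrol allocation is skewed by prejudice, not by data.
patrols = {"district_a": 80, "district_b": 20}

def simulate_year(patrols):
    """Recorded arrests scale with patrols sent, not with actual crime."""
    return {
        district: sum(random.random() < TRUE_CRIME_RATE[district]
                      for _ in range(n_patrols))
        for district, n_patrols in patrols.items()
    }

def reallocate(arrests, total=100):
    """The 'algorithm': send next year's patrols where arrests were recorded."""
    total_arrests = sum(arrests.values()) or 1
    return {d: round(total * a / total_arrests) for d, a in arrests.items()}

for year in range(5):
    arrests = simulate_year(patrols)
    patrols = reallocate(arrests)

# Despite identical crime rates, the prejudiced starting point perpetuates
# itself: the over-patrolled district keeps "earning" more patrols.
```

The algorithm never sees crime itself, only arrest records that mirror where patrols were sent, so it faithfully amplifies the initial bias instead of revealing a genuine trend.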