The human error of Artificial Intelligence
What emerges clearly from the EU Commission’s Proposal for a Regulation on Artificial Intelligence is that AI systems aimed at human profiling are exposed to various kinds of implicit errors and biases, which can amplify inequalities in our society and have a negative impact on human rights.
Veronica Barassi, anthropologist and Professor at the University of St. Gallen in Switzerland, argues in a longer piece on Agenda Digitale (Italian version here) that nowadays there is no boundary between consumer data, collected for targeted ads, and citizen data, collected to decide whether we can access certain rights.
Algorithms are not equipped to understand human complexity. Thus, they end up making reductionist and incorrect assumptions about the intention behind a specific digital practice (a web search, a purchase, or a social media post). This is because the data collected from our digital practices – and used to create our digital profiles – is often stripped of the feelings, thoughts and context that produced it. […]
The problem with our AI systems today is not only that they learn from inaccurate and de-contextualized human data, but also that they are trained on databases filled with biases and errors. […] Databases can’t really be corrected with ‘clean and error-free’ data, because they reflect the social and cultural context that created them and are hence inherently biased. We have the responsibility to reduce that bias, but we can’t really fix it. This, to me, is the key if we really want to understand and limit the social impact of AI systems. We need to recognize the human error of artificial intelligence. […]
Current industry strategies to ‘combat algorithmic bias’ are deeply problematic because they promote the belief that algorithms can be corrected and made unbiased.
These strategies to ‘combat algorithmic bias’ are, in my opinion, not only problematic; they completely miss the point. […]
Rather than trying to fix the biases of AI systems and their human error, we need to find ways to coexist with them. Anthropology can help us a lot here. Anthropologists have long sought to address the fact that individuals necessarily interpret real-life phenomena according to their cultural beliefs and experience [16], and that cultural biases necessarily translate into the systems we construct, including scientific systems [17]. From an anthropological perspective, there is nothing we can really do to ‘correct’ or combat our cultural bias, because it will always be there. The only thing we can do is to acknowledge the existence of prejudice through a self-reflective practice and admit that the systems, representations, and artifacts we construct will never be truly ‘objective’. This same understanding should be applied to our AI systems.