We need to save (willful) ignorance from AI

Recently, the psychologist Ralph Hertwig and the legal scholar Christoph Engel published an extensive taxonomy of motives for deliberate ignorance. Two sets of motives they identified are especially relevant to the need for ignorance in the face of AI.

The first set of motives revolves around impartiality and fairness. Simply put, knowledge can sometimes corrupt judgment, and we often choose to remain deliberately ignorant in response. For example, peer review of academic papers is usually anonymous. Insurance companies in most countries are not permitted to know all the details of their clients’ health before enrollment; they only know general risk factors. This consideration is particularly relevant to AI, because AI can produce highly prejudicial information.

The second set of motives concerns emotional regulation and regret avoidance. Deliberate ignorance, Hertwig and Engel write, can help people maintain “cherished beliefs” and avoid “mental discomfort, fear, and cognitive dissonance.” The prevalence of deliberate ignorance is high. About 90 percent of surveyed Germans want to avoid the negative feelings that may arise from “foreknowledge of negative events, such as death and divorce,” and 40 to 70 percent also do not want to know about positive events, to help maintain the “positive feelings of surprise and suspense” that come from, for example, not knowing the sex of an unborn child.

These sets of motives can help us understand the need to protect ignorance in the face of AI, write Christina Leuker and Wouter van den Bos in a highly recommended essay in Nautilus.