The Intuitive Appeal of Explainable Machines
by Andrew D. Selbst (Data & Society Research Institute; Yale Information Society Project) and Solon Barocas (Cornell University)
February 19, 2018, 59 pages

Abstract

As algorithmic decision-making has become synonymous with inexplicable decision-making, we have become obsessed with opening the black box. This Article responds to a growing chorus of legal scholars and policymakers demanding explainable machines. Their instinct makes sense; what is unexplainable is usually unaccountable. But the calls for explanation are a reaction to two distinct but often conflated properties of machine-learning models: inscrutability and non-intuitiveness. Inscrutability prevents one from fully grasping the model, while non-intuitiveness means one cannot understand why the model’s rules are what they are. Solving inscrutability alone will not resolve law and policy concerns; accountability relates not merely to how models work, but to whether they are justified.

In this Article, we first explain what makes models inscrutable as a technical matter. We then explore two important examples of existing regulation-by-explanation, as well as techniques within machine learning for explaining inscrutable decisions. We show that while these techniques might allow machine learning to comply with existing laws, compliance will rarely be enough to assess whether decision-making rests on a justifiable basis.

We argue that calls for explainable machines have failed to recognize the connection between intuition and evaluation, and the limitations of such an approach. A belief in the value of explanation for justification assumes that if only a model is explained, problems will reveal themselves intuitively. Machine learning, however, can uncover relationships that are both non-intuitive and legitimate, frustrating this mode of normative assessment. If justification requires understanding why the model’s rules are what they are, we should seek explanations of the process behind a model’s development and use, not just explanations of the model itself. This Article illuminates the explanation-intuition dynamic and offers documentation as an alternative approach to evaluating machine learning models.

Conclusion

Daniel Kahneman has referred to the human mind as a “machine for jumping to conclusions.” Intuition is a basic component of human reasoning, and reasoning about the law is no different. It should therefore not be surprising that we are suspicious of strange relationships in models that admit of no intuitive explanation at all. The natural inclination at this point is to regulate machine learning such that its outputs comport with intuition.

This has led to calls for regulation by explanation. Inscrutability is the property of machine-learning models most often identified as the problem, and it is the target of most proposed remedies. The legal and technical work addressing inscrutability has been motivated by different beliefs about the utility of explanations: that they have inherent value, that they enable action, and that they provide a way to evaluate the basis of decision-making. While the first two rationales may have their own merits, the law has more substantial and concrete concerns that must be addressed. And those who believe that solving inscrutability provides a path to normative evaluation also fall short of the goal, because they fail to recognize the role of intuition.

Solving inscrutability is a necessary first step, but the limitations of intuition will frustrate such assessment in many cases. Where intuition fails us, the task should be to find new ways to regulate machine learning so that it remains accountable. Otherwise, if we maintain an affirmative requirement for intuitive relationships, we risk losing many of the discoveries and opportunities that machine learning can offer, including those that would reduce bias and discrimination.

Just as restricting our evaluation to intuition would be costly, so would abandoning it entirely. Intuition serves as an important check that quantitative modes of validation cannot provide. But while there will always be a role for intuition, we will not always be able to use it to bypass the question of why the rules are the rules. Sometimes we need the developers to show their work.

Documentation can relate the subjective choices involved in applying machine learning to the normative goals of substantive law. Much of the discussion surrounding models implicates important policy debates, but does so only indirectly. When models are employed to change the way we make decisions, we tend to focus too much on the technology itself, when we should focus on the policy changes that either led to the technology’s adoption or were wrought by it. Quite aside from correcting one failure mode of intuition, then, documentation has a separate worth: it lays bare the kinds of value judgments that go into designing these systems and allows society to engage in a clearer normative debate in the future.

We cannot and should not abandon intuition. But only by recognizing the role intuition plays in our normative reasoning can we see that there are other ways to hold machine learning accountable. To complement intuition, we need to ask whether people have made reasonable judgments about competing values under their real-world constraints. Only humans know the answer.