24 February 2018

[Paper] The Intuitive Appeal of Explainable Machines


The Intuitive Appeal of Explainable Machines
by Andrew D. Selbst (Data & Society Research Institute; Yale Information Society Project) and Solon Barocas (Cornell University)
February 19, 2018, 59 pages

Abstract

As algorithmic decision-making has become synonymous with inexplicable decision-making, we have become obsessed with opening the black box. This Article responds to a growing chorus of legal scholars and policymakers demanding explainable machines. Their instinct makes sense; what is unexplainable is usually unaccountable. But the calls for explanation are a reaction to two distinct but often conflated properties of machine-learning models: inscrutability and non-intuitiveness. Inscrutability makes one unable to fully grasp the model, while non-intuitiveness means one cannot understand why the model’s rules are what they are. Solving inscrutability alone will not resolve law and policy concerns; accountability relates not merely to how models work, but to whether they are justified.

In this Article, we first explain what makes models inscrutable as a technical matter. We then explore two important examples of existing regulation-by-explanation, as well as techniques within machine learning for explaining inscrutable decisions. We show that while these techniques might allow machine learning to comply with existing laws, compliance will rarely be enough to assess whether decision-making rests on a justifiable basis.
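By way of illustration only (this example is not drawn from the Article), one common family of such explanation techniques approximates an inscrutable model with a simple, interpretable surrogate whose rules can actually be read. The sketch below assumes scikit-learn and synthetic data standing in for, say, a credit-scoring dataset:

```python
# Illustrative sketch, not from the Article: explaining an inscrutable model
# by fitting an interpretable "surrogate" to its predictions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic data stands in for a real decision-making dataset.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

# The "black box": hundreds of additive trees, each readable on its own but
# collectively hard to grasp (inscrutable, in the Article's sense).
black_box = GradientBoostingClassifier(n_estimators=300, random_state=0).fit(X, y)

# Global surrogate: a shallow decision tree trained to mimic the black box.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box's predictions.
fidelity = surrogate.score(X, black_box.predict(X))
print(f"surrogate fidelity: {fidelity:.2f}")
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(8)]))
```

Even a high-fidelity surrogate of this kind only restates what the black box does; it does not say why those rules are justified, which is precisely the gap between explanation and evaluation that the authors press on.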

We argue that calls for explainable machines have failed to recognize the connection between intuition and evaluation, and the limitations of such an approach. A belief in the value of explanation for justification assumes that if a model is simply explained, its problems will reveal themselves intuitively. Machine learning, however, can uncover relationships that are both non-intuitive and legitimate, frustrating this mode of normative assessment. If justification requires understanding why the model’s rules are what they are, we should seek explanations of the process behind a model’s development and use, not just explanations of the model itself. This Article illuminates the explanation-intuition dynamic and offers documentation as an alternative approach to evaluating machine-learning models.

Conclusion

Daniel Kahneman has referred to the human mind as a “machine for jumping to conclusions.” Intuition is a basic component of human reasoning, and reasoning about the law is no different. It should therefore not be surprising that we are suspicious of strange relationships in models that admit of no intuitive explanation at all. The natural inclination at this point is to regulate machine learning such that its outputs comport with intuition.

This has led to calls for regulation by explanation. Inscrutability is the property of machine-learning models most often seen as the problem, and it is the target of the majority of proposed remedies. The legal and technical work addressing inscrutability has been motivated by different beliefs about the utility of explanations: that they have inherent value, that they enable action, and that they provide a way to evaluate the basis of decision-making. While the first two rationales may have their own merits, the law has more substantial and concrete concerns that must be addressed. And those who believe that solving inscrutability provides a path to normative evaluation also fall short of the goal, because they fail to recognize the role of intuition.

Solving inscrutability is a necessary step, but the limitations of intuition will prevent normative assessment in many cases. Where intuition fails us, the task should be to find new ways to regulate machine learning so that it remains accountable. Otherwise, if we maintain an affirmative requirement for intuitive relationships, we will potentially lose out on many discoveries and opportunities that machine learning can offer, including those that would reduce bias and discrimination.

Just as restricting our evaluation to intuition would be costly, so would abandoning it entirely. Intuition serves as an important check that cannot be provided by quantitative modes of validation. But while there will always be a role for intuition, we will not always be able to use it to bypass the question of why the rules are the rules. Sometimes we need the developers to show their work.

Documentation can relate the subjective choices involved in applying machine learning to the normative goals of substantive law. Much of the discussion surrounding models implicates important policy questions, but does so indirectly. When models are employed to change how decisions are made, we tend to focus too much on the technology itself, when we should instead focus on the policy changes that either led to the technology’s adoption or were wrought by it. Quite aside from correcting one failure mode of intuition, then, documentation has separate worth in laying bare the kinds of value judgments that go into designing these systems, allowing society to engage in a clearer normative debate in the future.
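As a purely hypothetical illustration of what such documentation might record (the fields and values below are assumptions made for the sake of illustration, not a format proposed by the authors), a minimal machine-readable record could capture the subjective choices made during development alongside their stated rationales:

```python
# Hypothetical sketch of development documentation; the fields and values are
# illustrative assumptions, not a schema from the Article.
from dataclasses import dataclass, field

@dataclass
class ModelDevelopmentRecord:
    purpose: str                    # the decision the model is meant to support
    target_definition: str          # how the outcome variable was operationalized
    data_sources: list[str]         # where the training data came from
    excluded_features: list[str]    # features deliberately left out
    design_choices: dict[str, str]  # each choice mapped to its stated rationale
    validation_summary: str         # how performance and fairness were assessed
    known_limitations: list[str] = field(default_factory=list)

# An invented record for a lending model, purely for illustration.
record = ModelDevelopmentRecord(
    purpose="Prioritize loan applications for manual review",
    target_definition="Default = 90+ days delinquent within 24 months of origination",
    data_sources=["internal loan history, 2012-2017"],
    excluded_features=["zip code"],
    design_choices={"exclude zip code": "risk of proxying for protected-class membership"},
    validation_summary="Holdout accuracy plus error-rate comparison across demographic groups",
    known_limitations=["training data predates a 2018 underwriting policy change"],
)
```

A record of this kind does not make a model's rules intuitive; it makes the choices behind them visible and available for the clearer normative debate the authors call for.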

We cannot and should not abandon intuition. But only by recognizing the role intuition plays in our normative reasoning can we see that there are other ways to evaluate these systems. To complement intuition, we need to ask whether people have made reasonable judgments about competing values under their real-world constraints. Only humans know the answer.

17 July 2019
The digital lives of refugees [Research report]
There is growing recognition among donors and humanitarian organisations that mobile technology and mobile network operators (MNOs) have an important role to play in the delivery of dignified aid. This includes providing digital tools that …
4 January 2019
Why data is never raw
"Except in divine revelation, data is never simply given, nor should it be accepted on faith," writes Nick Barrowman in The New Atlantis. "How data are construed, recorded, and collected is the result of human …
1 January 2019
Daniel Kahneman on cutting through the noise
You might be surprised by what occupies Daniel Kahneman’s thoughts. “You seem to think that I think of bias all the time,” he tells esteemed economist Tyler Cowen. “I really don’t think of bias that …
19 November 2018
What are Europe’s rules for democratic artificial intelligence?
Vincenzo Tiani has written an excellent summary in Wired Italia of the recently published "Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations", by AI4People, a task force of European experts. With the …
18 November 2018
Consumer behaviour and the circular economy
Report: Behavioural Study on Consumers’ Engagement in the Circular Economy, by London Economics for the European Commission’s Consumers, Health, Agriculture and Food Executive Agency (CHAFEA), 23 October 2018, 202 pages. The objective of this study was to provide policy-relevant …
24 October 2018
[Book] Challenging the City Scale
Challenging The City Scale: Journeys in People-Centred Design, edited by Cité du Design (Saint-Etienne) and Clear Village (London), Birkhäuser, 2018, 176 pages (free ebook). This 176-page book is released by the famous international publisher Birkhäuser and co-edited …
8 October 2018
Designing for positive impact and human flourishing
A few days ago Experientia attended the Service Design Days in Barcelona. One of the surprise presentations was by Anna Pohlmeyer, who co-directs the Delft Institute for Positive Design. Although the title seemed a bit airy …
25 April 2018
Two reports on people’s attitudes and understanding of digital technologies
Doteveryone, a UK think tank that champions responsible technology for the good of everyone in society [similar to Milan's newly founded Digital Culture Center], published two reports this year: the first one, The Digital Attitudes …

We are an international experience design consultancy helping companies and organisations to innovate their products, services and processes by putting people and their experiences first.

5 July 2019
Experientia on addressing vaccine hesitancy

Vaccine hesitancy is a top-ten global health threat. Dealing with it successfully requires understanding it as a behaviour and generating a holistic view of people’s perspectives and ecosystems. In this way we can identify the best opportunities for intervention. Here is Experientia’s position on vaccine hesitancy and our tailor-made support for the vaccine industry.

2 May 2019
MacArthur Foundation’s “100&Change” competition now calling for applications

100&Change is a MacArthur Foundation competition for a $100 million grant to fund a single proposal that will make measurable progress toward solving a significant problem. 100&Change will select a bold proposal that promises real progress toward solving a critical problem of our time. And it will award a $100 million grant to help make […]

19 March 2019
Interested in a career with us?

Then we are interested in hearing from you. We have several positions available for talented UX/UI and service designers who are passionate about creating world-class user experiences. Please see the job descriptions on our website for more information, and send us your CV with a cover letter about yourself, your experience, and what UX […]

22 December 2018
Our very best wishes for 2019! / Best wishes from Experientia!
4 October 2018
Bringing patients into focus at the Roche Innovation Summit

Experientia is proud to have been a key participant at the Roche Innovation Summit, held at Roche headquarters in Basel, Switzerland, on 19 June 2018. Themed “Transforming the Healthcare Experience Together”, the summit aimed to galvanize the Roche community around the future transformation of healthcare and diagnostics. With 800 attendees from Roche and Genentech global […]

13 June 2018
Invitation: DesAlps design thinking workshop for your startup

Do you have a startup? Have you ever considered the benefits it could gain from Design Thinking? This is your opportunity to find out! DesAlps Workshop #2: Design Thinking for your startup! Thursday 28 June 2018, from 9:30 to 17:00 @ I3P | Corso Castelfidardo 30/a, Torino —– As part of the European DesAlps project, a team of experts […]
