Fair and Responsible AI: A Focus on the Ability to Contest
Abstract
As the use of artificial intelligence (AI) in high-stakes decision-making increases, the ability to contest such decisions is being recognised in AI ethics guidelines as an important safeguard for individuals. Yet, there is little guidance on how AI systems can be designed to support contestation. In this paper we explain that the design of a contestation process is important because of its impact on perceptions of fairness and satisfaction. We also consider design challenges, including a lack of transparency and the numerous design options that decision-making entities face. We argue for a human-centred approach to designing for contestability to ensure that the needs of decision subjects, and the community, are met.
keywords:
Contestability; explainability; algorithmic fairness; ethics.
CCS Concepts: Human-centered computing → HCI theory, concepts and models
1 Introduction
There is great potential for Artificial Intelligence (AI) to enhance decision-making, by making it more accurate, efficient, and scalable than human decision-making [5, 13]. To harness these benefits, AI systems should be designed responsibly, to ensure that they are fair, accountable, and transparent [7]. This is particularly important given the increasing use of AI in high-stakes decision-making, including sentencing, hiring, and loan application determination [13].
In response to calls for AI systems to be designed, developed, and deployed responsibly, numerous AI ethics guidelines have been produced. One ‘safeguard’ that is gaining traction within these guidelines is the ability to contest AI decisions (see sidebar for examples) [12]. Article 22(3) of the European Union’s General Data Protection Regulation provides a legal ‘right to contest’ decisions made using solely automated processes. However, none of these documents provide guidance on how AI systems should be designed to enable contestation. In this paper, we outline why design is important, what the design challenges are, and our human-centred approach to designing for contestability.
2 The importance of design
The importance of designing AI systems to enable ‘contestability’ has been acknowledged in the HCI and Algorithmic Accountability literature (e.g. [11, 3]). Taking a legal lens, the Algorithmic Accountability work has adopted a theoretical approach, proposing requirements for a contestation scheme [4] and design requirements that enable contestation [3]. Within HCI, contestability research has focused on the ability of expert users to work interactively with a system to contest its output [11].
HCI researchers [13, 5, 10] have also drawn on organisational psychology literature to study how the design of AI systems used in decision-making impacts human perceptions of procedural fairness, or ‘procedural justice’ [19]. The procedural justice literature indicates that having a legitimate way to contest a decision increases a person’s perception of procedural fairness, which can in turn affect their perception of the fairness of the decision (‘distributive justice’), their choice to accept or contest a decision, and their attitude towards the entity making the decision [19, 14]. In line with this literature [14], Lee et al. [13] found that having ‘outcome control’, the ability to correct or appeal a decision, in a cooperative group allocation task improved participants’ perceptions of the fairness of the outcome.
The procedural justice literature also indicates that the design of a contestation process (not just its availability) impacts perceptions of procedural fairness. For example, having the same decision-maker assess the decision on appeal is seen as less fair than having a new decision-maker [14]. In addition, in a study of content moderation across social media platforms, Myers West [15] found that users were dissatisfied with contestation processes, reporting a lack of clear instructions about how to lodge an appeal, no reply or resolution after lodging a challenge, and no access to human intervention. These findings indicate that the design of a contestation process matters.
3 Design challenges
To meaningfully challenge a decision, a decision subject requires some form of information in order to understand the decision, decide whether to contest it, and use as grounds for contestation. Many AI systems used in decision-making are effectively “black boxes” [18]; their decision-making processes are hidden, either because they rely on complex algorithms or techniques (e.g. deep learning) or because companies intentionally conceal them to protect trade secrets [6]. This opacity makes it difficult to understand why a decision was made and, consequently, to contest it in any meaningful way. In contrast, with human decision-making a person can generally seek an explanation from the decision-maker as to why a decision was made. Often, in high-stakes decisions, reasons must be documented during the decision-making process, which mitigates the issue of an inaccurate post-hoc explanation. Promisingly, the field of explainable artificial intelligence (XAI) is making progress on generating such explanations [20]. To date, however, XAI has not focused on providing explanations for contestation specifically, which offers a new avenue for research.
A second design challenge is that there are many ways to contest a decision [14]. For example, existing contestation processes for human decisions (e.g. internal review, complaints mechanisms, external review via tribunal or court) could be adapted for decisions made using AI. However, with decisions made at scale, leaving appeals to the courts to determine would overwhelm an already pressured system. Moreover, low perceptions of fairness are associated with procedures that are time consuming, costly, and resource intensive [14]. An alternative contestation process might involve a decision subject directly contesting a decision within the AI system via an interface. However, the novelty of this approach, coupled with a lack of human touch, could negatively impact perceptions of fairness. With an abundance of design choices, it is difficult to know where to begin.
We suggest that taking a human-centred approach to explore how people conceptualise contestability in relation to AI systems is a key first step in designing for meaningful contestation. To understand the needs of decision subjects, and the expectations of the community more generally, we are currently conducting a thematic analysis of submissions made in response to ‘Artificial Intelligence: Australia’s Ethics Framework’, an Australian Government discussion paper that proposed ‘contestability’ as a core ethical principle [16]. The sidebar contains a sample of our preliminary findings.
4 Conclusion
AI is increasingly used in high-stakes decision-making, and when deployed without appropriate safeguards such as procedural fairness it has had significant negative impacts on thousands of people, from teachers losing their jobs [2] to the erroneous loss of medical benefits [8]. To reduce negative consequences, AI systems must be responsibly designed, developed, and deployed [7]. Though the ability to contest decisions is not the only mechanism required to ensure that AI systems are ‘fair’, it is a crucial safeguard, and in some circumstances, a legal requirement. How access to contestation, and the contestation process itself, is designed matters, given its impact on perceptions of fairness and satisfaction. Yet, there are many design challenges, including opacity and an abundance of design options. A key first step in designing for meaningful contestation is to explore and understand the needs of decision subjects as well as the community more generally.
5 Acknowledgements
Henrietta Lyons is supported by the Melbourne School of Engineering Ingenium scholarship program. This research was partly funded by Australian Research Council Discovery Grant DP190103414 Explanation in Artificial Intelligence: A Human-Centred Approach. Eduardo Velloso is the recipient of an Australian Research Council Discovery Early Career Researcher Award (Project Number: DE180100315) funded by the Australian Government.
References
- [2] Houston Federation of Teachers, Local 2415, et al v Houston Independent School District, 251 F.Supp.3d 116 (2017).
- [3] Marco Almada. 2019. Human intervention in automated decision-making: Toward the construction of contestable systems. In Proceedings of the Seventeenth International Conference on Artificial Intelligence and Law. 2–11.
- [4] Emre Bayamlioglu. 2018. Contesting Automated Decisions. European Data Protection Law Review 4 (2018), 433–446.
- [5] Reuben Binns, Max Van Kleek, Michael Veale, Ulrik Lyngs, Jun Zhao, and Nigel Shadbolt. 2018. ‘It’s Reducing a Human Being to a Percentage’: Perceptions of Justice in Algorithmic Decisions. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18).
- [6] Jenna Burrell. 2016. How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data and Society (2016), 1–12.
- [7] Virginia Dignum. 2019. Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Springer.
- [8] Virginia Eubanks. 2018. Automating Inequality: How High-Tech Tools Profile, Police and Punish the Poor. St Martin’s Publishing Group, Hillsdale, NJ.
- [9] Organisation for Economic Co-operation and Development. 2019. OECD Principles on Artificial Intelligence. (2019). Retrieved 30 January 2020 from https://www.oecd.org/going-digital/ai/principles/.
- [10] Nina Grgic-Hlaca, Muhammad Bilal Zafar, Krishna P. Gummadi, and Adrian Weller. 2018. Beyond Distributive Fairness in Algorithmic Decision Making: Feature Selection for Procedurally Fair Learning. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18). 51–60.
- [11] Tad Hirsch, Kritzia Merced, Shrikanth Narayanan, Zac E Imel, and David C Atkins. 2017. Designing contestability: Interaction design, machine learning, and mental health. In Proceedings of the 2017 Conference on Designing Interactive Systems. 95–99.
- [12] Anna Jobin, Marcello Ienca, and Effy Vayena. 2019. The global landscape of AI ethics guidelines. Nature Machine Intelligence 1 (2019), 389–399.
- [13] Min Kyung Lee, Anuraag Jain, Hea Jin Cha, Shashank Ojha, and Daniel Kusbit. 2019. Procedural justice in algorithmic fairness: Leveraging transparency and outcome control for fair algorithmic mediation. Proceedings of the ACM on Human-Computer Interaction 3, CSCW (2019), 1–26.
- [14] Gerald S Leventhal. 1980. What should be done with equity theory? In Social exchange. Springer, 27–55.
- [15] Sarah Myers West. 2018. Censored, suspended, shadowbanned: User interpretations of content moderation on social media platforms. New Media & Society 20, 11 (2018), 4366–4383.
- [16] Australian Government Department of Industry Innovation and Science. 2019. AI Ethics Framework. (2019). Retrieved 30 January 2020 from https://www.industry.gov.au/data-and-publications/building-australias-artificial-intelligence-capability/ai-ethics-framework.
- [17] Independent High-Level Expert Group on Artificial Intelligence. 2019. Ethics Guidelines for Trustworthy AI. (2019). Retrieved on January 30, 2020 from https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai.
- [18] Cynthia Rudin. 2018. Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead. arXiv preprint arXiv:1811.10154 (2018).
- [19] John Thibaut and Laurens Walker. 1975. Procedural Justice: A Psychological Analysis. Lawrence Erlbaum Associates, Hillsdale, NJ.
- [20] Sandra Wachter, Brent Mittelstadt, and Chris Russell. 2018. Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR. Harvard Journal of Law and Technology 31, 2 (2018), 841–887.