Designing for human rights in AI

human rights, Design for Values, Value Sensitive Design, ethics, stakeholders, Artificial Intelligence

Authors: Evgeni Aizenberg and Jeroen van den Hoven

Year: 2020

Published in: Big Data & Society.

Read me: DOI: http://dx.doi.org/10.1177/2053951720949566. Website.

Abstract: In the age of Big Data, companies and governments are increasingly using algorithms to inform hiring decisions, employee management, policing, credit scoring, insurance pricing, and many more aspects of our lives. Artificial intelligence (AI) systems can help us make evidence-driven, efficient decisions, but can also confront us with unjustified, discriminatory decisions wrongly assumed to be accurate because they are made automatically and quantitatively. It is becoming evident that these technological developments are consequential to people’s fundamental human rights. Despite increasing attention to these urgent challenges in recent years, technical solutions to these complex socio-ethical problems are often developed without empirical study of societal context and the critical input of societal stakeholders who are impacted by the technology. On the other hand, calls for more ethically and socially aware AI often fail to provide answers for how to proceed beyond stressing the importance of transparency, explainability, and fairness. Bridging these socio-technical gaps and the deep divide between abstract value language and design requirements is essential to facilitate nuanced, context-dependent design choices that will support moral and social values. In this paper, we bridge this divide through the framework of Design for Values, drawing on methodologies of Value Sensitive Design and Participatory Design to present a roadmap for proactively engaging societal stakeholders to translate fundamental human rights into context-dependent design requirements through a structured, inclusive, and transparent process.

Bibtex (copy):
@article{aizenberg-2020,
  author = {Aizenberg, Evgeni and van den Hoven, Jeroen},
  doi = {10.1177/2053951720949566},
  journal = {Big Data \& Society},
  number = {2},
  title = {{Designing for human rights in AI}},
  volume = {7},
  year = {2020}
}

Annotation

By Christie Bavelaar, Lars van Koetsveld van Ankeren. 🪧Slides.

This paper proposes a way to structure the design process for AI so that it honours fundamental human rights. Technological developments can interfere with fundamental human rights, especially when technical solutions are implemented without empirical study of their societal context. Calls for more ethical AI stress the importance of transparency, but often fail to provide practical guidance on how to achieve it. This creates a socio-technical gap that needs to be bridged.

The paper stresses the importance of a democratic design process in which stakeholders are involved. This process is structured using the tripartite methodology of Value Sensitive Design. First, the stakeholders and the values at stake are specified (conceptual investigation). Second, the needs and experiences of these stakeholders are explored (empirical investigation). Third, technical solutions are implemented and evaluated (technical investigation). These three types of investigations do not exist in isolation, but rather influence and enhance each other.

The authors make an explicit choice to ground their work in the human rights expressed in the EU Charter of Fundamental Rights. They explore rights such as dignity, freedom, equality, and solidarity. Using a hierarchical approach, norms can be derived from values, and these norms in turn result in specific design requirements. Fundamental human values and norms are most easily defined by the ways in which they can be violated, which is why the authors provide examples of where AI may violate these norms and values and how such violations can be avoided. Users need to be aware that they are being subjected to AI and must be able to contest the AI's decisions. Stakeholders need to reflect on which data is justifiably necessary for the system to use. Sometimes the conclusion may even be that AI is not the solution to the problem at hand.
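As a rough illustration (not taken from the paper), the values–norms–design requirements hierarchy could be represented as a simple data structure; the class names and the example strings below are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class DesignRequirement:
    description: str  # concrete, verifiable property the system must satisfy

@dataclass
class Norm:
    statement: str  # context-dependent prescription derived from a value
    requirements: list[DesignRequirement] = field(default_factory=list)

@dataclass
class Value:
    name: str  # e.g. a fundamental right from the EU Charter
    norms: list[Norm] = field(default_factory=list)

# Hypothetical example: translating "privacy" into design requirements
privacy = Value(
    name="Privacy",
    norms=[
        Norm(
            statement="Collect only data that stakeholders agree is necessary",
            requirements=[
                DesignRequirement("Document and justify every input feature"),
                DesignRequirement("Exclude protected attributes unless explicitly justified"),
            ],
        )
    ],
)
```

The point of such a structure is only to make the translation steps explicit and traceable; in the paper this translation is done with stakeholders through a structured, inclusive process, not automatically.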

The paper concludes that technology alone cannot be the solution to complex societal problems, since technology is not as ethically neutral or objective as it is often perceived to be. To this end, the authors present their Design for Values approach so that institutions and societies can ensure AI contributes positively to the enjoyment of human rights. These principles do not apply only to AI, since other technologies can have a similar impact on human rights. Lastly, the authors conclude that designing for human values does not hinder technological innovation; instead, it leads to long-term benefits for individuals in society as well as for developers, who gain greater trust.

– 📖 –