Applying a Principle of Explicability to AI Research in Africa: Should we do it?

Abstract

Developing and implementing artificial intelligence (AI) systems ethically faces several challenges specific to the kind of technology at hand, including ensuring that decision-making systems that make use of machine learning are just, fair, and intelligible, and are aligned with human values. Given that values vary across cultures, an additional ethical challenge is to ensure that these AI systems are not developed according to some unquestioned but questionable assumption of universal norms, but are in fact compatible with the societies in which they operate. This is particularly pertinent for AI research and implementation across Africa, a continent where AI systems are and will be used, but also a place with a history of the imposition of outside values. In this paper, we therefore critically examine one proposal for ensuring that decision-making systems are just, fair, and intelligible, namely that we adopt a principle of explicability to generate specific recommendations, and assess whether the principle should be adopted in an African research context. We argue that a principle of explicability can not only contribute to the responsible and thoughtful development of AI that is sensitive to African interests and values, but can also help address some of the computational challenges in machine learning research. In this way, the motivation for ensuring that a machine learning-based system is just, fair, and intelligible is not only to meet ethical requirements, but also to make effective progress in the field itself.

Publication
Ethics and Information Technology
Benjamin Rosman
Lab Director

I am a Professor in the School of Computer Science and Applied Mathematics at the University of the Witwatersrand in Johannesburg. I work in robotics, artificial intelligence, decision theory, and machine learning.