Fairness and accountability of AI in disaster risk management: Opportunities and challenges

Abstract

Disaster risk management (DRM) seeks to help societies prepare for, mitigate, or recover from the adverse impacts of disasters and climate change. Core to DRM are disaster risk models that rely heavily on geospatial data about the natural and built environments. Developers are increasingly turning to artificial intelligence (AI) to improve the quality of these models. Yet there is still little understanding of the extent to which hidden geospatial biases affect disaster risk models, or of how accountability relationships are reshaped by these emerging actors and methods. In many cases, there is also a disconnect between the algorithm designers and the communities where the research is conducted or the algorithms are deployed. This perspective highlights emerging concerns about the use of AI in DRM. We discuss potential concerns and illustrate what must be considered from a data science, ethical, and social perspective to ensure the responsible use of AI in this field.

Publication
Patterns (Elsevier)
Benjamin Rosman
Lab Director

I am a Professor in the School of Computer Science and Applied Mathematics at the University of the Witwatersrand in Johannesburg. I work in robotics, artificial intelligence, decision theory, and machine learning.