Confident in the Crowd: Bayesian Inference to Improve Data Labelling in Crowdsourcing

Abstract

With the increased interest in machine learning and big data problems, the need for large amounts of labelled data has also grown. However, it is often infeasible to have experts label all of this data, which leads many practitioners to crowdsourcing solutions. In this paper, we present new techniques to improve the quality of the labels while attempting to reduce the cost. The naive approach to assigning labels is to use a majority vote; however, in the context of data labelling this is not always ideal, as data labellers are not equally reliable. One might instead give higher priority to certain labellers through some kind of weighted vote based on past performance. This paper investigates the use of more sophisticated methods, such as Bayesian inference, to measure both the performance of the labellers and the confidence of each label. The methods we propose follow an iterative-improvement algorithm that attempts to use the fewest workers necessary to achieve the desired confidence in the inferred label. We evaluate the proposed methods on simulated binary classification problems with simulated workers and questions. Our methods outperform the standard voting methods in both cost and accuracy while remaining more reliable when there is disagreement within the crowd.
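The iterative confidence-based scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each worker's accuracy is already known (the paper infers worker performance via Bayesian methods), uses a uniform prior over the binary label, and stops querying workers once the posterior confidence reaches a threshold.

```python
def label_posterior(votes, accuracies, prior=0.5):
    """Posterior P(true label = 1) given binary votes and per-worker accuracies.

    Each worker reports the true label with probability equal to their accuracy.
    """
    like1, like0 = prior, 1.0 - prior
    for vote, acc in zip(votes, accuracies):
        like1 *= acc if vote == 1 else 1.0 - acc
        like0 *= acc if vote == 0 else 1.0 - acc
    return like1 / (like1 + like0)


def infer_label(worker_stream, threshold=0.95, max_workers=20):
    """Query workers one at a time until the posterior confidence reaches
    the threshold (or the worker budget is exhausted).

    worker_stream yields (vote, accuracy) pairs; returns (label, confidence,
    number of workers used).
    """
    votes, accs = [], []
    for vote, acc in worker_stream:
        votes.append(vote)
        accs.append(acc)
        p1 = label_posterior(votes, accs)
        confidence = max(p1, 1.0 - p1)
        if confidence >= threshold or len(votes) >= max_workers:
            return (1 if p1 >= 0.5 else 0), confidence, len(votes)
    p1 = label_posterior(votes, accs)
    return (1 if p1 >= 0.5 else 0), max(p1, 1.0 - p1), len(votes)
```

For example, with two agreeing high-accuracy workers (0.9 and 0.8), the posterior confidence already exceeds 0.95, so no further workers are queried; this is the sense in which such a scheme can reduce cost relative to collecting a fixed number of votes per item.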

Publication
International SAUPEC/RobMech/PRASA Conference
Pierce Burke

My work mostly focuses on computer vision, generative AI, and the application of machine learning models in various domains. More broadly, I am interested in using machine learning techniques to address real-world challenges and improve our understanding of the world around us.

Richard Klein
PRIME Lab Director

I am an Associate Professor in the School of Computer Science and Applied Mathematics at the University of the Witwatersrand in Johannesburg, and a co-PI of the PRIME lab.