Despite its obvious advantages, automation driven by machine learning and artificial intelligence carries pitfalls for the lives of millions of people. These pitfalls include the disappearance of many well-established mass professions and the growing consumption of labeled data produced by humans. The people who supply those data are often managed in an old-fashioned way and have to work full-time on routine, pre-assigned task types. Crowdsourcing can be considered a modern and effective way to overcome these issues, since it gives task executors flexibility and freedom in choosing where, when, and on which task types they work. However, many potential stakeholders of crowdsourcing processes hesitate to adopt this technology due to a series of doubts that have not been dispelled over the past decade. To address this, we organize this workshop, which will focus the research and industry communities on three important aspects: Remoteness, Fairness, and Mechanisms.
Remoteness. Data labeling requesters (data consumers for ML systems) doubt the effectiveness and efficiency of remote work. They need trustworthy quality control techniques and ways to guarantee reliable results on time. Crowdsourcing is one of the viable solutions for effective remote work. However, despite rapid growth and a substantial body of literature on the topic, crowdsourcing is still in its infancy and, to a large extent, remains an art. It lacks clear guidelines and accepted practices on both the requester side and the performer (also known as worker) side, which significantly impedes realizing the full potential of crowdsourcing. We intend to change this state of affairs and achieve a breakthrough in this direction.
Fairness. Crowd workers (data suppliers) doubt the availability and choice of tasks. They need fair and ethical task assignment, fair compensation, and growth opportunities. We believe that the working environment (e.g., a crowdsourcing platform) can help here, since it should provide flexibility in choosing and switching tasks and working hours, as well as act fairly and ethically in task assignment. We also aim to address bias in task design and execution that can skew results in ways data requesters did not anticipate.
Since quality, fairness, and growth opportunities for performers are central to our workshop, we will invite a diverse group of performers from a global public crowdsourcing platform to our panel-led discussion.
Mechanisms. Matchmakers (the working environment, usually represented by a crowdsourcing platform) doubt the effectiveness of the economic mechanisms that underlie their two-sided market. They need mechanism designs that provide proper incentives for both sides: flexibility and fairness for workers, and quality and efficiency for data requesters. We stress that economic mechanisms are the key to successfully addressing the issues of remoteness and fairness. Hence, we intend to deepen the interaction between the communities that work on mechanism design and on crowdsourcing.