Understanding Potential Microtask Workers for Paid Crowdsourcing

Ming-Hung Wang, Kuan-Ta Chen, Shuo-Yang Wang, Chin-Laung Lei
Department of Electrical Engineering, National Taiwan University
Institute of Information Science, Academia Sinica


Abstract

More and more people leverage the power of crowds to obtain solutions to their problems, and the number of microtask workers on paid crowdsourcing marketplaces is also growing rapidly. However, there is a discrepancy of several orders of magnitude between the population of Internet users ( ≈ 2 billion) and that of microtask workers ( ≈ 0.5 million); we believe that a large number of potential workers are interested in microtasks but have not yet been involved in paid crowdsourcing activities.
In this paper, we conduct an Internet survey to investigate potential workers regarding their demographics, their experience with microtasks, and their preferences for tasks and rewards. We hope that our findings are helpful for establishing a prosperous paid crowdsourcing environment.

Introduction

Nowadays, platforms such as Amazon Mechanical Turk (MTurk) provide matchmaking environments that connect microtask workers with crowdsourcers. According to a report from MTurk [1], there were 0.5 million workers from 190 countries and over 0.2 million tasks available on MTurk in 2011.
Although more and more people have become involved in crowdsourcing activities, comparing the population of Internet users [2] ( ≈ 2 billion) with that of current microtask workers ( ≈ 0.5 million) shows that only a tiny portion ( ≈ 0.025% according to the above statistics) of Internet users have experience with microtasks. Therefore, we consider that a huge number of Internet users may be willing to earn money online but have not been exposed to microtask information; we call them potential workers. We believe the crowdsourcing marketplaces would be much more prosperous if we could reach, attract, and entice those potential workers.
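For reference, the 0.025% figure follows directly from the two population estimates quoted above:
$$\frac{5 \times 10^{5} \text{ workers}}{2 \times 10^{9} \text{ Internet users}} = 2.5 \times 10^{-4} = 0.025\%.$$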
However, reaching and attracting potential workers is a big challenge, which we call the worker recruiting problem. The problem comprises the following core issues:
  1. How to deliver task information to potential workers?
  2. How to attract and entice potential workers to take tasks?
  3. How to reward task workers and meet their expectations?
We conduct an Internet survey to understand potential workers, who may or may not have experience with microtasks. The subjects of our survey are recruited via Internet advertisements. From the results, our major findings are:
  1. Most of the subjects did not know about microtasks before, but they are willing to earn money in this way.
  2. Fun and challenging levels of the task, as well as financial rewards, are all important factors for workers in deciding whether to take a task or not.
  3. Workers prefer being rewarded by the time spent on tasks over being rewarded by the quality of task outcomes, which is the strategy usually adopted by crowdsourcers.

Related Work

As crowdsourcing activities flourish, a growing body of research concentrates on user studies of crowdsourcing marketplaces, including studies of users' demographics [Ross et al. 2010], motivations [Hossain 2012], and behaviors [Kittur, Chi, and Suh 2008]. However, unlike previous works that focus on workers who have already been involved in crowdsourcing activities, in this paper we investigate workers who have no prior experience with microtasks.

Research Methodology

Our questionnaire asks the subjects about their demographics, their experience with microtasks, and their preferences for different types of microtasks and rewards.
The subjects of our survey are recruited via Internet advertisements. Since we focus on potential workers who have no experience with microtasks, we do not post our survey information on crowdsourcing platforms such as MTurk. Among all media channels, we consider Internet advertising a good channel for disseminating information to potential workers because of its high exposure to Internet users. We choose Google AdWords as our advertising platform for its wide reach.
Table 1: Summary of our advertising campaign
Duration: 8 days
Total cost (USD): $307.1
# clicks: 7,424
# complete surveys: 237
Cost/response (USD): $1.3
We choose the advertisement keywords as follows. We first query "online earning" on Google Search, since we believe that online and earning are the two essential components of paid crowdsourcing activities. We then crawl the top 50 websites from the query results returned by Google Search and count the frequencies of all unigrams and bigrams in the crawled content after removing stop words. Finally, we choose the 25 most frequent bigrams as our keywords, since the unigrams are subsumed by the bigrams; a sketch of this counting step is shown below. Our keywords include "make money," "earn cash," "online earning," "online working," "work home," "paid survey," and so on.
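The following Python sketch illustrates the bigram-counting step. It rests on assumptions that are not part of the original study: the top-50 result pages are assumed to have already been downloaded as plain-text files under a local crawled/ directory, and the stop-word list and function names are illustrative only.

```python
# Minimal sketch: count bigram frequencies over crawled result pages after
# removing stop words, then keep the k most frequent bigrams as ad keywords.
import re
from collections import Counter
from pathlib import Path

# Illustrative stop-word list; the original study does not specify one.
STOP_WORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "on", "for",
              "is", "are", "you", "your", "with", "at", "by", "it", "this"}

def tokenize(text):
    """Lowercase the page text and keep alphabetic tokens that are not stop words."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return [t for t in tokens if t not in STOP_WORDS]

def top_bigrams(pages, k=25):
    """Count bigram frequencies over all pages and return the k most frequent."""
    counts = Counter()
    for text in pages:
        tokens = tokenize(text)
        counts.update(zip(tokens, tokens[1:]))
    return [" ".join(bigram) for bigram, _ in counts.most_common(k)]

if __name__ == "__main__":
    # Assume each crawled result page was saved under ./crawled/ as a .txt file.
    pages = [p.read_text(errors="ignore") for p in Path("crawled").glob("*.txt")]
    for keyword in top_bigrams(pages, k=25):
        print(keyword)
```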
A summary of our advertisement campaign and our survey is shown in Table 1.
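The per-response cost in Table 1 is simply the total campaign cost divided by the number of completed surveys:
$$\frac{\$307.1}{237 \text{ complete surveys}} \approx \$1.3 \text{ per response}.$$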

Survey Results

Demographics

Two thirds of our subjects are male, and two thirds are under 35, while only 2% are over 55. As for annual income, 74% of our subjects earned less than $10,000 last year, and only 9% earned more than $50,000.

Experience in Microtasks

Only 33% of our subjects had heard about microtasks before, and 88% of the subjects have no experience with microtasks. After seeing an example microtask that asks for a short essay about a picture, more than 85% of the subjects are interested in taking microtasks. These results indicate that a large number of Internet users have no idea about microtasks but are willing, or even eager, to join crowdsourcing activities to earn rewards. We consider that the results support our conjecture that a large amount of online human resources is available to crowdsourcing marketplaces.

Preference on Tasks and Rewards

Task type.

We investigate the subjects' preferences for different types of tasks and for different types of devices. From the results, our findings include: 1) open-ended tasks such as writing and tagging are more popular than closed-ended tasks such as labor-intensive work (e.g., audio transcription); 2) workers prefer to complete microtasks on their computers rather than on their mobile devices; 3) workers prefer tasks performed on computers over tasks in the real world, such as taking a photo of a particular statue.

Factors for selecting tasks.

As shown in Figure 1, it is interesting that over 40% of the subjects choose tasks according to how fun and challenging the tasks are, even though our subjects are recruited through keywords related to online earning. The results show that both non-financial factors and financial rewards play important roles in workers' choice of tasks.
Figure 1: Factors for choosing a task to do

Rewarding strategy.

Figure 2: Rewarding strategy preferences
As shown in Figure 2, potential workers would like to earn money according to how long they spend on tasks rather than according to the quality of task outcomes. This may be because workers fear giving away their work to crowdsourcers without payment if their outcomes are not approved. However, crowdsourcers are willing to pay only for qualified work [Wu et al. 2013], so a hybrid rewarding strategy is worth studying in the future.
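Purely as an illustration, and not a formulation proposed in this study, one simple hybrid strategy could weight a time-based component against a quality-based component:
$$R = \alpha \, r_t T + (1 - \alpha) \, r_q Q, \qquad 0 \le \alpha \le 1,$$
where $T$ is the time a worker spends on the task, $Q$ is a quality score assigned to the outcome, $r_t$ and $r_q$ are the corresponding unit rewards, and $\alpha$ balances the workers' preference for time-based pay against the crowdsourcers' preference for quality-based pay.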

Conclusion and Future Work

In this paper, we pose the worker recruiting problem, which refers to how to recruit a large number of people (e.g., over 10,000 workers) to take microtasks. To address this problem, we conduct an Internet survey to understand potential workers on crowdsourcing marketplaces. Our major findings include: 1) most of our subjects do not know about microtasks, but they are willing to earn money from them; 2) workers prefer open-ended tasks such as writing over closed-ended tasks such as transcribing an audio tape; 3) both financial and non-financial factors are important when workers choose tasks; 4) workers would like to be rewarded according to the time they invest in tasks rather than the quality of task outcomes.

References

[Hossain 2012] Hossain, M. 2012. Users' motivation to participate in online crowdsourcing platforms. In 2012 International Conference on Innovation Management and Technology Research, 310-315. IEEE.
[Kittur, Chi, and Suh 2008] Kittur, A.; Chi, E. H.; and Suh, B. 2008. Crowdsourcing user studies with Mechanical Turk. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 453-456. ACM.
[Ross et al. 2010] Ross, J.; Irani, L.; Silberman, M.; Zaldivar, A.; and Tomlinson, B. 2010. Who are the crowdworkers?: Shifting demographics in Mechanical Turk. In Proceedings of the 28th International Conference on Human Factors in Computing Systems: Extended Abstracts, 2863-2872. ACM.
[Wu et al. 2013] Wu, C.-C.; Chen, K.-T.; Chang, Y.-C.; and Lei, C.-L. 2013. Crowdsourcing multimedia QoE evaluation: A trusted framework. IEEE Transactions on Multimedia.

Footnotes:

[1] https://forums.aws.amazon.com/thread.jspa?threadID=58891
[2] http://www.internetworldstats.com/stats.htm

