Abstract—The increasing adoption of crowdsourcing for commercial and industrial purposes raises the need for sophisticated mechanisms in crowd-based digital platforms for efficient worker management. One of the main challenges in this area is the control of worker motivation and skill sets and their impact on output quality. The quality delivered by workers in the crowd depends on aspects such as their skills, experience, and commitment. The lack of generic, detailed proposals for incentivising workers, and the need to create ad-hoc solutions for each domain, make it difficult to evaluate the best rewarding functions in each scenario. In this paper, we take a step in this direction and propose the use of aggregation functions to evaluate the professional skills of crowd workers based on the quality of their past tasks. Additionally, we present a real industrial crowdsourcing solution for software localisation in which the proposed solutions are put into practice with quality measures from real text translations.
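The abstract above does not fix a particular aggregation function, so the following is only an illustrative sketch of the general idea: one plausible way to aggregate a worker's past task-quality scores into a single skill score, weighting recent tasks more heavily. The function name, the decay scheme, and the weights are assumptions, not the paper's formulation.

```python
def skill_score(qualities, decay=0.8):
    """Aggregate past task quality scores in [0, 1] (most recent last)
    into one skill score, using exponentially decaying weights so that
    newer tasks count more than older ones."""
    if not qualities:
        return 0.0
    # Oldest task gets the smallest weight, newest gets weight 1.0.
    weights = [decay ** i for i in range(len(qualities))][::-1]
    total = sum(w * q for w, q in zip(weights, qualities))
    return total / sum(weights)

# A worker improving over three tasks scores closer to the recent quality:
print(round(skill_score([0.6, 0.7, 0.9]), 3))
```

Other aggregation choices (plain mean, median, minimum as a worst-case guarantee) trade off responsiveness against robustness to a single bad task.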
Several business-to-business and business-to-consumer services are provided as a human-to-human conversation in which the provider's representative guides the conversation towards its resolution based on her experience, following internal guidelines. Attempts to automate these services are becoming popular, but they are currently limited to procedures and objectives fixed at design time. Process discovery techniques could provide the mechanisms needed to monitor event logs derived from textual conversations and expand the capabilities of conversational bots. Still, the variability of textual messages hinders the utility of process discovery techniques by producing unstructured, hard-to-understand process models. In this paper, we propose the use of word embeddings to combine events that have semantically similar names.
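The core idea above — merging event labels whose embeddings are close — can be sketched as follows. The vectors below are toy two-dimensional stand-ins (a real pipeline would use pretrained word embeddings), and the greedy clustering and threshold are illustrative choices, not the paper's method.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def merge_labels(embeddings, threshold=0.9):
    """Greedily group event labels whose embedding is close (cosine
    similarity above the threshold) to a cluster's first member."""
    clusters = []
    for label, vec in embeddings.items():
        for cluster in clusters:
            _, rep_vec = cluster[0]
            if cosine(vec, rep_vec) >= threshold:
                cluster.append((label, vec))
                break
        else:
            clusters.append([(label, vec)])
    return [[label for label, _ in c] for c in clusters]

embs = {
    "cancel order": (0.90, 0.10),
    "abort order":  (0.88, 0.12),   # near-synonym of "cancel order"
    "ship item":    (0.10, 0.95),   # unrelated event
}
print(merge_labels(embs))  # → [['cancel order', 'abort order'], ['ship item']]
```

Collapsing near-synonymous labels before discovery reduces the number of distinct activities, which is what keeps the discovered model structured and readable.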
Summary. The automated comparison of process models has received increasing attention in the last decade, due to the growing number of process models and repositories, and the consequent need to assess similarities between the underlying processes. Current techniques for process model comparison are either structural (based on graph edit distances) or behavioural (through activity profiles or the analysis of execution semantics). Accordingly, there is a gap in the quality of the information provided by these two families: structural techniques tend to be fast but inaccurate, whilst behavioural ones are accurate but computationally complex. In this paper we present a novel technique based on a well-known method for comparing labelled trees through the notion of cophenetic distance. The technique lies between the two families of process model comparison methods: it is structural in nature, yet provides accurate information on the differences and similarities between two process models. An experimental evaluation on various benchmark sets is reported, positioning the proposed technique as a valuable tool for process model comparison.
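The cophenetic idea for labelled trees can be sketched roughly as follows: for every pair of leaves, record the depth of their lowest common ancestor, and compare two trees over the same labels through the distance between the resulting vectors. The parent-dictionary encoding and the L1 norm below are illustrative choices made here, not the paper's exact formulation.

```python
from itertools import combinations

def depth(parent, node):
    """Number of edges from node up to the root."""
    d = 0
    while parent[node] is not None:
        node = parent[node]
        d += 1
    return d

def lca_depth(parent, a, b):
    """Depth of the lowest common ancestor of leaves a and b."""
    ancestors = set()
    n = a
    while n is not None:
        ancestors.add(n)
        n = parent[n]
    n = b
    while n not in ancestors:
        n = parent[n]
    return depth(parent, n)

def cophenetic_vector(parent, leaves):
    return {p: lca_depth(parent, *p) for p in combinations(sorted(leaves), 2)}

def tree_distance(p1, p2, leaves):
    """L1 distance between the cophenetic vectors of two trees."""
    v1 = cophenetic_vector(p1, leaves)
    v2 = cophenetic_vector(p2, leaves)
    return sum(abs(v1[k] - v2[k]) for k in v1)

# Two small trees over the same leaf labels a, b, c (child -> parent):
t1 = {"root": None, "x": "root", "a": "x", "b": "x", "c": "root"}
t2 = {"root": None, "y": "root", "a": "root", "b": "y", "c": "y"}
print(tree_distance(t1, t2, ["a", "b", "c"]))  # → 2
```

Because the comparison only inspects tree positions, it keeps the cost profile of a structural method while the per-pair ancestor depths carry fine-grained information about where the two trees disagree.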
Although crowdsourcing has proven efficient as a mechanism to solve independent tasks for on-line production, it is still unclear how to define and manage workflows in complex tasks that require the participation and coordination of different workers. Despite the existence of different frameworks to define workflows, we still lack a commonly accepted solution able to describe the most common workflows in current and future platforms. In this paper, we propose CrowdWON, a new graphical framework to describe and monitor crowd processes. The proposed language is able to represent the workflows of the best-known existing applications, extends previous modelling frameworks, and can assist in the future generation of crowdsourcing platforms. Beyond previous proposals, CrowdWON allows for the formal definition of adaptive workflows that depend on the skills of the crowd workers and/or on process deadlines. CrowdWON also allows expressing constraints on workers based on their previous individual contributions. Finally, we show how our proposal can be used to describe well-known crowdsourcing workflows.
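An adaptive rule of the kind such a language could express might look as follows. This is not CrowdWON itself, only a hypothetical sketch: route a task to workers with sufficient skill, exclude those who already contributed to the process instance, and relax the skill bar as the deadline approaches. All names, fields, and thresholds are assumptions for illustration.

```python
def eligible_workers(workers, task, now):
    """Return the names of workers allowed to take the task.

    workers: list of dicts with 'name' and 'skill' in [0, 1].
    task: dict with 'min_skill', 'deadline' (a timestamp), and
          'past_contributors' (names excluded from re-assignment).
    """
    # Deadline-driven adaptation: halve the required skill when fewer
    # than 10 time units remain, to keep the process moving.
    urgent = (task["deadline"] - now) < 10
    threshold = task["min_skill"] * (0.5 if urgent else 1.0)
    return [w["name"] for w in workers
            if w["skill"] >= threshold
            and w["name"] not in task["past_contributors"]]

workers = [{"name": "ana", "skill": 0.90},
           {"name": "bo",  "skill": 0.50},
           {"name": "cy",  "skill": 0.85}]
task = {"min_skill": 0.8, "deadline": 100, "past_contributors": {"ana"}}
print(eligible_workers(workers, task, now=50))  # not urgent: only cy qualifies
print(eligible_workers(workers, task, now=95))  # urgent: bar drops, bo also qualifies
```

A graphical language like CrowdWON would attach such skill and deadline conditions to transitions in the workflow diagram rather than hard-coding them per platform.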