2021
DOI: 10.48550/arxiv.2103.16816
Preprint

QUEST: Queue Simulation for Content Moderation at Scale

Abstract: Moderating content in social media platforms is a formidable challenge due to the unprecedented scale of such systems, which typically handle billions of posts per day. Some of the largest platforms, such as Facebook, blend machine learning with manual review of platform content by thousands of reviewers. Operating a large-scale human review system poses interesting and challenging methodological questions that can be addressed with operations research techniques. We investigate the problem of optimally operatin…
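The mechanics behind the abstract, a queue of ML-flagged posts served by a pool of human reviewers, can be illustrated with a minimal discrete-event sketch. Everything below (the arrival and review rates, the uniform severity scores, and the highest-severity-first dispatch rule) is an assumed toy setup for illustration, not the authors' QUEST implementation.

```python
import heapq
import random

# Toy discrete-event simulation of a prioritized human-review queue.
# Assumed setup (not from the paper): Poisson arrivals, exponential review
# times, ML severity ~ Uniform(0, 1), highest-severity-first dispatch.

random.seed(42)
ARRIVAL_RATE = 5.0    # posts per unit time (assumed)
REVIEW_RATE = 2.0     # reviews per reviewer per unit time (assumed)
REVIEWERS = 3         # assumed reviewer pool size
HORIZON = 10_000.0    # simulated time over which posts keep arriving

events = [(random.expovariate(ARRIVAL_RATE), "arrival")]  # (time, kind)
waiting = []          # max-heap of (-severity, arrival_time)
idle = REVIEWERS      # reviewers currently free
delays = []           # arrival -> start-of-review delays

while events:
    t, kind = heapq.heappop(events)
    if kind == "arrival":
        if t > HORIZON:
            continue  # stop feeding posts; drain the reviews in flight
        heapq.heappush(waiting, (-random.random(), t))
        heapq.heappush(events, (t + random.expovariate(ARRIVAL_RATE), "arrival"))
    else:  # a review finished, freeing one reviewer
        idle += 1
    # Dispatch: idle reviewers take the highest-severity waiting posts.
    while idle > 0 and waiting:
        _, arrived = heapq.heappop(waiting)
        delays.append(t - arrived)
        idle -= 1
        heapq.heappush(events, (t + random.expovariate(REVIEW_RATE), "done"))

print(f"mean time-to-review: {sum(delays) / len(delays):.3f}")
```

Lowering REVIEWERS below the offered load (here 5.0 posts per unit time against 3 × 2.0 review capacity) makes the mean delay blow up, which is the kind of staffing question a review-queue simulation is built to answer.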

Cited by 1 publication (2 citation statements)
References 6 publications (6 reference statements)
“…Nguyen et al. [2020] developed a statistical framework that increases the accuracy of detecting harmful content by combining the decisions of multiple reviewers with those of ML algorithms. Makhijani et al. [2021] built simulation models of large-scale review systems to understand and optimize the human review process and to guide platforms' operational decisions (e.g., hiring additional reviewers). Garcelon et al. [2021] investigated a multi-armed bandit framework to calibrate the severity predictions of content based on various ML models.…”
Section: Related Literature
confidence: 99%
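The citation statement only gestures at how reviewer decisions and ML scores get combined; as a purely illustrative sketch of that general idea (not Nguyen et al.'s actual framework), independent reviewer votes can be fused with a classifier score in log-odds space under assumed per-reviewer accuracies:

```python
import math

# Illustrative sketch only: naive-Bayes-style fusion of independent reviewer
# votes with an ML score. The accuracies below are assumed, not from the paper.

def fused_probability(ml_score, votes, accuracies):
    """votes[i] is True if reviewer i labeled the post harmful;
    accuracies[i] is reviewer i's assumed probability of a correct label."""
    logit = math.log(ml_score / (1 - ml_score))   # start from the ML posterior
    for vote, acc in zip(votes, accuracies):
        llr = math.log(acc / (1 - acc))           # evidence weight of one vote
        logit += llr if vote else -llr
    return 1 / (1 + math.exp(-logit))

# Two of three reviewers say "harmful" and the classifier outputs 0.7:
print(f"{fused_probability(0.7, [True, True, False], [0.9, 0.8, 0.75]):.3f}")
```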
“…A second practical consideration is that of time. As discussed in prior work on other aspects of the content moderation problem, there is an important time component to this problem [Nguyen et al., 2020, Makhijani et al., 2021]. The reason is that the delay between the creation of a piece of content and its review can have a large impact on platform quality.…”
Section: Introduction
confidence: 99%
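To make the time component concrete: if views on a harmful post are assumed to arrive at a decaying rate v(t) = V0·exp(−t/τ) until the post is reviewed at delay d, the exposure accumulated before review has a closed form, ∫₀ᵈ v(t) dt = V0·τ·(1 − exp(−d/τ)). The rates below are hypothetical, chosen only to show how quickly delay translates into exposure.

```python
import math

# Hypothetical illustration of why review delay matters: views are assumed to
# arrive at rate v(t) = V0 * exp(-t / TAU), so the harmful views accumulated
# before a review at delay d are the integral of v from 0 to d.

V0, TAU = 100.0, 6.0   # assumed initial views/hour and decay scale (hours)

def exposure_before_review(delay_hours):
    return V0 * TAU * (1 - math.exp(-delay_hours / TAU))

for d in (0.5, 2, 12, 48):
    print(f"review after {d:>4} h -> {exposure_before_review(d):7.1f} harmful views")
```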