2019
DOI: 10.1145/3359283
WeBuildAI

Abstract: Algorithms increasingly govern societal functions, impacting multiple stakeholders and social groups. How can we design these algorithms to balance varying interests in a moral, legitimate way? As one answer to this question, we present WeBuildAI, a collective participatory framework that enables people to build algorithmic policy for their communities. The key idea of the framework is to enable stakeholders to construct a computational model that represents their views and to have those models vote on their b…

Cited by 91 publications (52 citation statements)
References 50 publications
“…While all these works provide useful insights into the FR operation, the existing literature largely misses the volunteer side of the process. Among the few pieces of work that explicitly consider the volunteer crowdsourcing aspect of food rescue, Lee et al developed a participatory democracy framework to allow volunteers and other stakeholders to decide on the matching of donations and recipients, which is orthogonal to our work [24]. Shi et al developed a machine learning model to predict whether a rescue trip will be claimed and an optimization model to find the best intervention scheme [38].…”
Section: Related Work
confidence: 99%
“…The layer generates such an interface and delivers it to the legitimate participants as authorized by Participant Discovery and Authorization Layer. In a larger view, the layer, in connection with Aggregation and Consensus Layer, should help the participants lead to a consensus in an efficient and acceptable manner [58,78]. It would also maintain the list of currently authorized participants for each device.…”
Section: -Layered Architecture
confidence: 99%
“…On the whole, content moderation by AI may affect social welfare, with no sufficient mechanism for ensuring that such systems reflect our social contract and comply with the rule of law. Several scholars have suggested ways to fix this problem from within, proposing a participatory framework which involves different stakeholders either in the design process (Lee et al, 2019) or in monitoring compliance with social values (Rahwan, 2018). Some proposals seek to shape the outcome of content moderation by holding platforms liable for harmful content (Perry and Zarsky, 2015).…”
Section: Contesting Algorithms
confidence: 99%