Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
DOI: 10.1145/3313831.3376428
Crowdsourcing the Perception of Machine Teaching

Abstract: Teachable interfaces can empower end-users to attune machine learning systems to their idiosyncratic characteristics and environment by explicitly providing pertinent training examples. While they facilitate control, their effectiveness can be hindered by users' lack of expertise or misconceptions. We investigate how users may conceptualize, experience, and reflect on their engagement in machine teaching by deploying a mobile teachable testbed in Amazon Mechanical Turk. Using a performance-based payment scheme, Mech…

Cited by 28 publications (20 citation statements) | References 53 publications
“…In a completely different attempt to mix a UX team with an AI team, Kayacik et al. [65] present a study on how teams from two distinct domains interacted to create an AI-Music application. Hong et al. [66] investigated how users conceptualize, experience, and reflect on their engagement in machine teaching.…”
Section: User Studies
Confidence: 99%
“…It can be seen as a "chicken or egg" problem for an initial dataset that allows for more contextualised data collection in the context of machine teaching applications. Other approaches use non-disabled crowd-workers, as in [26], to build an initial base for applications relevant to the disability community. This is in opposition to the initial views that motivated this work to take a disability-first perspective.…”
Section: Engaging Data Collectors With Disabilities
Confidence: 99%
“…We required access to a laptop or desktop computer, a stable internet connection for video conferencing, and a smartphone with 150MB of free storage for recording sounds during the field study. Informed by Hong et al. [33], we asked participants to rate their familiarity with ML on a four-point scale: three reported never having heard of it (not familiar), five had heard of it but did not know what it does (slightly familiar), and six reported being somewhat familiar with what it is and what it does. No participant reported having extensive knowledge of ML (extremely familiar), indicating our participants were non-experts.…”
Section: Participants
Confidence: 99%
“…Participants also considered the diversity of samples within each sound class, a consideration common among non-experts (e.g., [33,58,78]). Many decided to limit diversity by producing the sound the same way in each sample: "I want the sounds to be relatively consistent, just so the machine learning device isn't like, 'You have three different weird noises, but you say they're all the same'" (P4).…”
Section: Considering Decision Boundaries and Diversity
Confidence: 99%