Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing 2016
DOI: 10.1145/2818048.2819953

Almost an Expert

Abstract: Expert feedback is valuable but hard to obtain for many designers. Online crowds can provide fast and affordable feedback, but workers may lack relevant domain knowledge and experience. Can expert rubrics address this issue and help novices provide expert-level feedback? To evaluate this, we conducted an experiment with a 2×2 factorial design. Student designers received feedback on a visual design from both experts and novices, who produced feedback using either an expert rubric or no rubric. We found that rub…
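As a reading aid for the 2×2 factorial design mentioned in the abstract: the two binary factors (feedback-provider expertise and rubric use) are crossed so that both main effects and their interaction can be estimated. The sketch below is a minimal, hypothetical illustration of that kind of analysis, not the authors' actual data or procedure; the column names, toy ratings, and the use of a two-way ANOVA via statsmodels are all assumptions.

# Illustrative sketch of a 2x2 factorial analysis (expertise x rubric).
# The toy ratings and column names are hypothetical, not the paper's data.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Each row: one piece of feedback, scored for perceived quality (1-7).
data = pd.DataFrame({
    "expertise": ["expert", "expert", "novice", "novice"] * 6,
    "rubric":    ["rubric", "none",   "rubric", "none"]   * 6,
    "quality":   [6, 5, 5, 3, 7, 5, 6, 4, 6, 6, 5, 3,
                  5, 4, 6, 3, 6, 5, 5, 4, 7, 5, 6, 3],
})

# Two-way ANOVA with an interaction term: does a rubric narrow the gap
# between novice and expert feedback providers?
model = smf.ols("quality ~ C(expertise) * C(rubric)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))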

Cited by 76 publications (47 citation statements) · References 40 publications

Citation statements (ordered by relevance):
“…However, positively written and emotional critiques received higher average ratings than negative or neutral critiques [33]. To counter this, Robb et al. [28] found that designers seeking feedback tended to shut out negative textual feedback, but responded well to visual feedback that showed critics did not understand their designs.…”
Section: Related Work
confidence: 99%
“…Answerers did not always recognize the utility of raters in providing an outsider assessment. Criteria-based, aggregated crowd feedback approached expert feedback in both our study and prior work [12,20]. Designs should establish the legitimacy of raters.…”
Section: Designs Should Raise the Credibility of Feedback Givers
confidence: 98%
“…Luther et al. [12], in a study of the crowdworker-sourced critique system CrowdCrit, found that aggregated crowd critique approached expert critique, and designers who received crowd feedback perceived that it improved their design process. In a study of design feedback by Yuan et al. [20], crowdworkers providing feedback without criteria tended to focus on surface-level features, while with criteria, their reviews were as valuable as expert reviews. In our study, we hope to effectively deploy crowd feedback in a setting where peer feedback is rare, through the use of aggregated crowdworker feedback with criteria targeted toward promoting effective help-giving.…”
Section: Crowd Feedback in Learning Settings
confidence: 99%
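The two statements above both refer to criteria-based, aggregated crowd feedback. As a purely illustrative sketch of what such aggregation can look like, individual crowd ratings can be grouped by rubric criterion and averaged; the criterion names, rating scale, and averaging scheme below are assumptions, not the cited systems' actual implementation.

# Hypothetical sketch of criteria-based aggregation of crowd feedback.
# Criterion names, the 1-5 scale, and simple averaging are assumptions
# for illustration; they are not how CrowdCrit or the cited studies work.
from collections import defaultdict
from statistics import mean

# Each tuple: (worker_id, rubric criterion, rating on a 1-5 scale).
crowd_ratings = [
    ("w1", "visual hierarchy", 3), ("w2", "visual hierarchy", 4),
    ("w3", "visual hierarchy", 4), ("w1", "typography", 2),
    ("w2", "typography", 3),       ("w3", "typography", 2),
    ("w1", "color", 5),            ("w2", "color", 4),
]

def aggregate_by_criterion(ratings):
    """Group individual crowd ratings by criterion and report the mean."""
    buckets = defaultdict(list)
    for _worker, criterion, score in ratings:
        buckets[criterion].append(score)
    return {criterion: round(mean(scores), 2)
            for criterion, scores in buckets.items()}

print(aggregate_by_criterion(crowd_ratings))
# -> {'visual hierarchy': 3.67, 'typography': 2.33, 'color': 4.5}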
“…As such, this breaks down the geographical and cost barriers between designers and clients. Traditionally, designers received critiques and feedback through studio critique sessions, where they would present their work to peers and mentors who would in turn provide comments and suggestions (Yuan et al., 2016). Crowdsourcing has proved to be a readily available method for obtaining effective feedback, which has led some researchers to explore crowdsourcing as a potential solution to design problems (Luther et al., 2015).…”
Section: Crowdsourcing in Design
confidence: 99%