2012
DOI: 10.1109/tvcg.2012.199
Assessing the Effect of Visualizations on Bayesian Reasoning through Crowdsourcing

Abstract: People have difficulty understanding statistical information and are unaware of their wrong judgments, particularly in Bayesian reasoning. Psychology studies suggest that the way Bayesian problems are represented can impact comprehension, but few visual designs have been evaluated and only populations with a specific background have been involved. In this study, a textual and six visual representations for three classic problems were compared using a diverse subject pool through crowdsourcing. Visualizations i…

Cited by 150 publications (230 citation statements)
References 44 publications (105 reference statements)
“…Combining techniques that have previously been tested independently, our results show an increase in the accuracy of the mammography problem from the previously reported 6% [29] to 42%. Our findings demonstrate how the phrasing of a Bayesian problem can partially explain the poor or inconsistent results of prior work and provide a baseline text-only representation for future work.…”
Section: Introduction (supporting)
confidence: 46%
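The mammography problem referenced in the statement above is a classic Bayesian reasoning task. As a sketch of what participants are asked to compute, the snippet below applies Bayes' rule using the commonly cited textbook values (1% prevalence, 80% sensitivity, 9.6% false-positive rate); these numbers are illustrative assumptions, not figures taken from this paper:

```python
# Illustrative Bayes' rule computation for the classic mammography problem.
# The prevalence/sensitivity/false-positive values are common textbook
# assumptions, NOT data reported in this paper.
def posterior(prior, sensitivity, false_positive_rate):
    """P(disease | positive test) via Bayes' rule."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# Prevalence 1%, sensitivity 80%, false-positive rate 9.6%
p = posterior(0.01, 0.80, 0.096)
print(round(p, 3))  # prints 0.078
```

Despite the high sensitivity, the correct posterior is only about 8%, which is exactly the kind of counterintuitive result that makes accuracy on such problems so low without representational aids.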
“…Researchers have also explored visualizations such as decision trees [13,28], contingency tables [7], "beam cut" diagrams [17] and probability curves [7], and have shown improvements over text-only representations. However, when researchers in the visualization community extended this work to a more diverse sampling of the general population, they found that adding visualizations to existing text representations did not significantly increase accuracy [29,32].…”
Section: Introduction (mentioning)
confidence: 99%
“…The MTurk HITs were based on the templates provided by Micallef et al [24], at http://www.aviz.fr/bayes. Every question, in both the training and the main study, was displayed on a separate page of the HIT.…”
Section: Data Collection Methods (mentioning)
confidence: 99%
“…A larger number of participants, first of all, results in larger samples (e.g., 480 participants in [74], 550 in [32]). Having more samples makes the data analysis more robust to outliers, since outliers can be removed while still maintaining a large number of "good" samples.…”
Section: Participants (mentioning)
confidence: 99%
“…For example, finding paths in an abstract graph visualization is likely less engaging than finding the friends that connect two people in a social network. Micallef et al. [74] report on participants commenting on their interest and engagement in the study, and on things they learned while participating. Section 5 provides a few suggestions on how this could be achieved.…”
Section: Study Design Considerations in Crowdsourced Environments (mentioning)
confidence: 99%