2021
DOI: 10.31219/osf.io/cbw36
Preprint

reVISit: Looking Under the Hood of Interactive Visualization Studies

Abstract: Quantifying user performance with metrics such as time and accuracy does not show the whole picture when researchers evaluate complex, interactive visualization tools. In such systems, performance is often influenced by different analysis strategies that statistical analysis methods cannot account for. To remedy this lack of nuance, we propose a novel analysis methodology for evaluating complex interactive visualizations at scale. We implement our analysis methods in reVISit, which enables analysts to explore …

Cited by 2 publications (4 citation statements)
References 0 publications
“…The original study had 128 participants and the follow-up outlier study had 130 participants. After reviewing the provenance data using the reVISit system, 65 we realized that participants sometimes chose not to use predictions in the computer-supported condition. Since our goal was to measure the effects of using predictions, we removed trials that were not completed using predictions in the computer-supported mode.…”
Section: Results
Confidence: 99%
“…More recently, Cao et al [9] designed a system that allowed the user to label visualizations with personal preferences; then, the system would generate a sequence of visualizations by predicting user preferences based on those labels. User interactions with these visualization systems can be captured using reVISit, a platform that logs navigation behaviors for usability studies [20]. While these approaches are innovative in their own right, they shed little light on how expertise may influence effectiveness of visualization types of sequences.…”
Section: Related Work
Confidence: 99%
“…3) Visualization and Insight Study Design: Classic studies of cognitive fit theory and visualizations typically were created with benchmark tasks where the insight types and insights themselves were pre-determined by the experimenter [e.g., [20]]. Such classic studies often reported better performance in terms of accuracy or reaction time when the problem was presented in a visual format that fit the specific task.…”
Section: Related Work
Confidence: 99%