Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems (2017)
DOI: 10.1145/3027063.3053259
popHistory

Cited by 8 publications (3 citation statements) | References 15 publications
“…Little support is typically provided to show high-level information to entice users to reflect upon and make sense of their past exploration. This type of information, called provenance (North et al., 2011; Bors et al., 2019; Madanagopal et al., 2019), could provide opportunities to review and share insights, but importantly, it can potentially improve user exploration practices and strategies (Carrasco et al., 2017).…”
Section: Figure 6 About Here (mentioning)
Confidence: 99%
“…User interaction logs are often analyzed not only to evaluate how tools are operated by end-users, but also to help the end-users themselves reflect on and track their progress. For example, in the context of web browsing, Carrasco et al. (Carrasco et al., 2017) showed that when high-level semantic information is shown, users tend to reflect on their browsing habits and are able to infer areas of improvement. Guo et al. (Guo, 2015) found that visualization of interaction logs improves analysts' performance in finding insights.…”
Section: Characterizing Exploratory Data Analysis (mentioning)
Confidence: 99%
“…In contrast, QueryCrumbs do not account for any semantics of a query, but define query similarity solely by the similarity of retrieved results. Carrasco et al. showed that animated browsing history visualizations can help users to better reflect on their browsing habits [38]. TrackThink [39] aims to track the thought process during Web search, whereas QueryCrumbs accounts for this thought process only implicitly through the underlying human querying model.…”
Section: Search History Visualizations (mentioning)
Confidence: 99%
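To make the result-driven query similarity attributed to QueryCrumbs in the last statement concrete, the sketch below shows one plausible reading: two queries are compared purely by the overlap of the results they retrieve, ignoring query text. The Jaccard measure, the function name, and the document identifiers are illustrative assumptions, not the published QueryCrumbs implementation.

# Illustrative sketch only: query similarity defined solely by retrieved
# results, approximated here as Jaccard overlap of result identifiers.
def result_overlap_similarity(results_a, results_b):
    """Similarity of two queries based only on which results they returned."""
    set_a, set_b = set(results_a), set(results_b)
    if not set_a and not set_b:
        return 1.0  # two empty result lists are treated as identical
    return len(set_a & set_b) / len(set_a | set_b)

# Example: two queries sharing 2 of 4 distinct retrieved documents -> 0.5
print(result_overlap_similarity(["doc1", "doc2", "doc3"],
                                ["doc2", "doc3", "doc4"]))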