Fig. 1: Graphical summaries of bookmarks are used to record and browse the analytical process, here ordered (row-by-row) in the sequence in which they were bookmarked. Each can be used to access the live data, enabling analysts to revisit parts of the analytical process and helping to verify past interpretations. A legend describing the encodings is provided in Fig. 6.

Abstract: We describe and demonstrate an extensible framework that supports data exploration and provenance in the context of Human Terrain Analysis…
“…However, this will not work if the method introduces considerable distraction or does not offer any benefits. Allowing user annotation is one of the most common forms [27,38]: the user creates notes or annotations that record comments, findings, or hypotheses. Those notes can be associated with the visualization, allowing users to return to the states in which the notes were made [30,35] to re-examine the context or investigate further.…”
Section: Capture (mentioning)
confidence: 99%
“…For example, users are more likely to record the findings they made than the process or approach that led them there. To encourage users to write richer notes, a visual analytics system needs to provide additional benefits, such as the ability to create visual narratives [38] that reveal the reasoning process and help users review and plan exploratory analysis for complex sensemaking tasks after recording the current progress [26].…”
Section: Capture (mentioning)
confidence: 99%
“…Existing approaches can be broadly categorized into manual and automatic capture methods. The manual methods [27,38] largely rely on users to record their analysis process through note taking, whereas the automatic methods so far can identify a group of actions that are likely to be part of the same sub-task without knowing what the sub-task actually is [17]. There has been limited success in automated inference of sub-tasks and tasks from lower-level events and actions [42].…”
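The automatic capture described in the snippet above can be illustrated with a minimal sketch: given only timestamped low-level actions, temporally adjacent actions are clustered into candidate sub-tasks, with no knowledge of what each sub-task semantically is. The record shape and the gap threshold here are illustrative assumptions, not the actual method of [17].

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str    # low-level event, e.g. "search", "click", "read"
    time: float  # seconds since the session started

def group_into_subtasks(actions, max_gap=30.0):
    """Cluster temporally adjacent actions into candidate sub-tasks.

    An action whose gap from the previous action exceeds `max_gap`
    seconds starts a new group. Only the grouping is inferred; the
    meaning of each sub-task stays unknown. (Illustrative sketch,
    not the algorithm of [17].)
    """
    groups = []
    for action in sorted(actions, key=lambda a: a.time):
        if groups and action.time - groups[-1][-1].time <= max_gap:
            groups[-1].append(action)
        else:
            groups.append([action])
    return groups
```

A gap of 110 seconds between "click" and "read", for instance, would split a session into two candidate sub-tasks, mirroring how automatic methods segment activity without interpreting it.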
Fig. 1: Four linked views of SensePath. A: The timeline view shows all captured sensemaking actions in temporal order. B: The browser view displays the web page where an action was performed. C: The replay view shows the screen-capture video and can automatically jump to the starting time of an action when it is selected in another view. D: The transcription view displays detailed information about selected actions (the highlighted ones in the timeline view).

Abstract: Sensemaking is described as the process of comprehension, finding meaning, and gaining insight from information, producing new knowledge and informing further action. Understanding the sensemaking process allows the building of effective visual analytics tools to make sense of large and complex datasets. Currently, comprehending this process is often a manual and time-consuming undertaking: researchers collect observation data, transcribe screen-capture videos and think-aloud recordings, identify recurring patterns, and eventually abstract the sensemaking process into a general model. In this paper, we propose a general approach to facilitate such a qualitative analysis process, and introduce a prototype, SensePath, to demonstrate the application of this approach with a focus on browser-based online sensemaking. The approach is based on a study of a number of qualitative research sessions, including observations of users performing sensemaking tasks and post hoc analyses to uncover their sensemaking processes. Based on the study results and a follow-up participatory design session with HCI researchers, we decided to focus on the transcription and coding stages of thematic analysis. SensePath automatically captures users' sensemaking actions, i.e., analytic provenance, and provides multiple linked views to support their further analysis. A number of other requirements elicited from the design session are also implemented in SensePath, such as easy integration with existing qualitative analysis workflows and non-intrusiveness for participants. The tool was used by an experienced HCI researcher to analyze two sensemaking sessions. The researcher found the tool intuitive; it considerably reduced analysis time and allowed a better understanding of the sensemaking process.
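The linking between views in the abstract above rests on one piece of data: each captured action carries an offset into the session, so selecting an action in the timeline can seek the replay video to that moment. The sketch below shows this idea; the field names and function signatures are hypothetical assumptions for illustration, not SensePath's actual schema or API.

```python
import time

def capture_action(log, action_type, url, session_start, now=None):
    """Append a captured sensemaking action to the provenance log.

    Hypothetical record shape: the action type, the page it occurred
    on, and its offset (in seconds) from the session start. The
    offset is what lets a replay view seek to the right moment.
    """
    now = time.time() if now is None else now
    record = {
        "type": action_type,           # e.g. "search", "filter", "read"
        "url": url,                    # page where the action occurred
        "start": now - session_start,  # offset into the session
    }
    log.append(record)
    return record

def replay_offset(log, index):
    """Offset to seek the screen-capture video to when the action at
    `index` is selected in another view."""
    return log[index]["start"]
```

Passing `now` explicitly makes the capture deterministic for testing; in live use the default wall-clock time would be recorded instead.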
“…A narrative can include provenance information at different levels: an analysis result, user notes, visualizations, and raw data. DIVA [37] allows users to create a narrative based on user annotations and captured visualization states, and makes it possible to revisit the visualizations as they were when captured. SchemaLine [21] enables narrative construction by grouping user notes along the timeline.…”