Existing enhancement methods are empirically expected to help high-level computer vision tasks; however, this is not always the case in practice. We focus on object and face detection in poor-visibility environments caused by bad weather (haze, rain) and low-light conditions. To provide a more thorough examination and fair comparison, we introduce three benchmark sets collected in real-world hazy, rainy, and low-light conditions, respectively, with annotated objects/faces. We launched the UG2+ Challenge Track 2 competition at IEEE CVPR 2019, aiming to evoke a comprehensive discussion and exploration of whether and how low-level vision techniques can benefit high-level automatic visual recognition in various scenarios. To the best of our knowledge, this is the first and currently largest effort of its kind. Baseline results obtained by cascading existing enhancement and detection models are reported, indicating the highly challenging nature of our new data as well as the large room for further technical innovation. Thanks to large participation from the research community, we are able to analyze representative team solutions, striving to better identify the strengths and limitations of existing mindsets as well as future directions.

Index Terms: Poor-visibility environment, object detection, face detection, haze, rain, low-light conditions

*The first two authors, Wenhan Yang and Ye Yuan, contributed equally. Ye Yuan and Wenhan Yang helped prepare the dataset proposed for the UG2+ Challenges and were the members mainly responsible for the UG2+ Challenge 2019 (Track 2) platform setup and technical support. Wenqi Ren, Jiaying Liu, Walter J. Scheirer, and Zhangyang Wang were the main organizers of the challenge and helped prepare the dataset, raise sponsorships, set up the evaluation environment, and improve the technical submissions. The other authors are members of the winning teams in UG2+ Challenge Track 2 who contributed to the winning methods.
Multi-sensory feedback can potentially improve user experience and performance in virtual environments. Because it is difficult to study the effect of multi-sensory feedback as a single factor, we created a design space of these diverse cues, categorizing them at an appropriate granularity based on their origin and use cases. To examine the effects of tactile cues during non-fatiguing walking in immersive virtual environments, we selected three tactile cues from the design space (movement wind, directional wind, and footstep vibration) along with one additional cue (footstep sounds), and investigated their influence and interaction with each other in more detail. We developed a virtual reality system with non-fatiguing walking interaction and low-latency, multi-sensory feedback, and then used it to conduct two successive experiments measuring user experience and performance through a triangle-completion task. We observed some effects of adding footstep vibration on task performance, and significant improvements in reported user experience due to the added tactile cues.
The diverse and vibrant ecosystem of interactive visualizations on the web presents an opportunity for researchers and practitioners to observe and analyze how everyday people interact with data visualizations. However, existing metrics of visualization interaction behavior used in research do not fully reveal the breadth of people's open-ended explorations with visualizations. One way to address this challenge is to determine high-level goals for visualization interaction metrics, and infer corresponding features from user interaction data that characterize different aspects of people's explorations of visualizations. In this paper, we identify needs for visualization behavior measurement, and develop corresponding candidate features that can be inferred from users' interaction data. We then propose metrics that capture novel aspects of people's open-ended explorations, including exploration uniqueness and exploration pacing. We evaluate these metrics, along with four other metrics recently proposed in the visualization literature, by applying them to interaction data from prior visualization studies. The results of these evaluations suggest that the new metrics 1) reveal new characteristics of people's use of visualizations, 2) can be used to evaluate statistical differences between visualization designs, and 3) are statistically independent of prior metrics used in visualization research. We discuss implications of these results for future studies, including the potential for applying these metrics in visualization interaction analysis, as well as emerging challenges in developing and selecting metrics that characterize visualization explorations.
Physical and digital objects often bear markers of our use. Website links turn purple after we visit them, for example, showing us which information we have yet to explore. These "footprints" of interaction offer substantial benefits in information-saturated environments: they enable us to easily revisit old information, systematically explore new information, and quickly resume tasks after interruption. While applying these design principles has been successful in HCI contexts, direct encodings of personal interaction history have received scarce attention in data visualization. One reason is that there is little guidance for integrating history into visualizations where many visual channels are already occupied by data. More importantly, there is no firm evidence that making users aware of their interaction history yields benefits with regard to exploration or insights. Following these observations, we propose HindSight, an umbrella term for the design space of representing interaction history directly in existing data visualizations. In this paper, we examine the value of HindSight principles by augmenting existing visualizations with visual indicators of user interaction history (e.g., How the Recession Shaped the Economy in 255 Charts, NYTimes). In controlled experiments with over 400 participants, we found that HindSight designs generally encouraged people to visit more data and recall different insights after interaction. The results of our experiments suggest that simple additions to visualizations can make users aware of their interaction history, and that these additions significantly impact users' exploration and insights.