We conducted two studies investigating display characteristics related to color (hue, saturation, brightness, and transparency) and contrast with the background for displaying information qualifiers (termed meta-information) such as uncertainty, age, and source quality. The level of detail (or granularity) of the meta-information and the task demands were also manipulated. Participants ranked and rated colored regions overlaid on different map backgrounds according to the level of meta-information the regions displayed. Results from Study 1 indicated that participants could appropriately rank and rate levels of meta-information across the saturation, brightness, and transparency conditions; results from Studies 1 and 2 together showed that the natural direction of ordering is complex, depending on the relevance of different information to the task and on the contrast of the overlay region with the background.
Experiments were designed to investigate the effects of set size and of variation in distractor chromaticity on thresholds for detecting a target stimulus that differed from distractors only in chromaticity. Distractor chromaticities were selected from a line in the isoluminant color plane, and targets were selected from lines approximately orthogonal to the distractor line. With uniform distractors, thresholds increased with set size as predicted by a signal detection model. When targets and distractors were selected from lines parallel to the cardinal directions in color space, thresholds were lower with variable distractors than with uniform distractors, and variations in the location of the target along the distractor line had no effect on threshold. Results with diagonally oriented distractor lines were similar. These results suggest that many pairs of orthogonal directions in the isoluminant color plane represent independent color-coding mechanisms that mediate search. They also show that information in independent color-coding mechanisms tuned to orthogonal directions in the isoluminant plane can be combined to facilitate detection of the target.
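The set-size prediction mentioned in this abstract can be illustrated with a toy max-rule signal detection simulation (a sketch only; the function names, parameters, and the coarse threshold search below are illustrative assumptions, not details from the paper). Each item evokes a noisy response, the observer picks the maximum, and the signal strength needed for a criterion accuracy rises as more distractors compete:

```python
import random

def percent_correct(signal, set_size, trials=20000, sigma=1.0):
    """Max-rule observer: the trial is correct when the target's noisy
    response exceeds the responses of all set_size - 1 distractors."""
    rng = random.Random(0)  # fixed seed for a reproducible estimate
    correct = 0
    for _ in range(trials):
        target = signal + rng.gauss(0.0, sigma)
        distractors = [rng.gauss(0.0, sigma) for _ in range(set_size - 1)]
        if not distractors or target > max(distractors):
            correct += 1
    return correct / trials

def threshold(set_size, criterion=0.75):
    """Smallest signal, on a coarse 0.1-step grid, that reaches the
    criterion proportion correct for a given set size."""
    s = 0.0
    while percent_correct(s, set_size) < criterion:
        s += 0.1
    return s

# Under the max rule, thresholds grow with the number of distractors.
for n in (2, 4, 8):
    print(n, round(threshold(n), 1))
```

With more distractors, the maximum of the distractor noise distribution shifts upward, so a larger signal is needed to win the comparison; this is the qualitative pattern the uniform-distractor condition is said to follow.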
This paper presents experimental results comparing users' performance on different kinds of 3D interaction tasks (travel, manipulation) when using either a standard desktop display or a large immersive display. The main results of our experiments are the following: first, not all users benefit equally from large displays, and second, the performance gains depend strongly on the nature of the interaction task. To explain these results, we borrow tools from cognitive science to identify one cognitive factor, visual attention, that is involved in the observed performance differences.
In order to populate virtual cities, it is necessary to specify the behaviour of dynamic entities such as pedestrians and car drivers. Since a complete mental model based on vision and image processing cannot be constructed in real time from purely geometrical information, a model of the virtual environment must provide higher levels of information. For example, the autonomous actors of a virtual world can exploit knowledge of the environment's topology to navigate through it. In this article, we present a model of virtual urban environments based on structures and information suitable for behavioural animation. With this knowledge, autonomous virtual actors can behave like pedestrians or car drivers in a complex city environment. A city modeler built on this model of urban environments enables complex urban environments for behavioural animation to be produced automatically.
Figure 1: Production pipeline of stroke-based buildings. The edges of facade images are extracted and vectorized. This vector data is used instead of textures to represent the facade in the final 3D model.
In this paper, we present a new approach for remote visualization of large 3D cities. Our approach is based on expressive rendering (also known as Non-Photorealistic Rendering) and, more precisely, on feature lines. By focusing on characteristic features, this solution yields a more legible visualization and reduces the amount of data transmitted over the network. We also introduce a client-server system for remote rendering, along with the pre-processing stage required for optimization. Using this system, we study the usability of the approach in the context of mobile devices.
A better understanding of how users perform virtual reality tasks may help in building better virtual reality interfaces. In this study, we focus on the impact of large displays in virtual reality as a function of the task and of users' characteristics. The two virtual reality tasks studied are object manipulation and navigation in an environment; the user characteristic studied is visual attention ability. Forty subjects participated in the experiment, which comprised cognitive tests used to evaluate visual attentional abilities and a set of virtual reality tasks. Our study yields two main conclusions: (i) large displays positively impact performance for some kinds of virtual reality tasks, and (ii) users with a low level of attentional abilities gain more from large displays. We conclude that large displays can serve as cognitive aids, depending on the task and on users' characteristics.