Abstract. We approach the design of ubiquitous computing systems in the urban environment as integral to urban design. To understand the city as a system encompassing physical and digital forms and their relationships with people's behaviours, we are developing, applying and refining methods of observing, recording, modelling and analysing the city, physically, digitally and socially. We draw on established methods used in the space syntax approach to urban design. Here we describe how we have combined scanning for discoverable Bluetooth devices with two such methods, gatecounts and static snapshots. We report our experiences in developing, field testing and refining these augmented methods. We present initial findings on the Bluetooth landscape in a city in terms of patterns of Bluetooth presence and Bluetooth naming practices.
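Augmenting a gatecount with Bluetooth scanning reduces, at its core, to counting unique discoverable devices per observation interval. As a minimal illustrative sketch (not the authors' implementation; the log format and field names here are hypothetical), presence patterns could be tallied from scan records like this:

```python
from collections import defaultdict

def presence_by_interval(scan_log):
    """Count unique discoverable Bluetooth devices per observation interval.

    scan_log: iterable of (interval_label, device_address, device_name)
    tuples, one entry per sighting. Returns {interval_label: unique count}.
    """
    seen = defaultdict(set)
    for interval, address, _name in scan_log:
        seen[interval].add(address)  # re-sightings collapse to one device
    return {interval: len(addresses) for interval, addresses in seen.items()}

# Hypothetical log from two five-minute gatecount intervals
log = [
    ("10:00-10:05", "00:1A:7D:DA:71:13", "Nokia 6300"),
    ("10:00-10:05", "00:1A:7D:DA:71:13", "Nokia 6300"),  # seen twice, counted once
    ("10:00-10:05", "00:16:20:4B:9C:AF", "dave's phone"),
    ("10:05-10:10", "00:16:20:4B:9C:AF", "dave's phone"),
]
print(presence_by_interval(log))
# {'10:00-10:05': 2, '10:05-10:10': 1}
```

Keying on the device address rather than the user-chosen name matters here, since the naming-practice findings mentioned above imply names are neither unique nor stable.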
Digital augmentation dissolves many of the physical barriers to learning by offering tools to integrate data and discoveries that travel with students as they explore new terrain.
Mobile sensing and mapping applications are becoming more prevalent as sensing hardware becomes more portable and affordable. However, most such systems rely on small numbers of fixed sensors that report and share multiple sets of environmental data, which raises privacy concerns. Instead, these systems can be decentralized and managed by individuals in their public and private spaces. This paper describes a robust system called MobGeoSen which enables individuals to monitor their local environment (e.g. pollution and temperature) and their private spaces (e.g. activities and health) by using mobile phones in their day-to-day life. MobGeoSen is a combination of software components that uses the phone's internal sensing devices (e.g. microphone and camera) and external wireless sensors (e.g. data loggers and GPS receivers) for data collection. It also adds a new dimension of spatial localization to the data collection process and provides the user with both textual and spatial cartographic displays. While collecting data, individuals can interactively add annotations and photos, which are automatically integrated into the visualization file/log. This makes it easy to view the data, photos and annotations on a spatial and temporal visualization tool. In addition, the paper presents ways in which mobile phones can be used as noise sensors via the on-device microphone. Finally, we present our experiences with school children using the system to measure their exposure to environmental pollution.
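Using a phone microphone as a noise sensor, as the abstract above describes, ultimately means converting raw audio samples into a sound level. The sketch below shows only the core conversion (RMS amplitude to decibels relative to full scale); it is an assumption-laden illustration, and a real system like the one described would additionally need calibration against a reference sound-level meter to report absolute dB SPL:

```python
import math

def rms_dbfs(samples):
    """Return the RMS level of PCM samples in dB relative to full scale.

    samples: sequence of floats in [-1.0, 1.0]. Digital silence -> -inf.
    """
    if not samples:
        raise ValueError("empty sample buffer")
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

# A full-scale square wave sits at 0 dBFS; halving the amplitude
# lowers the level by about 6 dB.
print(round(rms_dbfs([1.0, -1.0] * 512), 1))   # 0.0
print(round(rms_dbfs([0.5, -0.5] * 512), 1))   # -6.0
```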
Groups of older and younger participants explored a virtual shopping mall composed of more than 60 retail outlets on 2 levels. They were then compared with guessing controls for their understanding of the spatial layout of the real equivalent building. Experimental groups showed greater accuracy in making pointing judgments toward targets not visible from the pointing site, took shorter times to perform route tasks on foot, made better left-right directional judgments, and sketched better maps of the mall. Of the older participants, 2 out of 8 performed at chance throughout. Younger experimental participants remembered better than did older ones on which level targets were located. The study shows that many older people remain spatially competent and that age is not a barrier to the effective use of virtual environment technology, which may be used in the future to increase inclusion of older populations by encouraging their confident use of public buildings.
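Pointing judgments like those above are typically scored as the angular error between the judged bearing and the true bearing to the unseen target. A minimal sketch of the wrap-around-safe version of that scoring (the study's own scoring procedure may differ; this is only an illustration of the standard calculation):

```python
def signed_angular_error(judged_deg, true_deg):
    """Signed difference between a judged bearing and the true bearing,
    in degrees, wrapped into (-180, 180] so no error exceeds a half turn."""
    err = (judged_deg - true_deg) % 360
    return err - 360 if err > 180 else err

# Pointing at 350 degrees toward a target whose true bearing is 10 degrees
# is a 20-degree error to the left, not a 340-degree error.
print(signed_angular_error(350, 10))  # -20
print(signed_angular_error(10, 350))  # 20
```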
Neurological disorders are a leading cause of death and disability worldwide. Can virtual reality (VR)-based interventions, a novel technology-driven paradigm shift in rehabilitation, reduce impairments, activity limitations, and participation restrictions? This question is directly addressed here for the first time using an umbrella review that assessed the effectiveness and quality of evidence of VR interventions in the physical and cognitive rehabilitation of patients with stroke, traumatic brain injury and cerebral palsy, identified factors that can enhance rehabilitation outcomes, and addressed safety concerns. Forty-one meta-analyses were included. The data synthesis found mostly low- or very low-quality evidence supporting the effectiveness of VR interventions. Only a limited number of comparisons were rated as having moderate or high quality of evidence, but overall the results highlight potential benefits of VR for improving the ambulation function of children with cerebral palsy; the mobility, balance, upper limb function, and body structure/function and activity of people with stroke; and the upper limb function of people with acquired brain injury. Customization of VR systems is one important factor linked with improved outcomes. Most studies do not address safety concerns, as only nine reviews reported adverse effects. The results provide critical recommendations for the design and implementation of future VR programs, trials and systematic reviews, including the need for high-quality randomized controlled trials to test principles and mechanisms, in primary studies and in meta-analyses, in order to formulate evidence-based guidelines for designing VR-based rehabilitation interventions.
It is a well-established finding that people find maps easier to use when they are aligned so that "up" on the map corresponds to the user's forward direction. With map-based applications on handheld mobile devices, this forward/up correspondence can be maintained in several ways: the device can be physically rotated within the user's hands, the user can manually operate buttons to digitally rotate the map, or the map can be rotated automatically using data from an electronic compass. This paper examines all three options. In a field experiment, each method is compared against a baseline north-up condition. The study provides strong evidence that physical rotation is the most effective with applications that present the user with a wider map. The paper concludes with some suggestions for design improvements.
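Automatic rotation from compass data, the third option above, amounts to rotating map coordinates about the user's position so that the heading direction points up on screen. A minimal sketch of that transform (map frame with x = east, y = north; any real implementation would also have to smooth noisy compass readings, which is omitted here):

```python
import math

def rotate_for_heading(x, y, cx, cy, heading_deg):
    """Rotate map point (x, y) about the user position (cx, cy) so that
    the compass heading (degrees clockwise from north) points up.

    Rotating counter-clockwise by the heading angle carries the heading
    direction onto the +y (up) axis.
    """
    theta = math.radians(heading_deg)
    dx, dy = x - cx, y - cy
    return (cx + dx * math.cos(theta) - dy * math.sin(theta),
            cy + dx * math.sin(theta) + dy * math.cos(theta))

# User at the origin facing east (heading 90 degrees): a landmark one
# unit to the east should end up directly "up" on the rotated map.
x, y = rotate_for_heading(1.0, 0.0, 0.0, 0.0, 90.0)
print(round(x, 6), round(y, 6))  # 0.0 1.0
```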
Virtual Reality nonfiction (VRNF) is an emerging form of immersive media experience created for consumption using panoramic "Virtual Reality" headsets. VRNF promises nonfiction content producers the potential to create new ways for audiences to experience "the real", allowing viewers to transition from passive spectators to active participants. Our current project is exploring VRNF through a series of ethnographic and experimental studies. In order to document the content available, we embarked on an analysis of VR documentaries produced to date. In this paper, we present an analysis of a representative sample of 150 VRNF titles released between 2012 and 2018. We identify and quantify 64 characteristics of the medium over this period, discuss how producers are exploiting the affordances of VR, and shed light on new audience roles. Our findings provide insight into the current state of the art in VRNF and provide a digital resource for other researchers in this area.