While traffic signals, signs, and road markings provide explicit guidelines for those operating in and around roadways, some decisions, such as who will go first, are made through implicit negotiation between road users. In such situations, pedestrians today often depend on cues in drivers' behavior such as eye contact, posture, and gestures. With the introduction of more automated functions and the transfer of control from the driver to the vehicle, pedestrians can no longer rely on such non-verbal cues. To study what the interaction between pedestrians and automated vehicles (AVs) might look like in the future, and how it might be affected if AVs were to communicate their intent to pedestrians, we designed an external vehicle interface, the automated vehicle interaction principle (AVIP), that communicates a vehicle's mode and intent to pedestrians. The interaction was explored in two experiments using a Wizard of Oz approach to simulate automated driving. The first experiment was carried out at a zebra crossing and involved nine pedestrians. While it focused mainly on assessing the usability of the interface, it also revealed initial indications of pedestrians' emotions and perceived safety when encountering an AV with and without the interface. The second experiment was carried out in a parking lot and involved 24 pedestrians, enabling a more detailed assessment of pedestrians' perceived safety when encountering an AV, both with and without the interface. For comparison, these pedestrians also encountered a conventional vehicle. After a short training course, the pedestrians found the interface easy to interpret. They stated that they felt significantly less safe when encountering the AV without the interface than when encountering the conventional vehicle or the AV with the interface. This suggests that the interface could contribute to a positive experience and improved perceived safety in pedestrian encounters with AVs, something that might be important for the general acceptance of AVs. This topic should therefore be investigated further in future studies involving larger samples and more dynamic conditions.
Mycobacteria owe their success as pathogens to their ability to persist for long periods within host cells in asymptomatic, latent forms before they opportunistically switch to the virulent state. The molecular mechanisms underlying the transition into dormancy and the emergence from it are not clear. Here we show that old cultures of Mycobacterium marinum contained spores that, upon exposure to fresh medium, germinated into vegetative cells and reappeared in stationary phase via endospore formation. These spores exhibited many of the characteristics typical of well-known endospores. Homologues of well-known sporulation genes of Bacillus subtilis and Streptomyces coelicolor were detected in mycobacterial genomes, some of which were verified to be transcribed during the appropriate life-cycle stages. We also provide data indicating that old Mycobacterium bovis bacillus Calmette-Guérin cultures likely form spores. Together, our data show sporulation to be a lifestyle adopted by mycobacteria under stress and lead us to suggest it as a possible mechanism for dormancy and/or persistent infection. If so, this might lead to new prophylactic strategies.
Keywords: Mycobacterium marinum | cell division | DNA replication | cell cycle | endosporulation
When people hear a sound (a "sound object" or a "sound event"), the perceived auditory space around them might modulate their emotional responses to it. Spaces can affect the acoustic properties of the sound event itself and may also impose boundaries on the actions one can take with respect to that event. Virtual acoustic rooms of different sizes were used in a subjective and psychophysiological experiment that evaluated the influence of auditory space perception on emotional responses to various sound sources. Participants (N = 20) were exposed to acoustic spaces in which sound source positions and room acoustic properties varied across the experimental conditions. The results suggest that, overall, small rooms were considered more pleasant, calmer, and safer than big rooms, although this effect of size seems to disappear when listening to threatening sound sources. Sounds heard behind the listeners tended to be more arousing and elicited larger physiological changes than sources in front of the listeners. These effects were more pronounced for natural sound sources than for artificial ones, as confirmed by both subjective and physiological measures.
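The study's factorial manipulation (room size × sound source position) can be made concrete with a minimal sketch of how such within-subject ratings might be summarized. The data, column names, and scales below are hypothetical illustrations, not the study's actual data or analysis code.

```python
import pandas as pd

# Hypothetical within-subject ratings: one row per participant x condition.
# Values and column names are illustrative only.
ratings = pd.DataFrame({
    "participant":  [1, 1, 1, 1, 2, 2, 2, 2],
    "room_size":    ["small", "small", "big", "big"] * 2,
    "source_pos":   ["front", "behind"] * 4,
    "pleasantness": [6.1, 5.0, 4.2, 3.8, 5.8, 5.2, 4.5, 3.9],  # 1-7 scale
    "arousal":      [2.3, 4.1, 3.0, 4.8, 2.6, 3.9, 3.2, 4.5],  # 1-7 scale
})

# Mean rating per cell of the 2 (room size) x 2 (source position) design.
cell_means = ratings.groupby(["room_size", "source_pos"])[
    ["pleasantness", "arousal"]
].mean()
print(cell_means)
```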
Emotions are experienced both in real and virtual environments (VEs). Most research to date has focused on the content that causes emotional reactions, but noncontent features of a VE (such as the realism and quality of object rendering) may also influence emotional reactions to the mediated object. The present research studied how noncontent features (different reverberation times) of an auditory VE influenced 76 participants' ratings of their emotional reactions and of the expressed emotional qualities of the sounds. The results showed that the two emotion dimensions of pleasantness and arousal were systematically affected when the same musical piece was rendered with different reverberation times. Overall, a high reverberation time was perceived as the most unpleasant. Taken together, the results suggest that noncontent features of a VE influence emotional reactions to mediated objects. Moreover, the study suggests that emotional reactions may be an important aspect of the VE experience that can help complement standard presence questionnaires and quality evaluations.
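The abstract does not specify how the renderings were produced, but a common way to impose different reverberation times on the same recording is to convolve the dry signal with room impulse responses. Below is a minimal sketch assuming a synthetic, exponentially decaying noise impulse response parameterized by RT60; this is a standard simplification for illustration, not the study's actual room model.

```python
import numpy as np
from scipy.signal import fftconvolve

def synthetic_rir(rt60, fs=44100):
    """Exponentially decaying white noise as a crude room impulse response.

    rt60 is the time (s) for the reverberant energy to decay by 60 dB,
    i.e. the amplitude envelope falls to 10**-3 of its initial value.
    """
    t = np.arange(int(rt60 * fs)) / fs
    envelope = 10.0 ** (-3.0 * t / rt60)      # reaches -60 dB at t = rt60
    rng = np.random.default_rng(0)
    return rng.standard_normal(t.size) * envelope

fs = 44100
dry = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)  # 1 s placeholder tone

# Render the same "piece" with a short vs. a long reverberation time.
for rt60 in (0.3, 1.2):                              # illustrative values
    wet = fftconvolve(dry, synthetic_rir(rt60, fs))
    wet /= np.max(np.abs(wet))                       # normalize level
    print(f"RT60={rt60}s -> {wet.size} samples")
```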
Sound is an important, but often neglected, component for creating a self-motion illusion (vection) in Virtual Reality applications such as motion simulators. Apart from auditory motion cues, sound can provide contextual information representing self-motion in a virtual environment. In two experiments we investigated the benefits of hearing an engine sound when presenting auditory (Experiment 1) or auditory-vibrotactile (Experiment 2) virtual environments that induce linear vection. Adding the engine sound to the auditory scene significantly enhanced subjective ratings of vection intensity in Experiment 1; in Experiment 2 it improved vection onset times but not subjective ratings. Further analysis using individual imagery vividness scores showed that this disparity between vection measures was driven by participants with higher kinesthetic imagery scores; for participants with lower kinesthetic imagery scores, the engine sound enhanced the vection sensation in both experiments. The high correlation with participants' kinesthetic imagery vividness scores suggests that a first-person perspective influences the perception of the engine sound. We hypothesize that self-motion sounds (e.g., footsteps, engine sound) represent a specific type of acoustic body-centered feedback in virtual environments. The results may therefore contribute to a better understanding of the role of self-representation sounds (sonic self-avatars) in virtual and augmented environments.
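The abstract reports condition differences in vection onset times; as an illustration only, a within-subject comparison of onset times with and without the engine sound could look like the sketch below. The data are hypothetical, and the paired t-test is an assumed analysis choice, not necessarily the test the authors used.

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical vection onset times (s) per participant, with and without
# the engine sound; a shorter onset means vection was established sooner.
onset_no_engine = np.array([12.4, 15.1, 9.8, 14.0, 11.2, 13.5])
onset_engine    = np.array([10.1, 13.2, 9.5, 11.8, 10.0, 12.1])

# Paired t-test: the same participants experienced both conditions.
t_stat, p_value = ttest_rel(onset_no_engine, onset_engine)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```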