Experimental studies show that automobile drivers adjust their speed in curves so that maximum vehicle lateral accelerations decrease at high speeds. This pattern of lateral accelerations is described by a new driver model that assumes drivers control a variable safety margin of perceived lateral acceleration according to their anticipated steering deviations. Compared with a minimum time-to-lane-crossing (H. Godthelp, 1986) speed-modulation strategy, this model, based on nonvisual cues, predicts that extreme values of lateral acceleration in curves decrease quadratically with speed, in accordance with experimental data obtained in a vehicle driven on a test track and in a motion-based driving simulator. Variations of the model parameters can characterize "normal" or "fast" driving styles on the test track. On the simulator, the upper limits of lateral acceleration decreased less steeply when the motion cueing system was deactivated, although drivers maintained a consistent driving style. Within the model, this is interpreted as an underestimation of curvilinear speed caused by the lack of inertial stimuli. Actual or potential applications of this research include a method to assess driving simulators as well as to identify driving styles for on-board driver aid systems.
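A minimal sketch of how such a safety-margin rule can yield a quadratic decrease with speed (the parameter values and function name below are illustrative assumptions, not fitted values from the study): since lateral acceleration in a curve is v² times curvature, a margin reserved to absorb an anticipated curvature deviation grows as v², so the usable extreme acceleration shrinks quadratically.

```python
def max_lateral_acceleration(v, a_lim=6.0, dkappa=0.004):
    """Extreme lateral acceleration (m/s^2) accepted at speed v (m/s),
    assuming a perceived limit a_lim minus a safety margin v**2 * dkappa
    reserved to absorb anticipated steering (curvature) deviations
    of magnitude dkappa (1/m). Parameters are illustrative only."""
    return a_lim - dkappa * v ** 2

# The accepted extreme decreases quadratically with speed:
for v in (10.0, 20.0, 30.0):  # m/s
    print(v, max_lateral_acceleration(v))
```

The quadratic term means the decrease steepens with speed, matching the pattern the abstract describes, while a smaller effective `dkappa` (e.g., speed underestimated without inertial cues) flattens the curve.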
Accurate tracking and analysis of animal behavior is crucial for modern systems neuroscience. Animals can be easily monitored in confined, well-lit spaces or virtual-reality setups, but tracking freely moving behavior through naturalistic, three-dimensional (3D) environments remains a major challenge. Closed-loop control that provides behavior-triggered stimuli, and thus structures a behavioral task, is also more complicated in free-range settings. Here, we present EthoLoop: a framework for studying the neuroethology of freely roaming animals, including examples with rodents and primates. Combining real-time optical tracking and on-the-fly behavioral analysis with remote-controlled stimulus-reward boxes allows us to interact directly with free-ranging animals in their habitat. We show that this closed-loop optical tracking system, assembled from off-the-shelf wireless hardware, can follow the 3D spatial position of multiple subjects in real time, continuously provide close-up views, condition behavioral patterns detected online with deep-learning methods, and be synchronized with wirelessly acquired neuronal recordings or with optogenetic feedback. Reward or stimulus feedback is provided by battery-powered, remote-controlled boxes that communicate with the tracking system and can be distributed at multiple locations in the environment. The EthoLoop framework enables a new generation of interactive, yet well-controlled and reproducible, neuroethological studies in large-field naturalistic settings.
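The closed-loop logic described above can be sketched schematically as follows; every name here (the step function, the `"target"` label, the box dictionaries) is a hypothetical illustration, not the EthoLoop API.

```python
def closed_loop_step(position, detect_behavior, reward_boxes):
    """One iteration of a schematic tracking/feedback loop:
    classify the tracked subject's behavior at its current 3D position
    and, if a target pattern is detected, trigger the nearest reward box.
    All names are illustrative, not the EthoLoop API."""
    behavior = detect_behavior(position)  # e.g., an online deep-learning classifier
    if behavior == "target":
        nearest = min(
            reward_boxes,
            key=lambda b: sum((p - q) ** 2 for p, q in zip(position, b["xyz"])),
        )
        nearest["triggered"] = True  # a wireless command in the real system
    return behavior

boxes = [
    {"xyz": (0.0, 0.0, 0.0), "triggered": False},
    {"xyz": (5.0, 0.0, 0.0), "triggered": False},
]
closed_loop_step((4.0, 0.0, 1.0), lambda p: "target", boxes)
print(boxes[1]["triggered"])  # the nearer box was triggered
```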
Advanced driving simulators aim to render the motion of a vehicle with maximum fidelity, which requires increased mechanical travel, size, and cost of the system. Motion cueing algorithms reduce the required motion envelope by taking advantage of the limitations of human motion perception; the most commonly employed method is simply to scale down the physical motion. However, little is known about the effects of motion scaling on motion perception and on actual driving performance. This paper presents the results of a European collaborative project that explored different motion scale factors in a slalom driving task. Three state-of-the-art simulator systems capable of generating displacements of several meters were used. The results of four comparable driving experiments, obtained with a total of 65 participants, indicate a preference for motion scale factors below 1, within a wide range of acceptable values (0.4-0.75). Very reduced or absent motion cues, by contrast, significantly degrade driving performance. Applications of this research to the design of motion systems and cueing algorithms for driving simulation are discussed.
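The scale-factor cueing method mentioned above can be sketched in a few lines (the actuator limit `a_max` below is an illustrative assumption; the 0.6 default sits inside the preferred 0.4-0.75 range reported in the abstract):

```python
def scaled_motion_cue(a_vehicle, scale=0.6, a_max=5.0):
    """Scale-factor motion cueing: command the platform a scaled-down
    copy of the vehicle acceleration (m/s^2), clipped to the actuator
    envelope. scale=0.6 lies in the 0.4-0.75 range found acceptable;
    a_max is an illustrative actuator limit, not a value from the paper."""
    a = scale * a_vehicle
    return max(-a_max, min(a_max, a))

print(scaled_motion_cue(4.0))   # scaled: 0.6 * 4.0 = 2.4
print(scaled_motion_cue(10.0))  # 0.6 * 10.0 = 6.0, clipped to 5.0
```

Scaling trades perceptual fidelity for workspace: the platform excursion needed for a maneuver shrinks roughly in proportion to `scale`, which is why values below 1 are attractive despite attenuating the inertial cues.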
This article describes a computational model of the sensory perception of self-motion, considered as a compromise between sensory information and physical-coherence constraints. This compromise is realized by a dynamic optimization process minimizing a set of cost functions. Measurement constraints are expressed as quadratic errors between motion estimates and the corresponding sensory signals, using internal models of the sensor transfer functions. Coherence constraints are expressed as quadratic errors between motion estimates and their predictions, based on internal models of the physical laws governing the corresponding stimuli. This general scheme leads to a straightforward representation of fundamental sensory interactions (fusion of visual and canal rotational inputs, identification of the gravity component in the otolithic input, the otolithic contribution to the perception of rotations, and the influence of vection on the subjective vertical). The model is tuned and assessed using a range of well-known psychophysical results, including off-vertical-axis rotations and centrifuge experiments. The model's ability to predict and help analyze new situations is illustrated by a study of the vestibular contributions to self-motion perception during automobile driving and during acceleration cueing in driving simulators. The extensible structure of the model allows further developments and applications using other cost functions that represent additional sensory interactions.
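A scalar toy version of the quadratic-cost compromise can be written in closed form: minimizing a sum of squared measurement errors plus a squared coherence error yields a weighted mean (the function name and weights below are illustrative assumptions, not the article's formulation, which is dynamic and multi-dimensional).

```python
def fuse_estimate(sensor_values, sensor_weights, prediction, coherence_weight):
    """Minimize the quadratic cost
        sum_i w_i * (x - s_i)**2  +  w_c * (x - x_pred)**2
    where the first terms penalize disagreement with sensory signals s_i
    (measurement constraints) and the last penalizes disagreement with an
    internal-model prediction x_pred (coherence constraint). Setting the
    derivative to zero gives the weighted mean. Scalar toy sketch only."""
    num = sum(w * s for w, s in zip(sensor_weights, sensor_values))
    num += coherence_weight * prediction
    den = sum(sensor_weights) + coherence_weight
    return num / den

# e.g., visual and canal rotation signals fused with a coherence prediction:
est = fuse_estimate([10.0, 8.0], [1.0, 1.0], 9.0, 2.0)
print(est)  # (10 + 8 + 2*9) / 4 = 9.0
```

Raising `coherence_weight` pulls the estimate toward the internal-model prediction, which is the mechanism by which physical-coherence constraints shape the perceived motion in this kind of scheme.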