When displayed in slow motion, actions are often perceived as lasting longer than when shown at original speed. However, it remains unclear why this bias exists. Is it possible that the bias emerges because participants underestimate the factor by which a video was slowed down and hence arrive at erroneous conclusions about the original duration? If so, providing explicit information about the respective video speed should eliminate this slow-motion effect. To scrutinize the nature of this bias, participants rated the original duration of sports actions displayed at original speed or in slow motion. Results revealed the expected overestimation bias, consisting of longer duration ratings with increasing degrees of slow motion. However, the bias disappeared when information about the current video speed was provided. These observations suggest an influence of knowledge about video playback speed on cognitive-evaluative processes, which may hold important implications for future research and practice.
The project was supported by the German Research Foundation (DFG) with two grants awarded to Markus Raab (RA 940/15-2) and Rouwen Cañal-Bruland (CA 635/2-2).
Temporal and spatial representations are not independent of each other. Two conflicting theories provide alternative hypotheses concerning the specific interrelations between temporal and spatial representations. The asymmetry hypothesis (based on conceptual metaphor theory, Lakoff and Johnson, 1980) predicts that temporal and spatial representations are asymmetrically interrelated, such that spatial representations have a stronger impact on temporal representations than vice versa. In contrast, the symmetry hypothesis (based on a theory of magnitude, Walsh, 2003) predicts that temporal and spatial representations are symmetrically interrelated. Both theoretical approaches have received empirical support. From an embodied cognition perspective, we argue that taking sensorimotor processes into account may be a promising stepping stone toward explaining the contradictory findings. Notably, different modalities are differently sensitive to the processing of time and space. For instance, auditory information processing is more sensitive to temporal than to spatial information, whereas visual information processing is more sensitive to spatial than to temporal information. Consequently, we hypothesized that different sensorimotor tasks addressing different modalities may account for the contradictory findings. To test this, we critically reviewed the relevant literature to examine which modalities were addressed in time-space mapping studies. Results indicate that the majority of the studies supporting the asymmetry hypothesis applied visual tasks for both temporal and spatial representations. Studies supporting the symmetry hypothesis applied mainly auditory tasks for the temporal domain, but visual tasks for the spatial domain. We conclude that the use of different tasks addressing different modalities, rather than a genuine (a)symmetric mapping, may be the primary reason for (a)symmetric effects of space on time.
The visual system is said to be especially sensitive to spatial but less so to temporal information. To test this, in two experiments, we systematically reduced the acuity and contrast of a visual stimulus and examined the impact on spatial and temporal precision (and accuracy) in a manual interception task. In Experiment 1, we blurred a virtual, to-be-intercepted moving circle (ball). Participants were asked to indicate (i.e., finger tap) on a touchscreen where and when the virtual ball crossed a ground line. As measures of spatial and temporal accuracy and precision, we analyzed the constant and variable errors, respectively. With increasing blur, the spatial and temporal variable errors, as well as the spatial constant error, increased, while the temporal constant error decreased. Because blur was potentially confounded with contrast in the first experiment, in Experiment 2 we re-ran the experiment with one difference: instead of blur, we included five levels of contrast matched to the blur levels. We found no systematic effects of contrast. Our findings confirm that blurring vision decreases spatial precision and accuracy and that these effects were not mediated by concomitant changes in contrast. However, blurring vision also affected temporal precision and accuracy, thereby questioning the generalizability of the theoretical predictions to the applied interception task.
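The constant and variable errors mentioned above are standard measures in motor-control research: the constant error (mean signed deviation) indexes accuracy, while the variable error (standard deviation of the signed deviations) indexes precision. A minimal illustrative sketch of how these could be computed per condition (the variable names and data are hypothetical; this is not the authors' analysis code):

```python
import statistics

def constant_error(errors):
    """Constant error (accuracy): mean of the signed deviations from the target."""
    return statistics.mean(errors)

def variable_error(errors):
    """Variable error (precision): sample standard deviation of the signed deviations."""
    return statistics.stdev(errors)

# Hypothetical spatial tap errors (cm) from repeated interception attempts
# in one blur condition; positive = tap beyond the crossing point.
spatial_errors = [1.2, -0.5, 0.8, 1.5, -0.2]

ce = constant_error(spatial_errors)  # systematic bias
ve = variable_error(spatial_errors)  # trial-to-trial spread
```

The same two functions would apply unchanged to temporal errors (tap time minus crossing time), which is why the abstract can report both spatial and temporal constant and variable errors from one set of touch responses.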
Batting and catching are real-life examples of interception. Due to latencies between the processing of sensory input and the corresponding motor response, successful interception requires accurate spatiotemporal prediction. However, spatiotemporal predictions can be subject to bias. For instance, the more spatially distant two sequentially presented objects are, the longer the interval between their presentations is perceived (kappa effect), and vice versa (tau effect). In this study, we deployed these phenomena to test, in two sensory modalities, whether temporal representations depend asymmetrically on spatial representations or whether both are symmetrically interrelated. We adapted the tau and kappa paradigms to an interception task by presenting four stimuli (visually or auditorily) one after another at four locations, from left to right, with constant spatial and temporal intervals in between. In two experiments, participants were asked to touch the screen where and when they predicted a fifth stimulus to appear. In Exp. 2, additional predictive gaze measures were examined. Across experiments, auditory but not visual stimuli produced a tau effect for interception, supporting the idea that the relationship between space and time is moderated by the sensory modality. Results revealed neither classical auditory or visual kappa effects nor visual tau effects. Gaze data in Exp. 2 showed that (spatial) gaze orientation depended on temporal intervals, while the timing of fixations was modulated by spatial intervals, thereby indicating tau and kappa effects across modalities. Together, the results suggest that sensory modality plays an important role in spatiotemporal predictions in interception.
Valentine’s influential norm-based multidimensional face-space model (nMDFS) predicts that perceived distinctiveness increases with distance to the norm. Occipito-temporal event-related potentials (ERPs) have recently been shown to respond selectively to variations in distance-to-norm (P200) or familiarity (N250, late negativity), respectively (Wuttke & Schweinberger, 2019). Despite growing evidence on interindividual differences in face perception skills at the behavioral level, little research has focused on their electrophysiological correlates. To reveal potential interindividual differences in face spaces, we contrasted high and low performers in face recognition with regard to distance-to-norm (P200) and familiarity (N250). We replicated both the P200 distance-to-norm and the N250 familiarity effect. Importantly, we observed: i) reduced responses in low compared with high performers in face recognition, especially in terms of smaller distance-to-norm effects in the P200, possibly indicating less ‘expanded’ face spaces in low compared with high performers; ii) increased N250 responses to familiar original faces in high performers, suggesting more robust face identity representations. In summary, these findings suggest the contribution of both early norm-based face coding and robust face representations to individual face recognition skills, and indicate that ERPs can offer a promising route to understanding individual differences in face perception and their neurocognitive correlates.