Several studies have shown that the precision of smooth pursuit eye speed can match perceptual speed discrimination thresholds during the steady-state phase of pursuit [Kowler, E., & McKee, S. (1987). Sensitivity of smooth eye movement to small differences in target velocity. Vision Research, 27, 993-1015; Gegenfurtner, K., Xing, D., Scott, B., & Hawken, M. (2003). A comparison of pursuit eye movement and perceptual performance in speed discrimination. Journal of Vision, 3, 865-876]. Recently, Osborne et al. [Osborne, L. C., Lisberger, S. G., & Bialek, W. (2005). A sensory source for motor variation. Nature, 437, 412-416; Osborne, L. C., Hohl, S. S., Bialek, W., & Lisberger, S. G. (2007). Time course of precision in smooth-pursuit eye movements of monkeys. Journal of Neuroscience, 27, 2987-2998] claimed that pursuit precision during the initiation phase of pursuit also matches the sensory variability, implying that no motor noise is added during pursuit initiation. However, these results were derived from a comparison of monkey pursuit data to human perceptual data from the literature, which were obtained with different stimuli. To compare precision for perception and pursuit directly, we measured pursuit and perceptual variability in the same human observers using the same stimuli. Subjects had to pursue a Gaussian blob in a step-ramp paradigm and make speed judgments in the same or in different trials. Speed discrimination thresholds were determined for different presentation durations. The analysis of pursuit precision was performed for short intervals containing only the initiation period and also for longer intervals including steady-state pursuit. In agreement with published studies, we found that the Weber fractions for psychophysical speed discrimination were fairly constant across presentation durations, even for the shortest presentation duration of 150 ms.
Pursuit variability was 3-4 times as high as perceptual variability for the analysis interval (300 ms) containing only the open-loop phase. For pursuit analysis intervals of 400-500 ms, pursuit variability approached perceptual variability. Our results show that, for the stimuli we used, the motor system contributes at least 50% to the total variability of smooth pursuit eye movements during the initiation phase.
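The precision comparison above can be sketched numerically. A common precision measure for both pursuit and perception is the coefficient of variation of speed (SD over mean), which plays the role of a Weber fraction. The sketch below is a minimal illustration of that measure on simulated eye-speed samples; the target speed and the noise levels are hypothetical, chosen only to reproduce the reported 3-4x ratio, and are not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
target_speed = 10.0  # deg/s, hypothetical step-ramp target speed

# Hypothetical trial-to-trial mean eye speeds for two analysis windows.
open_loop = rng.normal(target_speed, 1.6, 500)  # 300 ms, initiation only
steady = rng.normal(target_speed, 0.5, 500)     # 400-500 ms, incl. steady state

def weber_fraction(samples):
    """Coefficient of variation: SD of eye speed divided by its mean."""
    return samples.std(ddof=1) / samples.mean()

print("open-loop:", weber_fraction(open_loop))
print("steady-state:", weber_fraction(steady))
```

With these illustrative noise levels, the open-loop Weber fraction comes out roughly three times the steady-state one, mirroring the pattern described in the abstract.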
Braun DI, Mennie N, Rasche C, Schütz AC, Hawken MJ, Gegenfurtner KR. Smooth pursuit eye movements to isoluminant targets. J Neurophysiol 100: 1287-1300, 2008. First published July 9, 2008; doi:10.1152/jn.00747.2007. At slow speeds, chromatic isoluminant stimuli are perceived to move much slower than comparable luminance stimuli. We investigated whether smooth pursuit eye movements to isoluminant stimuli show an analogous slowing. Besides pursuit speed and latency, we studied speed judgments to the same stimuli during fixation and pursuit. Stimuli were either large sine wave gratings or small Gaussian blobs moving horizontally at speeds between 1 and 11°/s. Targets were defined by luminance contrast or color. Confirming prior studies, we found that speed judgments of isoluminant stimuli during fixation showed a substantial slowing when compared with luminance stimuli. A similarly strong and significant effect of isoluminance was found for pursuit initiation: compared with luminance targets of matched contrasts, latencies of pursuit initiation were delayed by 50 ms at all speeds and eye accelerations were reduced for isoluminant targets. A small difference was found between steady-state eye velocities of luminance and isoluminant targets. For comparison, we measured latencies of saccades to luminance and isoluminant stimuli under similar conditions, but the effect of isoluminance was found only for pursuit. Parallel psychophysical experiments revealed that, unlike speed judgments of moving isoluminant stimuli made during fixation, judgments made during pursuit are veridical for the same stimuli at all speeds. Therefore, information about target speed seems to be available for pursuit eye movements and speed judgments during pursuit but is degraded for perceptual speed judgments during fixation and for pursuit initiation.
Contemporary theoretical accounts of perceptual learning typically assume that observers are either unbiased or stably biased across the course of learning. However, standard methods for estimating thresholds, as they are typically used, do not allow this assumption to be tested. We present an approach that allows for this test, specific to perceptual learning for contrast detection. We show that reliable decreases in detection thresholds and increases in hit rates are not uniformly accompanied by reliable increases in sensitivity (d′), but are regularly accompanied by reliable liberal shifts in response criteria (c). In addition, we estimate the extent to which sensitivity could have increased in the absence of these liberal shifts. The results pose a challenge to the assumption that perceptual learning has limited or no impact on response criteria.
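The distinction drawn above between sensitivity (d′) and response criterion (c) follows standard signal detection theory: d′ = z(H) − z(F) and c = −(z(H) + z(F))/2, where H and F are hit and false-alarm rates and z is the inverse normal CDF. A minimal sketch, with hypothetical pre- and post-learning rates chosen to illustrate how a rising hit rate can reflect a liberal criterion shift rather than a sensitivity gain:

```python
from statistics import NormalDist

Z = NormalDist().inv_cdf  # probit (z) transform

def dprime_and_criterion(hit_rate, fa_rate):
    """Standard SDT estimates: d' = z(H) - z(F), c = -(z(H) + z(F)) / 2."""
    zh, zf = Z(hit_rate), Z(fa_rate)
    return zh - zf, -(zh + zf) / 2

# Hypothetical rates: hits rise from pre to post, but so do false alarms,
# leaving d' essentially unchanged while c shifts in the liberal direction.
d_pre, c_pre = dprime_and_criterion(0.69, 0.31)
d_post, c_post = dprime_and_criterion(0.84, 0.50)
```

Here d′ stays near 1.0 in both phases while c moves from about 0 to about −0.5, which is the signature the abstract reports: a higher hit rate driven by criterion, not sensitivity.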
The recent quantitative description of activity-dependent depression in the synaptic transmission between cortical neurons has led to many interesting suggestions of possible computational implications. Based on a simple biological model, we have constructed an analog circuit that emulates the properties of short-term depressing synapses. The circuit comprises only seven transistors and two capacitors per synapse, and is able to reproduce computational features of depressing synapses such as the 1/f law, the detection of long intervals of presynaptic silence, and the sensitivity to redistribution of presynaptic firing rates. It provides a useful basis for implementing neural networks with dynamical synapses.
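The 1/f law mentioned above falls out of standard quantitative models of short-term depression (e.g. the Tsodyks-Markram model): each spike consumes a fraction U of available synaptic resources, which recover with time constant tau_rec, so at high regular firing rates the steady-state per-spike response scales roughly as 1/f. A sketch under that model, with illustrative parameter values (not the circuit's actual parameters):

```python
import math

def steady_state_epsp(rate_hz, U=0.5, tau_rec=0.8):
    """Steady-state per-spike efficacy of a depressing synapse
    (Tsodyks-Markram model). For regular spiking at interval T = 1/rate,
    the resource fraction settles at
        R* = (1 - exp(-T/tau_rec)) / (1 - (1 - U) * exp(-T/tau_rec)),
    and the per-spike response is proportional to U * R*.
    """
    T = 1.0 / rate_hz
    e = math.exp(-T / tau_rec)
    R = (1 - e) / (1 - (1 - U) * e)
    return U * R
```

For large rates, R* ≈ T/(U·tau_rec), so the response goes as 1/(f·tau_rec): doubling the presynaptic rate roughly halves the steady-state EPSP, which is the 1/f law.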
A decomposition is described that parameterizes the geometry and appearance of contours and regions of gray-scale images with the goal of fast categorization. To express the contour geometry, a contour is transformed into a local/global space, from which parameters are derived classifying its global geometry (arc, inflexion, or alternating) and describing its local aspects (degree of curvature, edginess, symmetry). Regions are parameterized based on their symmetric axes, which are evolved with a wave-propagation process enabling generation of the distance map for fragmented contour images. The methodology is evaluated on three image sets, the Caltech 101 set and two sets drawn from the Corel collection. The performance nearly reaches that of other categorization systems for unsupervised learning.
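A wave-propagation distance map of the kind mentioned above can be sketched as a multi-source breadth-first search: every contour pixel is a seed, and a wavefront grows outward one step at a time, so distances are filled in even when the contour itself is fragmented. This sketch assumes a 4-connected grid and unit step costs, which is a simplification of the paper's process:

```python
from collections import deque

def distance_map(grid):
    """Distance map via BFS wave propagation. Each contour pixel
    (value 1) seeds the front; distances grow outward in unit steps
    over 4-connected neighbors, bridging gaps in fragmented contours."""
    h, w = len(grid), len(grid[0])
    dist = [[None] * w for _ in range(h)]
    front = deque()
    for y in range(h):
        for x in range(w):
            if grid[y][x]:
                dist[y][x] = 0
                front.append((y, x))
    while front:
        y, x = front.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                front.append((ny, nx))
    return dist
```

Ridges of this map (pixels locally farthest from all contour seeds) approximate the symmetric axes the abstract uses to parameterize regions.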
Abstract. In this paper, we propose an audio-visual approach to video genre categorization. Audio information is extracted at block level, which has the advantage of capturing local temporal information. At the temporal structural level, we assess action content with respect to human perception. Further, color perception is quantified with statistics of color distribution, elementary hues, color properties, and color relationships. The last category of descriptors determines statistics of contour geometry. An extensive evaluation of this multi-modal approach based on more than 91 hours of video footage is presented. We obtain average precision and recall ratios within [87%-100%] and [77%-100%], respectively, while average correct classification is up to 97%. Additionally, movies displayed according to feature-based coordinates in a virtual 3D browsing environment tend to regroup with respect to genre, which has potential applications in real content-based browsing systems.
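One of the color descriptors named above, the elementary-hue statistic, can be sketched as a normalized hue histogram: each pixel's hue is quantized into a small set of named hue bins and the distribution over bins becomes a feature vector. The six-bin layout below is an assumption for illustration, not the paper's exact binning:

```python
import colorsys
from collections import Counter

# Assumed elementary-hue bins; the paper's actual hue set may differ.
ELEMENTARY_HUES = ["red", "yellow", "green", "cyan", "blue", "magenta"]

def elementary_hue_histogram(pixels):
    """Quantize each RGB pixel's hue into six elementary-hue bins and
    return the normalized distribution as a per-frame color feature."""
    counts = Counter()
    for r, g, b in pixels:
        h, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        counts[ELEMENTARY_HUES[int(h * 6) % 6]] += 1
    total = sum(counts.values())
    return {hue: counts[hue] / total for hue in ELEMENTARY_HUES}
```

Averaging such histograms over the frames of a clip yields one compact color-perception descriptor per video, which can then be combined with the audio and contour descriptors for genre classification.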