A review of the neurophysiological literature suggests that the magnocellular pathway has adequate spatial-frequency and contrast sensitivity to perceive text under normal contrast conditions (>10%), and that it is suppressed by red light. Results from three experiments involving color and reading show that red light impairs reading performance under normal luminance contrast conditions. However, in a fourth experiment, isoluminant color text, designed to selectively activate the parvocellular pathway, is easier to read under red light. These discrepant results suggest that the magnocellular pathway is the dominant visual pathway for text perception. Implications for reading models and developmental dyslexia are discussed.
This study explores the relationship between attentional processing mediated by the visual magnocellular (MC) pathway and reading ability. Reading ability in a group of primary school children was compared to performance on a visually cued coherent motion detection task. The results showed that a brief spatial cue was more effective in drawing attention either towards or away from a visual target in the group of readers ranked in the upper 25% of the sample compared to lower-ranked readers. Regression analysis showed a significant relationship between attentional processing and reading when the effects of age and intellectual ability were removed. Results suggested a stronger relationship between visual attentional processing and non-word reading than with irregular word reading. NeuroReport 15:000–000
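Removing the effects of age and intellectual ability before testing the attention-reading relationship corresponds to a standard hierarchical regression. A minimal sketch of that analysis is given below, assuming per-child arrays of age, IQ, the attentional cueing measure, and a reading score; the function name and column layout are illustrative assumptions, not the authors' code.

```python
import numpy as np
import statsmodels.api as sm

def attention_beyond_age_iq(age, iq, attention, reading):
    """Hierarchical regression: does the attentional cueing measure explain
    reading variance beyond age and intellectual ability?"""
    base = sm.OLS(reading, sm.add_constant(np.column_stack([age, iq]))).fit()
    full = sm.OLS(reading, sm.add_constant(np.column_stack([age, iq, attention]))).fit()
    # Increment in explained variance, plus the t-test on the attention coefficient
    return full.rsquared - base.rsquared, full.tvalues[-1], full.pvalues[-1]
```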
An increasing number of neuroimaging studies are concerned with the identification of interactions or statistical dependencies between brain areas. Dependencies between the activities of different brain regions can be quantified with functional connectivity measures such as the cross-correlation coefficient. An important factor limiting the accuracy of such measures is the amount of empirical data available. For event-related protocols, the amount of data also affects the temporal resolution of the analysis. We use analytical expressions to calculate the amount of empirical data needed to establish whether a certain level of dependency is significant when the time series are autocorrelated, as is the case for biological signals. These analytical results are then contrasted with estimates from simulations based on real data recorded with magnetoencephalography during a resting-state paradigm and during the presentation of visual stimuli. Results indicate that, for broadband signals, 50–100 s of data is required to detect a true underlying cross-correlation coefficient of 0.05. This corresponds to a resolution of a few hundred milliseconds for typical event-related recordings. The required time window increases for narrow-band signals as frequency decreases. For instance, approximately 3 times as much data is necessary for signals in the alpha band. Important implications can be derived for the design and interpretation of experiments to characterize weak interactions, which are potentially important for brain processing.
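The abstract does not reproduce the analytical expressions, but a common Bartlett-style approximation captures the idea: autocorrelation reduces the number of effectively independent samples, which in turn sets the recording length needed for a small correlation to reach significance. The Python sketch below assumes that approximation; the figure of roughly 20 effectively independent samples per second for a broadband signal is an illustrative assumption used only to show how a 50–100 s estimate can arise.

```python
import numpy as np

def effective_samples(x, y, max_lag=200):
    """Bartlett-style correction: number of effectively independent samples
    underlying the cross-correlation of two autocorrelated time series."""
    def acf(s, k):
        s = s - s.mean()
        return 1.0 if k == 0 else np.dot(s[:-k], s[k:]) / np.dot(s, s)
    correction = 1.0 + 2.0 * sum(acf(x, k) * acf(y, k) for k in range(1, max_lag))
    return len(x) / max(correction, 1.0)

def required_duration(r_true=0.05, eff_samples_per_sec=20.0, z_crit=1.96):
    """Seconds of data needed for a true correlation r_true to exceed the
    two-sided significance threshold z_crit / sqrt(N_eff)."""
    n_eff_needed = (z_crit / r_true) ** 2   # ~1537 independent samples for r = 0.05
    return n_eff_needed / eff_samples_per_sec

# With ~20 effectively independent samples per second (an assumed broadband
# figure), required_duration() gives roughly 77 s, within the 50-100 s range.
```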
People are not infallible, consistent "oracles": their confidence in decision-making may vary significantly between tasks and over time. We have previously reported the benefits of using an interface and algorithms that explicitly captured and exploited users' confidence: error rates were reduced by up to 50% for an industrial multi-class learning problem, and the number of interactions required in a design optimisation context was reduced by 33%. Having access to users' confidence judgements could significantly benefit intelligent interactive systems in industry, in areas such as intelligent tutoring systems, and in healthcare. There are many reasons for wanting to capture information about confidence implicitly. Some are ergonomic, but others are more 'social', such as wishing to understand (and possibly take account of) users' cognitive state without interrupting them. We investigate the hypothesis that users' confidence can be accurately predicted from measurements of their behaviour. Eye-tracking systems were used to capture users' gaze patterns as they undertook a series of visual decision tasks, after each of which they reported their confidence on a 5-point Likert scale. Subsequently, predictive models were built using "conventional" machine learning approaches for numerical summary features derived from users' behaviour. We also investigate the extent to which the deep learning paradigm can reduce the need to design features specific to each application, by creating "gazemaps", visual representations of the trajectories and durations of users' gaze fixations, and then training deep convolutional networks on these images. Treating the prediction of user confidence as a two-class problem (confident/not confident), we attained classification accuracies of 88% for the scenario of new users on known tasks and 87% for known users on new tasks. Treating confidence as an ordinal variable, we produced regression models with a mean absolute error of ≈0.7 in both cases. Capturing just a simple subset of non-task-specific numerical features gave slightly worse but still useful performance (e.g., MAE ≈ 1.0). Results obtained with gazemaps and convolutional networks are competitive, despite not having access to longer-term information about users and tasks, which was vital for the 'summary' feature sets. This suggests that the gazemap-based approach forms a viable, transferable alternative to hand-crafting features for each application. These results provide significant evidence to confirm our hypothesis, and offer a way of substantially improving many interactive artificial intelligence applications via the addition of cheap non-intrusive hardware and computationally cheap prediction algorithms.
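A minimal sketch of the two ingredients described above, rasterising fixations into a "gazemap" image and deriving simple non-task-specific summary features for a conventional classifier, is given below. The feature set, the threshold used to binarise the 5-point ratings, and the choice of classifier are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def make_gazemap(fixations, size=64):
    """Rasterise (x, y, duration) fixations, with x and y normalised to [0, 1],
    into a size x size intensity image weighted by fixation duration."""
    img = np.zeros((size, size))
    for x, y, dur in fixations:
        row = min(int(y * size), size - 1)
        col = min(int(x * size), size - 1)
        img[row, col] += dur
    return img / img.max() if img.max() > 0 else img

def summary_features(fixations):
    """Simple non-task-specific summary features of one trial's gaze record."""
    xs, ys, durs = (np.array(v) for v in zip(*fixations))
    scanpath = np.sum(np.hypot(np.diff(xs), np.diff(ys)))  # total scan-path length
    return [len(durs), durs.mean(), durs.sum(), scanpath, xs.std(), ys.std()]

# Binarise the 5-point self-reports (e.g. ratings of 4-5 -> "confident") and
# cross-validate a conventional classifier on the summary features:
# X = np.array([summary_features(t) for t in trials])
# y = np.array([1 if rating >= 4 else 0 for rating in ratings])
# clf = RandomForestClassifier(n_estimators=300, random_state=0)
# print(cross_val_score(clf, X, y, cv=5).mean())
```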
Several studies have indicated a key role for dorsal stream processing in lexical decoding. To examine this relationship further, performance on orthographic and phonological reading tests was compared with both steady-state visual evoked potentials and a putative behavioral measure of dorsal stream processing, coherent motion detection. Frequency analysis of the visual evoked potential data showed power at the second harmonic to be largely confined to dorsal stream regions and significantly correlated with motion detection thresholds. Regression analyses showed that orthographic processing was significantly associated with second-harmonic power. Although consistent with previous reports, there remains a question as to why the relationship between orthographic processing and visual evoked potential power did not extend to the coherent motion detection measures. NeuroReport 17:335–339
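Second-harmonic power of a steady-state visual evoked potential is typically read off the Fourier spectrum of each epoch at twice the stimulation frequency. The sketch below illustrates that computation; the sampling rate, stimulation frequency, and windowing are assumed values, not parameters taken from the study.

```python
import numpy as np

def harmonic_power(epoch, fs, stim_freq, harmonic=2):
    """Power at a given harmonic of the stimulation frequency, estimated from
    the windowed FFT of one steady-state VEP epoch."""
    n = len(epoch)
    spectrum = np.fft.rfft(epoch * np.hanning(n))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    idx = np.argmin(np.abs(freqs - harmonic * stim_freq))  # bin nearest 2 * f_stim
    return np.abs(spectrum[idx]) ** 2 / n

# Example with assumed parameters: an epoch sampled at 500 Hz and a
# hypothetical 8 Hz contrast-reversing stimulus.
# p2 = harmonic_power(epoch, fs=500.0, stim_freq=8.0, harmonic=2)
```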