Behavioral experiments and a connectionist model were used to explore the use of featural representations in the computation of word meaning. The research focused on the role of correlations among features, and on differences between speeded and untimed tasks with respect to the use of featural information. The results indicate that featural representations are used in the initial computation of word meaning (as in an attractor network), that patterns of feature correlations differ between artifacts and living things, and that the degree to which features are intercorrelated plays an important role in the organization of semantic memory. The studies also suggest that it may be possible to predict semantic priming effects from independently motivated featural theories of semantic relatedness. Implications for related behavioral phenomena, such as the semantic impairments associated with Alzheimer's disease (AD), are discussed.
We introduce two new low-level computational models of brightness perception that account for a wide range of brightness illusions, including many variations on White's Effect [Perception, 8, 1979, 413]. Our models extend Blakeslee and McCourt's ODOG model [Vision Research, 39, 1999, 4361], which combines multiscale oriented difference-of-Gaussian filters with response normalization. We make the response normalization more neurally plausible by constraining it to nearby receptive fields (models 1 and 2) and to nearby spatial frequencies (model 2), and show that both changes improve the models' predictions of brightness illusions.
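The baseline computation this abstract builds on can be sketched in a few lines: an oriented difference-of-Gaussians filter bank applied at several scales, with each orientation channel divided by its response energy. This is a minimal numpy-only illustration of the general ODOG-style scheme, not the published models; the filter sizes, scales, and aspect ratio below are invented for the example, and the proposed local (nearby-receptive-field) normalization is not reproduced.

```python
import numpy as np

def conv2_same(img, ker):
    """FFT-based 2-D convolution with 'same' output size (numpy-only)."""
    H, W = img.shape
    kh, kw = ker.shape
    shape = (H + kh - 1, W + kw - 1)
    full = np.fft.irfft2(
        np.fft.rfft2(img, s=shape) * np.fft.rfft2(ker, s=shape), s=shape)
    top, left = (kh - 1) // 2, (kw - 1) // 2
    return full[top:top + H, left:left + W]

def oriented_dog(size, sigma, theta=0.0, aspect=2.0):
    """Oriented DoG kernel: circular center Gaussian minus a surround
    elongated along one axis, rotated by theta (radians). Parameter
    values are illustrative, not those of the ODOG model."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    def gauss(sx, sy):
        g = np.exp(-(xr**2 / (2 * sx**2) + yr**2 / (2 * sy**2)))
        return g / g.sum()          # each Gaussian sums to 1, so DoG sums to 0
    return gauss(sigma, sigma) - gauss(sigma, sigma * aspect)

def odog_response(image, sigmas=(1.0, 2.0, 4.0),
                  thetas=tuple(np.arange(6) * np.pi / 6)):
    """Sum filter outputs across scales within each orientation channel,
    normalize each channel by its RMS energy, then sum the channels."""
    out = np.zeros_like(image, dtype=float)
    for theta in thetas:
        chan = sum(conv2_same(image, oriented_dog(31, s, theta))
                   for s in sigmas)
        out += chan / (np.sqrt(np.mean(chan**2)) + 1e-12)
    return out
```

The abstract's extensions replace the global RMS term with normalization pooled only over nearby receptive fields (and, in model 2, nearby spatial frequencies); that change would swap the single `np.mean(chan**2)` for a spatially local average.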
The role of feature correlations in semantic memory is a central issue in conceptual representation. In two versions of the feature verification task, subjects were faster to verify that a feature is part of a concept (grapefruit) if it is strongly rather than weakly intercorrelated with the other features of that concept. Contrasting interactions between feature correlations and SOA were found depending on whether the concept or the feature was presented first. An attractor network model of word meaning that naturally learns and uses feature correlations predicted those interactions. This research provides further evidence that semantic memory includes implicitly learned statistical knowledge of feature relationships, in contrast to theories such as spreading activation networks, in which feature correlations play no role.
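The core idea, that an attractor network picks up feature correlations from co-occurrence statistics alone, can be shown with a minimal Hopfield-style sketch. This is not the authors' model (which was trained on feature norms); the binary feature vectors below are invented for illustration. After Hebbian learning, a partially wrong feature pattern settles to the stored concept because correlated features pull each other on.

```python
import numpy as np

# Hypothetical concept-by-feature matrix (rows = concepts, columns =
# features). Correlated features (e.g., columns 0 and 1) co-occur
# across concepts, so the network learns to link them.
patterns = np.array([
    [1, 1, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 1, 1],
])

# Hebbian weights over +/-1 codings; no self-connections.
X = 2 * patterns - 1
W = X.T @ X / len(patterns)
np.fill_diagonal(W, 0)

def settle(state, steps=10):
    """Iterate deterministic threshold updates until the feature
    vector sits in an attractor (a learned feature pattern)."""
    s = 2 * np.asarray(state, float) - 1
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1.0, -1.0)
    return ((s + 1) // 2).astype(int)
```

Presenting the first concept with one spurious feature (`[1, 1, 1, 0, 0, 1]`) settles back to the stored pattern `[1, 1, 1, 0, 0, 0]`: the intercorrelated features win out, which is the mechanism behind the faster verification of strongly intercorrelated features.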
We show that it is possible to predict subsequent memory performance from single-trial EEG activity recorded before and during item presentation in the study phase. Two-class classification was conducted to predict subsequently remembered vs. forgotten trials based on subjects' responses in the recognition phase. The overall accuracy across 18 subjects was 59.6% when pre- and during-stimulus information were combined. The single-trial classification analysis provides a dimensionality reduction method that projects the high-dimensional EEG data onto a discriminative space. These projections revealed novel findings in the pre- and during-stimulus periods related to levels of encoding. Pre-stimulus oscillatory activity (25–35 Hz) from −300 to 0 ms before stimulus presentation and during-stimulus alpha (7–12 Hz) activity from 1000 to 1400 ms after stimulus onset distinguished recollection from familiarity, whereas during-stimulus alpha activity and temporal information from 400 to 800 ms after stimulus onset mapped these two states to similar values.
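The "projection onto a discriminative space" step can be illustrated with a Fisher linear discriminant: the high-dimensional trial features collapse onto the single axis that best separates the two classes, and a threshold on that axis gives the two-class prediction. This is a generic numpy-only sketch under assumed synthetic data, not the authors' classifier; real inputs would be per-trial EEG features (e.g., band power per channel and time window).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for single-trial feature vectors; means are
# separated slightly to mimic a weak subsequent-memory effect.
n_feat = 20
remembered = rng.normal(0.3, 1.0, size=(100, n_feat))
forgotten  = rng.normal(-0.3, 1.0, size=(100, n_feat))

def fisher_lda(a, b, shrink=1e-3):
    """Fisher discriminant direction: the 1-D projection that best
    separates two classes (dimensionality reduction to one axis)."""
    mu_a, mu_b = a.mean(0), b.mean(0)
    Sw = np.cov(a, rowvar=False) + np.cov(b, rowvar=False)
    Sw += shrink * np.eye(len(mu_a))     # regularize for stability
    w = np.linalg.solve(Sw, mu_a - mu_b)
    return w / np.linalg.norm(w)

w = fisher_lda(remembered, forgotten)

# Classify by thresholding the 1-D projection at the midpoint of the
# projected class means.
threshold = ((remembered @ w).mean() + (forgotten @ w).mean()) / 2
correct = (remembered @ w > threshold).sum() + (forgotten @ w <= threshold).sum()
accuracy = correct / (len(remembered) + len(forgotten))
```

In the study, the projected values themselves carried the interesting structure (recollection vs. familiarity separated in some windows, collapsed in others), not just the class labels.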
In this paper, we show that externally recorded electroencephalogram (EEG) signals contain sufficient information to decode target location during a reach (Experiment 1) and during the planning period before a reach (Experiment 2). We discuss the application of independent component analysis and dipole fitting for removing movement artifacts; with this technique, classification accuracy is similar in the two experiments. To the best of our knowledge, this is the first demonstration of decoding (planned) reach targets from EEG. These results lay the foundation for future EEG-based brain-computer interfaces (BCIs) based on decoding of planned reaches.
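The artifact-removal idea, unmixing the recorded channels into independent components so that a movement-related component can be identified and discarded, can be sketched with a small numpy-only FastICA. The signals, mixing matrix, and iteration counts below are synthetic and illustrative; the authors' actual pipeline, including the dipole-fitting step used to identify artifact components, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 2000)

# Two hypothetical sources: a neural-like oscillation and a slow,
# high-amplitude movement artifact, mixed at two "electrodes".
neural   = np.sin(2 * np.pi * 10 * t)
artifact = np.sign(np.sin(2 * np.pi * 1 * t))
S = np.c_[neural, artifact]
A = np.array([[1.0, 0.8],
              [0.6, 1.0]])
X = S @ A.T                          # observed channel data

def fastica(X, n_iter=200):
    """Symmetric FastICA with tanh nonlinearity (numpy-only sketch).
    Returns the estimated independent components."""
    Xc = X - X.mean(0)
    # Whiten the data.
    d, E = np.linalg.eigh(np.cov(Xc, rowvar=False))
    Z = Xc @ (E @ np.diag(1 / np.sqrt(d)) @ E.T)
    n = Z.shape[1]
    W = np.linalg.qr(rng.normal(size=(n, n)))[0]
    for _ in range(n_iter):
        G = np.tanh(Z @ W.T)
        W_new = (G.T @ Z) / len(Z) - np.diag((1 - G**2).mean(0)) @ W
        U, _, Vt = np.linalg.svd(W_new)   # symmetric decorrelation
        W = U @ Vt
    return Z @ W.T

components = fastica(X)
# In practice, the component corresponding to the movement artifact
# would be identified (e.g., via its scalp projection or a dipole fit)
# and zeroed before reconstructing the cleaned EEG.
```

The recovered components match the original sources up to permutation and sign, which is what makes selectively discarding the artifact component possible.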