Linguistic labels (e.g., "chair") seem to activate visual properties of the objects to which they refer. Here we investigated whether language-based activation of visual representations can affect the ability to simply detect the presence of an object. We used continuous flash suppression to suppress visual awareness of familiar objects while they were continuously presented to one eye. Participants made simple detection decisions, indicating whether they saw any image. Hearing a verbal label before the detection task changed performance relative to an uninformative cue baseline: valid labels improved performance relative to no-label baseline trials, whereas invalid labels decreased performance. Labels affected both sensitivity (d′) and response times. In addition, we found that the effectiveness of labels varied predictably as a function of the match between the shape of the stimulus and the shape denoted by the label. Together, the findings suggest that facilitated detection of invisible objects due to language occurs at a perceptual rather than semantic locus. We hypothesize that when information associated with verbal labels matches stimulus-driven activity, language can provide a boost to perception, propelling an otherwise invisible image into awareness.

Keywords: vision | top-down effects | CFS | penetrability of perception
Dynamic FM systems should be considered for use with persons with cochlear implants (CIs) to improve speech recognition in noise. At default CI settings, FM performance is better for Advanced Bionics recipients than for Cochlear Corporation recipients, but use of Autosensitivity by Cochlear Corporation users results in equivalent group performance.
The parahippocampal place area (PPA) is a region of human cortex that responds more strongly to visual scenes (e.g., landscapes or cityscapes) than to other visual stimuli. It has been proposed that the primary function of the PPA is encoding of contextual information about object co-occurrence. Supporting this context hypothesis are reports that the PPA responds more strongly to strong-context than to weak-context objects and more strongly to famous faces (for which contextual associations are available) than to nonfamous faces. We reexamined the reliability of these 2 effects by scanning subjects with functional magnetic resonance imaging while they viewed strong- and weak-context objects, scrambled versions of these objects, and famous and nonfamous faces. "Contextual" effects for objects were observed to be reliable in the PPA at slow presentation rates but not at faster presentation rates intended to discourage scene imagery. We were unable to replicate the earlier finding of preferential PPA response to famous versus nonfamous faces. These results are difficult to reconcile with the hypothesis that the PPA encodes contextual associations but are consistent with a competing hypothesis that the PPA encodes scenic layout.
In their PNAS article, Joel et al. (1) demonstrate extensive overlap between the distributions of females and males for many brain characteristics, measured across multiple neuroimaging modalities and datasets. They pose two requirements for categorizing brains into distinct male/female classes: (i) gender differences should appear as dimorphic form differences between male and female brains, and (ii) there should be internal consistency in the degree of "maleness-femaleness" of different elements within a single brain. Based on these criteria, the authors convincingly establish that there is little evidence for this strict sexually dimorphic view of human brains, counter to the popular lay conception of a "male" and "female" brain. This finding has broad implications not only for the ontology of gender, but also for the statistical treatment of sex in morphometric analyses.

Critically, however, the conclusion that human brains cannot be categorized into two distinct classes depends largely on the level of analysis. Although the set of properties that distinguishes one category from another is rich and flexible, there is rarely a diagnostic form (e.g., what singular physical characteristic reliably distinguishes cats from dogs?) and there is often substantial within-category variability (e.g., breeds of dogs) (2). The failure of the brain to meet these two requirements does not mean that "human brains cannot be categorized into two distinct classes: male brain/female brain." In fact, an individual's biological sex can be classified with extremely high accuracy by considering the brain mosaic as a whole.

To demonstrate this, we acquired T1-weighted structural MRI scans for 1,566 individuals, aged 19-35 y (57.7% female), from the freely available Brain Genomics Superstruct Project (3). Cortical thickness and subcortical volume estimates were calculated using the FreeSurfer automatic segmentation algorithm (v5.3; surfer.nmr.mgh.harvard.edu/fswiki).
First, 400 subjects were retained as a held-out validation set. Next, penalized logistic regression [elastic net (4, 5)] was used to predict the sex of each individual based on their mosaic, or pattern, of morphometric brain data. Within the training set (n = 1,166), a regression model was built using three repeats of 10-fold cross-validation. The model was then used, without modification, to predict the sex of each individual in the held-out sample. Classification accuracy was extremely high [accuracy: 93%, 95% confidence interval (CI) 89.5-94.9%, P < 10] and remained significant if head-size-related measurements were excluded [92% (CI 88.9-94.5%), P < 10].

To borrow the framing of Joel et al. (1), the human brain may be a mosaic, but it is one with predictable patterns. Despite the absence of dimorphic differences and lack of internal consistency observed by Joel et al. (1), multivariate analyses of whole-brain patterns in brain morphometry can reliably discriminate sex. These two results are not mutually inconsistent. We wholly agree that a strict dichotomy between male/fe...
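As a rough illustration, the classification pipeline described above can be sketched in Python with scikit-learn. This is a minimal sketch with synthetic stand-in data: the feature matrix, regularization grid, and random seeds are assumptions for demonstration, not values from the study (which used FreeSurfer cortical thickness and subcortical volume estimates).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, RepeatedStratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for morphometric features (1,566 subjects x 100 measures)
n_subjects, n_features = 1566, 100
X = rng.normal(size=(n_subjects, n_features))
beta = rng.normal(size=n_features) * 0.3          # hypothetical signal
y = (X @ beta + rng.normal(size=n_subjects) > 0).astype(int)  # proxy labels

# Retain 400 subjects as a held-out validation set, as in the study
X_train, X_val = X[:-400], X[-400:]
y_train, y_val = y[:-400], y[-400:]

# Elastic-net penalized logistic regression, tuned with
# three repeats of 10-fold cross-validation on the training set
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=0)
model = make_pipeline(
    StandardScaler(),
    GridSearchCV(
        LogisticRegression(penalty="elasticnet", solver="saga", max_iter=2000),
        param_grid={"C": [0.1, 1.0], "l1_ratio": [0.2, 0.8]},  # assumed grid
        cv=cv,
    ),
)
model.fit(X_train, y_train)

# Apply the fitted model, without modification, to the held-out sample
accuracy = model.score(X_val, y_val)
print(f"held-out classification accuracy: {accuracy:.2f}")
```

With a genuine multivariate signal in the features, held-out accuracy is high even though no single feature is individually diagnostic, which is the letter's central point.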
Repeated exposure to a visual stimulus is associated with corresponding reductions in neural activity, particularly within visual cortical areas. It has been argued that this phenomenon of repetition suppression is related to increases in processing fluency or implicit memory. However, repetition of a visual stimulus can also be considered in terms of the similarity of the pattern of neural activity elicited at each exposure-a measure that has recently been linked to explicit memory. Despite the popularity of each of these measures, direct comparisons between the two have been limited, and the extent to which they differentially (or similarly) relate to behavioral measures of memory has not been clearly established. In the present study, we compared repetition suppression and pattern similarity as predictors of both implicit and explicit memory. Using functional magnetic resonance imaging, we scanned 20 participants while they viewed and categorized repeated presentations of scenes. Repetition priming (facilitated categorization across repetitions) was used as a measure of implicit memory, and subsequent scene recognition was used as a measure of explicit memory. We found that repetition priming was predicted by repetition suppression in prefrontal, parietal, and occipitotemporal regions; however, repetition priming was not predicted by pattern similarity. In contrast, subsequent explicit memory was predicted by pattern similarity (across repetitions) in some of the same occipitotemporal regions that exhibited a relationship between priming and repetition suppression; however, explicit memory was not related to repetition suppression. This striking double dissociation indicates that repetition suppression and pattern similarity differentially track implicit and explicit learning.
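The two neural measures contrasted above can be illustrated with a toy NumPy sketch. The data are synthetic (simulated voxel responses, not fMRI data): repetition suppression is operationalized as the drop in mean response amplitude across repetitions, and pattern similarity as the correlation of the voxel-wise activity patterns across repetitions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical responses of 200 voxels in one region at two exposures of a scene
pattern_first = rng.normal(loc=1.0, scale=0.5, size=200)
pattern_repeat = 0.7 * pattern_first + rng.normal(scale=0.3, size=200)

# Repetition suppression: reduction in mean response amplitude on repetition
suppression = pattern_first.mean() - pattern_repeat.mean()

# Pattern similarity: correlation of the voxel patterns across repetitions
similarity = np.corrcoef(pattern_first, pattern_repeat)[0, 1]

print(f"repetition suppression: {suppression:.2f}")
print(f"pattern similarity r:   {similarity:.2f}")
```

The sketch makes the dissociation concrete: a region can show strong suppression (lower mean amplitude) while the spatial pattern remains highly correlated, so the two measures can vary independently and track different forms of memory.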
Perhaps the most striking phenomenon of visual awareness is inattentional blindness (IB), in which a surprisingly salient event right in front of you may go completely unseen when unattended. Does IB reflect a failure of perception, or only of subsequent memory? Previous work has been unable to answer this question, due to a seemingly intractable dilemma: ruling out memory requires immediate perceptual reports, but soliciting such reports fuels an expectation that eliminates IB. Here we introduce a way of evoking repeated IB in the same subjects and the same session: we show that observers fail to report seeing salient events not only when they have no expectation, but also when they have the wrong expectations about the event's nature. This occurs even when observers must immediately report seeing anything unexpected, even mid-event. Repeated IB thus demonstrates that IB is aptly named: it reflects a genuine deficit in moment-by-moment conscious perception, rather than a form of inattentional amnesia.
For over a century, viruses have left a long trail of evidence implicating them as frequent suspects in the development of type 1 diabetes. Through vigorous interrogation of viral infections in individuals with islet autoimmunity and type 1 diabetes using serological and molecular virus detection methods, as well as mechanistic studies of virus-infected human pancreatic β-cells, the prime suspects have been narrowed down to predominantly human enteroviruses. Here, we provide a comprehensive overview of evidence supporting the hypothesised role of enteroviruses in the development of islet autoimmunity and type 1 diabetes. We also discuss concerns over the historical focus and investigation bias toward enteroviruses and summarise current unbiased efforts aimed at characterising the complete population of viruses (the “virome”) contributing early in life to the development of islet autoimmunity and type 1 diabetes. Finally, we review the range of vaccine and antiviral drug candidates currently being evaluated in clinical trials for the prevention and potential treatment of type 1 diabetes.