A recent paper [31] claims to classify the brain processing evoked in subjects watching ImageNet stimuli, as measured with EEG, and to employ a representation derived from this processing to construct a novel object classifier. That paper, together with a series of subsequent papers [11, 18, 20, 24, 25, 30, 34], claims successful results on a wide variety of computer-vision tasks, including object classification, transfer learning, and generation of images depicting human perception and thought, all using brain-derived representations measured through EEG. Our novel experiments and analyses demonstrate that their results crucially depend on the block design they employ, in which all stimuli of a given class are presented together, and fail with a rapid-event design, in which stimuli of different classes are randomly intermixed. Because every trial in their test sets comes from the same block as many trials in the corresponding training sets, the block design leads to classification of arbitrary brain states based on block-level temporal correlations that are known to exist in all EEG data, rather than of stimulus-related activity. This invalidates all subsequent analyses performed on this data in multiple published papers and calls into question all of the reported results. We further show that a novel object classifier constructed with a random codebook performs as well as or better than one constructed with the representation extracted from EEG data, suggesting that their classifier does not benefit from the brain-derived representation. Together, our results illustrate the far-reaching implications of the temporal autocorrelations that exist in all neuroimaging data for classification experiments. Further, our results calibrate the underlying difficulty of the tasks involved and caution against overly optimistic, but incorrect, claims to the contrary.
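The confound described above can be made concrete with a minimal, purely illustrative simulation (not the authors' data or code; all names and parameters here are hypothetical). A slowly drifting random-walk "feature" stands in for temporally autocorrelated EEG, and labels carry no stimulus information at all. When labels are assigned in contiguous blocks and the train/test split mixes trials from the same block, a trivial nearest-neighbour classifier scores far above chance; with the same labels randomly intermixed (a rapid-event design), accuracy falls to near chance.

```python
# Hypothetical simulation of the block-design confound; not the authors' method.
import random

random.seed(0)

def autocorrelated_signal(n, drift=0.05):
    # Slow random walk: each trial's "EEG feature" depends on the previous trial,
    # mimicking block-level temporal autocorrelation. No stimulus signal exists.
    x, out = 0.0, []
    for _ in range(n):
        x += random.gauss(0, drift)
        out.append(x)
    return out

def accuracy(design, n_classes=4, trials_per_class=50):
    n = n_classes * trials_per_class
    signal = autocorrelated_signal(n)
    labels = [t // trials_per_class for t in range(n)]  # class changes every block
    if design == "rapid":
        random.shuffle(labels)  # rapid-event design: classes randomly intermixed
    data = list(zip(signal, labels))
    random.shuffle(data)  # train/test split draws trials from the same blocks
    train, test = data[: n // 2], data[n // 2 :]
    correct = 0
    for x, y in test:
        # 1-nearest-neighbour on the single feature value
        pred = min(train, key=lambda d: abs(d[0] - x))[1]
        correct += pred == y
    return correct / len(test)

acc_block = accuracy("block")
acc_rapid = accuracy("rapid")
print(acc_block)  # well above chance (0.25) despite labels carrying no signal
print(acc_rapid)  # near chance
```

The inflated block-design accuracy comes entirely from the classifier matching a test trial to temporally adjacent training trials from the same block, which is exactly the leakage the abstract identifies.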
The study of signed languages provides an opportunity to identify those characteristics of language that are universal and to investigate the effect of production modality (signed vs. spoken) on the grammar. Over time, American Sign Language (ASL) has accommodated itself to the production and perception requirements of the manual/visual modality, resulting in a prosodic system that is comparable in function to those of spoken languages but different in means of expression. The present focus is on phrasal prominence in ASL. I review the marking of stress and phrase boundaries in ASL, and discuss prominence assignment at the phrasal level, with brief mention of lexical stress. At the kinematic level, there is a modality effect in the marking of linguistic prominence but no modality effect with respect to the marking of phrase position. Of significance is the fact that ASL lacks phrasal prominence plasticity, that is, the ability to move prominence to mark focus in a sentence position other than phrase-final. I review the typological implications of how ASL handles prominence as compared to other languages.
The present report attempts to formulate an appropriate linguistic generalization for the occurrence of inhibited periodic eyeblinking by fluent ASL signers. There are three components to our investigation. In the first component, Observation, we take several signing sources, transcribe significant nonmanuals, and analyze where eyeblinks occur with respect to the signed signal and other nonmanuals. In the second component, Prediction, we formulate a generalization concerning the possible locations of eyeblinks and test it by making predictions on a sample of signing. In the third component, Confirmation, we reconsider Baker and Padden’s observation that signers do not blink after a conditional clause before a question, present data to the contrary, and offer a possible explanation of why they reached that conclusion. Overall, we show that signers’ eyeblinks are sensitive to syntactic structure, from which Intonational Phrases may be derived. These findings help to establish how intonational information, carried by pitch in spoken languages, can be provided in a signed language.
There has been a scarcity of studies exploring the influence of students' American Sign Language (ASL) proficiency on their academic achievement in ASL/English bilingual programs. The aim of this study was to determine the effects of ASL proficiency on the reading comprehension skills and academic achievement of 85 deaf or hard-of-hearing signing students. Two subgroups, differing in ASL proficiency, were compared on the Northwest Evaluation Association Measures of Academic Progress and the reading comprehension subtest of the Stanford Achievement Test, 10th edition. Findings suggested that students highly proficient in ASL outperformed their less proficient peers on nationally standardized measures of reading comprehension, English language use, and mathematics. Moreover, in a regression model consisting of five predictors (ASL proficiency, home language, and variables regarding education, hearing devices, and secondary disabilities), ASL proficiency was the only variable that significantly predicted results on all outcome measures. This study calls for a paradigm shift in thinking about deaf education by focusing on characteristics shared among successful deaf signing readers, specifically ASL fluency.
Research on spoken languages has identified a "subject preference" processing strategy for tackling input that is syntactically ambiguous as to whether a sentence-initial NP is a subject or object. The present study documents that the "subject preference" strategy is also seen in the processing of a sign language, supporting the hypothesis that this strategy is universal and not dependent on language modality (spoken vs. signed). Deaf signers of Austrian Sign Language (ÖGS) were shown videos of locally ambiguous signed sentences in SOV and OSV word orders. Electroencephalogram (EEG) data indicated higher cognitive load in response to OSV stimuli (i.e., a negativity for OSV compared to SOV), indicative of syntactic reanalysis cost. A finding specific to the visual modality is that the ERP (event-related potential) effect reflecting linguistic reanalysis occurred earlier than might have been expected, that is, before the time point at which the path movement of the disambiguating sign was visible. We suggest that in the visual modality, transitional movement of the articulators prior to the disambiguating verb position, or co-occurring non-manual (face/body) markings, were used in resolving the local ambiguity in ÖGS. Thus, whereas the "subject preference" processing strategy is cross-modal at the linguistic level, the cues that enable the processor to apply that strategy differ in signing as compared to speech.
Previous approaches to explaining brow raise behavior in American Sign Language (ASL) have claimed that it performs a semantic or pragmatic function, such as indicating that information is presupposed, given, or otherwise not asserted. However, we show that this explanation cannot be extended to all the data. The commonality among all the structures that have 'br' marking is that the 'br' shows up in A′-positions associated with [−wh] operator features. These operators are semantically restrictive. Furthermore, the domain of 'br' spreading is the checking domain of the [−wh] feature, in contrast with the c-command domain associated with [+wh] and [+neg] features. The three distinctive ASL brow positions, raised, furrowed, and neutral, are each associated with a different operator situation: [−wh], [+wh], and none, respectively. In sum, 'br'-marking is clearly associated with syntactic structures that are related only indirectly to specific semantic, pragmatic, or discourse factors.