SUMMARY

Background: People with chronic tetraplegia due to high cervical spinal cord injury (SCI) can regain limb movements through coordinated electrical stimulation of peripheral muscles and nerves, known as functional electrical stimulation (FES). Users typically command FES systems through other preserved, but limited and unrelated, volitional movements (e.g. facial muscle activity, head movements). We demonstrate an individual with traumatic high cervical SCI performing coordinated reaching and grasping movements using his own paralyzed arm and hand, reanimated through FES and commanded using his own cortical signals through an intracortical brain-computer interface (iBCI).

Methods: The study participant (53 years old, C4, ASIA A) received two intracortical microelectrode arrays in the hand area of motor cortex, and 36 percutaneous electrodes for electrically stimulating hand, elbow, and shoulder muscles. The participant used a motorized mobile arm support for gravitational assistance and to provide humeral ab/adduction under cortical control. We assessed the participant's ability to cortically command his paralyzed arm to perform simple single-joint arm/hand movements and functionally meaningful multi-joint movements, and we compared iBCI control of his paralyzed arm with control of a virtual 3D arm. This study is registered with ClinicalTrials.gov, NCT00912041.

Findings: The participant successfully commanded single-joint and coordinated multi-joint arm movements for point-to-point target acquisitions (80-100% accuracy), first with a virtual arm and then with his own arm animated by FES. Using his paralyzed arm, the participant volitionally performed self-paced reaches to drink a mug of coffee (completing 11 of 12 attempts within a single session) and to feed himself.

Interpretation: This is the first demonstration of a combined FES+iBCI neuroprosthesis for both reaching and grasping in people with chronic tetraplegia due to SCI. It represents a major advance, with a clear translational path, toward clinically viable neuroprostheses for restoring reaching and grasping after paralysis.
Story understanding involves many perceptual and cognitive subprocesses, from perceiving individual words, to parsing sentences, to understanding the relationships among the story characters. We present an integrated computational model of reading that incorporates these and additional subprocesses, simultaneously discovering their fMRI signatures. Our model predicts the fMRI activity associated with reading arbitrary text passages, well enough to distinguish which of two story segments is being read with 74% accuracy. This approach is the first to simultaneously track diverse reading subprocesses during complex story processing and predict the detailed neural representation of diverse story features, ranging from visual word properties to the mention of different story characters and the different actions they perform. We construct brain representation maps that replicate many results from a wide range of classical studies that each focus on one aspect of language processing, and that offer new insights into which types of information are processed by the different areas involved in language processing. Additionally, this approach is promising for studying individual differences: it can be used to create single-subject maps that may potentially be used to measure reading comprehension and diagnose reading disorders.
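The 74% figure above comes from a two-alternative test: the model's predicted activity for two story segments is matched against the two observed fMRI patterns, and the pairing is scored correct if the right assignment wins. The abstract does not specify the similarity measure, so the sketch below assumes Pearson correlation, a common choice for this kind of matching; all variable names are illustrative.

```python
import numpy as np

def two_way_classify(pred_a, pred_b, obs_a, obs_b):
    """Decide whether predicted patterns (pred_a, pred_b) match observed
    patterns (obs_a, obs_b) in that order, scoring the correct pairing
    against the swapped pairing by summed Pearson correlation."""
    def corr(x, y):
        return np.corrcoef(x, y)[0, 1]
    match = corr(pred_a, obs_a) + corr(pred_b, obs_b)  # correct pairing
    swap = corr(pred_a, obs_b) + corr(pred_b, obs_a)   # swapped pairing
    return match > swap

# Synthetic demo: observations are noisy copies of the true signals.
rng = np.random.default_rng(0)
sig_a, sig_b = rng.normal(size=200), rng.normal(size=200)
obs_a = sig_a + 0.5 * rng.normal(size=200)
obs_b = sig_b + 0.5 * rng.normal(size=200)
print(two_way_classify(sig_a, sig_b, obs_a, obs_b))  # → True
```

Accuracy over many such segment pairs is the fraction in which the correct pairing wins; chance is 50%, so 74% reflects substantial predictive signal.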
Computational models of meaning trained on naturally occurring text successfully model human performance on tasks involving simple similarity measures, but they characterize meaning in terms of undifferentiated bags of words or topical dimensions. This has led some to question their psychological plausibility (Murphy, 2002; Schunn, 1999). We present here a fully automatic method for extracting a structured and comprehensive set of concept descriptions directly from an English part-of-speech-tagged corpus. Concepts are characterized by weighted properties, enriched with concept-property types that approximate classical relations such as hypernymy and function. Our model outperforms comparable algorithms in cognitive tasks pertaining not only to concept-internal structures (discovering properties of concepts, grouping properties by property type) but also to inter-concept relations (clustering into superordinates), suggesting the empirical validity of the property-based approach.
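The core move, extracting weighted concept properties from a POS-tagged corpus, can be illustrated at toy scale: collect adjectives that directly modify each noun and weight them by an association score. The sketch below assumes pointwise mutual information as the weight and adjective-noun adjacency as the pattern; the paper's actual method is richer, with typed relations beyond simple modification.

```python
from collections import Counter, defaultdict
import math

def extract_properties(tagged_sents):
    """Toy property extraction from POS-tagged sentences: for each noun (NN),
    collect preceding adjectives (JJ) as properties, weighted by PMI."""
    pair_counts, noun_counts, adj_counts = Counter(), Counter(), Counter()
    total = 0
    for sent in tagged_sents:
        for (w1, t1), (w2, t2) in zip(sent, sent[1:]):
            if t1 == "JJ" and t2 == "NN":  # adjective directly before noun
                pair_counts[(w2, w1)] += 1
                noun_counts[w2] += 1
                adj_counts[w1] += 1
                total += 1
    props = defaultdict(list)
    for (noun, adj), c in pair_counts.items():
        pmi = math.log((c * total) / (noun_counts[noun] * adj_counts[adj]))
        props[noun].append((adj, pmi))
    # Highest-weighted properties first.
    return {n: sorted(ps, key=lambda x: -x[1]) for n, ps in props.items()}

sents = [
    [("the", "DT"), ("sharp", "JJ"), ("knife", "NN")],
    [("a", "DT"), ("sharp", "JJ"), ("knife", "NN")],
    [("the", "DT"), ("loyal", "JJ"), ("dog", "NN")],
]
props = extract_properties(sents)
print(props["knife"][0][0])  # → sharp
```

At corpus scale, the resulting weighted property lists are what get compared against human-generated feature norms in the evaluation tasks the abstract describes.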
These results demonstrate the potential for an intracortical BCI to be used immediately after deployment by people with paralysis, without the need for user learning or extensive system calibration.
Abstract. This paper reports on the factorization of the 768-bit number RSA-768 by the number field sieve factoring method and discusses some implications for RSA.
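The implication for RSA is concrete: once the public modulus n is factored into its primes p and q, the private exponent follows immediately, so any key whose modulus can be factored is broken. A textbook-scale illustration (RSA-768 is a 768-bit modulus; the numbers here are tiny and for demonstration only):

```python
# Toy RSA: recovering the private key once the modulus is factored.
p, q = 61, 53
n = p * q                  # 3233: the public modulus
phi = (p - 1) * (q - 1)    # 3120: Euler's totient, computable only from p, q
e = 17                     # public exponent, coprime to phi
d = pow(e, -1, phi)        # private exponent = modular inverse of e mod phi

msg = 65
cipher = pow(msg, e, n)            # encrypt with the public key (e, n)
assert pow(cipher, d, n) == msg    # factoring n let us decrypt
```

The number field sieve does the hard part, producing p and q; everything after that is the elementary arithmetic above, which is why factoring records directly dictate minimum safe RSA key sizes.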
Objective: When using an intracortical BCI (iBCI), users modulate their neural population activity to move an effector towards a target, stop accurately, and correct for movement errors. We call the rules that govern this modulation a "feedback control policy". A better understanding of these policies may inform the design of higher-performing neural decoders.

Approach: We studied how three participants in the BrainGate2 pilot clinical trial used an iBCI to control a cursor in a two-dimensional target acquisition task. Participants used a velocity decoder with exponential smoothing dynamics. Through offline analyses, we characterized the users' feedback control policies by modeling their neural activity as a function of cursor state and target position. We also tested whether users could adapt their policy to different decoder dynamics by varying the gain (speed scaling) and temporal smoothing parameters of the iBCI.

Main results: We demonstrate that control policy assumptions made in previous studies do not fully describe the policies of our participants. To account for these discrepancies, we propose a new model that captures (1) how the user's neural population activity gradually declines as the cursor approaches the target from afar, then decreases more sharply as the cursor comes into contact with the target, (2) how the user makes constant feedback corrections even when the cursor is on top of the target, and (3) how the user actively accounts for the cursor's current velocity to avoid overshooting the target. Further, we show that users can adapt their control policy to decoder dynamics by attenuating neural modulation when the cursor gain is high and by damping the cursor velocity more strongly when the smoothing dynamics are high.

Significance: Our control policy model may help to build better decoders, understand how neural activity varies during active iBCI control, and produce better simulations of closed-loop iBCI movements.
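The decoder the participants controlled is described as a velocity decoder with exponential smoothing, with gain and smoothing as the manipulated parameters. A minimal sketch of such dynamics, assuming a standard first-order smoothing update (the study's exact update rule and constants are not given in the abstract, so this is illustrative only):

```python
import numpy as np

def smooth_velocity_cursor(decoded_vel, gain=1.0, alpha=0.9, dt=0.02):
    """Integrate decoded velocity commands into a 2D cursor trajectory with
    exponential smoothing: v_t = alpha * v_{t-1} + (1 - alpha) * gain * u_t.
    `gain` scales cursor speed; `alpha` sets the smoothing dynamics."""
    v = np.zeros(2)
    pos = np.zeros(2)
    trajectory = []
    for u in decoded_vel:
        v = alpha * v + (1 - alpha) * gain * np.asarray(u, dtype=float)
        pos = pos + v * dt
        trajectory.append(pos.copy())
    return np.array(trajectory)

# Heavier smoothing (alpha closer to 1) produces more sluggish motion for
# the same neural commands, which users must compensate for:
steps = [(1.0, 0.0)] * 50
slow = smooth_velocity_cursor(steps, alpha=0.95)
fast = smooth_velocity_cursor(steps, alpha=0.5)
print(slow[-1, 0] < fast[-1, 0])  # → True
```

Under this model, the adaptations reported above make intuitive sense: a high gain means small neural modulations already move the cursor fast (so users attenuate modulation), and heavy smoothing means velocity persists (so users damp it more actively near the target).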
Achieving a clearer picture of categorial distinctions in the brain is essential for our understanding of the conceptual lexicon, but much more fine-grained investigations are required in order for this evidence to contribute to lexical research. Here we present a collection of advanced data-mining techniques that allows the category of individual concepts to be decoded from single trials of EEG data. Neural activity was recorded while participants silently named images of mammals and tools, and category could be detected in single trials with an accuracy well above chance, both when considering data from single participants, and when group-training across participants. By aggregating across all trials, single concepts could be correctly assigned to their category with an accuracy of 98%. The pattern of classifications made by the algorithm confirmed that the neural patterns identified are due to conceptual category, and not any of a series of processing-related confounds. The time intervals, frequency bands and scalp locations that proved most informative for prediction permit physiological interpretation: the widespread activation shortly after appearance of the stimulus (from 100 ms) is consistent both with accounts of multi-pass processing, and distributed representations of categories. These methods provide an alternative to fMRI for fine-grained, large-scale investigations of the conceptual lexicon.
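The jump from above-chance single-trial accuracy to 98% after aggregating across trials is the expected behavior of combining many weakly informative, independent decisions. The abstract does not state the aggregation rule, so the sketch below assumes simple majority voting over independent trials, which already shows the effect:

```python
import numpy as np

def majority_vote_accuracy(p_single, n_trials, n_sims=20000, seed=0):
    """Monte Carlo estimate of accuracy when a per-trial classifier with
    accuracy `p_single` is aggregated by majority vote over `n_trials`
    independent trials (ties broken by coin flip)."""
    rng = np.random.default_rng(seed)
    correct = rng.random((n_sims, n_trials)) < p_single
    votes = correct.sum(axis=1)
    wins = (votes > n_trials / 2) | (
        (votes == n_trials / 2) & (rng.random(n_sims) < 0.5)
    )
    return wins.mean()

# A modestly above-chance per-trial rate climbs steeply with aggregation:
print(majority_vote_accuracy(0.65, 1))   # ≈ 0.65
print(majority_vote_accuracy(0.65, 25))  # well above 0.9
```

The real gain depends on how correlated the trial-level errors are; fully independent trials are the best case, but the qualitative pattern, near-ceiling category assignment from many noisy trials, matches the reported 98%.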