Event-related potentials elicited by visual stimuli presented after voluntary actions were recorded to examine how people anticipate the effects of their actions. Participants pressed a button in response to a cue stimulus (L or R), either in the fixed condition, where they always pressed a center button, or in the choice condition, where they selectively pressed the corresponding left or right button. Immediately after the button press, a second stimulus (left or right) was presented visually to confirm that the action had been registered. When the second stimulus did not match the cue stimulus (p = .20), a late positive potential (LPP) with a posterior scalp distribution occurred in the 500-700 ms latency range. The amplitude of this mismatch-related LPP was larger in the choice condition than in the fixed condition. These results suggest that the cognitive mismatch between expected and actual action effects is reflected in the LPP, and that selecting a specific action strengthens the expectation of its effect.
Objective and quantitative assessment methods are needed for the fitting of hearing aid parameters. This paper proposes a novel speech discrimination assessment method using electroencephalograms (EEGs). The method utilizes event-related potentials (ERPs) to visual stimuli instead of the conventionally used auditory stimuli. A spoken letter is played through a speaker as an initial auditory stimulus. The same letter is then visually displayed on a screen (match condition), or a different letter is displayed (mismatch condition). The participant judges whether the two stimuli represent the same letter. A P3 component is elicited when a participant detects a match between the auditory and visual stimuli, and a late positive potential (LPP) component when a mismatch is detected. The hearing ability of each participant can be estimated objectively via analysis of these ERP components.
Monitoring biosignals has become more people-friendly since the development of wearable electro-conductive cloth. This paper proposes an application using wearable EMG sensors that estimates the fall risk of elderly people based on co-contraction of the lower limb during walking. Fifty-two healthy elderly people participated in a gait test with continuous EMG recording of the thigh (rectus femoris and biceps femoris) and cruris (tibialis anterior and gastrocnemius). The participants were categorized into a "faller" group and a "non-faller" group based on their experience of falls in the previous year. Co-contraction of the thigh and cruris was computed from the EMG and then used to estimate falling experience with a linear discriminant method. The results showed that thigh co-contraction during the stance phase can predict falling experience with 65% accuracy.
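The abstract does not state which co-contraction formula was used; a minimal sketch, assuming the common min/sum co-contraction index over rectified, normalized EMG envelopes of an antagonist pair (the function name and example values are illustrative, not from the paper):

```python
import numpy as np

def cocontraction_index(emg_a, emg_b):
    """Co-contraction index of an antagonist muscle pair.

    At each sample: 2 * min(a, b) / (a + b), averaged over the window.
    Inputs are assumed to be rectified, smoothed, amplitude-normalized
    EMG envelopes of equal length. Returns a value in [0, 1], where 1
    means the two muscles are equally active throughout.
    """
    a = np.asarray(emg_a, dtype=float)
    b = np.asarray(emg_b, dtype=float)
    denom = np.maximum(a + b, 1e-12)  # guard against division by zero
    cci = 2.0 * np.minimum(a, b) / denom
    return float(np.mean(cci))

# Illustrative thigh pair (rectus femoris vs. biceps femoris) during stance
rf = [0.4, 0.5, 0.6, 0.5]
bf = [0.2, 0.5, 0.3, 0.5]
print(cocontraction_index(rf, bf))
```

A per-phase index of this kind (stance vs. swing) could then feed a linear discriminant classifier to separate faller and non-faller groups, as the abstract describes.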
In this paper, we present our cross-wire assist concept for assisting a single joint in multiple degrees of freedom. It comprises four motor-driven Bowden cable actuators (wires) per assisted joint, with the wires crossed over each other at the front and rear. Simulation results show that selectively actuating a subset of these wires allows torque to be generated in 6 directions, with the torque magnitude dependent on joint angle. We have built a fully wearable prototype of our assistance device for both hip joints, with 8 high-speed, independently controllable actuators, each providing force up to 100 N. The prototype has a total mass of 9.3 kg and is shown in motion-capture testing to generate movement in 6 directions around the user's joint, including internal and external rotation. A mobile, multi-degree-of-freedom cross-wire assistance system will enable assistive devices to better match human movement, allowing support and rehabilitation in tasks beyond straight-line walking.
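The abstract does not give the torque model; a minimal rigid-body sketch of why selective wire actuation yields directional torque, assuming straight line-of-action wires and summing each wire's moment about the joint center (all geometry and values here are illustrative assumptions):

```python
import numpy as np

def joint_torque(attachments, directions, tensions):
    """Net torque about the joint center from a set of wires.

    attachments : (n, 3) wire attachment points relative to the joint (m)
    directions  : (n, 3) unit vectors along each wire, toward the motor
    tensions    : (n,) wire tensions (N); zero for unactuated wires

    Each wire contributes tau_i = r_i x (T_i * d_i). Crossing the wires
    changes d_i, so different tension subsets produce torques in
    different directions.
    """
    r = np.asarray(attachments, dtype=float)
    d = np.asarray(directions, dtype=float)
    T = np.asarray(tensions, dtype=float)
    return np.sum(np.cross(r, d * T[:, None]), axis=0)

# One wire, attached 10 cm below the joint, pulled forward at 100 N:
tau = joint_torque([[0.0, 0.0, -0.1]], [[1.0, 0.0, 0.0]], [100.0])
print(tau)  # torque purely about the y-axis
```

With four such wires per joint, enumerating tension subsets in simulation would reproduce the 6 achievable torque directions the abstract reports, with magnitudes varying as the attachment geometry changes with joint angle.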
When a voluntary action is followed by an unexpected stimulus, a late positive potential (LPP) with a posterior scalp distribution is elicited in the 500-700 ms latency range. In the present study, we examined what type of mismatch between expectations and action outcomes is reflected by the LPP. Twelve student volunteers participated in a task simulating the choice of TV programs. After choosing one of three options displayed as a cue stimulus, they viewed a second stimulus (a still TV image). To manipulate the type of expectation, three cue conditions were used: a thumbnail image condition (three small TV images), a category label condition (three words), and a no-cue condition (three question marks). Over trials, the second stimulus either matched (p = .80) or mismatched (p = .20) the chosen option. Compared with matched TV images, mismatched TV images elicited a larger LPP (500-700 ms) in the thumbnail image and category label conditions. In addition, a larger centroparietal P3 (400-450 ms) was elicited by mismatched TV images in the thumbnail image condition alone. These results suggest that the LPP reflects a conceptual mismatch between a category-based expectation and an ensuing action outcome, whereas the P3 reflects a perceptual mismatch between an image-based expectation and an action outcome.
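Component amplitudes in studies like those above are typically quantified as the mean voltage in a fixed post-stimulus latency window; a minimal sketch of that step, assuming a single-channel average waveform and a sampling rate of 250 Hz (the abstracts do not report acquisition parameters, so all names and values here are illustrative):

```python
import numpy as np

def mean_amplitude(erp, fs, onset, window=(0.5, 0.7)):
    """Mean voltage of an ERP waveform in a post-stimulus latency window.

    erp    : 1-D array of voltage samples for one channel (µV)
    fs     : sampling rate in Hz
    onset  : sample index of the second-stimulus onset
    window : (start, end) in seconds after onset; the default covers
             the 500-700 ms LPP window; (0.4, 0.45) would cover the
             centroparietal P3 window.
    """
    start = onset + int(window[0] * fs)
    end = onset + int(window[1] * fs)
    return float(np.mean(erp[start:end]))

# Synthetic waveform: a flat baseline with a 4 µV deflection at 500-700 ms
fs, onset = 250, 100
erp = np.zeros(500)
erp[onset + 125 : onset + 175] = 4.0
print(mean_amplitude(erp, fs, onset))
```

Condition effects (e.g., mismatch minus match, or choice versus fixed) are then tested on these per-participant window means.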