Accurate quantification of functional magnetic resonance imaging (fMRI) activation maps can be hampered by spatio-temporally varying task-correlated motion (TCM) artifacts in certain task paradigms (e.g., overt speech). Such real-world tasks are relevant for characterizing longitudinal brain reorganization poststroke, and removal of TCM artifacts is vital for improved clinical interpretation and translation. In this study, we developed a novel independent component analysis (ICA)-based approach to denoise spatio-temporally varying TCM artifacts in 14 persons with aphasia who participated in an overt language fMRI paradigm. We compared the new methodology with existing approaches, including "standard" volume registration, a nonselective ICA-based motion-correction package (AROMA), and a combination of the novel approach with AROMA. Results show that the proposed methodology outperforms the other approaches in removing TCM-related false-positive activity (i.e., improved detection power) with high spatial specificity. The proposed method was also effective in balancing removal of TCM-related trial-by-trial variability against signal retention. Finally, we show that the TCM artifact is related to clinical metrics such as speech fluency and aphasia severity, and we discuss the implications of TCM denoising for this relationship. Overall, our work suggests that routine denoising packages based on bulk head motion cannot effectively account for spatio-temporally varying TCM. Further, the proposed TCM denoising approach requires a one-time front-end effort to hand-label components and train classifiers, which can then be cost-effectively applied to denoise large clinical data sets.
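To make the general idea concrete, below is a minimal, hypothetical sketch of ICA-based denoising, assuming a (time × voxel) data matrix and a known motion trace. A simple correlation threshold stands in for the hand-labeled, trained classifiers the abstract describes; all names, sizes, and thresholds are illustrative, not the authors' pipeline.

```python
# Hypothetical sketch of ICA-based TCM denoising (illustrative only):
# decompose the fMRI time series with ICA, flag components whose time
# courses track a motion trace, and reconstruct the data without them.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_timepoints, n_voxels = 200, 500

# Synthetic data: task signal + task-correlated motion (TCM) + noise.
task = np.sin(np.linspace(0, 8 * np.pi, n_timepoints))
motion = np.roll(task, 3) + 0.3 * rng.standard_normal(n_timepoints)
data = (np.outer(task, 0.5 * rng.standard_normal(n_voxels))
        + np.outer(motion, 0.8 * rng.standard_normal(n_voxels))
        + rng.standard_normal((n_timepoints, n_voxels)))

# ICA over time: rows of `sources` are timepoints, columns are component
# time courses; `mixing_` holds the corresponding spatial maps.
ica = FastICA(n_components=20, random_state=0)
sources = ica.fit_transform(data)        # (time, components)

# Flag components whose time course correlates strongly with motion.
flagged = [k for k in range(sources.shape[1])
           if abs(np.corrcoef(sources[:, k], motion)[0, 1]) > 0.6]

# Reconstruct the data with the flagged components zeroed out.
clean = sources.copy()
clean[:, flagged] = 0.0
denoised = clean @ ica.mixing_.T + ica.mean_

print(f"flagged {len(flagged)} of {sources.shape[1]} components")
```

Note that because the synthetic motion trace is itself correlated with the task, a naive threshold risks discarding task signal along with the artifact; this is exactly the removal-versus-retention trade-off the abstract highlights as motivation for trained classifiers.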
Tools and objects afford numerous action possibilities that are narrowed by the task-related internal and external constraints presented to the observer. Action-hierarchy accounts propose that goals occupy the higher levels of the hierarchy while kinematic patterns occupy the lower levels. Prior work suggests that tool-object perception is heavily influenced by grasp and action context. The current study evaluated whether this action hierarchy can be perceptually identified using eye tracking during tool-object observation. We hypothesized that gaze patterns would reveal a perceptual hierarchy that depends on the observed task context and grasp constraints. Participants viewed tool-object scenes with two types of constraints: task-context constraints and grasp constraints. Task-context constraints consisted of correct (e.g., frying pan-spatula) and incorrect tool-object pairings (e.g., stapler-spatula). Grasp constraints involved modified tool orientations, which required participants to understand how initially awkward grasp postures can help achieve the task. The visual scene contained three areas of interest (AOIs): the object, the functional tool-end (e.g., spoon handle), and the manipulative tool-end (e.g., spoon bowl). Results revealed two distinct processes depending on the stimulus constraints. Goal-oriented encoding, an attentional bias toward the object and the manipulative tool-end, was observed when grasp did not lead to meaningful tool use. In images where grasp posture was critical to action performance, the attentional bias was primarily between the object and the functional tool-end, suggesting means-related encoding of the graspable properties of the object. This study expands on previous work and demonstrates a flexible constraint hierarchy that depends on the observed task constraints.
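As an illustration of the kind of AOI analysis this design implies, here is a small, hypothetical sketch: it assigns fixations to the three AOIs named above and computes each AOI's share of total dwell time, the sort of measure from which an object/manipulative-end versus object/functional-end bias could be read. All coordinates, field names, and values are illustrative assumptions, not taken from the study.

```python
# Hypothetical AOI-based gaze analysis: given fixations (x, y, duration)
# and rectangular AOIs, compute each AOI's share of total fixation time.
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float
    y: float
    duration_ms: float

# AOIs as (x_min, y_min, x_max, y_max) rectangles on the stimulus image.
AOIS = {
    "object":           (100, 200, 300, 400),   # e.g., the frying pan
    "functional_end":   (400, 250, 500, 350),   # e.g., the spatula handle
    "manipulative_end": (500, 250, 620, 350),   # e.g., the spatula blade
}

def dwell_proportions(fixations):
    """Return each AOI's share of total fixation time (0..1)."""
    totals = {name: 0.0 for name in AOIS}
    grand_total = 0.0
    for fix in fixations:
        grand_total += fix.duration_ms
        for name, (x0, y0, x1, y1) in AOIS.items():
            if x0 <= fix.x <= x1 and y0 <= fix.y <= y1:
                totals[name] += fix.duration_ms
                break  # AOIs are assumed non-overlapping
    return {name: (t / grand_total if grand_total else 0.0)
            for name, t in totals.items()}

# Usage: a bias toward object + manipulative_end would indicate goal-oriented
# encoding; a bias toward object + functional_end, means-related encoding.
fixes = [Fixation(150, 300, 250), Fixation(550, 300, 400), Fixation(450, 300, 120)]
print(dwell_proportions(fixes))
```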