Summary: Individuals constantly encounter feedback from others and process it in ways that maintain positive situational (state) self-esteem relative to their semantic-based (trait) self-esteem. Episodic and semantic-driven processes may modulate feedback in two distinct ways to maintain general self-esteem levels, yet it remains unclear how these processes operate while individuals receive social feedback. Drawing on neural regions associated with semantic self-oriented processing and basic encoding (the medial prefrontal cortex (mPFC) and posterior cingulate cortex (PCC), respectively), and using time-frequency and Granger causality analyses to assess mPFC-PCC interactions, this study examined how the encoding of social feedback modulated individuals' (N = 45) post-task state self-esteem relative to their trait self-esteem. Findings highlight a dynamic interplay between mPFC and PCC that modulates state self-esteem in relation to trait self-esteem, maintaining high self-esteem both in the moment and over time.
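The abstract names Granger causality as the tool for assessing directed mPFC-PCC interactions. The study's actual pipeline is not described here; as an illustrative sketch only, the core idea of bivariate, time-domain Granger causality (does the past of one signal improve prediction of another beyond the latter's own past?) can be written as follows. The function name, lag order, and all parameters are hypothetical, not taken from the paper:

```python
import numpy as np

def granger_causality(x, y, lag=2):
    """Illustrative bivariate Granger causality from x to y.
    Compares predicting y from its own past (restricted model)
    against its own past plus x's past (full model), fit by
    ordinary least squares. Returns ln(restricted residual
    variance / full residual variance); values > 0 suggest
    x carries predictive information about y."""
    n = len(y)
    Y = y[lag:]
    # Lagged regressors: y's own past, then y's and x's past together.
    own = np.column_stack([y[lag - k:n - k] for k in range(1, lag + 1)])
    both = np.column_stack(
        [own] + [x[lag - k:n - k] for k in range(1, lag + 1)]
    )

    def resid_var(design):
        # Add an intercept column and solve the least-squares fit.
        X1 = np.column_stack([np.ones(len(Y)), design])
        beta, *_ = np.linalg.lstsq(X1, Y, rcond=None)
        return (Y - X1 @ beta).var()

    return np.log(resid_var(own) / resid_var(both))
```

On simulated data where one signal drives the other with a one-sample delay, the measure is clearly asymmetric: the driving direction yields a large positive value while the reverse direction stays near zero.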
Recent work within neuroimaging consortia has aimed to identify reproducible, and often subtle, brain signatures of psychiatric or neurological conditions. To allow for high-powered brain imaging analyses, it is often necessary to pool MR images that were acquired with different protocols across multiple scanners. Current retrospective harmonization techniques have shown promise in removing site-related image variation. However, most statistical approaches may over-correct for technical, scanning-related variation, as they cannot distinguish between confounded acquisition-based variability and site-related population variability. Such statistical methods often require that datasets contain subjects or patient groups with similar clinical or demographic information to isolate the acquisition-based variability. To overcome this limitation, we treat site-related magnetic resonance (MR) imaging harmonization as a style transfer problem rather than a domain transfer problem. Using a fully unsupervised deep-learning framework based on a generative adversarial network (GAN), we show that MR images can be harmonized by inserting the style information encoded from a single reference image, without knowing their site/scanner labels a priori. We trained our model using data from five large-scale multisite datasets with varied demographics. Results demonstrate that our style-encoding model can harmonize MR images and match intensity profiles without relying on traveling subjects. The model also avoids the need to control for clinical, diagnostic, or demographic information. We highlight the effectiveness of our method for clinical research by comparing extracted cortical and subcortical features, brain-age estimates, and case–control effect sizes before and after harmonization. We show that our harmonization removes site-related variance while preserving anatomical information and clinically meaningful patterns.
We further demonstrate that, with a diverse training set, our method successfully harmonizes MR images collected from unseen scanners and protocols, suggesting a promising tool for ongoing collaborative studies. Source code is released at USC-IGC/style_transfer_harmonization on GitHub.
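The abstract describes harmonization as inserting style information encoded from a single reference image into the content of the source image. The paper's exact GAN architecture is not reproduced here; as a minimal sketch of the style-injection idea common to such networks, adaptive instance normalization (AdaIN) replaces the channel-wise feature statistics of the source (content) image with those of the reference (style) image. All names and shapes below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def adain(content_feat, style_feat, eps=1e-5):
    """Adaptive instance normalization on feature maps of shape
    (channels, H, W): normalize the content features per channel,
    then rescale and shift them with the style image's per-channel
    mean and standard deviation. The result keeps the content's
    spatial structure but adopts the style's intensity statistics."""
    c_mean = content_feat.mean(axis=(1, 2), keepdims=True)
    c_std = content_feat.std(axis=(1, 2), keepdims=True)
    s_mean = style_feat.mean(axis=(1, 2), keepdims=True)
    s_std = style_feat.std(axis=(1, 2), keepdims=True)
    normalized = (content_feat - c_mean) / (c_std + eps)
    return normalized * s_std + s_mean
```

In a full style-encoding GAN, this operation would sit inside the generator, with the style statistics produced by a learned encoder from the reference image rather than computed directly; the sketch above shows only the statistic-matching step.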