2022
DOI: 10.48550/arxiv.2204.09225
Preprint

Disentangling Spatial-Temporal Functional Brain Networks via Twin-Transformers

Cited by 2 publications (2 citation statements)
References 0 publications

“…They proved that the Transformer contributed to the early diagnosis of ASD, AD, depression, and other neurological diseases. Yu et al [184] proposed a twin-Transformer framework that builds pairwise Transformers to analyze spatial and temporal information. The twin-Transformer framework derives the functional network through self-supervised training and was successfully applied to motor tasks.…”
Section: fMRI Modeling
Mentioning, confidence: 99%
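The statement above describes the framework only at a high level: two parallel Transformer encoders, one attending over time and one over space, trained self-supervised by reconstructing the input signal matrix from their outputs. A minimal PyTorch sketch of that layout follows; all module names, dimensions, and the reconstruction objective are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch of a twin-Transformer layout: a temporal branch and a spatial branch
# whose low-rank outputs multiply back into a reconstruction of the input,
# giving a self-supervised training signal. Names and sizes are assumptions.
import torch
import torch.nn as nn

class TwinTransformerSketch(nn.Module):
    def __init__(self, n_timepoints: int, n_voxels: int,
                 d_model: int = 128, n_components: int = 16):
        super().__init__()
        # Temporal branch: treats timepoints (rows of the signal matrix) as tokens.
        self.temporal_proj = nn.Linear(n_voxels, d_model)
        self.temporal_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2)
        self.temporal_head = nn.Linear(d_model, n_components)
        # Spatial branch: treats voxels/ROIs (columns) as tokens.
        self.spatial_proj = nn.Linear(n_timepoints, d_model)
        self.spatial_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2)
        self.spatial_head = nn.Linear(d_model, n_components)

    def forward(self, x: torch.Tensor):
        # x: (batch, n_timepoints, n_voxels) fMRI signal matrix.
        temporal = self.temporal_head(self.temporal_enc(self.temporal_proj(x)))
        # Transpose so the spatial branch attends across voxels.
        spatial = self.spatial_head(
            self.spatial_enc(self.spatial_proj(x.transpose(1, 2))))
        # Temporal (T x K) times spatial (V x K) transposed reconstructs (T x V).
        recon = temporal @ spatial.transpose(1, 2)
        return temporal, spatial, recon

x = torch.randn(2, 100, 400)  # 2 scans, 100 timepoints, 400 voxels/ROIs
model = TwinTransformerSketch(n_timepoints=100, n_voxels=400)
temporal, spatial, recon = model(x)
loss = nn.functional.mse_loss(recon, x)  # assumed reconstruction objective
```

In this reading, the spatial components play the role of the disentangled functional networks and the temporal components their time courses; the paper may add further losses or constraints beyond this plain reconstruction.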
“…These visual models are pre-trained on massive image datasets and possess the ability to understand the content of images and extract rich semantic information. Examples of pre-trained visual models include ViT [29], Swin Transformer [30], VideoMAE V2 [31] and others [32][33][34][35][36][37][38][39][40][41]. By learning representations and features from a large amount of data, these models enable computers to more effectively comprehend and analyze images for diverse downstream applications [42][43][44][45][46][47][48].…”
Section: Introduction
Mentioning, confidence: 99%
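The statement above concerns pre-trained backbones as generic feature extractors rather than any specific algorithm, but a short sketch may make the idea concrete. The snippet below loads a pre-trained ViT and extracts pooled semantic features; the model name and the choice of the timm library are assumptions for illustration, not something the cited works prescribe.

```python
# Using a pre-trained vision Transformer as a frozen feature extractor.
# Model name and library choice are illustrative assumptions.
import timm
import torch

# num_classes=0 strips the classification head so the forward pass
# returns pooled feature vectors instead of logits.
model = timm.create_model('vit_base_patch16_224', pretrained=True, num_classes=0)
model.eval()

images = torch.randn(4, 3, 224, 224)   # dummy batch of 224x224 RGB images
with torch.no_grad():
    features = model(images)            # (4, 768) semantic feature vectors
```

Features extracted this way can then feed a lightweight downstream head, which is the reuse pattern the citing paper describes.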