2019
DOI: 10.1007/978-3-030-32692-0_31

Improving Whole-Brain Neural Decoding of fMRI with Domain Adaptation

Abstract: This is a repository copy of Improving whole-brain neural decoding of fMRI with domain adaptation.

Cited by 6 publications (4 citation statements)
References 33 publications
“…Among the surveyed 31 symmetric approaches, direct approaches operated on the feature representations across domains by minimizing their differences (via mutual information [ 63 ], maximum mean discrepancy [ 46 , 49 , 64 ], Euclidean distance [ 65 , 66 , 67 , 68 , 69 , 70 , 71 ], Wasserstein distance [ 72 ], and average likelihood [ 73 ]), maximizing their correlation [ 74 , 75 ] or covariance [ 36 ], and introducing sparsity with L1/L2 norms [ 42 , 76 ]. On the other hand, indirect approaches were applied via adversarial training [ 28 , 41 , 54 , 77 , 78 , 79 , 80 , 81 , 82 , 83 , 84 , 85 ], and knowledge distillation [ 86 ].…”
Section: Results (mentioning)
Confidence: 99%
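The feature-alignment penalties listed in the statement above (maximum mean discrepancy, Euclidean or Wasserstein distances, covariance matching) all amount to adding a divergence term between source- and target-domain feature representations to the training loss. Below is a minimal sketch of one such term, a single-bandwidth Gaussian-kernel MMD penalty in PyTorch; the function name, kernel choice, and weighting factor `lam` are illustrative assumptions, not taken from the cited works.

# Minimal sketch (not from the cited papers): a biased MMD^2 estimate with a
# single Gaussian kernel, usable as a domain-alignment penalty.
import torch

def gaussian_mmd(source_feats: torch.Tensor, target_feats: torch.Tensor,
                 bandwidth: float = 1.0) -> torch.Tensor:
    def rbf(a, b):
        # Pairwise squared Euclidean distances -> Gaussian (RBF) kernel values.
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2.0 * bandwidth ** 2))

    k_ss = rbf(source_feats, source_feats).mean()
    k_tt = rbf(target_feats, target_feats).mean()
    k_st = rbf(source_feats, target_feats).mean()
    return k_ss + k_tt - 2.0 * k_st

# Typical use (illustrative): task loss on labeled source data plus a weighted
# alignment term computed on features from both domains.
#   loss = cross_entropy(logits_src, labels_src) + lam * gaussian_mmd(f_src, f_tgt)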
“…The most common approach to apply a prior-sharing strategy—and, in general, transfer learning—was fine-tuning all the parameters of a pretrained CNN [ 29 , 31 , 32 , 33 , 35 , 39 , 71 , 87 , 88 , 89 , 90 , 91 , 92 , 93 , 94 , 95 , 96 , 97 , 98 , 99 , 100 , 101 , 102 , 103 , 104 , 105 , 106 , 107 , 108 , 109 , 110 , 111 , 112 , 113 , 114 , 115 , 116 , 117 , 118 , 119 ] (80% of all prior-sharing methods). Other approaches utilized Bayesian graphical models [ 37 , 38 , 120 , 121 ], graph neural networks [ 122 ], kernel methods [ 64 , 123 ], multilayer perceptrons [ 124 ], and Pearson-correlation methods [ 125 ]. Additionally, Sato et al […”
Section: Results (mentioning)
Confidence: 99%
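The prior-sharing strategy described above, fine-tuning all parameters of a pretrained CNN on the target data, is the standard transfer-learning recipe. A minimal sketch using PyTorch/torchvision follows, assuming an ImageNet-pretrained ResNet-18 and a small target classification task; the model choice, class count, and learning rate are illustrative and not taken from the surveyed papers.

import torch
from torchvision import models

num_target_classes = 4  # illustrative: e.g. four decoding categories

# Load a CNN pretrained on a large source dataset (here: ImageNet weights).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Replace the classification head to match the target task.
model.fc = torch.nn.Linear(model.fc.in_features, num_target_classes)

# Fine-tune *all* parameters on the target data, not only the new head.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)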
“…Insufficient training data and domain-adaptation [17] are the focus of many recent machine learning works, including transfer learning [18], unsupervised and self-supervised learning [19,20,21,22], semi-supervised and transductive learning [23,24,25,26]. Nevertheless, these approaches often assume that there is sufficient labeled data in the 'source domain', and that the problem is the generalization to the unlabeled 'target domain'.…”
Section: Introduction (mentioning)
Confidence: 99%