2022
DOI: 10.1016/j.compbiomed.2022.106206
E-DU: Deep neural network for multimodal medical image segmentation based on semantic gap compensation

Cited by 5 publications (3 citation statements) · References 25 publications
“…As depicted in Fig. 3, the first category, known as data-replay methods, involves storing a portion of past training data as exemplar memory, such as [26], [27], [28], [29], [30], [31], [32], [33], [34], [35], [36]. The second category, termed data-free methods, includes methods like [37], [38], [39], [40], [41], [42], [43], [44], [45], [46], [47], [48], [49].…”
Section: Semantic Drift (mentioning)
confidence: 99%
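
To make the quoted taxonomy concrete: the "data-replay" family keeps a small exemplar memory of past training data and mixes it into batches for later tasks. Below is a minimal, framework-free Python sketch of that mechanism; the class, the buffer policy (reservoir sampling), and the replay ratio are illustrative assumptions, not the design of any paper cited above.

```python
import random

class ExemplarMemory:
    """Fixed-size buffer keeping a random subset of past training data.

    Reservoir sampling gives every example seen so far an equal chance
    of being retained, regardless of how many tasks have been observed.
    (Illustrative policy; cited methods use various selection schemes.)
    """

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []          # stored (input, label) pairs
        self.num_seen = 0         # total examples offered so far
        self.rng = random.Random(seed)

    def add(self, example):
        self.num_seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            # Replace a random slot with probability capacity / num_seen.
            j = self.rng.randrange(self.num_seen)
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, k):
        k = min(k, len(self.buffer))
        return self.rng.sample(self.buffer, k)


def replay_batches(task_data, memory, batch_size=8, replay_ratio=0.5):
    """Yield mixed batches: new-task examples plus replayed exemplars."""
    n_replay = int(batch_size * replay_ratio)
    n_new = batch_size - n_replay
    for start in range(0, len(task_data), n_new):
        new_part = task_data[start:start + n_new]
        yield new_part + memory.sample(n_replay)
        for ex in new_part:       # only new data enters the buffer
            memory.add(ex)
```

Mixing replayed exemplars into every batch is what distinguishes this family from the "data-free" methods in the same quote, which must preserve old knowledge without access to stored samples.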
“…Here we would like to discuss the advantage and necessity of continual learning based on specified models during the period of emerging large models. Although recent large-model forms [59], [60] achieve fair zero-shot learning ability, they often lack the ability to classify targets with semantic understanding like humans. Another significant concern is cost.…”
[Rows of a methods table from the citing paper were flattened into this excerpt; the recoverable content is:
- (category label not recovered): refs [26], [34], [35], [50], [51]
- Generative-replay (generative-data, generative-feature): no real data stored, customized data replay; heavy reliance on generative quality, high space complexity. Refs [28], [29], [32], [36], [52]
- Self-supervised (contrastive learning, pseudo-labeling, foundation-model driven): strong adaptability, exemplar-memory free; high training cost, hard to converge. Refs [27], [41], [48], [53], [54]
- Regularization-based (other columns not recovered): refs [39], [40], [43], [44], [55]
- Dynamic-architecture (parameter allocation, architecture decomposition, modular network): high model flexibility, high adaptability to diverse data; network parameter count grows gradually, high space complexity. Refs [30], [46], [56], [57], [58]]
Section: Semantic Drift (mentioning)
confidence: 99%
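
The table recovered above names "regularization-based" methods without detail. A well-known instance of that family is an elastic weight consolidation (EWC) style quadratic penalty that anchors parameters deemed important for earlier tasks; the sketch below illustrates that general recipe under stated assumptions and is not the formulation of any cited work.

```python
def ewc_penalty(params, anchor_params, importance, strength=1.0):
    """EWC-style quadratic penalty for regularization-based continual learning.

    params        : current parameter values (list of floats)
    anchor_params : parameter values frozen after the previous task
    importance    : per-parameter importance weights (e.g. Fisher estimates)
    strength      : overall regularization coefficient (lambda)
    """
    return 0.5 * strength * sum(
        w * (p - a) ** 2
        for p, a, w in zip(params, anchor_params, importance)
    )


def total_loss(task_loss, params, anchor_params, importance, strength=1.0):
    """New-task loss plus the penalty that discourages forgetting."""
    return task_loss + ewc_penalty(params, anchor_params, importance, strength)


# Example: drift from the anchors is penalized in proportion to how
# important each parameter was for earlier tasks (values are made up).
params = [0.9, -0.2, 1.5]
anchors = [1.0, 0.0, 1.5]
fisher = [10.0, 0.1, 5.0]
print(ewc_penalty(params, anchors, fisher, strength=0.4))
```

Because the penalty needs no stored samples, methods of this kind fall on the data-free side of the taxonomy quoted earlier.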