2023
DOI: 10.1016/j.ijleo.2022.170277
MLKCA-Unet: Multiscale large-kernel convolution and attention in Unet for spine MRI segmentation

Cited by 13 publications (5 citation statements)
References 22 publications
“…An attention module enhances important features and suppresses unimportant features to improve the representational ability of the network [27,28,40,41]. In order to effectively combine the global features extracted by Transformer with the local features extracted by the U-Net encoder, we design an attention mechanism to fuse the features of the two branches.…”
Section: Attention Branching Fusion Module
confidence: 99%
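To make the fusion idea in this statement concrete, here is a minimal PyTorch sketch of gating two feature branches with channel attention. The class name AttentionFusion and the specific gating design are illustrative assumptions, not the cited paper's actual module.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Channel-attention fusion of a global (Transformer) branch and a
    local (U-Net encoder) branch. Hypothetical sketch only."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.mlp = nn.Sequential(
            nn.Conv2d(2 * channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, f_global, f_local):
        # Per-channel gate computed jointly from both branches.
        w = self.mlp(self.pool(torch.cat([f_global, f_local], dim=1)))
        # w gates the global branch; (1 - w) gates the local branch.
        return w * f_global + (1.0 - w) * f_local

# Usage: fuse two (B, C, H, W) feature maps of equal shape.
fuse = AttentionFusion(channels=64)
out = fuse(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
assert out.shape == (2, 64, 32, 32)
```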
“…For coarse blood vessels, the distance between target pixels is remote due to their large size. To acquire long-range contextual information with lower computational overhead, group convolution (GConv) with a large kernel [44] is used in the coarse branch. However, for fine vessels, using a large convolution kernel may introduce unnecessary background information and noise, resulting in the loss of capillary details.…”
Section: Coarse and Fine Feature Aggregation
confidence: 99%
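The cost argument for large-kernel group convolution can be seen directly in PyTorch: a grouped (here depthwise) 13x13 convolution keeps the large receptive field at a fraction of the weights of its dense counterpart. The kernel size and channel count below are arbitrary illustrative values.

```python
import torch
import torch.nn as nn

# Dense 13x13 convolution vs. a grouped (depthwise) 13x13 convolution.
dense = nn.Conv2d(64, 64, kernel_size=13, padding=6)             # 64*64*13*13 weights
gconv = nn.Conv2d(64, 64, kernel_size=13, padding=6, groups=64)  # 64*13*13 weights

x = torch.randn(1, 64, 128, 128)
assert gconv(x).shape == x.shape  # "same" padding preserves spatial size

n_dense = sum(p.numel() for p in dense.parameters())
n_gconv = sum(p.numel() for p in gconv.parameters())
print(f"dense: {n_dense:,} params, grouped: {n_gconv:,} params")
# The grouped kernel keeps the 13x13 receptive field at ~1/64 of the weights.
```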
“…However, given that the extracted multimodal features vary in importance, it is necessary to consider the weight effect, so many scholars have introduced an attention mechanism in the part of the jump connection to emphasize the key features and weaken the redundant features [20,21], to improve the validity and reliability of the model. Wang et al [22] proposed a multi-scale convolutional kernel UNet network for pattern recognition by adding a CBAM attention mechanism to the jump connection section. Dhiraj et al [23] proposed an attention ResUNet to automatically segment critical images by adding attention gates to the jump connection part and filtering redundant features.…”
Section: Introduction
confidence: 99%
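The skip-connection attention described in this statement can be sketched as an attention gate in the spirit of Attention U-Net: the decoder's gating signal weights the encoder's skip features before concatenation. The class and argument names below are hypothetical, not taken from the works cited above.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Attention gate on a U-Net skip connection. Hypothetical sketch."""
    def __init__(self, skip_ch: int, gate_ch: int, inter_ch: int):
        super().__init__()
        self.theta = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)  # skip projection
        self.phi = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)    # gate projection
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)          # 1-channel attention map
        self.relu = nn.ReLU(inplace=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, skip, gate):
        # Assumes skip and gate already share spatial size; in practice one
        # of them is resampled to match before this addition.
        attn = self.sigmoid(self.psi(self.relu(self.theta(skip) + self.phi(gate))))
        return skip * attn  # emphasize relevant skip features, damp the rest

gate = AttentionGate(skip_ch=64, gate_ch=128, inter_ch=32)
filtered = gate(torch.randn(1, 64, 32, 32), torch.randn(1, 128, 32, 32))
assert filtered.shape == (1, 64, 32, 32)
```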