2023
DOI: 10.1109/access.2023.3234997
Multi-Scale Deep Information and Adaptive Attention Mechanism Based Coronary Reconstruction of Superior Mesenteric Artery

Abstract: Vascular images contain a lot of key information, such as length, diameter, and distribution. Thus, reconstruction of vessels such as the Superior Mesenteric Artery is critical for the diagnosis of some abdominal diseases. However, automatic segmentation of abdominal vessels is extremely challenging due to the multi-scale nature of vessels, boundary blurring, low contrast, artifact disturbance, and vascular cracks in Maximum Intensity Projection images. In this work, we propose a dual attention guided method where…

Cited by 2 publications (2 citation statements)
References 43 publications
“…The CT scans used in the experiment were obtained from a Siemens dual-source CT scanner (Somatom Force, Siemens Healthcare, Forchheim, Germany). The specific CT acquisition parameters were consistent with our previous work (Zhang et al, 2023).…”
Section: Dataset (supporting)
confidence: 79%
“… Overview of the structure of the proposed PE-Net: Two encoding layers consisting of CNN and TAGT, a skip connection layer consisting of EFC and FFB, and one decoding layer. The parallelly encoded features are fused using the Channel Attention-based method, as introduced in our previous work ( Zhang et al, 2023 )—a block named Feature Fusion Module (FFM). Before the operation, the features generated by TAGT are broadcasted to align their channel dimensions with the convolutional features.…”
Section: Methods (mentioning)
confidence: 99%