2023 46th MIPRO ICT and Electronics Convention (MIPRO)
DOI: 10.23919/mipro57284.2023.10159914
Attention-based U-net: Joint Segmentation of Layers and Fluids from Retinal OCT Images

Cited by 4 publications (3 citation statements)
References: 18 publications
“…Unlike the conventional U-Net framework, which processes all regions of an input image indiscriminately, the Attention U-Net introduces attention gates. These gates implement a selective prioritization strategy, focusing on areas containing the left ventricle, thus improving segmentation accuracy [27], [30]. This focused approach is particularly beneficial when image clarity is compromised or when the left ventricle's boundaries are not clearly visible. Additionally, it enhances the model's ability to accurately define complex cardiac structures.…”
Section: Results (mentioning)
confidence: 99%
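The additive attention gate described in the statement above can be summarised in a short sketch. The PyTorch module below is a minimal, illustrative implementation of such a gate (skip features from the encoder weighted by a mask computed from a decoder gating signal); the class name, channel sizes, and the assumption that both inputs share the same spatial resolution are placeholders for illustration, not code from the cited papers.

```python
import torch
import torch.nn as nn


class AttentionGate(nn.Module):
    """Additive attention gate (minimal sketch): weights encoder skip
    features by a mask derived from the decoder gating signal."""

    def __init__(self, in_channels, gating_channels, inter_channels):
        super().__init__()
        # 1x1 convolutions project skip features and gating signal
        # into a common intermediate space.
        self.theta_x = nn.Conv2d(in_channels, inter_channels, kernel_size=1)
        self.phi_g = nn.Conv2d(gating_channels, inter_channels, kernel_size=1)
        # psi maps the combined features to a single-channel attention map.
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x, g):
        # x: encoder skip features; g: decoder gating signal
        # (same height/width assumed in this sketch).
        att = self.relu(self.theta_x(x) + self.phi_g(g))
        att = self.sigmoid(self.psi(att))  # attention map in (0, 1)
        return x * att                     # suppress irrelevant regions


# Usage with dummy tensors: 64-channel skip features, 128-channel gating signal.
if __name__ == "__main__":
    gate = AttentionGate(in_channels=64, gating_channels=128, inter_channels=32)
    x = torch.randn(1, 64, 56, 56)
    g = torch.randn(1, 128, 56, 56)
    print(gate(x, g).shape)  # torch.Size([1, 64, 56, 56])
```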
“…Melinščak [45] employed an attention-based U-Net model to perform the segmentation of retinal layers and retinal fluids using the AROI dataset. The achieved Dice scores for PED, SRF, and IRF were 0.674, 0.600, and 0.563, respectively.…”
Section: Results (mentioning)
confidence: 99%
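The per-class scores quoted above follow the standard Dice coefficient. The snippet below is a minimal NumPy sketch of how such per-class scores are typically computed from a predicted and a ground-truth label map; the class labels and dummy arrays are illustrative assumptions, not the evaluation code of the cited work.

```python
import numpy as np


def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks: 2*|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)


# Example: per-class Dice for the fluid classes (dummy label maps; class ids are placeholders).
if __name__ == "__main__":
    labels = {"PED": 1, "SRF": 2, "IRF": 3}
    pred = np.random.randint(0, 4, size=(256, 256))
    gt = np.random.randint(0, 4, size=(256, 256))
    for name, cls in labels.items():
        print(name, round(dice_score(pred == cls, gt == cls), 3))
```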
“…Attention mechanisms empower models to selectively focus on crucial features or regions, enhancing performance in tasks like image segmentation. Various studies have suggested incorporating attention blocks or modules into the U-Net’s encoder or decoder [35, 36, 37, 38]. These attention mechanisms may utilise techniques such as channel attention, spatial attention, or self-attention.…”
Section: Related Work (mentioning)
confidence: 99%
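One common form of the channel attention mentioned in the statement above is a squeeze-and-excitation style block. The sketch below is an illustrative PyTorch implementation of that idea (global average pooling per channel, a small bottleneck, and sigmoid reweighting); the class name, reduction factor, and tensor shapes are assumptions for illustration, not code from the cited studies.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (minimal sketch):
    pool each channel to a scalar, pass through a bottleneck, reweight."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: B x C x 1 x 1
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        weights = self.fc(self.pool(x))  # excitation: per-channel weights in (0, 1)
        return x * weights               # reweight feature channels


# Example: reweighting a 64-channel U-Net encoder feature map.
if __name__ == "__main__":
    block = ChannelAttention(channels=64)
    feats = torch.randn(2, 64, 32, 32)
    print(block(feats).shape)  # torch.Size([2, 64, 32, 32])
```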