2021
DOI: 10.1016/j.bbe.2021.08.010
A deep attention network via high-resolution representation for liver and liver tumor segmentation

Cited by 7 publications (4 citation statements)
References 8 publications
“…A Dice score of 80% was achieved in this study, demonstrating that the CNN methods performed better than the classical machine learning methods AdaBoost, Random Forests, and support vector machine. Li et al. [58] proposed a deep attention neural network including a high-resolution branch that can maintain input image resolution and thus preserve spatial details, as well as multiscale feature aggregation for cascaded liver and tumor segmentation from CT images. This model achieved a Dice score of 76.3% for lesions and 96.0% for the liver when the LiTS dataset was used for the evaluation.…”
Section: Discussion (mentioning)
confidence: 99%
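The Dice scores quoted in this statement measure the overlap between a predicted segmentation mask and the ground-truth mask. A minimal sketch of the metric, assuming binary NumPy masks (the array names and the smoothing term `eps` are illustrative choices, not taken from the cited paper):

```python
import numpy as np

def dice_score(pred_mask: np.ndarray, gt_mask: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks: 2*|A∩B| / (|A| + |B|)."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps))

# Example: a hypothetical liver prediction overlapping most of the ground truth
pred = np.zeros((256, 256), dtype=np.uint8)
gt = np.zeros((256, 256), dtype=np.uint8)
pred[50:200, 50:200] = 1
gt[60:210, 60:210] = 1
print(f"Dice: {dice_score(pred, gt):.3f}")
```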
“…In [23], the authors propose a model based on a self-attention module and a deep attention neural network. A high-resolution branch is also used to preserve spatial features.…”
Section: Literature Review and Related Work (mentioning)
confidence: 99%
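The statement does not detail the self-attention module itself; the sketch below is a generic non-local-style spatial self-attention block over a 2D feature map, written in PyTorch for illustration (the layer names, channel-reduction factor, and learnable residual weight are assumptions, not the cited paper's exact design):

```python
import torch
import torch.nn as nn

class SpatialSelfAttention(nn.Module):
    """Illustrative self-attention block: 1x1-conv query/key/value projections,
    attention over all spatial positions, residual connection back to the input."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C/r)
        k = self.key(x).flatten(2)                     # (B, C/r, HW)
        v = self.value(x).flatten(2)                   # (B, C, HW)
        attn = torch.softmax(q @ k, dim=-1)            # (B, HW, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x

# Example usage on a small feature map
feat = torch.randn(1, 64, 32, 32)
print(SpatialSelfAttention(64)(feat).shape)  # torch.Size([1, 64, 32, 32])
```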
“…HRNet learns strong high-resolution representations by connecting high- and low-resolution convolutions in parallel, with repeated multi-scale fusions across the parallel branches [26]. HRNet has demonstrated strong segmentation and detection performance on natural-image benchmarks such as the Cityscapes and PASCAL datasets [24], as well as on medical images [27][28][29]. nnU-Net builds on the U-Net architecture with automatic configuration of preprocessing, network architecture, training, and post-processing for any new task, and is regarded as one of the state-of-the-art approaches for various medical image segmentation tasks.…”
Section: Introduction (mentioning)
confidence: 99%
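The parallel high- and low-resolution branches with repeated multi-scale fusion described for HRNet can be illustrated with a small PyTorch sketch of a single two-branch fusion step (channel counts, kernel sizes, and layer names are assumptions for illustration, not the published HRNet configuration):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchFusion(nn.Module):
    """Illustrative HRNet-style fusion step for two parallel branches: the
    high-resolution branch keeps full spatial size while the low-resolution
    branch carries more channels; fusion upsamples low-res features into the
    high-res branch and strided-downsamples high-res features into the low-res one."""

    def __init__(self, hi_ch: int = 32, lo_ch: int = 64):
        super().__init__()
        self.lo_to_hi = nn.Conv2d(lo_ch, hi_ch, kernel_size=1)  # followed by upsampling
        self.hi_to_lo = nn.Conv2d(hi_ch, lo_ch, kernel_size=3, stride=2, padding=1)

    def forward(self, hi: torch.Tensor, lo: torch.Tensor):
        hi_out = hi + F.interpolate(self.lo_to_hi(lo), size=hi.shape[-2:],
                                    mode="bilinear", align_corners=False)
        lo_out = lo + self.hi_to_lo(hi)
        return F.relu(hi_out), F.relu(lo_out)

# Example: full-resolution and half-resolution feature maps stay at their scales
hi = torch.randn(1, 32, 128, 128)
lo = torch.randn(1, 64, 64, 64)
hi2, lo2 = TwoBranchFusion()(hi, lo)
print(hi2.shape, lo2.shape)  # both resolutions are preserved after fusion
```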