2023
DOI: 10.1038/s41598-022-27331-3

Tubule-U-Net: a novel dataset and deep learning-based tubule segmentation framework in whole slide images of breast cancer

Abstract: The tubule index is a vital prognostic measure in breast cancer tumor grading and is visually evaluated by pathologists. In this paper, a computer-aided patch-based deep learning tubule segmentation framework, named Tubule-U-Net, is developed and proposed to segment tubules in Whole Slide Images (WSI) of breast cancer. Moreover, this paper presents a new tubule segmentation dataset consisting of 30820 polygonal annotated tubules in 8225 patches. The Tubule-U-Net framework first uses a patch enhancement techniq…
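
The abstract only summarizes the framework, so as a rough illustration of how patch-based segmentation of a large histopathology image typically proceeds, the sketch below tiles an RGB image into overlapping patches, runs a U-Net-style model that returns single-channel logits on each patch, and averages the overlapping patch probabilities into a whole-image binary mask. The patch size, stride, threshold, normalization, and model interface are illustrative assumptions, not the actual Tubule-U-Net configuration.

# Minimal patch-based segmentation sketch (illustrative; not the paper's implementation).
# Assumes `model` is any torch.nn.Module mapping (N, 3, H, W) float tensors to
# (N, 1, H, W) logits, e.g. a U-Net; patch size, stride, and threshold are placeholders.
import numpy as np
import torch

@torch.no_grad()
def segment_patchwise(model, image, patch=512, stride=256, threshold=0.5, device="cpu"):
    """image: H x W x 3 uint8 RGB array; returns an H x W uint8 binary mask."""
    model = model.to(device).eval()
    h, w = image.shape[:2]
    prob = np.zeros((h, w), dtype=np.float32)   # accumulated probabilities
    hits = np.zeros((h, w), dtype=np.float32)   # number of patches covering each pixel
    ys = list(range(0, max(h - patch, 0) + 1, stride))
    xs = list(range(0, max(w - patch, 0) + 1, stride))
    if h > patch and ys[-1] != h - patch:
        ys.append(h - patch)                    # cover the bottom edge
    if w > patch and xs[-1] != w - patch:
        xs.append(w - patch)                    # cover the right edge
    for y in ys:
        for x in xs:
            tile = image[y:y + patch, x:x + patch]
            t = torch.from_numpy(tile).float().permute(2, 0, 1) / 255.0  # CHW in [0, 1]
            logits = model(t.unsqueeze(0).to(device))                    # 1 x 1 x p x p
            p = torch.sigmoid(logits)[0, 0].cpu().numpy()
            prob[y:y + patch, x:x + patch] += p
            hits[y:y + patch, x:x + patch] += 1.0
    prob /= np.maximum(hits, 1.0)               # average where patches overlap
    return (prob >= threshold).astype(np.uint8)

Averaging overlapping patch probabilities is a common way to suppress tiling artifacts at patch borders; whole-slide pipelines typically read tiles from the WSI at a chosen magnification (for example via OpenSlide) rather than loading the entire image into memory.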

Cited by 13 publications (4 citation statements)
References 25 publications
“…Table 3 provides a comparison of recently developed DL methods in mammography for breast lesion segmentation. These methods include the Conditional Random Field model (CRF) ( 59 ), Adversarial Deep Structured Net ( 60 ), Deep Learning using You-Only-Look-Once ( 61 ), Conditional Residual U-Net (CRU-Net) ( 62 ), Mixed-Supervision-Guided (MS-ResCU-Net) and Residual-Aided Classification U-Net Model (ResCU-Net) ( 63 ), Dense U-Net with Attention Gates (AGs) ( 64 ), Residual Attention U-Net Model (RU-Net) ( 65 ), Modified U-Net ( 66 ), Mask RCNN ( 67 ), Full-Resolution Convolutional Network (FrCN) ( 68 ), U-Net ( 69 ), Conditional Generative Adversarial Networks (cGAN) ( 70 , 71 ), DeepLab ( 72 ), Attention-Guided Dense-Upsampling Network (AUNet) ( 73 ), FPN ( 74 ), modified CNN based on U-Net Model ( 76 ), deeply supervised U-Net ( 77 ), modified U-Net ( 78 ), and Tubule-U-Net ( 79 ). Among these DL methods, U-Net is the most commonly employed segmentation method.…”
Section: Breast Cancer Prediction Using Deep Learning
confidence: 99%
“…Despite the growing number of digital pathology and AI studies in our country, this progress is restricted by the digitization of pathology slides and AI-oriented technical facilities (16). Although a small number of articles (17) with focus on pathology and pathologists' participation are included in national publication directories, many Turkish researchers are involved in international studies and publications (8–10, 12, 18–20), and the number of publications from Turkey has been increasing (21–26).…”
Section: Introduction
confidence: 99%
“…In recent years, there have been considerable research studies focusing on utilizing deep learning techniques for the automatic classification of breast cancer based on its pathology [3–14]. Nonetheless, medical institutions, being the proprietors of image data, have exhibited a preference for training models using their data due to the stringent privacy and security regulations surrounding medical data [1].…”
Section: Introduction
confidence: 99%