2021
DOI: 10.1155/2021/5557168
Dynamic Learning Rate in Deep CNN Model for Metastasis Detection and Classification of Histopathology Images

Abstract: Diagnosis of different breast cancer stages using histopathology whole slide images (WSI) is the gold standard in determining the grade of tissue metastasis. Computer-aided diagnosis (CAD) assists medical experts as a second opinion tool in early detection to prevent further proliferation. The field of pathology has advanced so rapidly that it is possible to obtain high-quality images from glass slides. Patches from the region of interest in histopathology images are extracted and trained using artificial neur…

Cited by 13 publications (7 citation statements) · References 38 publications
“…Continuously using the same high learning rate throughout training makes it far more difficult for the weights to converge to their ideal values, since each update shifts them by far greater amounts, much like using only the coarse focus on a microscope. This can cause the model to converge at a local minimum instead [33]. Conversely, using only a low learning rate substantially increases computation time and may likewise get stuck at a local minimum.…”
Section: Discussion
confidence: 99%
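The trade-off described in this excerpt is commonly addressed with a decaying schedule: start with large "coarse-focus" steps, then shrink them as training progresses. A minimal step-decay sketch (the names `base_lr`, `drop`, and `epochs_per_drop` are illustrative, not taken from the cited paper):

```python
def step_decay(epoch, base_lr=0.1, drop=0.5, epochs_per_drop=10):
    """Halve the learning rate every `epochs_per_drop` epochs.

    Early epochs take large steps toward the optimum; later epochs
    refine with smaller steps, avoiding both problems the excerpt
    describes (overshooting with a fixed high rate, or crawling with
    a fixed low one).
    """
    return base_lr * (drop ** (epoch // epochs_per_drop))

print(step_decay(0))   # 0.1
print(step_decay(10))  # 0.05
print(step_decay(25))  # 0.025
```

Deep learning frameworks ship equivalent schedulers (e.g. PyTorch's `torch.optim.lr_scheduler.StepLR`), so in practice this logic is rarely hand-rolled.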
“…Table 5 summarizes the relevant methods that appear in the bibliography, which use deep learning models for multiclass classification. In Table 5, there are strategies for dynamic modification of the learning rate, such as cyclical learning rates (Smith et al [10]), polynomial learning rates (Purnendu et al [7]) or dynamic learning rates (Anil et al [9]). There are also static learning rate methods, such as that of Anil et al [8], and methodologies based on transfer learning (such as that of Alinsaif et al [14]).…”
Section: Comparison With Other Methodologies
confidence: 99%
“…Some studies compared the influence of not using a fixed LR in each epoch. Anil et al [9] proposed the use of a dynamic learning rate. Smith et al [10] proposed cyclic learning rates, a method that lets the learning rate vary cyclically between the appropriate thresholds.…”
Section: Introduction
confidence: 99%
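The cyclical learning rate referenced here (Smith et al. [10]) lets the rate oscillate between two bounds instead of decaying monotonically. A sketch of the triangular variant, with illustrative bounds (`base_lr`, `max_lr`, `step_size` are assumed values, not from the cited work):

```python
import math

def triangular_clr(iteration, base_lr=1e-4, max_lr=1e-2, step_size=2000):
    """Triangular cyclical learning rate.

    The rate climbs linearly from base_lr to max_lr over `step_size`
    iterations, then descends back, repeating each cycle of
    2 * step_size iterations.
    """
    cycle = math.floor(1 + iteration / (2 * step_size))
    x = abs(iteration / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1 - x)

print(triangular_clr(0))     # 0.0001  (start of cycle, at base_lr)
print(triangular_clr(2000))  # 0.01    (peak of cycle, at max_lr)
print(triangular_clr(4000))  # 0.0001  (end of cycle, back at base_lr)
```

Periodically raising the rate helps the optimizer escape the local minima mentioned in the Discussion excerpt above, while the descending half of each cycle still allows fine convergence.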
“…Epoch values of 10, 50, 100, 250, 750, and 1000 were taken from previous research [60], [61], [62]. The batch sizes of 16, 32, 64, 128, 256, and 512 are the minimum and maximum values already available on Teachable Machine, requiring no modification. The learning rate values of 0.00001, 0.0001, 0.001, 0.01, 0.1, and 1 were based on the ranges used in previous research [63], [64], [65].…”
Section: Figure 5 Max Pooling and Average Pooling [55]
confidence: 99%
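The value ranges quoted in this excerpt define a hyperparameter grid. A small sketch enumerating it (the lists simply restate the excerpt's values; the grid-search framing is an illustration, not the cited study's exact procedure):

```python
from itertools import product

# Candidate values quoted in the excerpt above.
epochs = [10, 50, 100, 250, 750, 1000]
batch_sizes = [16, 32, 64, 128, 256, 512]
learning_rates = [0.00001, 0.0001, 0.001, 0.01, 0.1, 1]

# Cartesian product: every (epochs, batch size, learning rate) combination.
grid = list(product(epochs, batch_sizes, learning_rates))
print(len(grid))  # 216 combinations
```

Even these modest ranges yield 216 configurations, which is why dynamic or cyclical learning-rate schedules, which adapt the rate during a single run, are attractive alternatives to exhaustively searching fixed rates.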