2018
DOI: 10.1080/21681163.2018.1427148
The transition module: a method for preventing overfitting in convolutional neural networks

Abstract: Digital pathology has advanced substantially over the last decade with the adoption of slide scanners in pathology labs. The use of digital slides to analyse diseases at the microscopic level is both cost-effective and efficient. Identifying complex tumour patterns in digital slides is a challenging problem but holds significant importance for tumour burden assessment, grading and many other pathological assessments in cancer research. The use of convolutional neural networks (CNNs) to analyse such complex images…


Cited by 12 publications (9 citation statements)
References 15 publications
“…Dropout regularization is applied to mitigate the overfitting problems that traditional neural networks have. Recent studies used dropout regularization for enhancing DL testing results ( 43 , 51 , 52 ). When dropout regularization is applied at a hidden neural network layer, a random thinned neural network layer is produced from the original layer.…”
Section: Methodsmentioning
confidence: 99%
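The "random thinned layer" described in the statement above can be sketched as inverted dropout in NumPy. This is a minimal illustration, not the cited papers' exact implementation; the function name and the drop probability are illustrative.

```python
import numpy as np

def dropout(activations, p_drop, rng, train=True):
    """Inverted dropout: randomly zero a fraction p_drop of units.

    During training, each unit is kept with probability 1 - p_drop and
    survivors are rescaled by 1 / (1 - p_drop), so expected activations
    match test time and no rescaling is needed at inference. Each call
    draws a fresh mask, i.e. a new random "thinned" layer.
    """
    if not train or p_drop == 0.0:
        return activations
    keep = 1.0 - p_drop
    mask = rng.random(activations.shape) < keep
    return activations * mask / keep

rng = np.random.default_rng(0)
h = np.ones((4, 8))                       # a hidden layer of activations
thinned = dropout(h, p_drop=0.5, rng=rng)  # surviving units become 2.0, dropped units 0.0
```

At test time (`train=False`) the layer is returned unchanged, which is the point of the inverted scaling during training.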
“…In this study, we showed that the DeepSnap-DL approach enables high-throughput and high-quality prediction of the PR antagonist due to the automatic extraction of feature values from 3D-chemical structures adjusted as suitable input data into the DL, as well as avoiding overfitting through selective activation of molecular features with integration of multi-layered networks (Guo et al, 2017; Liang et al, 2017; Kong and Yu, 2018; Akbar et al, 2019). In addition, consistent with recent reports (Chauhan et al, 2019; Cortés-Ciriano and Bender, 2019), this study indicated that both training data size and image redundancy are critical factors in determining prediction performance.…”
Section: Comparison Of the Predictionmentioning
confidence: 99%
“…The authors in [15] introduced a transition module that captures features at multiple filter scales and collapses them via global average pooling. This ultimately reduces the size of the network at the transition from convolutional to fully connected layers.…”
Section: Related Workmentioning
confidence: 99%
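The mechanism described in the statement above can be sketched in NumPy: global average pooling collapses each feature map to a scalar, so the classifier sees a compact per-channel descriptor instead of a large flattened tensor. The branch names and shapes below are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def global_average_pool(feature_maps):
    """Collapse each (H, W) feature map to a single scalar by averaging.

    Input shape: (channels, H, W); output shape: (channels,). Feeding
    these pooled values to the classifier avoids the large dense layer
    that a flattened channels*H*W tensor would otherwise require.
    """
    return feature_maps.mean(axis=(1, 2))

# Hypothetical multi-scale branches (e.g. outputs of 3x3 and 5x5 convs),
# each pooled and concatenated into one compact descriptor.
branch_3x3 = np.random.default_rng(1).random((64, 14, 14))
branch_5x5 = np.random.default_rng(2).random((64, 14, 14))
descriptor = np.concatenate([global_average_pool(branch_3x3),
                             global_average_pool(branch_5x5)])
```

Here a fully connected layer would receive 128 inputs rather than the 64 * 14 * 14 * 2 = 25,088 inputs a flattened concatenation would produce, which is the parameter reduction the transition module exploits.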