Breast cancer is a major health threat, and early detection is crucial for improving cure and survival rates. Current systems rely on imaging technology, but digital pathology and computerized analysis can enhance accuracy, reduce false predictions, and improve medical care for breast cancer patients. This study explores the challenges of distinguishing benign from malignant breast cancer lesions in microscopic image datasets. It introduces a low-dimensional, multiple-channel, feature-based method for breast cancer microscopic image recognition that overcomes limitations in feature utilization and computational complexity. The method processes the RGB channels of each image and extracts features using the gray-level co-occurrence matrix, wavelets, Gabor filters, and the histogram of oriented gradients. This approach aims to improve diagnostic efficiency and accuracy in breast cancer treatment. The core of our method is the SqE-DDConvNet algorithm, which uses a 3 × 1 convolution kernel, an SqE-DenseNet module, bilinear interpolation, and global average pooling to enhance recognition accuracy and training efficiency. Additionally, we incorporate transfer learning with pre-trained models, including mVVGNet16, EfficientNetV2B3, ResNet101V2, and CN2XNet, preserving spatial information and achieving higher accuracy under varying magnification conditions. The method achieves higher accuracy than baseline models based on texture and deep semantic features. This deep learning-based methodology contributes to more accurate image classification and recognition in breast cancer microscopic images.

Research Highlights
- Introduces a low-dimensional, multiple-channel, feature-based method for breast cancer microscopic image recognition.
- Uses the RGB channels for image processing and extracts features with the gray-level co-occurrence matrix, wavelets, Gabor filters, and the histogram of oriented gradients.
- Employs the SqE-DDConvNet algorithm for enhanced recognition accuracy and training efficiency.
- Applies transfer learning with pre-trained models to preserve spatial information and achieve higher accuracy under varying magnification conditions.
- Evaluates the predictive efficacy of transfer-learning paradigms for microscopic analysis.
- Utilizes CNN-based pre-trained algorithms to enhance network performance.
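The multi-channel feature-extraction stage described in the abstract can be sketched as follows. This is a minimal illustration assuming scikit-image and PyWavelets; the GLCM distances and angles, wavelet family, Gabor frequencies, and HOG cell sizes are illustrative choices, not the paper's reported configuration.

```python
# Hedged sketch: per-channel handcrafted features (GLCM, wavelet, Gabor, HOG)
# concatenated across the R, G and B channels of a microscopy image.
import numpy as np
import pywt
from skimage import io, img_as_ubyte
from skimage.feature import graycomatrix, graycoprops, hog
from skimage.filters import gabor

def channel_features(channel: np.ndarray) -> np.ndarray:
    """Extract a low-dimensional feature vector from one image channel (float in [0, 1])."""
    ch8 = img_as_ubyte(channel)

    # GLCM texture statistics, averaged over two directions.
    glcm = graycomatrix(ch8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    glcm_feats = [graycoprops(glcm, p).mean()
                  for p in ("contrast", "homogeneity", "energy", "correlation")]

    # Mean absolute detail coefficients from a 2-level wavelet decomposition.
    coeffs = pywt.wavedec2(channel, "db1", level=2)
    wavelet_feats = [np.mean(np.abs(c)) for band in coeffs[1:] for c in band]

    # Mean Gabor responses at a few orientations (illustrative frequency).
    gabor_feats = []
    for theta in (0.0, np.pi / 4, np.pi / 2):
        real, _ = gabor(channel, frequency=0.2, theta=theta)
        gabor_feats.append(np.mean(np.abs(real)))

    # Histogram of oriented gradients, pooled to a compact descriptor.
    hog_vec = hog(channel, orientations=8, pixels_per_cell=(32, 32),
                  cells_per_block=(1, 1), feature_vector=True)
    hog_feats = [hog_vec.mean(), hog_vec.std()]

    return np.array(glcm_feats + wavelet_feats + gabor_feats + hog_feats)

def rgb_features(image_path: str) -> np.ndarray:
    """Concatenate features from each RGB channel of an H x W x 3 microscopy image."""
    rgb = io.imread(image_path).astype(np.float64) / 255.0
    return np.concatenate([channel_features(rgb[..., c]) for c in range(3)])
```

The resulting per-image vector could then feed a classifier or be combined with deep semantic features; the exact fusion with SqE-DDConvNet is specific to the paper and not reproduced here.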
Background: The aim of this study was to establish a deep learning prediction model for neoadjuvant FLOT chemotherapy response. The neural network utilized clinical data and visual information from whole-slide images (WSIs) of therapy-naïve gastroesophageal cancer biopsies. Methods: This study included 78 patients from the University Hospital of Cologne and 59 patients from the University Hospital of Heidelberg used as external validation. Results: After surgical resection, 33 patients from Cologne (42.3%) were ypN0 and 45 patients (57.7%) were ypN+, while 23 patients from Heidelberg (39.0%) were ypN0 and 36 patients (61.0%) were ypN+ (p = 0.695). The neural network had an accuracy of 92.1% to predict lymph node metastasis and the area under the curve (AUC) was 0.726. A total of 43 patients from Cologne (55.1%) had less than 50% residual vital tumor (RVT) compared to 34 patients from Heidelberg (57.6%, p = 0.955). The model was able to predict tumor regression with an error of ±14.1% and an AUC of 0.648. Conclusions: This study demonstrates that visual features extracted by deep learning from therapy-naïve biopsies of gastroesophageal adenocarcinomas correlate with positive lymph nodes and tumor regression. The results will be confirmed in prospective studies to achieve early allocation of patients to the most promising treatment.
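The modelling setup described in this abstract, combining visual information from whole-slide images with clinical data, can be illustrated with a small hedged sketch: a bag of tile-level WSI features is pooled and fused with clinical covariates to produce a logit for nodal status (ypN0 vs. ypN+). The attention pooling, layer sizes, and feature dimensions are assumptions for illustration, not the authors' published architecture.

```python
# Hedged sketch of a multimodal fusion classifier in PyTorch.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, tile_feat_dim: int = 512, clinical_dim: int = 8):
        super().__init__()
        # Attention-style pooling over a bag of tile features (one WSI = many tiles).
        self.attn = nn.Sequential(nn.Linear(tile_feat_dim, 128), nn.Tanh(),
                                  nn.Linear(128, 1))
        # Small encoder for clinical covariates (dimension is an assumption).
        self.clinical = nn.Sequential(nn.Linear(clinical_dim, 32), nn.ReLU())
        # Fused head producing one logit for ypN+.
        self.head = nn.Sequential(nn.Linear(tile_feat_dim + 32, 64), nn.ReLU(),
                                  nn.Linear(64, 1))

    def forward(self, tile_feats: torch.Tensor, clinical: torch.Tensor) -> torch.Tensor:
        # tile_feats: (n_tiles, tile_feat_dim); clinical: (clinical_dim,)
        weights = torch.softmax(self.attn(tile_feats), dim=0)   # (n_tiles, 1)
        slide_vec = (weights * tile_feats).sum(dim=0)           # (tile_feat_dim,)
        fused = torch.cat([slide_vec, self.clinical(clinical)], dim=-1)
        return self.head(fused)

# Example: 200 tiles of 512-d features plus 8 clinical covariates for one patient.
model = FusionClassifier()
logit = model(torch.randn(200, 512), torch.randn(8))
prob_ypn_pos = torch.sigmoid(logit)
```

A regression head of the same form could be trained for residual vital tumor, which is how a single feature backbone can serve both of the prediction targets reported above.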