Differentiating benign from malignant breast cancer cases in X-ray images can be difficult because the two classes share similar features. In recent studies, transfer learning has been used to classify benign and malignant breast cancer by fine-tuning various pre-trained networks, such as AlexNet, the visual geometry group network (VGG), GoogLeNet, and the residual network (ResNet), on breast cancer datasets. However, these networks were pre-trained on large benchmark datasets such as ImageNet, which contain no labeled breast cancer images, leading to poor performance. In this research, we introduce a novel technique based on transfer learning, called double-shot transfer learning (DSTL), to improve the overall accuracy and performance of pre-trained networks for breast cancer classification. DSTL updates the learnable parameters (weights and biases) of any pre-trained network by fine-tuning it on a large dataset similar to the target dataset; the updated network is then fine-tuned on the target dataset itself. Moreover, the number of X-ray images is enlarged by a combination of augmentation methods, including variations of rotation, brightness, flipping, and contrast, to reduce overfitting and produce robust results. The proposed approach demonstrates a significant improvement in the classification accuracy and performance of the pre-trained networks, making them more suitable for medical imaging.
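The two-stage idea behind DSTL can be sketched in miniature. The example below is not the paper's implementation: it replaces the pre-trained CNN with a tiny logistic-regression "network" and uses synthetic stand-ins for the similar large dataset and the small target dataset, purely to show the shot-1/shot-2 fine-tuning order.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fine_tune(w, X, y, lr=0.1, epochs=200):
    """Continue gradient-descent training from the current weights w."""
    for _ in range(epochs):
        p = sigmoid(X @ w)
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float(np.mean((sigmoid(X @ w) > 0.5) == y))

d = 20
# Stage 0: weights from generic pretraining (random here, as a stand-in).
w = rng.normal(size=d)

# Synthetic "similar" large dataset and small target dataset that share
# the same underlying decision boundary (a hypothetical construction).
true_w = rng.normal(size=d)
X_sim = rng.normal(size=(1000, d)); y_sim = (X_sim @ true_w > 0).astype(float)
X_tgt = rng.normal(size=(100, d));  y_tgt = (X_tgt @ true_w > 0).astype(float)

# Shot 1: fine-tune the pre-trained weights on the large similar dataset.
w = fine_tune(w, X_sim, y_sim)
# Shot 2: fine-tune the updated weights on the target dataset.
w = fine_tune(w, X_tgt, y_tgt)

print(accuracy(w, X_tgt, y_tgt))
```

In the actual method, `fine_tune` would correspond to continuing backpropagation on a full pre-trained CNN rather than on a linear model.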
The performance of hyperspectral image (HSI) classification depends strongly on spatial and spectral information and is heavily affected by factors such as data redundancy and insufficient spatial resolution. To overcome these challenges, many convolutional neural network (CNN) methods, especially 2D-CNN-based ones, have been proposed for HSI classification. However, these methods produce inferior results compared to 3D-CNN-based methods, while the high computational complexity of 3D-CNN-based methods remains a major concern. Therefore, this study introduces a consolidated convolutional neural network (C-CNN) to overcome both issues. The proposed C-CNN comprises a three-dimensional CNN (3D-CNN) joined with a two-dimensional CNN (2D-CNN): the 3D-CNN represents spatial–spectral features from the spectral bands, and the 2D-CNN learns abstract spatial features. Principal component analysis (PCA) is first applied to the original HSIs, before they are fed to the network, to reduce spectral band redundancy. Moreover, image augmentation techniques, including rotation and flipping, are used to increase the number of training samples and reduce the impact of overfitting; the C-CNN trained on the augmented images is named C-CNN-Aug. Additionally, both dropout and L2 regularization are used to further reduce model complexity and prevent overfitting. Experimental results on the Indian Pines, Pavia University, and Salinas Scene hyperspectral benchmark datasets show that the proposed model provides an optimal trade-off between accuracy and computational time compared to other related methods.
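The PCA band-reduction step described above can be sketched as follows. This is a minimal illustration on a synthetic hyperspectral cube; the dimensions (32×32 pixels, 103 bands, 15 retained components) are assumptions for the example, not the settings of the benchmark datasets.

```python
import numpy as np

rng = np.random.default_rng(1)
H, W, B, K = 32, 32, 103, 15          # height, width, spectral bands, kept components
cube = rng.normal(size=(H, W, B))     # synthetic stand-in for an HSI

# Flatten the spatial dims so each pixel is one sample with B spectral features.
X = cube.reshape(-1, B)
X = X - X.mean(axis=0)                # center the data before PCA

# Principal components via SVD; the top-K right singular vectors span
# the directions of greatest spectral variance.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
reduced = (X @ Vt[:K].T).reshape(H, W, K)

print(reduced.shape)   # (32, 32, 15)
```

The reduced cube, with K components instead of B bands, is what would then be fed to the 3D-CNN stage.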
Climate change and global warming lead to changes in sea level and shoreline position, which pose a huge threat to island regions; it is therefore important to detect shoreline changes effectively. Taiwan, a typical island located at the junction of the East China Sea and the South China Sea in the northwest Pacific, is deeply affected by shoreline changes and was selected as the study area. In this research, an efficient shoreline detection method was proposed based on the semantic segmentation U-Net model, using Sentinel-1 synthetic aperture radar (SAR) data of Taiwan island. In addition, a batch normalization (BN) module was added to the convolution layers in the U-Net architecture to further improve the generalization ability of U-Net and accelerate the training process. A self-built shoreline dataset was introduced to train the U-Net model and test its detection efficiency. The dataset consists of a total of 4,029 SAR images covering all coastal areas of Taiwan; its training samples were annotated by morphological processing and manual inspection. The segmentation results of U-Net were then processed by edge detection and morphological postprocessing to extract the shoreline. The experimental results showed that the proposed method achieves satisfactory detection performance compared with related methods, using data provided by the Ministry of the Interior of Taiwan from 2016 to 2019 for different coastal landforms in Taiwan. Within a 5-pixel difference between the detected shoreline and the ground truth data, the F1-measure of the proposed method exceeded 80%. In addition, the potential of this method for shoreline change detection was validated on a sandbar located on the southwestern coast of Taiwan. Finally, the entire shoreline of Taiwan was delineated by the proposed approach, and the detected shoreline length was close to the actual length.
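The batch normalization step added after the convolution layers can be sketched in isolation. The function below is a simplified numpy version (whole-batch statistics rather than the per-channel form used in real networks); `gamma` and `beta` stand for BN's learnable scale and shift, and the input is a synthetic batch of feature maps.

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a batch of feature maps (N, H, W) to zero mean, unit variance,
    then apply the learnable scale (gamma) and shift (beta)."""
    mean = x.mean()
    var = x.var()
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

rng = np.random.default_rng(2)
# Synthetic activations with a large offset and spread, as might come
# out of a convolution layer before normalization.
feats = rng.normal(loc=5.0, scale=3.0, size=(4, 16, 16))
normed = batch_norm(feats)
```

After normalization the activations are centered near zero with unit spread, which is what stabilizes and speeds up training in the U-Net.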
Inspired by Connected-UNets, this study proposes a deep learning model, called Connected-SegNets, for breast tumor segmentation from X-ray images. In the proposed model, two SegNet architectures are connected with skip connections between their layers. Moreover, the cross-entropy loss function of the original SegNet is replaced by an intersection over union (IoU) loss function to make the proposed model more robust against noise during training. As part of data preprocessing, a histogram equalization technique, contrast limited adaptive histogram equalization (CLAHE), is applied to all datasets to enhance the compressed regions and smooth the distribution of the pixels. Additionally, two image augmentation methods, rotation and flipping, are used to increase the amount of training data and prevent overfitting. The proposed model has been evaluated on two publicly available datasets, INbreast and the curated breast imaging subset of the digital database for screening mammography (CBIS-DDSM), as well as on a private dataset obtained from Cheng Hsin General Hospital in Taiwan. The experimental results show that the proposed Connected-SegNets model outperforms state-of-the-art methods in terms of Dice score and IoU score, producing maximum Dice scores of 96.34% on INbreast, 92.86% on CBIS-DDSM, and 92.25% on the private dataset, and the highest IoU scores of 91.21%, 87.34%, and 83.71% on INbreast, CBIS-DDSM, and the private dataset, respectively.
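An IoU loss of the kind described can be sketched as below. This is one common "soft" formulation, assumed here for illustration rather than taken from the paper: `pred` holds per-pixel probabilities and `target` the binary ground-truth mask, and the loss is 1 minus their soft IoU so that better overlap gives a lower loss.

```python
import numpy as np

def iou_loss(pred, target, eps=1e-7):
    """1 - soft IoU; computed on probabilities so it remains differentiable."""
    inter = np.sum(pred * target)
    union = np.sum(pred) + np.sum(target) - inter
    return 1.0 - inter / (union + eps)

# Toy 2x2 mask with two predictions: one overlapping well, one poorly.
target = np.array([[0, 1], [1, 1]], dtype=float)
good = np.array([[0.1, 0.9], [0.8, 0.9]])
bad  = np.array([[0.9, 0.1], [0.2, 0.1]])

print(iou_loss(good, target) < iou_loss(bad, target))  # True
```

Unlike per-pixel cross-entropy, this loss scores the predicted region as a whole, which is why it is less sensitive to scattered noisy pixels.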