In the last few decades, photovoltaic (PV) power station installations have surged across the globe. The output efficiency of these stations deteriorates over time due to factors such as hotspots, shaded cells or modules, and short-circuited bypass diodes. Traditionally, technicians inspect each solar panel in a PV power station using infrared thermography to ensure consistent output efficiency. With advances in drone technology, researchers have proposed using drones equipped with thermal cameras for PV power station monitoring. However, most of these drone-based approaches require technicians to control the drone manually, which is cumbersome for large PV power stations. To tackle this issue, this study presents an autonomous drone-based solution. The drone is equipped with both RGB (red, green, blue) and thermal cameras. The proposed system can automatically detect faulty PV modules and estimate their exact locations among the hundreds or thousands of modules in a power station. In addition, we propose an automatic drone flight path planning algorithm that eliminates the need for manual drone control. The system also applies an image processing algorithm to the RGB and thermal images for fault detection. The system was evaluated on a 1-MW solar power plant located in Suncheon, South Korea, and the experimental results demonstrate the effectiveness of our solution.
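The abstract does not describe the path planning algorithm itself; a common baseline for covering a rectangular grid of PV module rows is a boustrophedon ("lawnmower") sweep, sketched below. The function name and grid parameters are illustrative, not taken from the paper.

```python
# Hypothetical sketch of a boustrophedon ("lawnmower") flight path over a
# rectangular grid of PV modules. The drone sweeps each row, reversing
# direction on alternate rows so it never backtracks between rows.

def lawnmower_path(rows, cols):
    """Return a list of (row, col) waypoints visiting every grid cell once."""
    path = []
    for r in range(rows):
        cells = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        path.extend((r, c) for c in cells)
    return path

waypoints = lawnmower_path(3, 4)
# Every module is visited exactly once.
assert len(waypoints) == 12 and len(set(waypoints)) == 12
```

In practice each grid waypoint would be mapped to GPS coordinates and an altitude suited to the thermal camera's field of view.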
Computer-aided diagnosis systems developed by computer vision researchers have helped doctors recognize several endoscopic colorectal diseases more rapidly, enabling timely treatment and increasing patients' survival rates. Herein, we present a robust architecture for endoscopic image classification using efficient dilation in convolutional neural networks (CNNs). By increasing and then decreasing the dilation factor, it attains a large receptive field in the deep layers while preserving spatial detail. We argue that dimensionality reduction in CNNs can cause loss of spatial information, resulting in missed polyps and confusion between similar-looking images. Additionally, we use a regularization technique called DropBlock to reduce overfitting and to deal with noise and artifacts. We compare and evaluate our method using several metrics: accuracy, recall, precision, and F1-score. Our experiments demonstrate that the proposed method achieves an F1-score of 0.93 on the Colorectal dataset and 0.88 on the KVASIR dataset, and show higher accuracy of the proposed method over traditional methods when classifying endoscopic colon diseases.

INDEX TERMS: Colorectal image classification, colon disease classification, colon disease classification with CNN.
Polyps are well-known cancer precursors identified by colonoscopy. However, variability in their size, location, and surface largely affects their identification, localisation, and characterisation. Moreover, colonoscopic surveillance and removal of polyps (referred to as polypectomy) are highly operator-dependent procedures. There is a high missed-detection rate and incomplete removal of colonic polyps due to their variable nature, the difficulty of delineating the abnormality, the high recurrence rates, and the anatomical topography of the colon. There have been several developments in realising automated methods for both detection and segmentation of these polyps using machine learning. However, the major drawback of most of these methods is their limited ability to generalise to out-of-sample, unseen datasets that come from different centres, modalities, and acquisition systems. To test this hypothesis rigorously, we curated a multi-centre, multi-population dataset acquired from multiple colonoscopy systems and challenged teams of machine learning experts to develop robust automated detection and segmentation methods as part of our crowd-sourced Endoscopic Computer Vision Challenge (EndoCV) 2021. In this paper, we analyse the detection results of the four top teams (among seven) and the segmentation results of the five top teams (among 16). Our analyses demonstrate that the top-ranking teams concentrated on accuracy (i.e., > 80% overall Dice score on different validation sets) over the real-time performance required for clinical applicability. We further dissect the methods and provide an experiment-based hypothesis that reveals the need for improved generalisability to tackle the diversity present in multi-centre datasets.

Author contributions: S. Ali conceptualised the work, led the challenge and workshop, prepared the dataset and software, and performed all analyses. S. Ali, N. Ghatwary and D. Jha contributed to data annotations. T. de Lange, J.E. East, S. Realdon, R. Cannizzaro, and D. Lamarque were involved in providing colonoscopy data and in the validation and quality checks of the annotations used in this challenge. Challenge participants (E
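The Dice score used to rank segmentation accuracy above has a standard definition on binary masks; a generic sketch (not the challenge's exact evaluation code) follows:

```python
# Dice similarity coefficient: Dice = 2|A ∩ B| / (|A| + |B|), computed here
# on binary segmentation masks represented as flat sequences of 0s and 1s.

def dice_score(pred, target):
    """Return the Dice overlap between two equal-length binary masks."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    # Convention: two empty masks overlap perfectly.
    return 2 * intersection / total if total else 1.0

score = dice_score([1, 1, 0, 0], [1, 0, 0, 0])  # 2*1 / (2+1)
```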
In a nutshell, we propose a simple, efficient, and explainable deep learning-based U-Net algorithm for the MedAI challenge, focusing on precise segmentation of polyps and instruments and on algorithmic transparency. We develop a straightforward encoder-decoder-based algorithm for this task and strive to keep the network as simple as possible. In particular, we focus on the input resolution and the width of the model to find the optimal settings for the network, and we perform ablation studies to cover this aspect.
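The ablation over input resolution and model width amounts to sweeping a small configuration grid; a hypothetical sketch, with illustrative values rather than the authors' actual settings:

```python
# Hypothetical ablation grid over input resolution and channel width for an
# encoder-decoder network; each configuration would be trained and scored
# separately to find the best accuracy/complexity trade-off.

from itertools import product

resolutions = [128, 256, 512]     # square input sizes (assumed values)
width_factors = [0.5, 1.0, 2.0]   # channel-width multipliers (assumed values)

configs = [{"resolution": r, "width": w}
           for r, w in product(resolutions, width_factors)]
```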