There has been a surge in biomedical imaging technologies with the recent advancement of deep learning. It is being used for diagnosis from X-ray, computed tomography (CT) scan, electrocardiogram (ECG), and electroencephalography (EEG) images. However, most existing models detect only a single disease. In this research, a computer-aided deep learning model named COVID-CXDNetV2 is presented to detect two separate diseases, coronavirus disease 2019 (COVID-19) and pneumonia, from X-ray images in real time. The proposed model is based on you only look once (YOLOv2) with a residual neural network (ResNet) and is trained on a large X-ray dataset containing 3788 samples of three classes: COVID-19, pneumonia, and normal. The model obtained a maximum overall classification accuracy of 97.9% with a loss of 0.052 for multiclass classification (COVID-19, pneumonia, and normal), and 99.8% accuracy, 99.52% sensitivity, and 100% specificity with a loss of 0.001 for binary classification (COVID-19 and normal), which surpasses some current state-of-the-art results. The authors believe that this method will be applicable to diagnosis in the medical domain and will make a significant contribution in real-life settings.
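For context on the binary-classification figures quoted above, the following sketch shows how accuracy, sensitivity, and specificity are conventionally derived from a confusion matrix. The counts used are illustrative placeholders, not the paper's actual data.

```python
# Hedged sketch: standard confusion-matrix metrics for a binary
# classifier (e.g. COVID-19 vs. normal). The counts below are
# illustrative only, not taken from the paper.

def binary_metrics(tp, fn, fp, tn):
    """Return accuracy, sensitivity (true-positive rate), and
    specificity (true-negative rate) from confusion-matrix counts."""
    total = tp + fn + fp + tn
    accuracy = (tp + tn) / total
    sensitivity = tp / (tp + fn)   # fraction of positives caught
    specificity = tn / (tn + fp)   # fraction of negatives rejected
    return accuracy, sensitivity, specificity

# Illustrative counts: one missed positive, no false alarms
acc, sens, spec = binary_metrics(tp=208, fn=1, fp=0, tn=300)
print(f"accuracy={acc:.4f} sensitivity={sens:.4f} specificity={spec:.4f}")
```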
Machine learning models have become very popular for providing rigorous solutions to complicated real-life problems. There are three main domains: supervised, unsupervised, and reinforcement learning. Supervised learning mainly deals with regression and classification. Several types of classification algorithms exist, built on various foundations, and classification performance varies with the dataset and the algorithm selected. In this article, we focus on developing a model of angular nature that performs supervised classification. We use two shifting vectors, the Support Direction Vector (SDV) and the Support Origin Vector (SOV), to form a linear function. This linear function measures the cosine angle with both the target-class data and the non-target-class data. Considering the target data points, the linear function takes a position that minimizes its angle with the target-class data and maximizes its angle with the non-target-class data. The positional error of the linear function is modelled as a loss function, which is iteratively optimized using the gradient descent algorithm. To justify the acceptability of this method, we implemented the model on three different standard datasets, where it showed accuracy comparable with existing standard supervised classification algorithms. DOI: 10.28991/esj-2021-01306
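The cosine-angle idea in this abstract can be sketched as follows. This is not the authors' implementation: `w` stands in for the Support Direction Vector, `b` for the Support Origin Vector (left at the origin here for brevity), and the loss simply rewards a large cosine with target-class points and a small cosine with non-target points, minimized by gradient descent on a numerical gradient.

```python
import numpy as np

# Hedged sketch of an angle-based classifier loss, assuming a direction
# vector w (SDV stand-in) and origin shift b (SOV stand-in, fixed at
# zero here). Minimizing the loss shrinks the angle between w and
# target points while growing the angle to non-target points.

def cosine(w, x):
    return (w @ x) / (np.linalg.norm(w) * np.linalg.norm(x) + 1e-12)

def loss(w, b, X_target, X_other):
    l_t = np.mean([1.0 - cosine(w, x - b) for x in X_target])  # small angle
    l_o = np.mean([1.0 + cosine(w, x - b) for x in X_other])   # large angle
    return l_t + l_o

def fit(X_target, X_other, lr=0.1, steps=200, eps=1e-5):
    rng = np.random.default_rng(0)
    w = rng.normal(size=X_target.shape[1])
    b = np.zeros(X_target.shape[1])  # SOV stand-in, not optimized here
    for _ in range(steps):
        base = loss(w, b, X_target, X_other)
        g = np.zeros_like(w)
        for i in range(len(w)):  # numerical gradient for brevity
            w_p = w.copy()
            w_p[i] += eps
            g[i] = (loss(w_p, b, X_target, X_other) - base) / eps
        w -= lr * g
    return w, b

# Toy data: two roughly opposite clusters
X_t = np.array([[2.0, 1.0], [3.0, 1.5], [2.5, 0.8]])
X_o = np.array([[-2.0, -1.0], [-3.0, -1.2], [-2.4, -0.9]])
w, b = fit(X_t, X_o)
```

After fitting, target points should form a small angle with `w` (cosine near 1) and non-target points a large one (cosine near −1).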
<span>Wireless capsule endoscopy (WCE) is a significant modern technique for observing the whole gastroenterological tract to diagnose various diseases such as bleeding, ulcers, tumors, Crohn's disease, and polyps in a non-invasive manner. However, it places a substantial burden on physicians, including human oversight errors and the time consumed in manually checking a vast number of image frames. These problems motivate researchers to employ computer-aided systems to extract the relevant information from the image frames. Therefore, a computer-aided system based on color thresholding and morphological operations has been proposed in this research to recognize bleeding images from WCE. In addition, a quadratic support vector machine (QSVM) classifier has been employed to classify bleeding and non-bleeding images using a statistical feature vector in HSV color space. After extensive experiments on clinical data, 95.8% accuracy, 95% sensitivity, 97% specificity, 80% precision, a 99% negative predictive value, and an 85% F1 score have been achieved, which outperforms some of the existing methods in this regard. It is expected that this methodology will bring a significant contribution to WCE technology. </span>
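The color-threshold and morphology stage described above can be sketched roughly as below. This is not the authors' code: the HSV threshold values are illustrative, the 3×3 opening is a minimal numpy implementation, and the QSVM stage on statistical HSV features is omitted.

```python
import numpy as np

# Hedged sketch: flag bleeding-candidate pixels by an HSV threshold
# (red hues with high saturation -- threshold values are illustrative),
# then apply a 3x3 morphological opening to remove isolated noise.

def hsv_bleeding_mask(hsv, h_max=0.05, s_min=0.6, v_min=0.2):
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    return (h <= h_max) & (s >= s_min) & (v >= v_min)

def erode3(mask):
    """3x3 binary erosion via padded shifts."""
    p = np.pad(mask, 1, constant_values=False)
    out = np.ones_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy : 1 + dy + mask.shape[0],
                     1 + dx : 1 + dx + mask.shape[1]]
    return out

def dilate3(mask):
    """3x3 binary dilation via padded shifts."""
    p = np.pad(mask, 1, constant_values=False)
    out = np.zeros_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy : 1 + dy + mask.shape[0],
                     1 + dx : 1 + dx + mask.shape[1]]
    return out

def opening(mask):
    """Erosion followed by dilation: removes specks, keeps blobs."""
    return dilate3(erode3(mask))

# Toy frame: a 4x4 saturated red blob plus one isolated noisy pixel
hsv = np.zeros((10, 10, 3))
hsv[..., 2] = 0.5          # mid brightness everywhere
hsv[2:6, 2:6, 1] = 0.9     # saturated blob at hue 0 (red)
hsv[8, 8, 1] = 0.9         # isolated noise pixel
mask = opening(hsv_bleeding_mask(hsv))
```

The opening keeps the 4×4 blob but erases the isolated pixel, which is the usual reason a morphological step follows a raw color threshold.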
<p>Nowadays, researchers are incorporating many modern and significant features into advanced driver assistance systems (ADAS). Lane marking detection is one of them, allowing the vehicle to stay in its respective road lane. Conventionally, lanes are detected through handcrafted, highly specialized features followed by substantial post-processing, which leads to high computation and lower accuracy. Additionally, this conventional approach is vulnerable to environmental conditions, making it unreliable. Consequently, this research presents a deep learning-based model suitable for diverse environmental conditions, including multiple lanes, different times of day, different traffic conditions, good and medium weather conditions, and so forth. The approach is derived from a plain encoder-decoder E-Net architecture and is trained using differential and cross-entropy losses for backpropagation. The model has been trained and tested using 3,600 training and 2,700 testing images from TuSimple, a robust public dataset. Input images from very diverse environmental conditions ensure better generalization of the model. The framework reaches a maximum accuracy of 96.61%, with an F1 score of 96.34%, a precision of 98.91%, and a recall of 93.89%. Moreover, the model shows very small false positive and false negative rates of 3.125% and 1.259%, which beats the performance of most existing state-of-the-art models.</p>
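Of the two losses mentioned, the cross-entropy term is standard for segmentation-style lane detection and can be sketched as below; the "differential" loss is not specified in detail in the abstract, so only the cross-entropy part is shown, with an illustrative 4×4 mask rather than TuSimple data.

```python
import numpy as np

# Hedged sketch of the pixel-wise binary cross-entropy term commonly
# used to train encoder-decoder lane-segmentation networks. The masks
# below are illustrative, not from the TuSimple dataset.

def pixel_bce(pred, target, eps=1e-7):
    """Mean binary cross-entropy over all pixels.

    pred:   predicted lane probabilities in (0, 1)
    target: ground-truth lane mask (0 or 1)
    """
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred)
                          + (1 - target) * np.log(1 - pred)))

target = np.zeros((4, 4))
target[:, 1] = 1.0                      # a vertical lane line
good = np.where(target == 1, 0.9, 0.1)  # confident, mostly correct
bad = np.full((4, 4), 0.5)              # uninformative prediction
print(pixel_bce(good, target), pixel_bce(bad, target))
```

A confident, mostly correct prediction scores a much lower loss than a uniform 0.5 guess, which is what the backpropagated gradient exploits during training.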