Diabetic retinopathy is a vascular disease caused by uncontrolled diabetes. Its early detection can save diabetic patients from blindness, yet grading its severity has challenged ophthalmologists for decades. Previous efforts have identified only a limited set of stages and rely on pre- and post-processing methods that require extensive domain knowledge. This study proposes an improved automated system for severity detection of diabetic retinopathy that is dictionary-based and requires no pre- or post-processing steps. The approach integrates an explicit pathological image representation into a learning framework. To build a dictionary of visual features, points of interest are detected in retinal images and descriptive features are computed with the Speeded Up Robust Features (SURF) algorithm and histograms of oriented gradients (HOG). These features are clustered to generate a dictionary, and coding and pooling are then applied to obtain a compact feature representation. A radial basis function kernel support vector machine and a neural network classify the images into five classes: normal; mild, moderate, and severe non-proliferative diabetic retinopathy; and proliferative diabetic retinopathy. The proposed system achieves 95.92% sensitivity and 98.90% specificity, improving on reported state-of-the-art methods.
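To make the dictionary pipeline concrete, the following is a minimal sketch of a bag-of-visual-words workflow of the kind the abstract describes, not the authors' implementation: SURF descriptors are clustered with k-means to form the dictionary, each image is encoded by hard-assignment coding with sum pooling into a normalized histogram, and an RBF-kernel SVM classifies the result. The hyperparameters (dictionary size, Hessian threshold, SVM settings) and the variable names `X_train`/`y_train` are illustrative assumptions; the HOG branch is omitted for brevity but its descriptors could be encoded the same way.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def extract_descriptors(gray_image):
    # SURF ships in opencv-contrib (cv2.xfeatures2d) and may be disabled
    # in some builds for patent reasons; ORB is a common drop-in fallback.
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    keypoints, descriptors = surf.detectAndCompute(gray_image, None)
    return descriptors  # shape: (n_keypoints, 64)

def build_dictionary(stacked_descriptors, k=200):
    # Cluster descriptors pooled over the training set; the k centroids
    # act as the "visual words" of the dictionary.
    return KMeans(n_clusters=k, random_state=0).fit(stacked_descriptors)

def encode(descriptors, kmeans):
    # Hard-assignment coding + sum pooling: a histogram of visual words,
    # L2-normalized into a compact fixed-length image representation.
    words = kmeans.predict(descriptors)
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / (np.linalg.norm(hist) + 1e-12)

# RBF-kernel SVM over the pooled histograms. X_train would hold one
# encoded vector per retinal image; y_train the five severity labels.
# clf = SVC(kernel="rbf", C=10, gamma="scale").fit(X_train, y_train)
```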
Bioacoustics plays an important role in the conservation of bird species, and bioacoustic surveys based on autonomous audio recording are both cost-effective and time-efficient. However, bird species exhibit widely varying vocalization patterns, which makes automated analysis challenging. Previous studies have focused on segmenting bird audio without considering specific vocalization patterns, and based on the existing literature there is no prior work that segments monosyllabic and multisyllabic bird vocalizations separately. This research addresses that gap and proposes a collection of audio features, named Perceptual, Descriptive, and Harmonic Features (PDHFs), that gives promising results in the classification of bird vocalizations. Classification results improved further when monosyllabic and multisyllabic birds were classified separately. To evaluate the performance of PDHFs, several classifiers were compared; an artificial neural network (ANN) outperformed the others, achieving an accuracy of 98%.
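As an illustration of the kind of feature pipeline the abstract outlines, the sketch below uses common librosa features as hedged stand-ins for the paper's PDHF set: MFCCs as perceptual features, spectral statistics as descriptive features, and a harmonic-energy summary as a harmonic feature. The exact PDHF definitions are the paper's own; the feature choices, network size, and the variables `X`/`y` here are assumptions for demonstration only.

```python
import librosa
import numpy as np
from sklearn.neural_network import MLPClassifier

def pdhf_like_features(path):
    # Load one vocalization segment at its native sampling rate.
    y, sr = librosa.load(path, sr=None)
    # Perceptual stand-in: mean MFCCs over the segment.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    # Descriptive stand-ins: spectral shape statistics.
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr).mean()
    bandwidth = librosa.feature.spectral_bandwidth(y=y, sr=sr).mean()
    rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr).mean()
    zcr = librosa.feature.zero_crossing_rate(y).mean()
    # Harmonic stand-in: energy of the harmonic component after
    # harmonic-percussive separation.
    harmonic = librosa.effects.harmonic(y)
    harm_energy = float(np.mean(harmonic ** 2))
    return np.concatenate([mfcc, [centroid, bandwidth, rolloff, zcr, harm_energy]])

# A small feed-forward ANN over the feature vectors; per the study,
# monosyllabic and multisyllabic species would be trained separately.
# ann = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, y)
```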