Automatic Image Annotation is a technique for retrieving images based on their content and semantic concepts [1]. In this technique, the image content is associated with a set of predefined keywords. Content-Based Image Retrieval (CBIR) allows users to retrieve images efficiently, and the image features can be extracted automatically using image processing techniques. In this study, we propose automatic image annotation using the standardized MPEG-7 color and texture descriptors: the Color Layout Descriptor (CLD) and Scalable Color Descriptor (SCD) for color, and the Edge Histogram Descriptor (EHD) for texture. To reduce the dimensionality of the Color Layout Descriptor we apply Principal Component Analysis (PCA), and for classification we use a Support Vector Machine (SVM). For a query image, the features above are extracted and classified by the SVM to perform the annotation. The system also compares the performance of the different MPEG-7 descriptors. The automatic image annotation presented in this study is evaluated on the TUDarmstadt image set. The results confirm that the system is reliable, with both a short feature vector (at most 400 elements per image) and a high precision of 90 percent.
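The PCA-plus-SVM classification stage described above can be sketched as a scikit-learn pipeline. This is a minimal illustration, not the authors' implementation: the MPEG-7 descriptors are assumed to be precomputed elsewhere, so random vectors of the stated maximum length (400 elements) stand in for them, and the five annotation keywords are hypothetical.

```python
# Sketch of the annotation pipeline: PCA reduces the descriptor vector,
# then an SVM assigns one of the predefined keywords.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 400))       # stand-ins for CLD+SCD+EHD feature vectors
y = rng.integers(0, 5, size=200)      # 5 hypothetical annotation keywords

model = make_pipeline(PCA(n_components=32), SVC(kernel="rbf"))
model.fit(X, y)
labels = model.predict(X[:3])         # annotate three query images
print(labels.shape)
```

The pipeline object keeps the PCA projection fitted on the training descriptors, so query images are reduced with the same components before classification.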
The application of deep learning to enhance the accuracy of intrusion detection in modern computer networks is studied in this paper. Attack identification in computer networks is divided into two categories, intrusion detection and anomaly detection, according to the information used in the learning phase. Intrusion detection uses both routine traffic and attack traffic. Anomaly detection methods instead attempt to model the normal behavior of the system, and any event that violates this model is considered suspicious; for example, if a web server that is normally passive begins initiating connections to many addresses, it is likely infected with a worm. Anomaly detection methods include statistical models, the secure-system approach, protocol review, file checking, whitelisting, neural networks, genetic algorithms, support vector machines, and decision trees. Our results demonstrate that our approach offers high accuracy, precision, and recall together with reduced training time. In future work, the first avenue of exploration will be to assess and extend the capability of our model to handle zero-day attacks.
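The anomaly-detection idea described above (model normal behavior, flag deviations) can be illustrated with the simplest of the listed techniques, a statistical model. This is a hedged sketch, not the paper's deep learning method: the traffic feature, its distribution, and the threshold are all assumed for illustration.

```python
# Minimal statistical anomaly detector: learn mean and spread of a traffic
# feature (e.g. outbound connections per minute) from routine traffic only,
# then flag observations that deviate strongly from that normal model.
import numpy as np

rng = np.random.default_rng(1)
normal_traffic = rng.normal(loc=20, scale=3, size=1000)  # routine traffic only

mu, sigma = normal_traffic.mean(), normal_traffic.std()

def is_anomalous(x, k=4.0):
    """Flag an observation more than k standard deviations from normal."""
    return abs(x - mu) > k * sigma

print(is_anomalous(21.0))   # typical rate -> False
print(is_anomalous(500.0))  # worm-like connection burst -> True
```

A passive web server suddenly opening hundreds of connections per minute, as in the worm example above, would fall far outside the learned model and be flagged.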
Presently, facial image recognition via a thermal camera is a critical phase in numerous fields, yet systems using thermal facial images suffer from numerous problems in face identification. In this paper, an Edge-Aided Generative Adversarial Network (EA-GAN) model is introduced to overcome the difficulties of thermal face identification by synthesizing visible face images from their thermal versions. To improve the ability of the Conditional Generative Adversarial Network (CGAN) model to create realistic face images, edge information extracted from the thermal image is used as an additional input, thereby improving overall system performance. Moreover, a new model is presented for face identification that integrates two Convolutional Neural Networks (CNNs) to achieve high accuracy at speed. Experiments on the Carl face dataset indicate that EA-GAN can synthesize visually plausible and identity-preserving faces, achieving better performance than state-of-the-art approaches for thermal facial identification.
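The edge-aiding step above conditions the generator on an edge map extracted from the thermal image. A minimal sketch of that extraction, assuming standard 3x3 Sobel kernels and a synthetic image in place of a real thermal frame (the EA-GAN generator itself is not reproduced here):

```python
# Extract an edge map from a (thermal) image with Sobel filters; in the
# described model, this map is fed to the generator alongside the thermal
# image as conditioning input.
import numpy as np

def sobel_edges(img):
    """Return the gradient magnitude of a 2-D image via 3x3 Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            out[i, j] = np.hypot((patch * kx).sum(), (patch * ky).sum())
    return out

img = np.zeros((32, 32))
img[:, 16:] = 1.0                 # synthetic vertical step edge
edges = sobel_edges(img)
print(edges.shape)                # (30, 30)
```

Regions of uniform temperature produce zero gradient, so the edge map concentrates the facial contours that help the generator preserve identity.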
It has become essential to search for and retrieve high-resolution images easily and efficiently due to the swift growth of digital image collections. Many current annotation algorithms face a major challenge: the gap between the high-level semantics of an image and the low-level features that describe it, known as the "semantic gap". This work uses the MPEG-7 standard to extract features from images: color features are extracted with the Scalable Color Descriptor (SCD) and the Color Layout Descriptor (CLD), while texture features are extracted with the Edge Histogram Descriptor (EHD). Because the CLD produces a high-dimensional feature vector, it is reduced by Principal Component Analysis (PCA). The features extracted by these three descriptors are then passed to two classifiers (Naïve Bayes and Decision Tree) for training, which finally annotate the query image. The TUDarmstadt image bank was used in this study. The results of the tests and a comparative performance evaluation indicate better precision and execution time for Naïve Bayes classification than for Decision Tree classification.
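The classifier comparison described above can be sketched as follows. This is an illustrative setup, not the paper's experiment: random vectors stand in for the PCA-reduced MPEG-7 descriptors, the class count is hypothetical, and training time is measured crudely with a wall-clock timer.

```python
# Train Naive Bayes and a Decision Tree on the same feature vectors and
# report macro precision on the training set plus fit time for each.
import time
import numpy as np
from sklearn.metrics import precision_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64))        # stand-ins for reduced MPEG-7 features
y = rng.integers(0, 4, size=300)      # 4 hypothetical annotation classes

results = {}
for clf in (GaussianNB(), DecisionTreeClassifier(random_state=0)):
    t0 = time.perf_counter()
    clf.fit(X, y)
    elapsed = time.perf_counter() - t0
    p = precision_score(y, clf.predict(X), average="macro")
    results[type(clf).__name__] = (p, elapsed)
    print(type(clf).__name__, round(p, 2), f"{elapsed:.4f}s")
```

On real annotated data one would of course score on a held-out test split; the point here is only the shape of the comparison (same features, two classifiers, precision and time recorded for each).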
Corneal diseases are among the most common eye disorders, and deep learning techniques are used to perform automated diagnosis of the cornea. Deep learning networks require large-scale annotated datasets, which is considered a weakness of deep learning. In this work, a method for synthesizing medical images using conditional generative adversarial networks (CGANs) is presented. It also illustrates how the generated medical images can be used to enrich medical data, improve clinical decisions, and boost the performance of a convolutional neural network (CNN) for medical image diagnosis. The study uses corneal topography captured with a Pentacam device from patients with corneal diseases; the dataset contained 3448 different corneal images. Furthermore, it shows how an imbalanced dataset affects classifier performance, with the data balanced using a resampling approach. Finally, the results obtained from CNNs trained on the balanced dataset are compared to those from CNNs trained on the imbalanced dataset, with performance measured by diagnostic accuracy, precision, and F1-score. Lastly, some generated images were shown to an expert to evaluate how well the type of image and its condition could be identified. The expert judged the generated images, which were based on real cases, to be useful for medical diagnosis and for determining the severity class according to shape and values, since they could represent new intermediate stages of illness between healthy and unhealthy patients.
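The resampling step used to balance the dataset can be sketched as simple random oversampling: minority classes are resampled with replacement until every class matches the size of the largest one. This is an assumed variant of resampling for illustration (the paper does not specify the exact scheme here), and the class counts are synthetic.

```python
# Balance an imbalanced dataset by oversampling minority classes with
# replacement up to the majority-class count.
import numpy as np

def oversample(X, y, seed=0):
    """Return (X, y) with every class resampled to the majority count."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    Xb, yb = [], []
    for c in classes:
        idx = np.flatnonzero(y == c)
        picked = rng.choice(idx, size=target, replace=True)
        Xb.append(X[picked])
        yb.append(y[picked])
    return np.concatenate(Xb), np.concatenate(yb)

X = np.arange(20).reshape(10, 2).astype(float)
y = np.array([0] * 8 + [1] * 2)          # imbalanced: 8 vs 2 samples
Xb, yb = oversample(X, y)
print(np.bincount(yb))                   # [8 8]
```

In the study's setting, the CGAN-generated corneal images serve a similar purpose more powerfully: instead of duplicating minority-class samples, new synthetic images enlarge the minority classes.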