Diabetic retinopathy, the most common diabetic eye disease, occurs when blood vessels in the retina change. Sometimes these vessels swell and leak fluid or even close off completely. In other cases, abnormal new blood vessels grow on the surface of the retina. Early detection can potentially reduce the risk of blindness. This paper presents an automated method for detecting exudates in retinal colour fundus images with high accuracy. First, the image is converted to the HSI colour model and preprocessed to identify candidate exudate regions; the optic disc (OD) is then removed from the segmented image using a graph-cuts algorithm. Hu invariant moments are extracted as the feature vector, and the candidate regions are classified as exudates or non-exudates by a neural network classifier. All tests are performed on the DIARETDB1 database.
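The Hu invariant moments used as the feature vector above can be computed directly from image moments. The sketch below, a minimal pure-NumPy illustration rather than the authors' implementation, computes the first two Hu moments of a grey-level region; the remaining five follow the same pattern from the normalised central moments.

```python
import numpy as np

def hu_moments(image):
    """Compute the first two Hu invariant moments of a 2-D intensity image.

    Hu moments are built from normalised central moments, which makes them
    invariant to translation and scale (and, for the full set of seven,
    rotation) -- the property that makes them useful shape descriptors.
    """
    img = np.asarray(image, dtype=float)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]

    # Raw zeroth moment and centroid
    m00 = img.sum()
    xc = (x * img).sum() / m00
    yc = (y * img).sum() / m00

    # Central moments: translation invariant
    def mu(p, q):
        return (((x - xc) ** p) * ((y - yc) ** q) * img).sum()

    # Normalised central moments: additionally scale invariant
    def eta(p, q):
        return mu(p, q) / m00 ** (1 + (p + q) / 2)

    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return np.array([h1, h2])
```

In a pipeline like the one described, each candidate region would be passed through such an extractor and the resulting vector fed to the classifier.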
Abstract-The explosive growth of image data has driven the research and development of image content searching and indexing systems. Image annotation systems aim to automatically annotate an image with controlled keywords that can then be used for indexing and retrieval. This paper presents a comparative evaluation of an image content annotation system using multilayer neural networks and the nearest-neighbour classifier. Region-growing segmentation is used to separate objects, and Hu moments, Legendre moments and Zernike moments are used as feature descriptors for image content characterization and annotation. The ETH-80 image database is used in the experiments. The best annotation rate is achieved using Legendre moments for feature extraction and the multilayer neural network as the classifier.
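The nearest-neighbour classifier evaluated in this comparison is simple to state: each segmented object is annotated with the keyword of the closest training descriptor. The following sketch, an illustrative 1-NN with Euclidean distance rather than the paper's exact configuration, shows the idea on moment feature vectors.

```python
import numpy as np

def nearest_neighbour_annotate(train_features, train_keywords, query):
    """Annotate one object with the keyword of its nearest training sample.

    train_features : (n_samples, n_features) array of moment descriptors
    train_keywords : list of keywords, one per training sample
    query          : a single feature vector for the object to annotate
    """
    train_features = np.asarray(train_features, dtype=float)
    query = np.asarray(query, dtype=float)
    # Euclidean distance from the query to every training descriptor
    distances = np.linalg.norm(train_features - query, axis=1)
    return train_keywords[int(np.argmin(distances))]
```

The multilayer-network alternative replaces this distance rule with a learned mapping from the same descriptors to keyword scores.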
Most of the reported works in the field of character recognition achieve modest results by using a single method for computing the parameters of the character image and a single approach in the classification phase of the system. To improve the recognition rate, this paper proposes an automatic system for recognizing isolated printed Tifinagh characters using a fusion of several classifiers and a combination of several feature extraction methods. Legendre moments, Zernike moments, Hu moments, the Walsh transform, GIST and texture features are used as descriptors in the feature extraction phase because of their invariance to translation, rotation and scaling. In the classification phase, the neural network, the Bayesian network, the multiclass SVM (Support Vector Machine) and the nearest-neighbour classifiers are combined. The experimental results of each single feature extraction method with each single classification method are compared with our approach to show its robustness. A recognition rate of 100 % is achieved with some combinations of descriptors and classifiers.
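The abstract does not specify the rule used to combine the classifiers' outputs, but one common fusion rule is majority voting over the individual predictions. The sketch below is an illustrative example of that rule, not the authors' stated combination scheme.

```python
from collections import Counter

def fuse_by_majority_vote(predictions):
    """Fuse the labels predicted by several classifiers for one character.

    predictions : list of labels, one per classifier.
    Ties are broken in favour of the label that appears first in the list
    (Counter preserves insertion order in Python 3.7+).
    """
    votes = Counter(predictions)
    return votes.most_common(1)[0][0]
```

With such a rule, a character misread by one classifier can still be recognized correctly if the other classifiers agree, which is the usual motivation for fusing complementary descriptors and classifiers.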
Abstract-To improve the recognition rate, this document proposes an automatic system for recognizing isolated printed Tifinagh characters using a fusion of three classifiers and a combination of several feature extraction methods. Legendre moments, Zernike moments and Hu moments are used as descriptors in the feature extraction phase because of their invariance to translation, rotation and scaling. In the classification phase, the neural network, the multiclass SVM (Support Vector Machine) and the nearest-neighbour classifiers are combined. The experimental results of each single feature extraction method and each single classification method are compared with our approach to show its robustness.
The rapid growth of the Internet and multimedia information has generated a need for techniques for indexing and searching multimedia information, especially in image retrieval. Image searching systems have been developed to allow searching in image databases. However, these systems are still inefficient at semantic image searching by textual query. To perform semantic searching, it is necessary to be able to transform the visual content of images (colours, textures, shapes) into semantic information. This transformation, called image annotation, assigns a legend or keywords to a digital image. Traditional methods of image retrieval rely heavily on manual image annotation, which is very subjective, very expensive and impossible given the size and phenomenal growth of existing image databases. It is therefore quite natural that research has emerged to find a computing solution to this problem. Research work has thus quickly bloomed on automatic image annotation, aimed at reducing both the cost of annotation and the semantic gap.