The key problem in achieving efficient and user-friendly Content-Based Image Retrieval (CBIR) is the design of a search mechanism that delivers minimal irrelevant information (high precision) while ensuring that relevant information is not overlooked (high recall). Current CBIR results need to be improved by indexing images according to their semantics rather than merely the objects that appear in them. We address this problem of building a meaning-based index structure with a concept-based model backed by a domain-dependent ontology. Our analysis of the literature shows that ontology-based CBIR is still at a primitive stage, with very few topological relations exploited and results that remain unsatisfactory. We therefore propose an image-retrieval system that uses spatial information to build topological relations such as connectivity, adjacency, membership, and orientation through the ontology, together with low-level color and texture features.
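To make the "low-level color features" concrete, the sketch below shows one common CBIR baseline: per-channel color histograms compared by histogram intersection. The function names and parameters are illustrative assumptions, not the authors' actual implementation, which additionally incorporates texture features and ontology-derived spatial relations.

```python
import numpy as np

def color_histogram(img, bins=8):
    """Concatenated per-channel histogram, L1-normalised.

    img: H x W x 3 array with values in [0, 1].
    """
    feats = [np.histogram(img[..., c], bins=bins, range=(0.0, 1.0))[0]
             for c in range(3)]
    h = np.concatenate(feats).astype(float)
    return h / h.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]: overlap of two normalised histograms."""
    return np.minimum(h1, h2).sum()

def rank_by_color(query, database):
    """Indices of database images, most similar to the query first."""
    qh = color_histogram(query)
    sims = [histogram_intersection(qh, color_histogram(im)) for im in database]
    return np.argsort(sims)[::-1]

# Tiny synthetic demo: a red query against a reddish and a blue image.
red = np.zeros((8, 8, 3)); red[..., 0] = 1.0
reddish = np.zeros((8, 8, 3)); reddish[..., 0] = 0.9
blue = np.zeros((8, 8, 3)); blue[..., 2] = 1.0
order = rank_by_color(red, [reddish, blue])  # reddish ranks first
```

A semantics-aware system as proposed in the abstract would re-rank or filter such low-level matches using the ontology's topological relations rather than rely on color similarity alone.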
Handwritten Character Recognition (HCR) for Indian languages is an important problem on which relatively little work has been done. Particularly difficult is the recognition of the Kagunita, the compound characters formed by consonant-vowel combinations. To recognize a Kagunita, we must identify both the vowel and the consonant present in the character image. In this paper, we investigate the use of moment features for Kannada Kagunita recognition. Kannada characters are curved in nature, with some symmetry in their shapes; this information is best captured by extracting moment features from directional images. We therefore compute four directional images from the dynamically preprocessed original image using Gabor wavelets. We analyze the Kagunita set, identify the regions carrying vowel and consonant information, and cut these portions from the preprocessed image to form a set of cut images. Moment and statistical features are extracted from the original, directional, and cut images, and are used for both vowel and consonant recognition with a Multi-Layer Perceptron trained by back-propagation. On separate test data, the average recognition rate is 86% for vowels and 65% for consonants. The confusion matrices for both vowels and consonants are analyzed.
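The pipeline step "four directional images via Gabor filtering, then moment features" can be sketched as follows. This is a minimal numpy-only illustration with assumed parameter values (kernel size, sigma, wavelength) and a simple central-moment feature vector; the paper's actual filter bank, preprocessing, and statistical features may differ.

```python
import numpy as np

def gabor_kernel(theta, size=15, sigma=3.0, lam=6.0):
    """Real-valued Gabor kernel at orientation theta (radians).

    Assumed parameters: a size x size window, Gaussian envelope sigma,
    and carrier wavelength lam.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def convolve2d_same(img, k):
    """'Same'-size 2-D convolution via FFT (numpy only)."""
    H, W = img.shape
    kh, kw = k.shape
    shape = (H + kh - 1, W + kw - 1)
    out = np.fft.irfft2(np.fft.rfft2(img, s=shape) * np.fft.rfft2(k, s=shape),
                        s=shape)
    return out[kh // 2:kh // 2 + H, kw // 2:kw // 2 + W]

def directional_images(img):
    """Four directional response images at 0, 45, 90 and 135 degrees."""
    thetas = (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)
    return [np.abs(convolve2d_same(img, gabor_kernel(t))) for t in thetas]

def central_moments(img):
    """Central moments mu_pq for p+q in {2, 3} as a 7-element feature vector."""
    H, W = img.shape
    y, x = np.mgrid[0:H, 0:W]
    m00 = img.sum() + 1e-12
    xbar, ybar = (x * img).sum() / m00, (y * img).sum() / m00
    dx, dy = x - xbar, y - ybar
    orders = [(2, 0), (1, 1), (0, 2), (3, 0), (2, 1), (1, 2), (0, 3)]
    return np.array([(dx**p * dy**q * img).sum() for p, q in orders])

# Demo: a vertical bar responds most strongly to the 0-degree Gabor,
# whose carrier oscillates along x.
bar = np.zeros((32, 32)); bar[:, 14:18] = 1.0
dirs = directional_images(bar)
features = np.concatenate([central_moments(d) for d in dirs])  # 4 x 7 values
```

Features like these, concatenated across the directional and cut images, would then feed the MLP classifier described above.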