Semantic image segmentation is a fundamental yet challenging problem, which can be viewed as an extension of conventional object detection with close relations to image segmentation and classification. It aims to partition images into non-overlapping regions that are assigned predefined semantic labels. Most existing approaches utilize and integrate low-level local features and high-level contextual cues, which are fed into an inference framework such as the conditional random field (CRF). However, the lack of meaning in the primitives (i.e., pixels or superpixels) and the cues provides low discriminatory capability, since they are rarely object-consistent. Moreover, blindly combining heterogeneous features and exploiting contextual cues through limited neighborhood relations in the CRFs tends to degrade the labeling performance. This paper proposes an ontology-based semantic image segmentation (OBSIS) approach that jointly models image segmentation and object detection. In particular, a Dirichlet process mixture model transforms the low-level visual space into an intermediate semantic space, which drastically reduces the feature dimensionality. These features are then individually weighted and independently learned within the context, using multiple CRFs. The segmentation of images into object parts is hence reduced to a classification task, where object inference is passed to an ontology model. This model resembles the way humans understand images, through the combination of different cues, context models, and rule-based learning of the ontologies. Experimental evaluations using the MSRC-21 and PASCAL VOC'2010 data sets show promising results.
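The mapping from low-level descriptors to an intermediate semantic space can be sketched with a truncated Dirichlet process mixture. The snippet below is a minimal illustration using scikit-learn's `BayesianGaussianMixture`; the synthetic descriptors, dimensionality, and truncation level are assumptions for the sketch, not the paper's actual setup.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Synthetic stand-in for low-level descriptors (e.g. colour/texture per superpixel).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 8)), rng.normal(6, 1, (100, 8))])

# Truncated Dirichlet process mixture: superfluous components receive near-zero
# weight, so the effective number of visual "topics" is inferred from the data.
dpmm = BayesianGaussianMixture(
    n_components=10,  # truncation level, an assumption for this sketch
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(X)

# Each descriptor maps to a mixture component: an 8-D vector becomes a single
# semantic index, which is the dimensionality reduction the abstract refers to.
semantic_labels = dpmm.predict(X)
```

The component indices (or component responsibilities) then serve as the compact, more object-consistent features that downstream CRFs consume.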
Many fruit recognition systems today are designed to classify different types of fruits, but no content-based fruit recognition system focuses on durian species. Durian, known as the king of tropical fruits, has species that share many characteristics: the skin has almost the same color, from green to yellowish brown, with only slightly different thorn shapes, making it hard to differentiate the species with current methods. Sometimes it is hard even for general consumers to differentiate durian species themselves. This work aims to contribute an automatic content-based durian species recognition system that can assist users in differentiating various species of durian. A few global contour-based and region-based shape descriptors, such as area, perimeter, and circularity, are computed as feature vectors, and the K-Nearest Neighbors algorithm is used to classify the durian based on the extracted features. 10-fold cross-validation is used to evaluate the proposed system. Experimental results show that the proposed feature extraction method for the durian species recognition system successfully obtains a positive recognition rate of 100%.
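The shape-descriptor pipeline above can be sketched as follows. `shape_features` is a hypothetical helper operating on binary silhouettes, and the synthetic disk and square masks stand in for segmented durian images; none of this is the paper's actual implementation.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def shape_features(mask):
    """Area, perimeter, and circularity of a binary silhouette (hypothetical helper)."""
    mask = mask.astype(bool)
    area = int(mask.sum())
    # Perimeter ~ foreground pixels with at least one 4-connected background neighbour.
    p = np.pad(mask, 1)
    interior = p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]
    perimeter = int((mask & ~interior).sum())
    circularity = 4 * np.pi * area / perimeter**2  # ~1 for a circle, lower for a square
    return [area, perimeter, circularity]

# Synthetic training masks: digital disks ("round") and squares ("square").
masks, labels = [], []
for r in (8, 10, 12):
    yy, xx = np.ogrid[-20:20, -20:20]
    masks.append(xx**2 + yy**2 <= r**2)
    labels.append("round")
for s in (14, 18, 22):
    m = np.zeros((40, 40), dtype=bool)
    m[8:8 + s, 8:8 + s] = True
    masks.append(m)
    labels.append("square")

# Feature vectors [area, perimeter, circularity] feed a K-Nearest Neighbors classifier.
X = np.array([shape_features(m) for m in masks])
knn = KNeighborsClassifier(n_neighbors=1).fit(X, labels)
```

In practice the raw area and perimeter are scale-dependent, so real pipelines typically normalise the features before the nearest-neighbour distance is computed.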
Many fruit recognition approaches today are designed to classify different types of fruits, but little work has been done on content-based fruit recognition that focuses specifically on durian species. Durian, known as the king of tropical fruits, has species that share many characteristics: the skin has almost the same colour, from green to yellowish brown, with only slightly different shapes and patterns of thorns. Therefore, it is hard to differentiate them with current methods. It would be valuable to have an automated content-based recognition framework that can automatically represent and recognise a durian species given a durian image as input. Therefore, this work contributes a new representation method based on multiple features for effective durian recognition. Two kinds of features, based on shape and texture, are considered in this work. Simple shape signatures, including area, perimeter, and circularity, are used to describe the shape of the durian fruit and its base, while the texture of the fruit is described using the Local Binary Pattern. We extracted these features from 240 durian images and trained the proposed method using several classifiers. Based on 10-fold cross-validation, the Logistic Regression, Gaussian Naïve Bayes, and Linear Discriminant Analysis classifiers performed equally well, achieving 100% accuracy, precision, recall, and F1-score. We further tested the proposed algorithm on a larger dataset consisting of 42,337 fruit images in 64 categories. Experimental results on this larger and more general dataset show that the proposed multiple features, trained with a Linear Discriminant Analysis classifier, achieve 72.38% accuracy, 73% precision, 72% recall, and a 72% F1-score.
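The texture half of the representation can be sketched with a basic 8-neighbour LBP histogram. This is a simplified, non-uniform variant (not necessarily the paper's exact operator), and the smooth-vs-noisy synthetic patches are stand-ins for real durian-skin textures; the classifier is one of those the abstract mentions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def lbp_histogram(img):
    """256-bin histogram of basic 8-neighbour LBP codes (simplified sketch)."""
    H, W = img.shape
    centre = img[1:-1, 1:-1]
    codes = np.zeros_like(centre, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        # Set one bit per neighbour that is >= the centre pixel.
        neigh = img[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx]
        codes |= (neigh >= centre).astype(np.uint8) << bit
    hist = np.bincount(codes.ravel(), minlength=256)
    return hist / codes.size  # normalised so patches of any size are comparable

# Synthetic patches: smooth gradients vs. uniform noise (stand-ins for textures).
rng = np.random.default_rng(0)
smooth = [np.add.outer(np.arange(32.0), np.arange(32.0)) + rng.normal(0, 0.1, (32, 32))
          for _ in range(10)]
noisy = [rng.uniform(0, 255, (32, 32)) for _ in range(10)]

X = np.array([lbp_histogram(p) for p in smooth + noisy])
y = [0] * 10 + [1] * 10
clf = LogisticRegression(max_iter=1000).fit(X, y)
```

In the full system these histogram features would be concatenated with the shape signatures (area, perimeter, circularity) before classification.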
There is abundant marine life around the world, and proper documentation is essential for future records. Many information retrieval systems for marine science today require text input from the user and can only be accessed online. Therefore, people who do not know the name of a marine species, or who do not have Internet access, cannot search using these systems. Responding to this important need, this work aims to develop a Content-Based Image Retrieval (CBIR) system for marine invertebrates based on colour and shape features. With the CBIR system for marine invertebrates, users can look for marine invertebrate species instead of using traditional search methods such as books and encyclopedias. Users can simply upload the image of the marine invertebrate they want to search for, and the system retrieves all similar images of marine invertebrates along with their descriptions. All of the interface's buttons, icons, and text were designed so that any user can easily understand them and learn to operate the system. Based on a retrieval-effectiveness experiment and a questionnaire-based survey, the proposed CBIR system for marine invertebrates is shown to be effective, to help users find similar images of marine invertebrates, to provide concise information on marine invertebrate species for learning purposes, and to be reliable and user-friendly.
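The colour side of such a retrieval system can be sketched as a histogram-intersection ranking. The descriptor, bin count, and solid-colour test images below are illustrative assumptions, not the system's actual implementation.

```python
import numpy as np

def colour_hist(img, bins=8):
    """Concatenated per-channel colour histogram, L1-normalised (illustrative descriptor)."""
    h = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0] for c in range(3)]
    h = np.concatenate(h).astype(float)
    return h / h.sum()

def retrieve(query, database, top_k=3):
    """Rank database images by histogram intersection with the query (higher = more similar)."""
    q = colour_hist(query)
    sims = [np.minimum(q, colour_hist(img)).sum() for img in database]
    return np.argsort(sims, kind="stable")[::-1][:top_k]

def solid(rgb):
    """A 16x16 solid-colour test image."""
    return np.tile(np.array(rgb, dtype=np.uint8), (16, 16, 1))

# Toy database: two reddish and two bluish images; a reddish query should
# rank the reddish entries first.
database = [solid((200, 10, 10)), solid((210, 15, 15)),
            solid((10, 10, 200)), solid((15, 15, 210))]
ranking = retrieve(solid((205, 12, 12)), database)
```

A deployed system would combine this with shape descriptors and return the stored species descriptions alongside the ranked images.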