Any computer vision application begins by acquiring images and data, followed by preprocessing and pattern recognition steps to perform a task. When the acquired images are highly imbalanced or inadequate, the desired task may not be achievable. Unfortunately, imbalance in acquired image datasets is inevitable in certain complex real-world problems such as anomaly detection, emotion recognition, medical image analysis, fraud detection, metallic surface defect detection, and disaster prediction. The performance of computer vision algorithms can deteriorate significantly when the training dataset is imbalanced. In recent years, Generative Adversarial Networks (GANs) have gained immense attention from researchers across a variety of application domains owing to their capability to model complex real-world image data. Notably, GANs can not only generate synthetic images; their adversarial learning paradigm has also shown good potential for restoring balance in imbalanced datasets. In this paper, we examine the most recent developments in GAN-based techniques for addressing imbalance problems in image data. The real-world challenges and implementations of GAN-based synthetic image generation are covered extensively in this survey. We first introduce the various imbalance problems in computer vision tasks and their existing solutions, and then review key concepts such as deep generative image models and GANs. We then propose a taxonomy that organizes GAN-based techniques for addressing imbalance problems in computer vision tasks into three major categories: (1) image-level imbalances in classification, (2) object-level imbalances in object detection, and (3) pixel-level imbalances in segmentation tasks. We elaborate on the imbalance problems in each category and review the corresponding GAN-based solutions.
Readers will gain an understanding of how GAN-based techniques can handle imbalance problems and boost the performance of computer vision algorithms.
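As a toy illustration of the adversarial-rebalancing idea surveyed above, the sketch below trains a deliberately tiny GAN in plain NumPy (a linear generator against a logistic discriminator) to mimic a one-dimensional minority-class feature distribution, then draws synthetic samples that could be appended to an imbalanced training set. Every name, loss, and hyperparameter here is our own assumption for the demo, not taken from any surveyed method.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Minority class" real feature values: N(3, 0.5).
real = rng.normal(3.0, 0.5, size=1000)

# Generator g(z) = a*z + b; discriminator d(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0
w, c = 0.1, 0.0
lr = 0.01

for _ in range(3000):
    z = rng.normal(size=32)
    fake = a * z + b
    x_real = rng.choice(real, 32)

    # Discriminator: gradient ascent on log d(real) + log(1 - d(fake)).
    p_real, p_fake = sigmoid(w * x_real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - p_real) * x_real) - np.mean(p_fake * fake))
    c += lr * (np.mean(1 - p_real) - np.mean(p_fake))

    # Generator: gradient ascent on log d(fake) (non-saturating loss).
    p_fake = sigmoid(w * (a * z + b) + c)
    g_common = (1 - p_fake) * w
    a += lr * np.mean(g_common * z)
    b += lr * np.mean(g_common)

# Synthetic minority samples to append to the imbalanced training set;
# their mean should have drifted toward the real minority mean (~3).
synthetic = a * rng.normal(size=500) + b
```

In a real pipeline the generator would be a deep network over images rather than a linear map over scalars, but the alternating discriminator/generator updates follow the same pattern.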
Purpose
The purpose of this paper is twofold: to present gamified mobile experiences as valid tools for DMOs to enrich the tourist experience, and to present the benefits that analytics tools integrated into gamified mobile experiences provide to DMOs.
Design/methodology/approach
Staff from three DMOs generated a gamified mobile experience using a custom authoring tool designed and developed to fulfil their requirements. The gamified experience targeted families with children visiting the Basque Country during the off-peak season. It was validated over a period of seven weeks within a pilot project promoted by the DMOs' local tourist information offices. Data provided directly by tourists and data gathered from the analytics tools integrated into the gamified mobile experience were analysed to fulfil the research objectives presented in the paper.
Findings
Both DMOs and tourists can benefit from gamified mobile experiences. The integration of analytics tools to gain insights into tourist behaviour can be a relevant information source for DMOs.
Research limitations/implications
The pilot project targeted a niche tourism market, families with children visiting the Basque Country, and ran during the off-peak season. Further studies focusing on other tourist types and on different tourism seasons and destination types will be required to strengthen the validation of the research objectives presented in this paper.
Practical implications
The paper promotes both the development of gamified mobile experiences and the inclusion of analytics tools that allow DMOs to obtain relevant information about tourists and the mobile experiences themselves.
Originality/value
A gamified mobile experience is generated by DMOs and validated on the basis of real tourists' experiences. The analytics tools inside the gamified mobile experience provide DMOs with relevant information.
Greenhouse crop production is growing throughout the world, and early pest detection is particularly important for productivity and for reducing pesticide use. Conventional visual inspection methods are inefficient for large crops, whereas computer vision and recent advances in deep learning can play an important role in increasing reliability and productivity. This paper presents the development and comparison of two learning-based approaches for automated, vision-based pest detection and identification: a solution combining classical computer vision and machine learning is compared against a deep learning solution. The main focus of our work is selecting the best approach on the basis of pest detection and identification accuracy. Inspection focuses on the most harmful pests of greenhouse tomato and pepper crops, Bemisia tabaci and Trialeurodes vaporariorum. A large dataset of infected tomato plant images was created to build and evaluate the machine learning and deep learning models. The results show that the deep learning technique provides the better solution because it (a) achieves disease detection and classification in one step, (b) attains higher accuracy, (c) distinguishes better between Bemisia tabaci and Trialeurodes vaporariorum, and (d) allows balancing speed against accuracy by choosing among different models.
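The two-step "computer vision + machine learning" pipeline the paper compares against deep learning can be sketched as follows: hand-crafted features are extracted from image patches, then a separately trained classifier labels each patch. The synthetic "leaf" patches, the bright-pixel whitefly heuristic, and the hand-rolled logistic regression below are illustrative assumptions for the demo, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_patch(infested):
    """Toy 8x8 'leaf' patch; infested patches get one bright whitefly-like pixel."""
    img = rng.normal(0.3, 0.05, size=(8, 8))  # dark background
    if infested:
        r, c = rng.integers(0, 8, size=2)
        img[r, c] = 0.9
    return img

def features(img):
    """Step 1 - hand-crafted features: mean, max, bright-pixel count, bias term."""
    return np.array([img.mean(), img.max(), float((img > 0.7).sum()), 1.0])

X, y = [], []
for _ in range(400):
    label = int(rng.integers(0, 2))
    X.append(features(make_patch(bool(label))))
    y.append(label)
X, y = np.array(X), np.array(y)

# Step 2 - classification: logistic regression trained by gradient ascent.
w = np.zeros(4)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += 0.5 * X.T @ (y - p) / len(y)

pred = (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(int)
accuracy = float((pred == y).mean())
```

A deep learning model would instead learn the features and the classifier jointly from raw pixels, which is the one-step advantage (a) cited in the abstract.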
To meet the demands of a rising population, greenhouses must face the challenge of producing more in a more efficient and sustainable way. Innovative mobile robotic solutions with flexible navigation and manipulation strategies can help monitor the field in real time. Guided by Integrated Pest Management strategies, robots can autonomously perform early pest detection and selective treatment tasks. However, combining the different robotic skills is error-prone work that requires experience in many robotic fields and usually results in ad hoc solutions that are not reusable in other contexts. This work presents Robotframework, a generic ROS-based architecture that can easily integrate different navigation, manipulation, perception, and high-level decision modules, leading to faster and simpler development of new robotic applications. The architecture includes generic real-time data collection tools, diagnosis and error-handling modules, and user-friendly interfaces. To demonstrate the benefits of combining and easily integrating different robotic skills with the architecture, two flexible manipulation strategies have been developed to enhance early-stage pest detection and to perform targeted spraying in both simulated and field commercial greenhouses. In addition, a further use case demonstrates the applicability of the architecture in other industrial contexts.
INDEX TERMS Precision agriculture, robotic control architecture, mobile manipulator, pest detection and treatment, greenhouse.
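The kind of modular skill composition described above can be sketched, very loosely, as interchangeable navigation, perception, and manipulation modules behind a common interface, chained by a high-level decision layer. The class and method names below are illustrative assumptions only and are not the actual Robotframework or ROS API.

```python
from abc import ABC, abstractmethod

class Skill(ABC):
    """Common interface so modules from different robotic fields are swappable."""
    @abstractmethod
    def execute(self, context: dict) -> dict:
        """Run the skill and return the updated shared mission context."""

class Navigate(Skill):
    def execute(self, context):
        context["pose"] = context.get("goal", "row_1")
        context.setdefault("log", []).append(f"navigated to {context['pose']}")
        return context

class DetectPests(Skill):
    def execute(self, context):
        context["pests"] = ["whitefly"]  # stand-in for a perception module
        context.setdefault("log", []).append("pests detected")
        return context

class Spray(Skill):
    def execute(self, context):
        context.setdefault("log", []).append(f"sprayed {context['pests']}")
        return context

def run_mission(skills, context=None):
    """High-level decision layer: chain skills over a shared context."""
    context = context or {}
    for skill in skills:
        context = skill.execute(context)
    return context

result = run_mission([Navigate(), DetectPests(), Spray()], {"goal": "row_3"})
```

Replacing `Spray` with, say, a palletizing skill while reusing the runner is the sort of cross-context reuse the architecture's additional industrial use case illustrates.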