This paper presents a low-cost, cloud-based autonomous drone system for surveying and monitoring aquaculture sites. We incorporated artificial intelligence (AI) services using computer vision and combined multiple deep learning recognition models to achieve scalability and added functionality for aquaculture surveillance tasks. The recognition models are embedded in the aquaculture cloud to analyze images and videos captured by the autonomous drone, detecting people, cages, and vessels at the aquaculture site. The inclusion of AI functions for face recognition, fish counting, fish length estimation, and fish feeding intensity assessment supports intelligent decision making. For the fish feeding intensity assessment, the large amount of data in the aquaculture cloud can serve as input to the AI feeding system to optimize farmer production and income. The autonomous drone and aquaculture cloud services are a cost-effective alternative to expensive surveillance systems and multiple fixed-camera installations. By offloading computation, the aquaculture cloud enables the drone to execute its surveillance tasks more efficiently with extended navigation time. The mobile drone navigation app sends surveillance alerts and reports to users. Our multifeatured surveillance system, integrating deep learning models, yielded high-accuracy results.
This paper provides an overview of the capabilities of unmanned systems to monitor and manage aquaculture farms in support of precision aquaculture using the Internet of Things. Aquaculture farms are sited in diverse locations, which poses a major accessibility challenge. For offshore fish cages, continuous monitoring is difficult and risky given waves, water currents, and other underwater environmental factors. Aquaculture farm management and surveillance operations require collecting data on water quality, water pollutants, water temperature, fish behavior, and current/wave velocity, which demands substantial labor cost and effort. Unmanned vehicle technologies can execute these functions with greater efficiency and accuracy; when equipped with suitable sensors, they are even capable of cage detection and illegal fishing surveillance. Additionally, to broaden the scope, this document explores the capacity of unmanned vehicles to act as communication gateways for offshore cages equipped with robust, low-cost sensors capable of underwater and in-air wireless connectivity. The capabilities of existing commercial systems, the Internet of Things, and artificial intelligence combined with drones are also presented to outline a precision aquaculture framework.
The resolution of computed depth maps of fish in an underwater environment limits 3D fish metric estimation. This paper addresses this problem using object-based matching for underwater fish tracking and depth computation with convolutional neural networks (CNNs). First, for each frame in a stereo video, a joint object classification and semantic segmentation CNN segments fish objects from the background. Next, the fish objects in these images are cropped and matched for subpixel disparity computation using a video interpolation CNN. The calculated disparities and depth values are used to estimate the fish metrics, including length, height, and weight. The fish are then tracked across frames of the input stereo video to compute their metrics frame by frame. Finally, the median of each fish's metrics is taken to reduce noise caused by fish motion. Fish with incorrect stereo camera measurements are removed before generating the final fish metric distributions, which are relevant inputs for learning decision models to manage a fish farm. We also constructed underwater stereo video datasets with ground-truth fish metrics measured by humans to verify the effectiveness of our approach. Experimental results show a 5% error rate in our fish length estimation.

INDEX TERMS: convolutional neural network, object tracking, object-based stereo matching
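The core geometric step in the abstract above, recovering metric length from disparity, can be sketched with the standard pinhole stereo relations. This is a minimal illustration, not the paper's CNN pipeline: the focal length, baseline, and pixel coordinates below are hypothetical values, and the CNN-based segmentation, matching, and tracking stages are assumed to have already produced the disparity and the head/tail keypoints.

```python
import statistics

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Standard stereo relation: Z = f * B / d (pinhole camera model)."""
    return focal_px * baseline_m / disparity_px

def fish_length(head_px, tail_px, depth_m, focal_px):
    """Back-project a head-to-tail pixel span to metric length: L = span * Z / f."""
    span_px = abs(tail_px - head_px)
    return span_px * depth_m / focal_px

def median_metric(per_frame_lengths):
    """Median over tracked frames suppresses noise from fish motion,
    as described in the abstract."""
    return statistics.median(per_frame_lengths)

# Hypothetical calibration: 1200 px focal length, 0.12 m baseline, 40 px disparity
z = depth_from_disparity(40, 1200, 0.12)   # 3.6 m
length = fish_length(100, 220, z, 1200)    # 0.36 m
print(z, length)
```

Taking the median over a track (rather than trusting any single frame) is what makes the per-fish estimate robust to momentary segmentation or matching errors.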