This paper presents a low-cost, cloud-based autonomous drone system for surveying and monitoring aquaculture sites. We incorporated artificial intelligence (AI) services using computer vision and combined several deep-learning recognition models to achieve scalability and added functionality for aquaculture surveillance tasks. The recognition models are embedded in the aquaculture cloud to analyze images and videos captured by the autonomous drone, detecting people, cages, and vessels at the aquaculture site. AI functions for face recognition, fish counting, fish length estimation, and fish feeding intensity assessment support intelligent decision-making. For the feeding intensity assessment, the large volume of data in the aquaculture cloud serves as input to the AI feeding system to optimize farmers' production and income. The autonomous drone and aquaculture cloud services are a cost-effective alternative to expensive surveillance systems and multiple fixed-camera installations. The aquaculture cloud enables the drone to execute its surveillance tasks more efficiently, extending its navigation time. The mobile drone navigation app sends surveillance alerts and reports to users. Our multifeatured surveillance system, integrating deep-learning models, yielded high-accuracy results.
The resolution of computed depth maps of fish in an underwater environment limits 3D fish metric estimation. This paper addresses the problem using object-based matching for underwater fish tracking and depth computation with convolutional neural networks (CNNs). First, for each frame of a stereo video, a joint object-classification and semantic-segmentation CNN separates fish objects from the background. The fish objects are then cropped and matched for subpixel disparity computation using a video-interpolation CNN. The calculated disparities and depth values yield the fish metrics, including length, height, and weight. Next, we track each fish across frames of the input stereo video to compute its metrics frame by frame, and take the median of these per-frame metrics to reduce noise caused by fish motion. Fish with incorrect stereo measurements are then removed before generating the final fish metric distributions, which are relevant inputs for learning decision models to manage a fish farm. We also constructed underwater stereo video datasets with actual, human-measured fish metrics to verify the effectiveness of our approach. Experimental results show a 5% error rate in our fish length estimation. Index terms: convolutional neural network, object tracking, object-based stereo matching.
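The pipeline above triangulates depth from disparity and then filters per-frame estimates with a median. A minimal sketch of that geometry, assuming an idealized pinhole stereo model and illustrative calibration values (the focal length, baseline, and pixel coordinates below are not the paper's data):

```python
import math
import statistics

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo model: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def fish_length_m(head_px, tail_px, disparity_px, focal_px, baseline_m):
    """Back-project snout and tail pixels (assumed to share one depth)
    and return their 3D Euclidean distance in metres."""
    z = depth_from_disparity(disparity_px, focal_px, baseline_m)
    head = (head_px[0] * z / focal_px, head_px[1] * z / focal_px, z)
    tail = (tail_px[0] * z / focal_px, tail_px[1] * z / focal_px, z)
    return math.dist(head, tail)

# Per-frame estimates fluctuate with fish motion; the median over a
# tracked sequence suppresses outlier frames, as in the paper.
per_frame_lengths = [0.31, 0.30, 0.52, 0.29, 0.30]  # metres; one bad frame
robust_length = statistics.median(per_frame_lengths)
```

With a 40-pixel disparity, an 800-pixel focal length, and a 0.1 m baseline, the depth is 2.0 m, and two endpoints 80 pixels apart at that depth correspond to a 0.2 m fish.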
Monitoring the status of cultured fish is an essential task in precision aquaculture; a smart underwater imaging device offers a non-intrusive way to observe freely swimming fish, even in turbid or low-ambient-light water. This paper develops a two-mode underwater surveillance camera system consisting of a sonar imaging device and a stereo camera. The sonar imaging device provides two cloud-based artificial intelligence (AI) functions that estimate the quantity of fish and the distribution of their length and weight in a crowded fish school. Because sonar images can be noisy and fish instances in an overcrowded school often overlap, machine-learning technologies such as Mask R-CNN, Gaussian mixture models, convolutional neural networks, and semantic segmentation networks were employed to address the difficulty of analyzing fish in sonar images. Furthermore, the sonar and stereo RGB images were aligned in 3D space, enabling an additional AI function for fish annotation based on RGB images. The proposed two-mode surveillance camera was tested by collecting data from aquaculture tanks and offshore net cages through a cloud-based AIoT system. The accuracy of the proposed AI functions was evaluated against human-annotated fish metric datasets to verify the feasibility and suitability of the smart camera for remote underwater fish metric estimation.
Precision aquaculture deploys multi-mode sensors on a fish farm to collect fish and environmental data, forming large datasets for pre-training data-driven prediction models that characterize the aquaculture environment and farm conditions. These prediction models support fish farmers' decisions, providing objective information for monitoring and controlling automatic aquaculture machines and maximizing farm production. However, the literature offers little discussion of a cost-effective digital twin architecture for aquaculture transformation. This paper therefore analyzes the requirements of a digital transformation infrastructure consisting of five-layered digital twins through an extensive literature review; the results help realize our goal of efficient management and remote monitoring of aquaculture farms. The system embeds cloud-based digital twins using machine learning and computer vision, together with sensors and artificial intelligence-based Internet of Things (AIoT) technologies, to monitor fish feeding behavior, disease, and growth. This study uses a modified analytic hierarchy process to define user requirements and strategies for deploying digital twins toward intelligent fish farm management. Based on the requirement analysis, the constructed prototype of the cloud-based digital twin system effectively improves the efficiency of traditional fish farm management.
Onshore farming of premium aquaculture species is under scrutiny and criticism, partly because of possible adverse environmental impacts on other resource users and the surrounding environment. The best alternative for preventing or minimizing these impacts is to use open seawater through large submersible cage culture. Current operations in Taiwan have demonstrated that this form of culture is technically feasible but economically demanding because of high capital and operating costs. This study therefore conducted an economic analysis of expanding large submersible cage culture, selecting two premium species, snubnose pompano (Trachinotus anak) and cobia (Rachycentron canadum), and examining the profitability of the investment. The current four-unit cage operation showed a negative net present value and internal rate of return, with payback periods of over ten and six years for the two species, respectively. Large submersible cage culture can be financially profitable when the operation expands from 8 to 24 units. Increasing to eight unit cages yielded a gross margin of 17.09% and a benefit-cost ratio (BCR) of 1.21, with a payback period of 5.36 years. Expanding to 24 unit cages was a potentially lucrative investment, with a gross margin of 18.51%, a BCR of 1.23, a profitability index (PI) of 2.15, an internal rate of return of 20.84%, and a payback period of 3.55 years. Sensitivity analyses revealed that market price and survival rate significantly affect the profitability of large submersible cage culture. Finally, it is suggested that producers invest in eight-unit cages and maintain survival rates of 80% for snubnose pompano and 40% for cobia.
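The study's profitability metrics (net present value, internal rate of return, payback period) follow standard capital-budgeting definitions. A minimal sketch of how they are computed, using illustrative cash flows rather than the study's data for pompano or cobia cages:

```python
def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the year-0 outlay (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0):
    """Internal rate of return: the discount rate where NPV = 0,
    found by bisection (assumes one sign change in NPV over [lo, hi])."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def payback_period(cash_flows):
    """Years until cumulative (undiscounted) cash flow turns positive,
    interpolating linearly within the crossing year."""
    cumulative = cash_flows[0]
    for t, cf in enumerate(cash_flows[1:], start=1):
        if cumulative + cf >= 0:
            return t - 1 + (-cumulative) / cf
        cumulative += cf
    return float("inf")  # never recovers the outlay

# Illustrative project: 1000 invested, four years of growing returns.
flows = [-1000.0, 300.0, 350.0, 400.0, 450.0]
```

For these example flows the undiscounted cumulative cash flow turns positive during year three, so the payback period is just under three years; a negative NPV at the farm's discount rate, as reported for the four-unit operation, would indicate an unprofitable configuration.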