An accurate and reliable image-based fruit detection system is critical for supporting higher-level agricultural tasks such as yield mapping and robotic harvesting. This paper presents the use of a state-of-the-art object detection framework, Faster R-CNN, in the context of fruit detection in orchards, including mangoes, almonds, and apples. Ablation studies are presented to better understand the practical deployment of the detection network, including how much training data is required to capture variability in the dataset. Data augmentation techniques are shown to yield significant performance gains, resulting in a greater than two-fold reduction in the number of training images required. In contrast, transferring knowledge between orchards contributed negligible performance gain over initialising the Deep Convolutional Neural Network directly from ImageNet features. Finally, to operate over orchard images containing between 100 and 1,000 fruit each, a tiling approach is introduced for the Faster R-CNN framework. The study has resulted in the best yet detection performance for these orchards relative to previous works, with an F1-score of > 0.9 achieved for apples and mangoes.
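The tiling idea can be sketched as a simple overlapping grid over the image. Tile size, overlap, and how per-tile detections are merged back (e.g. cross-boundary non-maximum suppression) are illustrative assumptions here, not parameters from the paper:

```python
def make_tiles(width, height, tile, overlap):
    """Cover a (width x height) image with square tiles of side `tile`
    that overlap by `overlap` pixels, returning (x0, y0, x1, y1) boxes.
    Assumes tile <= width and tile <= height."""
    step = tile - overlap
    xs = list(range(0, max(width - tile, 0) + 1, step))
    ys = list(range(0, max(height - tile, 0) + 1, step))
    # Ensure the last tile in each direction touches the image edge.
    if xs[-1] + tile < width:
        xs.append(width - tile)
    if ys[-1] + tile < height:
        ys.append(height - tile)
    return [(x, y, x + tile, y + tile) for y in ys for x in xs]
```

Detections from each tile would then be mapped back to full-image coordinates by adding the tile offset before merging duplicates in the overlap regions.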
Ground vehicles equipped with monocular vision systems are a valuable source of high-resolution image data for precision agriculture applications in orchards. This paper presents an image processing framework for fruit detection and counting using orchard image data. A general-purpose image segmentation approach is used, including two feature learning algorithms: multi-scale Multi-Layered Perceptrons (MLP) and Convolutional Neural Networks (CNN). These networks were extended by including contextual information about how the image data was captured (metadata), which correlates with some of the appearance variations and/or class distributions observed in the data. The pixel-wise fruit segmentation output is processed using the Watershed Segmentation (WS) and Circular Hough Transform (CHT) algorithms to detect and count individual fruits. Experiments were conducted in a commercial apple orchard near Melbourne, Australia. The results show an improvement in fruit segmentation performance with the inclusion of metadata on the previously benchmarked MLP network. We extend this work with CNNs, bringing agrovision closer to the state-of-the-art in computer vision; although metadata had negligible influence there, the best pixel-wise F1-score of 0.791 was achieved. The WS algorithm produced the best apple detection and counting results, with a detection F1-score of 0.858. As a final step, image fruit counts were accumulated over multiple rows at the orchard and compared against the post-harvest fruit counts obtained from a grading and counting machine. The count estimates using CNN and WS resulted in the best performance for this dataset, with a squared correlation coefficient of r² = 0.826.
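Detection quality in these studies is summarised by the F1-score, the harmonic mean of precision and recall computed from true positives, false positives, and false negatives. A minimal sketch of the metric (the example counts are illustrative, not the paper's data):

```python
def f1_score(tp, fp, fn):
    """F1-score from detection counts: harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

For example, 9 correctly detected fruit with 1 false detection and 2 missed fruit gives precision 0.9 and recall ≈ 0.82, for an F1-score of about 0.857.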
This paper presents a novel multi-sensor framework to efficiently identify, track, localise and map every piece of fruit in a commercial mango orchard. A multiple-viewpoint approach is used to solve the problem of occlusion, thus avoiding the need for labour-intensive field calibration to estimate actual yield. Fruit are detected in images using a state-of-the-art Faster R-CNN detector, and pairwise correspondences are established between images using trajectory data provided by a navigation system. A novel LiDAR component automatically generates image masks for each canopy, allowing each fruit to be associated with the corresponding tree. The tracked fruit are triangulated to locate them in 3D, enabling a number of spatial statistics per tree, row or orchard block. A total of 522 trees and 71,609 mangoes were scanned on a Calypso mango orchard near Bundaberg, Queensland, Australia, with 16 trees counted by hand for validation, both on the tree and after harvest. The results show that single-, dual- and multi-view methods can all provide precise yield estimates, but only the proposed multi-view approach can do so without calibration, with an error rate of only 1.36% for individual trees.
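Locating a tracked fruit in 3D from two or more camera poses can be sketched with standard linear (DLT) triangulation. The abstract does not give the paper's exact formulation, so the projection-matrix setup below is illustrative only, assuming calibrated cameras and NumPy:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) image observations.
    Solves A @ X = 0 for the homogeneous point X via SVD."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                  # null-space vector = homogeneous solution
    return X[:3] / X[3]         # dehomogenise to (x, y, z)
```

With more than two views (as in the multi-view approach), additional rows are stacked into the same linear system, which makes the estimate more robust to occlusion in any single image.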
The scientific areas of plant genomics and phenomics are capable of improving plant productivity, yet they are limited by the manual labor currently required to perform in-field measurement and by a lack of technology for measuring the physical performance of crops growing in the field. A variety of sensor technologies have the potential to efficiently measure plant characteristics that are related to production. Recent advances have also shown that autonomous airborne and manually driven ground-based sensor platforms provide practical mechanisms for deploying the sensors in the field. This paper advances the state of the art by developing and rigorously testing an efficient system for high-throughput in-field agricultural row-crop phenotyping. The system comprises an autonomous unmanned ground-vehicle robot for data acquisition and an efficient data post-processing framework to provide phenotype information over large-scale real-world plant-science trials. Experiments were performed at three trial locations at two different times of year, resulting in a total traversal of 43.8 km to scan 7.24 hectares and 2,423 plots (including repeated scans). The height and canopy-closure data were found to be highly repeatable (r² = 1.00, N = 280 and r² = 0.99, N = 280, respectively) and accurate with respect to manually gathered field data (r² = 0.95, N = 470 and r² = 0.91, N = 361, respectively), yet more objective and less reliant on human skill and experience. The system was found to be a more labor-efficient mechanism for gathering data, which compares favorably to current standard manual practices.

KEYWORDS: agriculture, hyperspectral and lidar sensing, plant phenomics, row-crop phenotyping, terrestrial robotics

INTRODUCTION
Predicted global population increases are expected to cause a doubling in food demand by 2050, while at the same time the ability to grow more food is threatened by problems of water scarcity, soil fertility, and climate change. 12 Significant increases in food production are required, which will necessitate greater productivity in terms of yield per hectare and efficient use of natural resources. Given that "genetic diversity provides the basis for all plant improvement," 12 the study of different genetic varieties of crop (genomics) and how well they grow in different environmental conditions (phenomics) is critical to meeting this challenge. Each year, around the world, millions of agricultural crops (such as grains and legumes) with different genetic profiles are grown in the field, subjected to different environmental factors (e.g., exposure to disease, herbicides, water stress), and the physical response of the plants (e.g., how tolerant they are, how much yield they produce) is measured. The process is repeated annually, driving plant productivity and adaptability forward; however, advances in genomics have not been matched by similar advances in phenomics, and the ability to obtain these physical measurements is considered to be the major bottleneck. 2,12,14 Crop characteristics (phenotype t...
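The repeatability and accuracy results above (like the count validation in the apple study) are reported as a squared correlation coefficient r². A minimal pure-Python sketch of that metric, computed as the squared Pearson correlation between two series of measurements (e.g. manual field measurements versus system estimates):

```python
def r_squared(x, y):
    """Squared Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)
```

A perfectly linear relationship yields r² = 1.0 regardless of scale or offset, which is why r² measures agreement in trend rather than absolute calibration.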