Abstract: Local data aggregation is an effective means to save sensor node energy and prolong the lifespan of wireless sensor networks. However, when a sensor network is used to track moving objects, the task of local data aggregation in the network presents a new set of challenges, such as the necessity to estimate, usually in real time, the constantly changing state of the target based on information acquired by the nodes at different time instants. To address these issues, we propose a distributed object tracking system which employs a cluster-based Kalman filter in a network of wireless cameras. When a target is detected, cameras that can observe the same target interact with one another to form a cluster and elect a cluster head. Local measurements of the target acquired by members of the cluster are sent to the cluster head, which then estimates the target position via Kalman filtering and periodically transmits this information to a base station. The underlying clustering protocol allows the current state and uncertainty of the target position to be easily handed off among clusters as the object is being tracked. This allows Kalman filter-based object tracking to be carried out in a distributed manner. An extended Kalman filter is necessary since measurements acquired by the cameras are related to the actual position of the target by nonlinear transformations. In addition, in order to take into consideration the time uncertainty in the measurements acquired by the different cameras, it is necessary to introduce nonlinearity in the system dynamics. Our object tracking protocol requires the transmission of significantly fewer messages than a centralized tracker that naively transmits all of the local measurements to the base station. It is also more accurate than a decentralized tracker that employs linear interpolation for local data aggregation.
In addition, the protocol is able to perform real-time estimation because our implementation exploits the sparsity of the matrices involved in the problem. The experimental results show that our distributed object tracking protocol is able to achieve tracking accuracy comparable to the centralized tracking method, while requiring a significantly smaller number of message transmissions in the network.
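The extended Kalman filter described above can be illustrated with a minimal sketch: a constant-velocity target whose position is observed by a camera through a nonlinear bearing measurement. Everything here (the state layout, the noise values, and the bearing-only measurement model) is an illustrative assumption, not the actual filter used in the protocol.

```python
import numpy as np

def ekf_step(x, P, z, cam_pos, dt, q=0.01, r=0.01):
    """One predict/update cycle of an extended Kalman filter for a
    constant-velocity target observed by a camera that measures only
    the bearing (angle) to the target, a nonlinear function of position.
    State x = [px, py, vx, vy]; P is the 4x4 state covariance."""
    # Predict: linear constant-velocity dynamics.
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    x = F @ x
    P = F @ P @ F.T + q * np.eye(4)
    # Update: nonlinear bearing measurement h(x) = atan2(py - cy, px - cx).
    dx, dy = x[0] - cam_pos[0], x[1] - cam_pos[1]
    rho2 = dx * dx + dy * dy
    h = np.arctan2(dy, dx)
    # Jacobian of h with respect to the state (linearization point).
    H = np.array([[-dy / rho2, dx / rho2, 0.0, 0.0]])
    y = np.array([z - h])                   # innovation
    y = (y + np.pi) % (2 * np.pi) - np.pi   # wrap the angle difference
    S = H @ P @ H.T + r                     # innovation covariance
    K = P @ H.T / S                         # Kalman gain (S is scalar)
    x = x + (K * y).ravel()
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

With a stationary target at a true bearing of 45 degrees, repeated updates pull the position estimate toward that bearing, illustrating how the cluster head could fuse nonlinear camera measurements into a position estimate.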
To optimize fruit production, a portion of the flowers and fruitlets of apple trees must be removed early in the growing season. The proportion to be removed is determined by the bloom intensity, i.e., the number of flowers present in the orchard. Several automated computer vision systems have been proposed to estimate bloom intensity, but their overall performance is still far from satisfactory even in relatively controlled environments. With the goal of devising a technique for flower identification that is robust to clutter and to changes in illumination, this paper presents a method in which a pre-trained convolutional neural network is fine-tuned to become especially sensitive to flowers. Experimental results on a challenging dataset demonstrate that our method significantly outperforms three approaches that represent the state of the art in flower detection, with recall and precision rates higher than 90%. Moreover, a performance assessment on three additional datasets previously unseen by the network, which consist of different flower species and were acquired under different conditions, reveals that the proposed method greatly surpasses baseline approaches in terms of generalization capability. The citation information for this article is: P. A. Dias, A. Tabb, and H. Medeiros, "Apple flower detection using deep convolutional networks."
In fruit production, critical crop management decisions are guided by bloom intensity, i.e., the number of flowers present in an orchard. Despite its importance, bloom intensity is still typically estimated by means of human visual inspection. Existing automated computer vision systems for flower identification are based on hand-engineered techniques that work only under specific conditions and with limited performance. This work proposes an automated technique for flower identification that is robust to uncontrolled environments and applicable to different flower species. Our method relies on an end-to-end residual convolutional neural network (CNN) that represents the state of the art in semantic segmentation. To enhance its sensitivity to flowers, we fine-tune this network using a single dataset of apple flower images. Since CNNs tend to produce coarse segmentations, we employ a refinement method to better distinguish between individual flower instances. Without any preprocessing or dataset-specific training, experimental results on images of apple, peach, and pear flowers acquired under different conditions demonstrate the robustness and broad applicability of our method.
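As a conceptual sketch of the fine-tuning idea (not the actual method, which retrains layers of a residual segmentation CNN), one can treat a pretrained network as a frozen feature extractor and retrain only a final logistic layer so that it becomes sensitive to the flower class. The features below are synthetic placeholders standing in for pretrained CNN activations:

```python
import numpy as np

def finetune_last_layer(feats, labels, lr=0.5, epochs=200):
    """Train only a final logistic layer on top of frozen, pretrained
    features -- the simplest form of fine-tuning. feats is an (n, d)
    array of fixed features; labels are 0/1 (flower / not flower)."""
    n, d = feats.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # sigmoid output
        grad_w = feats.T @ (p - labels) / n          # gradient of log-loss
        grad_b = np.mean(p - labels)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```

In practice, fine-tuning a segmentation network also updates deeper convolutional layers; this sketch only conveys why adapting a pretrained model to a new target class requires far less data than training from scratch.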
We propose a lightweight event-driven protocol for wireless camera networks that allows clusters of cameras to form and propagate for the purpose of collaborative processing during object tracking. Cluster formation is triggered by the detection of objects with specific features, and the protocol allows multiple clusters to form and propagate simultaneously. Because cameras are directional devices, more than one cluster may track a single object, since groups of cameras outside each other's communication range may observe the same object. Entry into a cluster and maintenance of cluster membership require a sensor node to confirm the presence of the features of the object being tracked. Each cluster elects its own leader from among the cameras that observe the same target; when a cluster leader loses track of an object, it assigns the leadership role to another cluster member. To avoid high communication overhead among cluster members, single-hop clusters are formed, i.e., every member of a cluster is within the communication range of the cluster head. We have implemented a simple version of this protocol on a test bed and provide an experimental evaluation.
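The election and handoff rules above can be sketched in a few lines. The `Camera` record, the `score` field, and the tie-breaking rule are hypothetical stand-ins for whatever criterion a real deployment would use (e.g., the size of the target in the image plane):

```python
from dataclasses import dataclass

@dataclass
class Camera:
    """Minimal model of a single-hop cluster member."""
    cam_id: int
    sees_object: bool
    score: float  # hypothetical view-quality metric

def elect_head(members):
    """Elect the head from among members that currently see the object;
    best score wins, lowest id breaks ties. Returns None if no member
    sees the object (the cluster dissolves)."""
    viewers = [c for c in members if c.sees_object]
    if not viewers:
        return None
    return max(viewers, key=lambda c: (c.score, -c.cam_id))

def handoff(head, members):
    """If the current head has lost track of the object, assign the
    leadership role to another cluster member; otherwise keep the head."""
    if head is not None and head.sees_object:
        return head
    return elect_head(members)
```

Because clusters are single-hop, election and handoff need only messages between the head and its direct neighbors, which is what keeps the communication overhead low.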
Dormant pruning of fruit trees is one of the most costly and labor-intensive activities in specialty crop production. We present a system that solves the first step in the process of automated pruning: accurately measuring and modeling the fruit trees. Our system employs a laser sensor to collect observations of fruit trees from multiple perspectives, and it uses these observations to measure parameters needed for pruning. A split-and-merge clustering algorithm divides the collected data into three sets of points: trunk candidates, junction point candidates, and branches. The trunk and junction point candidates are then further refined by a robust fitting algorithm that models each segment of the trunk and primary branches as a cylinder. In this work, we focus on measuring the diameters of the primary branches and the trunk, which are important factors in dormant pruning and can be obtained directly from the cylindrical models. The results are qualitatively satisfactory on both synthetic and real data. Our experiments with three synthetic and three real apple trees of two different varieties showed that the system is able to identify the primary branches with an average accuracy of 98% and estimate their diameters with an average error of 0.6 cm. Although the current implementation of the system is too slow for large-scale practical applications (it can measure approximately two trees per hour), our study shows that the proposed approach may serve as a fundamental building block of robotic pruners in the near future.
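As a rough illustration of how a diameter can be read off a fitted model, the sketch below performs an algebraic (Kasa) least-squares circle fit to a 2-D cross-section of branch points; a cylinder's cross-section is a circle, so a branch diameter estimate is twice the fitted radius. This is a simplified stand-in, not the robust fitting algorithm used in the system:

```python
import numpy as np

def fit_circle(pts):
    """Algebraic (Kasa) least-squares circle fit to 2-D points.
    Solves 2*cx*x + 2*cy*y + c = x^2 + y^2 in the least-squares sense,
    where c = r^2 - cx^2 - cy^2. Returns (cx, cy, r)."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, r
```

A robust pipeline would additionally reject outlier points (e.g., twigs or occlusion noise) before fitting, which is where the robust fitting algorithm mentioned above earns its keep.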
Abstract: The autonomous measurement of tree traits, such as branching structure, branch diameters, branch lengths, and branch angles, is required for tasks such as robotic pruning of trees as well as structural phenotyping. We propose a robotic vision system called the Robotic System for Tree Shape Estimation (RoTSE) to determine tree traits in field settings. The process is composed of the following stages: image acquisition with a mobile robot unit, segmentation, reconstruction, curve skeletonization, conversion to a graph representation, and then computation of traits. Quantitative and qualitative results on apple trees are shown in terms of accuracy, computation time, and robustness. Compared to ground truth measurements, the RoTSE produced the following estimates: branch diameter (mean-squared error 0.99 mm), branch length (mean-squared error 45.64 mm), and branch angle (mean-squared error 10.36 degrees). The average run time was 8.47 minutes at a voxel resolution of 3 mm³.
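For instance, once the skeleton is converted to a graph whose edges carry 3-D direction vectors, a branch angle is simply the angle between a parent edge direction and a child edge direction. The sketch below is illustrative only; the RoTSE trait computation is more involved:

```python
import math

def branch_angle(parent_dir, child_dir):
    """Angle in degrees between a parent branch direction and a child
    branch direction (3-D vectors), via the dot-product formula
    cos(angle) = (p . c) / (|p| |c|)."""
    dot = sum(p * c for p, c in zip(parent_dir, child_dir))
    norm_p = math.sqrt(sum(p * p for p in parent_dir))
    norm_c = math.sqrt(sum(c * c for c in child_dir))
    # Clamp to guard against floating-point drift outside [-1, 1].
    cos_ang = max(-1.0, min(1.0, dot / (norm_p * norm_c)))
    return math.degrees(math.acos(cos_ang))
```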