Background and Aims: Vine balance is defined as the relation between vegetative growth (mass of dormant pruning wood) and generative growth (yield). In grapevine breeding, emphasis is usually placed on the evaluation of individual seedlings. In this study, we estimated the mass of dormant pruning wood using an automated image-based method that determines the pixel area of dormant pruning wood. The evaluation of digital images, combined with depth-map calculation and image segmentation, is a new, non-invasive tool for objective data acquisition. Methods and Results: The proposed method was tested on a set of seedlings planted at the Institute for Grapevine Breeding Geilweilerhof, Germany. All images taken in the field were geo-referenced, and the automated method was validated against manual segmentation. Together with additional yield parameters, the resulting vine balance indices can be used to classify seedlings for breeding purposes. Conclusion: Compared with time-consuming manual measurements, the image-based estimate of pruning mass is accurate, inexpensive and easy to obtain. Together with the yield parameters, it is a suitable method for seedling evaluation and can also be used in precision viticulture. Significance of the Study: This study demonstrates that image-based evaluation of pruning mass is a highly valuable tool for grapevine research and breeding. Moreover, the tool might be used by industry to monitor vine balance. The key findings have the potential to increase grapevine breeding efficiency through an accurate and objective phenotyping method.
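The core of such a pipeline is counting the pixels classified as pruning wood after the background has been removed using the depth map. The following sketch illustrates the idea with simple threshold-based segmentation; the thresholds, the greyscale input and the function name are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def pruning_wood_pixel_area(image, depth_map, wood_threshold=0.35, max_depth=1.5):
    """Estimate the pixel area of dormant pruning wood in one image.

    image:     2-D greyscale array normalised to [0, 1] (assumed input)
    depth_map: 2-D array of per-pixel distances in metres (assumed input)
    """
    # Discard background pixels lying beyond the assumed canopy depth.
    foreground = depth_map < max_depth
    # Dark pixels within the foreground are counted as pruning wood.
    wood_mask = (image < wood_threshold) & foreground
    return int(wood_mask.sum())

# Synthetic example: a 4x4 image with a dark "cane" in the near foreground.
img = np.full((4, 4), 0.9)
img[1:3, 1:3] = 0.2                # dark wood pixels
depth = np.full((4, 4), 3.0)
depth[1:3, 1:3] = 1.0              # the wood sits closer to the camera
print(pruning_wood_pixel_area(img, depth))  # 4
```

In practice the pixel area would then be converted to a mass estimate via a regression fitted against manually weighed pruning wood.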
When a vision sensor is used in conjunction with a robot, hand-eye calibration is necessary to determine the accurate position of the sensor relative to the robot, so that data from the vision sensor can be expressed in the robot's global coordinate system. For 2D laser line sensors, hand-eye calibration is challenging because they collect data in only two dimensions, which typically leads to complex calibration artefacts and requires multiple measurements to be collected over a range of robot positions. This paper presents a simple and robust hand-eye calibration strategy that requires minimal user interaction and uses a single planar calibration artefact. A significant benefit of the strategy is that the artefact is low-cost, simple and easily manufactured; however, its lower complexity can reduce the variation in the calibration data. To achieve a robust hand-eye calibration with this artefact, the impact of robot positioning strategies is therefore considered in order to maintain variation. A theoretical basis for the necessary sources of input variation is derived from a mathematical analysis of the system of equations underlying the calibration process. From this, a novel strategy is specified that maximises data variation by using a circular array of target scan lines to define a full set of required robot positions. A simulation approach is used to further investigate and optimise the impact of robot position on the calibration process, and the resulting optimal robot positions are then experimentally validated for a real robot-mounted laser line sensor. Using the proposed optimum method, a semi-automatic calibration process requiring only four manually scanned lines is defined and experimentally demonstrated.
Farm detection using low-resolution satellite images is an important topic in digital agriculture, yet it has received little attention compared with high-resolution images. Although high-resolution images are more effective for detecting land-cover components, the analysis of low-resolution images remains important: archives of past satellite imagery used for time-series analysis are low resolution, freely available and economical. This paper addresses the problem of farm detection using low-resolution satellite images. In digital agriculture, farm detection plays a significant role in key applications such as crop yield monitoring. Two main categories of object detection strategies are studied and compared. First, a two-step semi-supervised methodology is developed using traditional manual feature extraction and modelling techniques; it uses the Normalized Difference Moisture Index (NDMI), Grey Level Co-occurrence Matrix (GLCM), 2-D Discrete Cosine Transform (DCT) and morphological features, with a Support Vector Machine (SVM) for classifier modelling. In the second strategy, high-level features learnt from the massive filter banks of deep Convolutional Neural Networks (CNNs) are utilised, with transfer learning applied to pre-trained Visual Geometry Group (VGG-16) networks. Results show the superiority of the high-level features for classification of farm regions.
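Of the hand-crafted features listed, the NDMI has a standard closed form, (NIR - SWIR) / (NIR + SWIR), computed per pixel from the near-infrared and shortwave-infrared bands. A minimal sketch with toy band values as illustrative inputs:

```python
import numpy as np

def ndmi(nir, swir, eps=1e-9):
    """Normalized Difference Moisture Index: (NIR - SWIR) / (NIR + SWIR).

    nir, swir: 2-D reflectance arrays for the near-infrared and
    shortwave-infrared bands; eps avoids division by zero.
    """
    nir = nir.astype(float)
    swir = swir.astype(float)
    return (nir - swir) / (nir + swir + eps)

# Toy 2x2 bands: moist vegetation reflects strongly in NIR relative to SWIR,
# so the top row scores positive and the bottom row negative.
nir = np.array([[0.6, 0.6], [0.2, 0.2]])
swir = np.array([[0.2, 0.2], [0.6, 0.6]])
print(np.round(ndmi(nir, swir), 2))
# [[ 0.5  0.5]
#  [-0.5 -0.5]]
```

In a pipeline like the one described, such per-pixel index maps would be combined with GLCM, DCT and morphological features before being fed to the SVM classifier.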