2010
DOI: 10.1016/j.compag.2009.09.012

From pixel to vine parcel: A complete methodology for vineyard delineation and characterization using remote-sensing data

Abstract: The increasing availability of Very High Spatial Resolution images enables the production of accurate digital maps as an aid for management in the agricultural domain. In this study we develop a comprehensive and automatic tool for vineyard detection, delineation and characterization using aerial images, without requiring any parcel plan. In France, vineyard training methods in rows or grids generate periodic patterns, which makes frequency analysis a suitable approach. The proposed method co…
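The abstract's key observation is that row or grid training systems produce a strongly periodic texture, so the row pattern can be picked out in the frequency domain. As a rough illustration only (not the authors' implementation; the function name, parameter values and synthetic patch below are assumptions), a 2-D FFT over a grayscale patch gives the dominant spatial frequency, from which inter-row spacing and row orientation follow:

```python
# Minimal sketch: estimating the dominant periodic component of an image
# patch with a 2-D FFT, one plausible form of the frequency analysis the
# abstract refers to. Names and values are illustrative, not from the paper.
import numpy as np

def dominant_row_frequency(patch: np.ndarray):
    """Return (frequency in cycles/pixel, orientation in degrees) of the
    strongest periodic component of a grayscale patch."""
    patch = patch - patch.mean()                       # drop the DC component
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(patch)))
    h, w = patch.shape
    cy, cx = h // 2, w // 2
    spectrum[cy, cx] = 0.0                             # ignore any residual DC peak
    ky, kx = np.unravel_index(np.argmax(spectrum), spectrum.shape)
    fy, fx = (ky - cy) / h, (kx - cx) / w              # cycles per pixel
    freq = np.hypot(fx, fy)                            # inter-row spacing ~ 1 / freq
    angle = np.degrees(np.arctan2(fy, fx))             # direction of the periodicity
    return freq, angle

# Toy check: a synthetic "vineyard" whose rows repeat every 8 pixels.
y, x = np.mgrid[0:128, 0:128]
toy = np.sin(2 * np.pi * y / 8.0)
freq, angle = dominant_row_frequency(toy)
print(f"spacing ~ {1 / freq:.1f} px, orientation ~ {angle:.0f} deg")
```

In practice such an estimate would only be a starting point; the paper goes on from the detected periodicity to delineate and characterize whole vine parcels.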



Cited by 55 publications (52 citation statements)
References 12 publications
“…These quantitative results are similar or better than those that have been reported in the literature [8,9,23,24,28,30] for vineyards or similar crop detection studies. It should be also noted that in the EU (in particular, in Greece), the terroirs are significantly smaller in size, usually covering non-flat viticulture regions with highly complex terrain, and thus, their detection is significantly more challenging than, e.g., in Australia, California, Chile.…”
Section: Discussion (supporting)
confidence: 89%
“…Regarding the developed canopy extraction methodology in contrast to similar efforts that are based on (e.g., NDVI) thresholds and/or line detection/fitting [28,32,49], we formulate the problem under a supervised object-based classification framework and propose a set of spectral and spatial features that can extract the canopy (not just lines, center-lines, etc.) while addressing different in-between the rows materials, like soil, weeds, rock, etc.…”
Section: Contribution (mentioning)
confidence: 99%
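The threshold-based baseline that this excerpt contrasts with the object-based framework can be sketched in a few lines. The band layout, the 0.4 cut-off and the function name below are assumptions for illustration, not values taken from the cited papers:

```python
# Rough illustration of NDVI-threshold canopy masking, the kind of baseline
# the excerpt contrasts with object-based classification. The 0.4 threshold
# and the (red, near-infrared) inputs are assumed, not from the papers.
import numpy as np

def ndvi_canopy_mask(red: np.ndarray, nir: np.ndarray, threshold: float = 0.4) -> np.ndarray:
    """Boolean mask of pixels whose NDVI exceeds the threshold."""
    ndvi = (nir - red) / (nir + red + 1e-9)   # epsilon avoids division by zero
    return ndvi > threshold

# Toy usage with random reflectances in [0, 1].
rng = np.random.default_rng(0)
red, nir = rng.random((64, 64)), rng.random((64, 64))
mask = ndvi_canopy_mask(red, nir)
print(f"{mask.mean():.0%} of pixels flagged as canopy")
```

A pure threshold of this kind cannot separate vine canopy from weeds or other materials between the rows, which is the gap the quoted object-based approach aims to close.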
“…The key strengths of UAV are the high spatial ground resolution and a reduced planning time, which allows for highly flexible and timely vineyard monitoring [15,38–40]. This study presents results using different classifications methods to detect and segment the vine canopy in ultra-high-resolution RGB imagery obtained from UAV.…”
Section: Perspectives and General Study Limitations (mentioning)
confidence: 99%
“…In these systems, despite the high spatial resolution of the sensors currently employed, the outcoming information, such as the vigour zoning, only accounts for averaged data neglecting the contribution of single vine (Arnó, Martínez Casasnovas, Ribes Dasi, & Rosell, 2009). While row detection techniques saw a great development in these last few years (Comba, Gay, Primicerio, & Aimonino, 2015; Delenne, Durrieu, Rabatel, & Deshayes, 2010; Puletti, Perria, & Storchi, 2014; Smit, Sithole, & Strever, 2010), a methodology for single plant detection is still not available. Instead, the ability to recognize automatically single vine within a training row could remarkably improve the representation of the contribution of single plant to the canopy curtain, enabling to detect specific plant pathologies in the row and improving the accuracy of vigour zoning (Lee et al., 2010; Naidu, Perry, Pierce, & Mekuria, 2009; Sankaran, Mishra, Ehsani, & Davis, 2010).…”
Section: Introduction (mentioning)
confidence: 99%