2012
DOI: 10.5194/isprsarchives-XXXIX-B7-185-2012

Support Vector Machine Classification of Object-Based Data for Crop Mapping, Using Multi-Temporal Landsat Imagery

Abstract: Crop mapping and time series analysis of agronomic cycles are critical for monitoring land use and land management practices, and analysing the issues of agro-environmental impacts and climate change. Multi-temporal Landsat data can be used to analyse decadal changes in cropping patterns at field level, owing to its medium spatial resolution and historical availability. This study attempts to develop robust remote sensing techniques, applicable across a large geographic extent, for state-wide mapping …
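
The abstract describes classifying per-field (object-based) feature vectors derived from multi-temporal Landsat imagery with a support vector machine. Below is a minimal sketch of that kind of workflow using scikit-learn; the feature layout, class count, and SVM parameters are illustrative placeholders, not the authors' published configuration.

```python
# Hedged sketch (not the authors' code): SVM classification of per-object
# feature vectors. Each row stands for one field object; columns stand for
# multi-temporal Landsat statistics (e.g. per-date mean band values or NDVI).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder data: 500 field objects x (6 bands * 4 acquisition dates) features,
# 4 crop classes. Real inputs would come from segmented Landsat imagery.
X = rng.normal(size=(500, 24))
y = rng.integers(0, 4, size=500)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# An RBF-kernel SVM is a common choice for crop mapping; C and gamma would
# normally be tuned by cross-validation rather than fixed as here.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_train, y_train)
print("Overall accuracy:", clf.score(X_test, y_test))
```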

Cited by 60 publications (37 citation statements) | References 25 publications
“…The OBIA approach is conducted through a two-step process: (1) image segmentation by aggregating a number of individual pixels or image sub-objects to form larger objects (primitive objects) based on the homogeneity, intensity, and texture of each investigated image; and (2) image classification and feature extraction. The result of the OBIA approach was successfully proved to be more accurate than that of the pixel-based approaches for land cover classification in recent studies, such as discrimination of different species of mangroves with Worldview-2 imagery [23], flood area delineation in the trans-boundary areas using the ENVISAT/ASAR and Landsat TM data [4], and crop mapping using the multi-temporal Landsat imagery [22]. Other applications of the object-based method for flood water and wetland mapping were introduced in [24][25][26].…”
Section: Introduction (mentioning)
confidence: 99%
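
A minimal sketch of the two-step OBIA workflow described in the quoted passage: pixels are first aggregated into primitive objects, then per-object features are extracted and classified. The segmentation here uses SLIC superpixels from scikit-image purely for illustration; the cited studies use other segmentation algorithms, and all data, labels, and function names below are placeholders.

```python
# Hedged sketch of a generic object-based image analysis (OBIA) pipeline.
import numpy as np
from skimage.segmentation import slic
from sklearn.svm import SVC

def segment_image(image, n_segments=500):
    """Step 1: aggregate pixels into primitive objects (superpixels here)."""
    return slic(image, n_segments=n_segments, compactness=10, start_label=1)

def object_features(image, segments):
    """Step 2a: per-object feature extraction (mean and std of each band)."""
    labels = np.unique(segments)
    feats = []
    for lab in labels:
        pixels = image[segments == lab]          # (n_pixels, n_bands)
        feats.append(np.hstack([pixels.mean(axis=0), pixels.std(axis=0)]))
    return labels, np.array(feats)

# Placeholder multispectral image (rows, cols, bands); a real case would load
# a satellite scene and train on labelled reference objects.
image = np.random.rand(200, 200, 4)
segments = segment_image(image)
labels, X = object_features(image, segments)

# Step 2b: classify the objects (random training labels, for illustration only).
y = np.random.randint(0, 3, size=len(labels))
clf = SVC(kernel="rbf").fit(X, y)
object_classes = clf.predict(X)
```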
“…Traditional pixel-based image analysis algorithms for flood mapping and land use classification suffer from low accuracy, sub-pixel problems, and the speckle noise effect in the resulting images [20][21][22]. On the other hand, the object-based image analysis (OBIA) approach has been thoroughly developed in the last two decades to overcome the limitations and disadvantages of the traditional pixel-based approaches by generating and analyzing meaningful image objects instead of individual pixels and reducing the speckle noise effect.…”
Section: Introduction (mentioning)
confidence: 99%
“…The listed features are not readily available within pixel-based approaches. Consequently, object-based approaches have been widely used for crop classification in Landsat-like images [9,33,[36][37][38][39][40][41][42].…”
Section: Introduction (mentioning)
confidence: 99%
“…In the spatial aggregation step, spatial median values have been calculated, making the approach more robust against spatial outliers such as pixels affected by imperfect cloud or cloud-shadow masking. One of the benefits of the GEOBIA is that there is no removal of "salt and pepper" effects required [33,37].…”
Section: Synthetic Image Generation and Segmentation (mentioning)
confidence: 99%
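
A minimal sketch of the spatial aggregation step described above: taking the per-object spatial median means a handful of outlier pixels (for example residual cloud or cloud-shadow pixels that escaped masking) have little effect on the object-level value. The data and function name are illustrative, not taken from the cited work.

```python
# Hedged sketch: per-object spatial median aggregation of a single band.
import numpy as np

def aggregate_objects_median(band, segments):
    """Return {object_id: median band value over that object's pixels}."""
    return {
        int(lab): float(np.median(band[segments == lab]))
        for lab in np.unique(segments)
    }

# Toy example: one 100-pixel object around 0.5 with a few bright outlier pixels
# standing in for imperfectly masked cloud.
band = np.full((10, 10), 0.5)
band[0, :3] = 1.0
segments = np.ones((10, 10), dtype=int)

print(aggregate_objects_median(band, segments))   # median stays at 0.5
```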