IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium 2018
DOI: 10.1109/igarss.2018.8519188

Palm Trees Counting in Remote Sensing Imagery Using Regression Convolutional Neural Network

Cited by 15 publications
(11 citation statements)
References 2 publications
“…Since the F1-measure combines precision and recall to compute the test results, it can be assumed that the proposed approach performs better, returning a better balance between predicted positives and true-positive rates in the identification of palm trees. Nonetheless, the results are consistent with recent literature in which object-detection methods were applied to identify single tree species 6,7,56,57, although in the non-RGB image domain. The low precision values for the bounding-box method may be explained by the high density of objects (i.e., M. flexuosa palm trees).…”
Section: Comparative Results Between Object Detection Methods (supporting)
confidence: 90%
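The excerpt above attributes the bounding-box method's weaker score to low precision despite good recall. As a minimal sketch of why the F1-measure captures this trade-off (the precision/recall values below are illustrative, not taken from the paper):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall; 0.0 when both are zero."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# A detector with high recall but low precision (e.g. dense palm crowns
# yielding many overlapping boxes) is pulled down toward the lower value:
print(f1_score(0.60, 0.95))  # ≈ 0.735, well below the recall of 0.95
```

Because the harmonic mean is dominated by the smaller of the two values, a high object density that inflates false positives depresses F1 even when nearly all palms are found.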
“…For the identification of citrus trees, a CNN method provided 96.2% accuracy 13, and for oil palm-tree detection a deep neural network implementation returned 96.0% accuracy (Li et al, 2019). A palm-tree species different from those evaluated in our dataset was investigated with a modified AlexNet CNN architecture and returned high prediction values (R² = 0.99, with relative error between 2.6% and 9.2%) 56. A study 7 achieved higher than 90% accuracy in detecting single tree species using RetinaNet and RGB images.…”
Section: Discussion (mentioning)
confidence: 96%
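The excerpt reports count-regression quality via R² and relative error. A minimal sketch of how both metrics are computed for predicted versus ground-truth tree counts (the counts below are hypothetical, not from any cited study):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

def relative_error(y_true, y_pred):
    """Absolute count error as a fraction of the true count."""
    return abs(y_pred - y_true) / y_true

# Hypothetical per-image palm counts (ground truth vs. model prediction):
counts_true = [120, 85, 240, 60]
counts_pred = [118, 88, 236, 62]
print(r_squared(counts_true, counts_pred))   # close to 1 for small errors
print([relative_error(t, p) for t, p in zip(counts_true, counts_pred)])
```

A high R² with a few percent relative error, as in the cited AlexNet-based study, indicates the predicted counts track the true counts closely across images of varying density.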
“…Approaches that consider both spectral and spatial information in their model can improve estimates significantly (Zhang et al, 2017). This has been the most common strategy when dealing with vegetation analysis and deep networks in the last few years (Djerriri et al, 2018;Li et al, 2017;Csillik et al, 2018;Safonova et al, 2019;Weinstein et al, 2019;Osco et al, 2020b).…”
Section: Introduction (mentioning)
confidence: 99%