2016
DOI: 10.1016/j.ifacol.2016.10.033

Autonomous Leaf Picking Using Deep Learning and Visual-Servoing

Cited by 35 publications (17 citation statements); References 5 publications
Citation types: 0 supporting, 17 mentioning, 0 contrasting
Citing publications: 2017–2023

Citation statements (ordered by relevance):
“…In the previous paper [5], the "Monoscopic Depth Analysis" approach was discussed: a method by which the leaf position was determined through multiple images in multiple camera locations. This method is quite reliable.…”
Section: A Previous Method: Single Leaf Approach
mentioning
confidence: 99%
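As context for the "Monoscopic Depth Analysis" idea described in the statement above (recovering a leaf's position from images taken at several camera locations), here is a minimal sketch of one way this can be done with sparse feature matching and triangulation. The OpenCV pipeline, the intrinsic matrix K, and the known camera poses (R1, t1), (R2, t2) are illustrative assumptions, not the authors' implementation.

```python
# A rough sketch (assumed pipeline, not the paper's code): estimate a leaf's 3D
# position by matching sparse features between two images taken from different,
# known camera poses and triangulating the matches.
import cv2
import numpy as np

def leaf_point_from_two_views(img1, img2, K, R1, t1, R2, t2):
    """Return a rough 3D leaf position as the median of triangulated matches.

    img1, img2 : grayscale images of the leaf region from two camera positions
    K          : 3x3 camera intrinsic matrix (assumed known from calibration)
    R1, t1     : rotation (3x3) and translation (3,) of the camera at view 1
    R2, t2     : same for view 2
    """
    orb = cv2.ORB_create(500)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:50]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches]).T  # 2xN
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches]).T  # 2xN

    # Projection matrices built from the known poses of the two views.
    P1 = K @ np.hstack([R1, np.asarray(t1).reshape(3, 1)])
    P2 = K @ np.hstack([R2, np.asarray(t2).reshape(3, 1)])

    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)  # 4xN homogeneous points
    X = (X_h[:3] / X_h[3]).T                         # Nx3 Euclidean points
    return np.median(X, axis=0)                      # robust single estimate
```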
“…Our prior work [5] demonstrated the applicability of Deep Neural Networks for the leaf detection problem. In particular, complex and variable leaf appearance, challenging backgrounds, and changing natural lighting make devising any hand-crafted features very difficult for this problem.…”
Section: A Detection
mentioning
confidence: 99%
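The statement above refers to deep-network leaf detection. Below is a minimal sketch of detecting leaves in an RGB frame with an off-the-shelf two-class detector; the Faster R-CNN setup and the weights file "leaf_detector.pth" are illustrative assumptions, not the cited network.

```python
# A minimal sketch (assumed setup, not the cited network): detect leaves in an
# RGB image with a generic two-class detector. "leaf_detector.pth" is a
# hypothetical fine-tuned weights file.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Two classes: background and "leaf".
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=2)
model.load_state_dict(torch.load("leaf_detector.pth"))  # hypothetical weights
model.eval()

def detect_leaves(rgb_image, score_thresh=0.7):
    """Return bounding boxes (x1, y1, x2, y2) of likely leaves in one image."""
    with torch.no_grad():
        pred = model([to_tensor(rgb_image)])[0]
    keep = pred["scores"] > score_thresh
    return pred["boxes"][keep].cpu().numpy()
```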
“…A depth sensor was fixed to acquire the top view of the whole plant and to facilitate motion planning. Ahlin et al (2016) used a deep convolutional neural network to detect plant leaves in the images acquired by an RGB camera mounted on the end-effector. Sparse feature points between frames were used to compute the 3D location of the leaves for guiding the end-effector.…”
Section: Introduction
mentioning
confidence: 99%
“…A depth sensor was fixed above the plant to acquire the top-view image of the whole plant for motion planning. Ahlin et al (2016) used a deep convolutional neural network to detect plant leaves in the images acquired by an RGB camera mounted on the end effector of a robotic manipulator. Sparse feature points between frames were used to compute the 3D location of the leaves for guiding the end effector.…”
mentioning
confidence: 99%
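The statements above describe detected leaves guiding a camera-mounted end effector. For illustration only, here is a minimal sketch of a textbook image-based visual-servoing (IBVS) control law for a single point feature; the gain, the depth estimate Z, and the point-feature formulation are assumptions and not necessarily the controller used in the cited work.

```python
# A minimal sketch of a classical image-based visual-servoing (IBVS) law for a
# single point feature (e.g. a detected leaf centroid), using the standard
# point-feature interaction matrix. The gain and depth estimate are assumptions.
import numpy as np

def ibvs_camera_velocity(x, y, x_des, y_des, Z, gain=0.5):
    """Camera velocity twist (vx, vy, vz, wx, wy, wz) that drives the
    normalized image point (x, y) toward the desired point (x_des, y_des).

    Z is the estimated depth of the feature; gain is the control gain lambda.
    """
    # Interaction (image Jacobian) matrix of a point feature at depth Z.
    L = np.array([
        [-1.0 / Z, 0.0,       x / Z, x * y,       -(1.0 + x * x),  y],
        [0.0,      -1.0 / Z,  y / Z, 1.0 + y * y, -x * y,         -x],
    ])
    e = np.array([x - x_des, y - y_des])       # feature error in the image
    return -gain * np.linalg.pinv(L) @ e       # v = -lambda * L^+ * e
```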