2018 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2018.8463191

Cartman: The Low-Cost Cartesian Manipulator that Won the Amazon Robotics Challenge

Abstract: The Amazon Robotics Challenge enlisted sixteen teams to each design a pick-and-place robot for autonomous warehousing, addressing development in robotic vision and manipulation. This paper presents the design of our custom-built, cost-effective Cartesian robot system, Cartman, which won first place in the competition finals by stowing 14 (out of 16) and picking all 9 items in 27 minutes, scoring a total of 272 points. We highlight our experience-centred design methodology and key aspects of our system that cont…

Cited by 125 publications (95 citation statements)
References 12 publications
“…In 2016, many teams used deep learning to segment objects for the alignment phase, training semantic segmentation networks with separate classes for each object instance on hand-labeled [46] or self-supervised datasets [9]. Team ACRV, the winners of the 2017 ARC, fine-tuned RefineNet to segment and classify 40 unique known objects in a bin, with a system to quickly learn new items via a semi-automated procedure [10,47]. In contrast, our method uses deep learning for category-agnostic segmentation, which can be used to segment a wide variety of objects not seen in training.…”
Section: Related Work
confidence: 99%
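For readers who want a concrete picture of the fine-tuning recipe described in the quoted passage, the sketch below adapts a pretrained semantic segmentation network to per-object classes for bin scenes. It is a minimal illustration only: RefineNet is not bundled with torchvision, so DeepLabV3 is substituted here, and the RandomBinDataset placeholder, class count, and hyperparameters are assumptions rather than details from the cited systems.

import torch
from torch import nn
from torch.utils.data import DataLoader, Dataset
from torchvision.models.segmentation import deeplabv3_resnet50
from torchvision.models.segmentation.deeplabv3 import DeepLabHead

NUM_CLASSES = 41  # assumed layout: 40 known items + background

class RandomBinDataset(Dataset):
    """Placeholder yielding random (image, mask) pairs; swap in real labelled bin images."""
    def __len__(self):
        return 8
    def __getitem__(self, idx):
        image = torch.rand(3, 240, 320)
        mask = torch.randint(0, NUM_CLASSES, (240, 320), dtype=torch.long)
        return image, mask

# Load a COCO/VOC-pretrained model and replace the head with one class per object instance.
model = deeplabv3_resnet50(weights="DEFAULT")
model.classifier = DeepLabHead(2048, NUM_CLASSES)
model.aux_classifier = None  # the auxiliary head is not needed for this sketch

# Freeze the backbone so only the new head is trained on the small, quickly collected dataset.
for param in model.backbone.parameters():
    param.requires_grad = False

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
loader = DataLoader(RandomBinDataset(), batch_size=4, shuffle=True)

model.train()
for images, masks in loader:           # images: (B, 3, H, W); masks: (B, H, W) class indices
    logits = model(images)["out"]      # (B, NUM_CLASSES, H, W)
    loss = criterion(logits, masks)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

Team ACRV's actual pipeline additionally included a semi-automated procedure for capturing and labelling images of new items [10,47]; that data-collection step is outside the scope of this sketch.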
“…An attempt was considered successful if the robot lifted the target object out of the bin and successfully transported the object to a receptacle. One approach to this problem is to collect real images of the items piled in the bin, label object masks in each image, and use that data to train or fine-tune a deep neural network for object classification and segmentation [10,11]. However, that data collection process is time-consuming and must be repeated for each new object set, and training or fine-tuning a Mask R-CNN can take some time.…”
Section: B. Precision-Recall Evaluation
confidence: 99%
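The Mask R-CNN fine-tuning route mentioned above is commonly implemented by swapping the prediction heads of a COCO-pretrained model for the new object set. The sketch below follows the standard torchvision recipe; the class count and the toy image/target used to exercise the training forward pass are illustrative assumptions, not details from the cited work.

import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_CLASSES = 1 + 40  # background + a hypothetical 40-item object set

# Start from COCO-pretrained weights and replace both prediction heads.
model = maskrcnn_resnet50_fpn(weights="DEFAULT")

in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, NUM_CLASSES)

# In training mode the model consumes images plus per-image target dicts with
# 'boxes', 'labels', and 'masks', and returns a dict of losses.
model.train()
images = [torch.rand(3, 480, 640)]
targets = [{
    "boxes": torch.tensor([[50.0, 60.0, 200.0, 220.0]]),
    "labels": torch.tensor([1]),
    "masks": torch.zeros(1, 480, 640, dtype=torch.uint8),
}]
losses = model(images, targets)
total_loss = sum(losses.values())   # a real training loop would backprop this with an optimizer

Beyond this head swap, fine-tuning proceeds with the usual detection training loop over the newly labelled bin images, which is exactly the data-collection and training cost the quoted passage points out.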
“…Lastly, many researchers have presented complete robotic pick-and-place systems [28], [29], offering a broader view of modules such as motion planning, perception, control, and grasping. Regarding motion planning, these reported systems commonly employ reactive strategies based on online visual feedback.…”
Section: Related Work
confidence: 99%
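As a rough illustration of what "reactive strategies based on online visual feedback" means in these pick-and-place systems, the skeleton below re-estimates the grasp target from a fresh observation on every control cycle instead of executing one precomputed trajectory. Every function here (get_rgbd_frame, estimate_grasp_pose, step_towards, at_pose, close_gripper) is a hypothetical stub standing in for a team-specific perception or control module.

import numpy as np

def get_rgbd_frame():
    """Stub camera read; a real system would return an RGB-D frame from a wrist or overhead sensor."""
    return np.zeros((480, 640, 4), dtype=np.float32)

def estimate_grasp_pose(frame):
    """Stub perception; a real system would segment the target item and fit a grasp pose to it."""
    return np.array([0.40, 0.10, 0.05])   # x, y, z of the grasp point in metres

def step_towards(pose, gain=0.5):
    """Stub controller; a real system would command a small Cartesian motion towards the pose."""
    pass

def at_pose(pose, tol=0.005):
    """Stub convergence check on the end-effector position."""
    return True

def close_gripper():
    pass

# Reactive loop: perception runs inside the control loop, so the grasp target is
# corrected as items shift, rather than being fixed once at planning time.
for _ in range(100):                      # bounded number of control cycles
    frame = get_rgbd_frame()
    grasp = estimate_grasp_pose(frame)    # re-estimated every cycle from the latest image
    step_towards(grasp)
    if at_pose(grasp):
        close_gripper()
        break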
“…While bin packing is studied extensively, to the best of the authors' knowledge there have been few attempts to deploy bin packing solutions on real robots, where inaccuracies in vision and control must be taken into account. Such inaccuracies have been considered in efforts relating to the Amazon Robotics Challenge [12], [13], [14], [15], [16], [17], but most of these systems do not deal with bin packing. Most deployments of automatic packing use mechanical components, such as conveyor trays, that are specifically designed for certain products [18], rendering them difficult to customize and deploy.…”
Section: Related Work
confidence: 99%