2021
DOI: 10.1007/s11042-020-09937-9
Color invariant state estimator to predict the object trajectory and catch using dexterous multi-fingered delta robot architecture

Cited by 3 publications (3 citation statements)
References 15 publications
“…A predetermined library of ArUco markers [6, 7, 15] was used to estimate the pose of an object. Using monocular vision and markers to achieve a Cadaver CVJ pose [8, 9, 16] is possible. The legs are connected to the base platform by universal (U) joints and the moving platform by spherical (S) joints.…”
Section: Methods
confidence: 99%
“…The marker image distortion must be compared to the library's distortion to calculate the pose at the target position. The marker pattern improves accuracy [7][8][9][10][11][12]. The authors describe the normal CVJ anatomy and numerous CVJ disorders [10].…”
Section: Introduction
confidence: 99%
“…The image of the identified marker and the marker image stored in the library is used for pose estimation. The transformation for perspective correction and damped least square method is used iteratively to minimize the projection error [20][21][22][23] and yields the pose of the marker (target position) in the camera frame. The process of determining the pose of the top platform of the 6-UScS manipulator with respect to the base platform frame is explained as follows:…”
Section: Pose Estimation
confidence: 99%
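The last citation statement describes pose estimation by iteratively minimizing reprojection error with a damped least squares (Levenberg–Marquardt-style) update. A minimal sketch of that idea follows, under simplifying assumptions not taken from the cited papers: a planar rigid transform (rotation angle plus 2D translation) stands in for the full camera projection, the marker is a unit-style square, and all names (`project`, `damped_least_squares`, the damping factor `lam`) are illustrative.

```python
import math

# Square marker corners in the marker's own frame (hypothetical, side = 2).
MARKER = [(-1.0, -1.0), (1.0, -1.0), (1.0, 1.0), (-1.0, 1.0)]

def project(pose, pts):
    """Planar rigid transform used here as a stand-in for camera projection."""
    theta, tx, ty = pose
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in pts]

def residuals(pose, observed):
    """Stacked per-corner projection errors (8 values for 4 corners)."""
    r = []
    for (px, py), (ox, oy) in zip(project(pose, MARKER), observed):
        r.extend([px - ox, py - oy])
    return r

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        piv = max(range(i, 3), key=lambda k: abs(M[k][i]))
        M[i], M[piv] = M[piv], M[i]
        for k in range(i + 1, 3):
            f = M[k][i] / M[i][i]
            for c in range(i, 4):
                M[k][c] -= f * M[i][c]
    x = [0.0, 0.0, 0.0]
    for i in range(2, -1, -1):
        x[i] = (M[i][3] - sum(M[i][c] * x[c] for c in range(i + 1, 3))) / M[i][i]
    return x

def damped_least_squares(observed, pose=(0.0, 0.0, 0.0), lam=1e-3, iters=50):
    """Iteratively refine the pose: solve (J^T J + lam*I) d = -J^T r."""
    pose, eps = list(pose), 1e-6
    for _ in range(iters):
        r = residuals(pose, observed)
        # Numerical Jacobian of the residuals w.r.t. the 3 pose parameters.
        J = [[0.0] * 3 for _ in r]
        for j in range(3):
            p2 = pose[:]
            p2[j] += eps
            r2 = residuals(p2, observed)
            for i in range(len(r)):
                J[i][j] = (r2[i] - r[i]) / eps
        JTJ = [[sum(J[k][a] * J[k][b] for k in range(len(r)))
                for b in range(3)] for a in range(3)]
        for a in range(3):
            JTJ[a][a] += lam  # damping term keeps the step well-conditioned
        JTr = [sum(J[k][a] * r[k] for k in range(len(r))) for a in range(3)]
        d = solve3(JTJ, [-v for v in JTr])
        pose = [p + dp for p, dp in zip(pose, d)]
    return pose

# Synthetic "observed" marker corners from a known pose, then recover it.
true_pose = (0.3, 2.0, -1.0)
observed = project(true_pose, MARKER)
est = damped_least_squares(observed)
```

In the actual system the residual would come from the perspective projection of the detected ArUco corners, and the state vector would be a full 6-DOF pose, but the damped normal-equation update is the same shape.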