2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2019
DOI: 10.1109/iros40897.2019.8967863
Vision-based Automatic Control of a 5-Fingered Assistive Robotic Manipulator for Activities of Daily Living

Cited by 6 publications (11 citation statements, all of type "mentioning"). References 20 publications.
“…Moreover, there is also a wide applicability of DL techniques in the IoT-Healthcare use cases and Wearables, for instance, to derive short-term and long-term health predictions. • Robotics: In robotics, DNNs served in a wide range of use cases like autonomous vehicles [14], humanoid robots [15], assistive robots [16], swarms [17], and drone control system [18]. • Smart Energy Management: DL can also be used to preserve valuable resources such as electricity.…”
Section: Introduction (mentioning, confidence: 99%)
“…The performance of these algorithms ranges from 77.8% to 99.05%. Regarding speed, it is shown that some algorithms were tested without considering the execution time [ 86 , 88 ], one algorithm is reported to run at a speed of 21.62 FPS [ 99 ], and the rest run in real time. Regarding hardware, the algorithms for object recognition were tested on robotic platforms such as the ARMAR-III robot [ 82 ] or the Jaco robot [ 93 , 95 , 100 ], to mention a few.…”
Section: Discussion and Conclusion (mentioning, confidence: 99%)
“…Then, the most representative points of each object are extracted to store them in a database and use to recognize places in a home. On the contrary, a simulated option is presented by Wang et al in [ 99 ] for an assistive robotic arm capable of helping disabled people to pick up objects on the floor. The authors implemented an RGB-D camera to extract cloud points from the scene.…”
Section: Algorithms Used For Objects (mentioning, confidence: 99%)
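The excerpt above describes extracting a point cloud from an RGB-D camera. The citing paper does not give its implementation, but the standard approach back-projects each depth pixel through the pinhole camera model. A minimal sketch, assuming the usual intrinsics (`fx`, `fy`, `cx`, `cy` are placeholders, not values from the paper):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into an N x 3 point cloud
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Toy example: a 4x4 depth image at a constant 1 m.
depth = np.ones((4, 4), dtype=np.float32)
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=2.0, cy=2.0)
print(cloud.shape)  # (16, 3)
```

The pixel at the principal point (u = cx, v = cy) maps to (0, 0, Z), which is a quick sanity check for the intrinsics.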
“…The CV system was additionally used just before state 4, in which a depth image of the object of interest was taken, and the shape of that object was determined via a 5-layer Deep Convolutional Neural Network (DCNN). In the future, this shape determination step could be used to determine grasp type for the end-effector [18] or determine the final autonomous grasping strategy. The DCNN was trained on about 500 depth images collected in the simulator with the robot in this intermediate state, consistently achieving over 90% accuracy on the validation set.…”
Section: Computer Vision System (mentioning, confidence: 99%)
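The excerpt above mentions a 5-layer deep convolutional network that classifies object shape from a single depth image. The exact architecture is not given; the sketch below is an illustrative 5-layer model (three conv layers plus two fully connected layers, class count and layer widths are assumptions) showing the input/output shapes such a classifier would have:

```python
import torch
import torch.nn as nn

class ShapeNet(nn.Module):
    """Illustrative 5-layer CNN mapping a single-channel depth image to a
    few coarse shape classes. Only the layer count matches the excerpt;
    widths, kernel sizes, and n_classes are assumptions."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # layer 1
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # layer 2
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # layer 3
            nn.AdaptiveAvgPool2d(1),                               # global pool
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64, 32), nn.ReLU(),   # layer 4
            nn.Linear(32, n_classes),       # layer 5: shape logits
        )

    def forward(self, x):
        return self.classifier(self.features(x))

net = ShapeNet()
logits = net(torch.randn(1, 1, 64, 64))  # one 64x64 depth image
print(logits.shape)  # torch.Size([1, 4])
```

In the cited pipeline such logits would select a grasp type or grasping strategy for the end-effector; training on roughly 500 simulated depth images, as the excerpt reports, is small enough that a compact network like this is plausible.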
“…While the visual control system presented here needs improvements to robustly work in a real-world environment, it was adequate to test our current framework for BCI control in simulation. However, the entire system is largely modular, so it should be trivial to upgrade it to include concepts such as classification and localization using deep learning [18], though speed may be an important factor with more complex architectures. Additionally, as the robot autonomously completes the grasp from step 4 onward, other more complex grasping techniques could be implemented that have been learned through techniques such as deep reinforcement learning [19], [20], rather than just tuning a few parameters.…”
Section: Computer Vision System (mentioning, confidence: 99%)