2023
DOI: 10.1109/tmech.2022.3210592

A Control Strategy of Robot Eye-Head Coordinated Gaze Behavior Achieved for Minimized Neural Transmission Noise

Cited by 18 publications (15 citation statements)
References 40 publications

Citation statements (ordered by relevance):
“…The subjects were asked to hold the handle to move according to the desired trajectory in every training trial. He/She made movement corrections in response to visual feedback and force feedback [37, 38]. After each trial, the posterior distribution of the subject’s performance to the hyperparameter was generated by the Gaussian process based on historical hyperparameters and performances.…”
Section: Methods
confidence: 99%
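The excerpt above describes a Gaussian-process posterior over the subject's performance as a function of the training hyperparameter, refit after each trial from the history of (hyperparameter, performance) pairs. Below is a minimal sketch of that update step, assuming scikit-learn's GaussianProcessRegressor with an RBF-plus-noise kernel; the variable names and example values are illustrative assumptions, not taken from the cited work.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Historical data: one hyperparameter setting and one performance score per trial.
# (Hypothetical values; the cited paper does not specify these numbers.)
trial_hyperparams = np.array([[0.2], [0.5], [0.8], [1.1]])   # e.g. an assistance gain
trial_performance = np.array([0.61, 0.74, 0.70, 0.58])       # e.g. a tracking score

# Fit the GP on all completed trials; an RBF kernel plus a noise term is a common default.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(trial_hyperparams, trial_performance)

# Posterior (mean and standard deviation) of performance for a candidate
# hyperparameter to be used in the next training trial.
candidate = np.array([[0.65]])
mean, std = gp.predict(candidate, return_std=True)
print(f"posterior mean = {mean[0]:.3f}, std = {std[0]:.3f}")
```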
“…With the development of robot application and the combination of deep learning technology and robots ( Qi and Su, 2022 ), robots are increasingly intelligent ( Chen and Qiao, 2020b , Wang et al, 2022 ) and behaving more and more like human beings ( Su et al, 2022b ). Now robots can be driven by humans to display natural and appropriate behaviors in social scenes ( Liu et al, 2022a ). Cylinder-aperture ESM measurement system sees the successful application of a 6-DOF manipulator ( Liu et al, 2022b ).…”
Section: Our Choice
confidence: 99%
“…Deep learning-related technologies are increasingly integrated into people’s daily life, and object detection algorithms ( Qi et al, 2021 ; Liu et al, 2022a , b ; Xu et al, 2022 ), as a crucial component of the autonomous driving perception layer, can create a solid foundation for behavioral judgments during autonomous driving. Although object detection algorithms based on 2D images ( Bochkovskiy et al, 2020 ; Bai et al, 2022 ; Cheon et al, 2022 ; Gromada et al, 2022 ; Long et al, 2022 ; Otgonbold et al, 2022 ; Wahab et al, 2022 ; Wang et al, 2022 ) have had a lot of success at this stage, single-view images cannot completely reflect the position pose, and motion orientation of objects in 3D space due to the lack of depth information in 2D images.…”
Section: Introduction
confidence: 99%