2011
DOI: 10.4236/ica.2011.22013

Reduced Model Based Control of Two Link Flexible Space Robot

Abstract: Model based control schemes use the inverse dynamics of the robot arm to produce the main torque component necessary for trajectory tracking. A model-based controller therefore requires accurate knowledge of the model parameters, which is very difficult to obtain, especially if the manipulator is flexible. Hence, a reduced model based controller has been developed, which requires only the information of the space robot base velocity and the link parameters. The flexible link is modeled as an Euler-Bernoulli beam. To simplify the anal…
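
For context on the model-based baseline the abstract contrasts against, a conventional computed-torque (inverse-dynamics) law can be sketched as follows. This is a generic illustration, not the paper's reduced-model scheme; the symbols M, C, g, K_p, and K_d are standard rigid-manipulator notation assumed here, not quantities defined in the paper.

% Generic computed-torque (inverse-dynamics) control law, for illustration:
% M(q): inertia matrix, C(q,\dot{q}): Coriolis/centrifugal matrix,
% g(q): gravity vector, e = q_d - q: tracking error.
\tau = M(q)\bigl(\ddot{q}_d + K_d\,\dot{e} + K_p\,e\bigr) + C(q,\dot{q})\,\dot{q} + g(q)

Everything except the feedback terms K_p e and K_d \dot{e} depends on the model, which is why accurate parameter knowledge matters for such schemes.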

Cited by 14 publications (10 citation statements)
References 11 publications
“…In such cases, reinforcement learning and policy search algorithms that can learn from a robot's experience have been shown to be successful [8], [9] for tasks such as object manipulation [10], [11], [12], locomotion [13], [14], [15], [16] and flight [17]. However, most of this work uses a model-free component to approximate features of the robot or the world that cannot be modeled, while still using model-based controllers for other parts of the system [12], [18]. In work where flexibility is taken into consideration, learning is still based either on building a more complex model [6], [19], [20], on an approximate model [21], or on plugging a learned-model component into a model-based controller. Recently, end-to-end model-free methods using deep reinforcement learning have been demonstrated successfully on rigid real robots [22], [23], [16].…”
Section: Related Work (mentioning)
confidence: 99%
“…The system dynamics and parameters do not appear in controller (16), which makes it very robust to shocks from system parameter variations.…”
Section: Remark (mentioning)
confidence: 99%
“…The parameters K_p and K_d in the proposed controller (16) have the same values as those in the PD control, and the noncollocated control parameters are determined as K_e = diag(0, 0, 0, 440, 320, 0, 0, 0, 0, 0, 0) and … Comparing with the results of the PD control (26), it can be seen that the proposed noncollocated model-free position control (16) achieves better position-regulation performance with fewer overshoots and vibrations; the tip-position trajectory of the end effector is accordingly smoother, as shown in Fig. 8.…”
Section: Simulation Studies (mentioning)
confidence: 99%
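
The exact form of controller (16) is not given in this excerpt. As a rough illustration of a model-free PD law with an added noncollocated feedback term, the sketch below assumes tau = K_p (q_d − q) − K_d q̇ − K_e δ, where δ collects the noncollocated (flexible) coordinates; the K_e values are the ones quoted above, while K_p, K_d, and the state layout are placeholders.

import numpy as np

# Hypothetical sketch of a model-free PD controller with a
# noncollocated feedback term; controller (16)'s exact form is
# not given in this excerpt, so this assumes
#   tau = Kp (q_d - q) - Kd qdot - Ke delta.
N = 11  # dimension implied by the 11-entry diagonal gain quoted above

Kp = np.diag(np.full(N, 100.0))  # placeholder proportional gains
Kd = np.diag(np.full(N, 20.0))   # placeholder derivative gains
Ke = np.diag([0.0, 0, 0, 440, 320, 0, 0, 0, 0, 0, 0])  # gains quoted above

def control_torque(q, qdot, q_des, delta):
    """No inertia, Coriolis, or gravity terms appear here, which is
    the robustness property the remark above highlights."""
    e = q_des - q
    return Kp @ e - Kd @ qdot - Ke @ delta

# Shape check with illustrative zero/unit states.
tau = control_torque(np.zeros(N), np.zeros(N), np.ones(N), np.zeros(N))

Because the control law contains no model terms, parameter variations affect the closed loop only through the plant, which is the robustness argument made in the remark above.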
“…There have been a great number of studies on controlling flexible-arm robots, many of which investigate both theoretical and experimental aspects of this field [4] [5]. Owing to the flexibility of the arms, vibration control should also be considered alongside the trajectory-tracking control problem to improve control-system performance [6].…”
Section: Introduction (mentioning)
confidence: 99%