2021
DOI: 10.1080/08839514.2021.2015106
Force Control of a Shape Memory Alloy Spring Actuator Based on Internal Electric Resistance Feedback and Artificial Neural Networks

Cited by 7 publications (3 citation statements) · References 15 publications
“…Nevertheless, accurate control of the electrical resistance is difficult due to complex non-linear characteristics such as hysteresis and saturation. Therefore, a resistance control system with additional hysteresis compensators has been proposed for the non-linear characteristics of SMA [9][10][11][12]. A feedforward hysteresis compensator can effectively compensate for non-linear phenomena.…”
Section: Introduction
Confidence: 99%
“…The use of electrical resistance (E.R.) as a sensor is investigated in a few references (Karimi and Konh, 2020; Prechtl et al, 2020; Sarmento et al, 2022; Simone et al, 2017; Urata et al, 2007). These studies focus on a SMA that undergoes a constant stress value.…”
Section: Introduction
Confidence: 99%
“…An adaptive Q-Learning controller was proposed for controlling an SMA actuated humanoid robotic finger [25]. The resistance feedback control of an SMA spring actuator was investigated and artificial neural networks were used instead of constitutive models [26]. With the development of RL theory, the DeepMind team added a neural network into RL instead of the original reward table, which developed into deep reinforcement learning (DRL) [27].…”
Section: Introduction
Confidence: 99%
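The last excerpt notes that the cited work uses artificial neural networks in place of constitutive models in a resistance-feedback loop. A minimal sketch of that idea, using synthetic data and an arbitrary network size (none of the numbers come from the paper):

```python
# Sketch: a small neural network standing in for a constitutive model,
# mapping SMA electrical resistance to actuator force. Data is synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Assumed behaviour: force falls roughly linearly with resistance,
# plus a mild nonlinearity. Purely illustrative values.
R = rng.uniform(4.0, 6.0, size=(200, 1))        # resistance [ohm]
F = 10.0 * (6.0 - R) + 0.5 * np.sin(3.0 * R)    # force [N]

# Normalise inputs and targets for stable training.
x = R - R.mean()
y = (F - F.mean()) / F.std()

# One-hidden-layer MLP trained with full-batch gradient descent.
W1 = rng.normal(0.0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)

def forward(inp):
    h = np.tanh(inp @ W1 + b1)
    return h, h @ W2 + b2

lr = 0.05
for _ in range(3000):
    h, pred = forward(x)
    err = pred - y                                # (200, 1) residuals
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h**2)              # backprop through tanh
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# In a control loop, forward() would serve as the resistance-to-force
# estimator that a measured-resistance feedback signal is passed through.
_, y_hat = forward(x)
rmse = float(np.sqrt(np.mean((y_hat - y) ** 2)))  # well below the mean-predictor baseline of 1.0
```

The point of the sketch is the role swap: the learned map replaces an analytic constitutive model, so no material parameters need to be identified explicitly.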