2021
DOI: 10.1155/2021/6617309

Control of Magnetic Manipulator Using Reinforcement Learning Based on Incrementally Adapted Local Linear Models

Abstract: Reinforcement learning (RL) agents can learn to control a nonlinear system without using a model of the system. However, having a model brings benefits, mainly in terms of a reduced number of unsuccessful trials before achieving acceptable control performance. Several modelling approaches have been used in the RL domain, such as neural networks, local linear regression, or Gaussian processes. In this article, we focus on techniques that have not been used much so far: symbolic regression (SR), based on genetic…
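The full text is not reproduced here, but the title refers to incrementally adapted local linear models. As a rough illustration only, not the authors' implementation, the sketch below shows one common way such a model can be adapted online: recursive least squares with a forgetting factor. All class and parameter names are illustrative assumptions.

```python
import numpy as np

class RecursiveLinearModel:
    """Local linear model y ~ theta^T x, updated incrementally with
    recursive least squares (RLS) and a forgetting factor (assumed scheme,
    not taken from the cited paper)."""

    def __init__(self, n_inputs, forgetting=0.99, init_cov=1e3):
        self.theta = np.zeros(n_inputs)        # parameter estimate
        self.P = np.eye(n_inputs) * init_cov   # covariance; large = uncertain
        self.lam = forgetting                  # forgetting factor in (0, 1]

    def predict(self, x):
        return float(self.theta @ x)

    def update(self, x, y):
        """Incorporate one new sample (x, y) without refitting from scratch."""
        Px = self.P @ x
        gain = Px / (self.lam + x @ Px)        # Kalman-style gain vector
        error = y - self.theta @ x             # prediction error on the new sample
        self.theta = self.theta + gain * error
        self.P = (self.P - np.outer(gain, Px)) / self.lam
        return error

# Toy usage: track a slowly drifting linear map y = a*x1 + b*x2
rng = np.random.default_rng(0)
model = RecursiveLinearModel(n_inputs=2)
a, b = 1.5, -0.7
for t in range(500):
    x = rng.normal(size=2)
    y = a * x[0] + b * x[1] + 0.01 * rng.normal()
    model.update(x, y)
    a += 0.001                                 # slow drift the model must follow
print("estimated parameters:", model.theta)
```

The forgetting factor discounts old samples, which is what lets the fitted local model follow a system whose dynamics drift over time; with forgetting set to 1 the update reduces to ordinary recursive least squares.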

Cited by 2 publications (1 citation statement). References 24 publications (38 reference statements).
“…We found that startup and camera connection initiated long delays, with each operation taking ≈1 s. Beyond those two functions, all other functions were completed in significantly less than 100 ms. Future work demonstrating operability in an array of light sources, and possibly under actively changing lighting conditions, may shed light on how the anticipation of dynamic lighting conditions could be incorporated into the training set for untethered magnetic manipulation systems. Various laboratories have applied advanced learning techniques such as reinforcement learning to the problem of controlling small magnetic devices [48][49][50], and we acknowledge that the application of such learning approaches would improve the performance of our controller. However, here, we emphasize the ability to control a simple magnetic sphere using very simple regression models and a small sample data set.…”
Section: GUI Response Time (citation type: mentioning; confidence: 99%)