2023 · Preprint · DOI: 10.36227/techrxiv.21901419
A Unified Framework for Multi-Agent Formation with a Non-repetitive Leader Trajectory: Adaptive Control and Iterative Learning Control

Abstract: Formation tracking (FT) control aims at handling cooperative tasks in multi-agent systems (MASs) to achieve desired performance. In these tasks, the leader's input is generally non-zero and unknown to all followers, i.e., its trajectory can be arbitrary and non-repetitive. In this paper, the additive property of linear systems is exploited to develop a unified framework for FT tasks of MASs, consisting of adaptive observer-based control (AOC) and iterative learning control (ILC). This framework emplo…

Cited by 1 publication (1 citation statement)
References 33 publications
“…The general optimization process of ACMs is to minimize the associated energy function through a gradient or steepest descent method. However, one should be aware that it may be hard to find the global minimum if the energy function is non-convex [33, 84–87], which may cause a failed segmentation by falling into a local minimum. Specifically, the traditional gradient or steepest descent approach is initialized with the initial level set function and then descends at each iteration; the descending direction is controlled by the slope, or the derivative, of the evolution curve.…”
Section: Fast and Stable Optimization Algorithm
confidence: 99%
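The local-minimum pitfall described in this citation statement can be illustrated with a minimal sketch (not from the cited paper): plain gradient descent on a hypothetical non-convex "energy" f(x) = x⁴ − 3x² + x, which has a local minimum at x = 1 and a global minimum near x ≈ −1.366. The converged point depends entirely on the initial value, just as the level-set initialization determines where the curve evolution settles.

```python
# Toy example (assumed, not the paper's method): gradient descent on a
# non-convex energy f(x) = x^4 - 3x^2 + x.  Depending on the starting
# point, descent falls into either the local or the global minimum.

def grad(x):
    # f'(x) = 4x^3 - 6x + 2
    return 4 * x**3 - 6 * x + 2

def descend(x0, lr=0.01, steps=5000):
    """Fixed-step gradient descent from initial guess x0."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

print(round(descend(2.0), 3))   # → 1.0    (trapped in the local minimum)
print(round(descend(-2.0), 3))  # → -1.366 (reaches the global minimum)
```

Starting at x₀ = 2 the iterate descends into the basin of the local minimum and never escapes, while x₀ = −2 reaches the global one, mirroring how a poor level-set initialization can yield a failed segmentation.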