2020
DOI: 10.1109/tro.2019.2955321
Approximate Optimal Motion Planning to Avoid Unknown Moving Avoidance Regions

Cited by 21 publications (17 citation statements). References 66 publications.
“…ADP was successfully extended to address input constrained control problems in Modares et al (2013) and Vamvoudakis et al (2016) . The state-constrained ADP problem was studied in the context of obstacle avoidance in Walters et al (2015) and Deptula et al (2020) , where an additional term that penalizes proximity to obstacles was added to the cost function. Since the added proximity penalty in Walters et al (2015) was finite, the ADP feedback could not guarantee obstacle avoidance, and an auxiliary controller was needed.…”
Section: Introduction (citation type: mentioning)
Confidence: 99%
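The point of the statement above is that a bounded proximity penalty can always be outweighed by the other cost terms, so the optimal feedback may cut through the obstacle. A minimal Python sketch (the bump-shaped penalty and all names are illustrative, not taken from the cited works) makes the boundedness explicit:

```python
import numpy as np

def finite_proximity_penalty(x, obstacle_center, radius, weight=10.0):
    """Finite penalty: large near the obstacle, but bounded above by `weight`.

    Because the penalty never exceeds `weight`, a cost-minimizing policy can
    trade it off against the other running-cost terms, so obstacle avoidance
    is not guaranteed by the cost alone.
    """
    d = np.linalg.norm(x - obstacle_center)
    # Smooth bump: equals `weight` at the obstacle center, decays to ~0 far away.
    return weight * np.exp(-(d / radius) ** 2)

def running_cost(x, u, Q, R, obstacle_center, radius):
    """Quadratic state/control cost augmented with the bounded proximity penalty."""
    return x @ Q @ x + u @ R @ u + finite_proximity_penalty(x, obstacle_center, radius)
```

Since the penalty is finite, an auxiliary controller is still needed to enforce avoidance, as the quote notes.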
“…Since the added proximity penalty in Walters et al (2015) was finite, the ADP feedback could not guarantee obstacle avoidance, and an auxiliary controller was needed. In Deptula et al (2020) , a barrier-like function was used to ensure unbounded growth of the proximity penalty near the obstacle boundary. While this approach results in avoidance guarantees, it relies on the relatively strong assumption that the value function is continuously differentiable over a compact set that contains the obstacles in spite of penalty-induced discontinuities in the cost function.…”
Section: Introduction (citation type: mentioning)
Confidence: 99%
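A barrier-like penalty of the kind described above can be sketched as follows (a hypothetical log-barrier on the distance to the obstacle boundary, chosen here for illustration; the cited work may use a different form). The key property is that the penalty grows without bound as the state approaches the boundary, so any finite-cost trajectory must stay clear of the obstacle:

```python
import numpy as np

def barrier_penalty(x, obstacle_center, radius, weight=1.0):
    """Barrier-like proximity penalty: unbounded at the obstacle boundary.

    As the distance d approaches `radius` from outside, (d - radius) / d -> 0+
    and the penalty tends to +infinity; far from the obstacle it decays to 0.
    """
    d = np.linalg.norm(x - obstacle_center)
    if d <= radius:
        return np.inf  # on or inside the obstacle: inadmissible state
    return -weight * np.log((d - radius) / d)
```

Note the trade-off raised in the quote: the penalty is discontinuous at the obstacle boundary, which is why assuming a continuously differentiable value function over a set containing the obstacles is a strong assumption.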
“…The approaches in [21], [22] rely on a barrier Lyapunov function (BLF) [12] based system transformation to convert a system with rectangular state constraints (i.e., the safe set is a hyper-rectangle) into an equivalent unconstrained system, allowing for the use of unconstrained ADP algorithms to develop control policies that guarantee stability and safety. Other authors have successfully developed ADP methods for particular safety-critical tasks with more complex constraints [25]; however, the developments are task-specific and may not generalize beyond their respective domains. Motivated by the recent success of CBFs in domains such as robotics that require complex safety requirements that cannot be expressed as box constraints on the system states, the authors in [23], [24] incorporate a CBF-based term into the cost function of an optimal control problem to synthesize control policies capable of guaranteeing satisfaction of safety constraints expressed as CBFs.…”
Section: Introduction (citation type: mentioning)
Confidence: 99%
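The BLF-based system transformation mentioned above maps each box-constrained state coordinate to an unconstrained one. A standard logarithmic transform of this kind (shown per coordinate; the specific form used in [21], [22] may differ) can be sketched as:

```python
import numpy as np

def blf_transform(x, a, b):
    """Map a state confined to the open interval (a, b) to an unconstrained
    coordinate s: s -> -inf as x -> a+ and s -> +inf as x -> b-, so keeping
    s bounded keeps x strictly inside the box."""
    return np.log((x - a) / (b - x))

def blf_inverse(s, a, b):
    """Inverse map: every real s corresponds to a state strictly inside (a, b),
    so an unconstrained ADP controller designed in s-coordinates is safe in x."""
    return (a + b * np.exp(s)) / (1.0 + np.exp(s))
```

This is exactly why the approach only covers hyper-rectangular safe sets: the transform acts coordinate-wise on interval bounds, which motivates the CBF-based treatments of more complex constraint geometries discussed next.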
“…In contrast, the approach developed in this paper accounts for safe sets defined as the super-level set of a general continuously differentiable function, which can be used to encapsulate a variety of complex safety requirements [9]. In contrast to existing ADP approaches that leverage CBFs [23], [24] (and the closely related motion planning framework [25]), the method developed in this paper does not rely on including a non-differentiable term in the cost function that may compromise the differentiability of the resulting value function. Instead, a novel safeguarding controller is developed that shields the learned policy corresponding to an unconstrained optimal control problem from unsafe actions in a minimally invasive fashion.…”
Section: Introduction (citation type: mentioning)
Confidence: 99%
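The "minimally invasive" safeguarding idea in the last quote can be illustrated with a standard CBF-style safety filter. The sketch below assumes single-integrator dynamics (x' = u) and a single linear constraint, so the filtering QP has a closed-form solution; the paper's actual controller and dynamics are more general, and all names here are illustrative:

```python
import numpy as np

def safeguard(u_nom, x, h, grad_h, alpha=1.0):
    """Minimally invasive safety filter for single-integrator dynamics x' = u.

    Solves  min ||u - u_nom||^2  s.t.  grad_h(x) . u >= -alpha * h(x)
    in closed form. The learned (nominal) policy u_nom passes through
    unchanged whenever it already satisfies the safety condition; otherwise
    u_nom is projected onto the constraint boundary.
    """
    a = grad_h(x)
    b = -alpha * h(x)
    if a @ u_nom >= b:
        return u_nom  # nominal action is already safe: do not intervene
    # Active-constraint QP solution: minimal-norm correction of u_nom.
    return u_nom + ((b - a @ u_nom) / (a @ a)) * a
```

The "minimally invasive" property is visible in the structure of the solution: the correction term is zero unless the constraint is violated, and otherwise is the smallest adjustment (in the Euclidean norm) that restores the CBF condition.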