2020
DOI: 10.1007/s00500-020-05194-y
Shadowed type 2 fuzzy-based Markov model to predict shortest path with optimized waiting time

Cited by 6 publications (10 citation statements)
References 15 publications
“…The history of reinforcement learning can be traced back to the 1950s. Its basic idea is to transform the sequential decision-making problem into a Markov model [28]. It establishes the mapping between the environmental state and the state-action value function through the interaction between the agent (robot) and the environment.…”
Section: The Improved Q-learning Algorithm
Confidence: 99%
“…Lemma 1: [25] Consider the stochastic system (11). If there exists a Lyapunov function V(̟(k), δ_k):…”
Section: Modelling of the Closed-loop System
Confidence: 99%
“…for all ̟ ∈ Q^{2n}_̟, k > 0. Then the system (11) … Lemma 3: [39] (Abel Lemma-based Finite-sum Inequality) For any integers ι₁ and ι₂ with ι₂ − ι₁ > 1, and a matrix Q > 0, the following inequality holds:…”
Section: Modelling of the Closed-loop System
Confidence: 99%