2018
DOI: 10.4108/eai.12-9-2018.155557
Analysis on Improving the Response Time with PIDSARSA-RAL in ClowdFlows Mining Platform

Abstract: This paper presents an improved parallel data-processing approach for Big Data mining using the ClowdFlows platform. The processing improves a Proportional Integral Derivative (PID) controller with Reinforcement Adaptive Learning (RAL). The RAL employs actor-critic State-Action-Reward-State-Action (SARSA) learning, which suits the stream-mining module of the ClowdFlows platform. The study concentrates on batch-mode processing in the Big Data mining model with the …
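The abstract's pairing of a PID controller with SARSA-based reinforcement adaptive learning can be illustrated with a minimal sketch. The state discretisation, action set (adjustments to the proportional gain), and reward function below are illustrative assumptions, not the paper's actual implementation:

```python
import random

# Hypothetical sketch: a SARSA agent tuning a PID proportional gain.
# States, actions, and the reward signal are illustrative assumptions.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
STATES = ["low_error", "high_error"]   # discretised response-time error
ACTIONS = [-0.1, 0.0, 0.1]             # adjustment applied to the gain Kp

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def choose_action(state):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def sarsa_update(s, a, r, s_next, a_next):
    """On-policy SARSA update: Q(s,a) += alpha*(r + gamma*Q(s',a') - Q(s,a))."""
    Q[(s, a)] += ALPHA * (r + GAMMA * Q[(s_next, a_next)] - Q[(s, a)])

def step(state, action):
    """Toy environment: increasing the gain resolves a high-error state."""
    if state == "high_error" and action > 0:
        return "low_error", 1.0
    return state, (-0.1 if state == "high_error" else 0.0)

random.seed(0)
state = "high_error"
action = choose_action(state)
for _ in range(200):
    next_state, reward = step(state, action)
    next_action = choose_action(next_state)      # on-policy: use the action actually taken next
    sarsa_update(state, action, reward, next_state, next_action)
    state, action = next_state, next_action
```

After training, the Q-value for raising the gain in the high-error state dominates, which is the sense in which SARSA adapts the controller online rather than fixing gains statically.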

Cited by 11 publications (5 citation statements)
References 6 publications (11 reference statements)
“…Then the agent receives a reward ( r t ) or penalty ( p t ) that signifies whether the action is good or bad. From these states (S) and actions (A), the mapping ( π ) is learnt by DRL (π: S → A ), where the improved model for state and action is referred from Reference .…”
Section: Optimal Route Establishment
confidence: 99%
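The mapping π: S → A described in this citation statement — a policy learnt from rewards and penalties — can be sketched as a greedy read-out of a learnt Q-table. The route-establishment states, actions, and Q-values below are hypothetical, not taken from the cited work:

```python
# Illustrative Q-table a DRL agent might have learnt from rewards/penalties.
# State and action names are hypothetical examples for a routing setting.
Q = {
    ("congested", "reroute"): 0.8,
    ("congested", "hold"):   -0.3,   # penalty-dominated action
    ("clear", "reroute"):     0.1,
    ("clear", "hold"):        0.6,
}

def policy(state, actions=("reroute", "hold")):
    """The learnt mapping pi: S -> A — pick the highest-valued action."""
    return max(actions, key=lambda a: Q[(state, a)])

print(policy("congested"))
print(policy("clear"))
```

Reward-dominated actions are chosen and penalty-dominated ones avoided, which is exactly the "good or bad" feedback loop the statement describes.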
“…Conventionally, Sarsa and Q-learning have been applied to robotics problems [11]. Recently, a fuzzy Sarsa learning (FSL) algorithm was proposed to control a biped walking robot [12], and a PID-SARSA-RAL algorithm was proposed in the ClowdFlows platform to improve parallel data processing [13]. Q-learning is often applied in combination with a proportional integral derivative (PID) controller.…”
Section: Introduction
confidence: 99%
“…Recently, some relevant applications of model-free reinforcement learning to tracking problems can be found in [20,21]. Among these results [12][13][14][15][16][17][18], the PID controller has attracted substantial attention as an application of reinforcement learning algorithms [13][14][15][16]. However, the literature [13,[15][16][17][18] has provided only optimal but static gains in control, and a complete search of the Q-table is required in [14].…”
Section: Introduction
confidence: 99%