2021
DOI: 10.1109/tnsm.2020.3037486

Adaptive and Reinforcement Learning Approaches for Online Network Monitoring and Analysis

Abstract: Network-monitoring data commonly arrives in the form of fast and changing data streams. Continuous and dynamic learning is an effective strategy when dealing with such data, where concept drifts constantly occur. We propose different stream-based, adaptive learning approaches to analyze network-traffic streams on the fly. We address two major challenges associated with stream-based machine learning and online network monitoring: (i) how to dynamically learn from and adapt to non-stationary data changing …
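As a rough sketch of what analyzing streams "on the fly" means operationally, the test-then-train (prequential) loop below processes one sample at a time: predict first, score, then update the model. The OnlineMajorityClassifier and the two-phase stream are hypothetical toys standing in for the paper's adaptive models, not the authors' actual method.

```python
# Minimal prequential (test-then-train) loop for stream-based learning.
# OnlineMajorityClassifier is a toy stand-in for a real adaptive model.

from collections import Counter

class OnlineMajorityClassifier:
    """Predicts the majority class seen so far; updated one sample at a time."""
    def __init__(self):
        self.counts = Counter()

    def predict_one(self, x):
        # Fall back to class 0 before any training data has arrived.
        return self.counts.most_common(1)[0][0] if self.counts else 0

    def learn_one(self, x, y):
        self.counts[y] += 1

def prequential(stream, model):
    """Yield the running accuracy: test on each sample, then train on it."""
    correct, seen = 0, 0
    for x, y in stream:
        correct += int(model.predict_one(x) == y)
        seen += 1
        model.learn_one(x, y)
        yield correct / seen

# Hypothetical two-phase stream whose label distribution drifts halfway through.
stream = [({"pkts": i}, 0) for i in range(500)] + [({"pkts": i}, 1) for i in range(500)]
accuracies = list(prequential(stream, OnlineMajorityClassifier()))
print(f"final prequential accuracy: {accuracies[-1]:.3f}")
```

Because the static majority-class model never adapts, its accuracy collapses after the halfway drift; drift-aware models replace or rebuild the learner at that point.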

Cited by 6 publications (4 citation statements) · References 39 publications
“…In contrast to traditional ML techniques, reinforcement learning (RL) aims to find an optimal learning policy strategy during the model-building phase [18]. RL-based techniques are performed through an entity called the agent.…”
Section: B. Reinforcement Learning
confidence: 99%
“…Traditional ML relies upon an input and output pair, where an ML model receives a given event feature vector as input and outputs its estimated class value. RL-based approaches are significantly different from traditional ML-based ones [18]. RL-based techniques are not given the event class value [3].…”
Section: B. Reinforcement Learning
confidence: 99%
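To make the contrast drawn in these excerpts concrete, below is a minimal tabular Q-learning sketch: the agent is never given the correct action (the analogue of a class label) and learns only from scalar rewards. The two-state toy environment and all constants are hypothetical, chosen for illustration.

```python
# Minimal tabular Q-learning: the agent is never told the "correct" action;
# it only observes rewards and updates its action-value estimates.
import random

random.seed(0)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration
N_STATES, N_ACTIONS = 2, 2
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    """Hypothetical toy environment: action 1 pays off in state 0, action 0 in state 1."""
    reward = 1.0 if action != state else 0.0
    return random.randrange(N_STATES), reward

state = 0
for _ in range(5000):
    # Epsilon-greedy policy: mostly exploit the current estimates, occasionally explore.
    if random.random() < EPSILON:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
    next_state, reward = step(state, action)
    # Q-learning update: move Q(s, a) toward reward + discounted best next value.
    best_next = max(Q[next_state])
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])
    state = next_state

print(Q)  # learned values favor action 1 in state 0 and action 0 in state 1
```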
“…Then, on the basis of the estimated throughput, we apply the Thompson sampling (TS) [33] based Bayesian approach to find the best DBCA channel bonding level intelligently. TS has several advantages: (a) it is a reinforcement learning algorithm suitable for problems demanding online decisions [34]-[36], (b) it generates the probability distribution of the success rate of applying an action based on its knowledge of exploiting that action in the past, and (c) in order to maximize the cumulative reward obtained by applying actions, its combination of exploration and exploitation both learns a new environment and utilizes the best past knowledge. In TS, the actions are chosen sequentially in a manner such that a balance is maintained between decisions exploring new information to improve future performance and decisions exploiting past knowledge to maximize present performance.…”
Section: Our Approach
confidence: 99%
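The Beta-Bernoulli form of Thompson sampling described in this excerpt can be sketched in a few lines; the three "channel bonding levels" and their success probabilities below are invented for illustration and are not taken from the cited work.

```python
# Beta-Bernoulli Thompson sampling over hypothetical channel-bonding levels.
# Each arm keeps a Beta(successes+1, failures+1) posterior over its success rate;
# at every step we sample from each posterior and play the arm with the largest draw.
import random

random.seed(1)
TRUE_SUCCESS = [0.4, 0.7, 0.55]          # hypothetical per-level success rates
successes = [0] * len(TRUE_SUCCESS)
failures = [0] * len(TRUE_SUCCESS)

for _ in range(2000):
    # Sample a plausible success rate for each arm from its posterior (exploration),
    # then exploit by choosing the arm whose sampled rate is highest.
    draws = [random.betavariate(s + 1, f + 1) for s, f in zip(successes, failures)]
    arm = max(range(len(draws)), key=lambda a: draws[a])
    if random.random() < TRUE_SUCCESS[arm]:   # simulated transmission outcome
        successes[arm] += 1
    else:
        failures[arm] += 1

picks = [s + f for s, f in zip(successes, failures)]
print(picks)  # the best level (index 1) should attract most of the plays
```

The sampling step itself supplies the exploration/exploitation balance the excerpt describes: arms with uncertain posteriors occasionally produce large draws and get retried, while consistently successful arms dominate over time.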
“…use an individual classifier with incremental learning capability to update its structure when new data arrives. Unfortunately, concept drift is prevalent in dynamic streaming environments such as intrusion detection systems (IDS), where the performance of incremental learning may degrade [4], [5], [6]. Specifically, when an unknown intrusion occurs in the network, the current data distribution will change dynamically.…”
Section: Introduction
confidence: 99%
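A simplified way to act on the drift this excerpt describes is to monitor the recent prediction-error rate against the long-run rate and reset the model when it jumps. The window-and-threshold scheme below is a hypothetical illustration in the spirit of window-based detectors, not the detector used in any of the cited works.

```python
# Simplified window-based concept-drift monitor: compare the recent error rate
# against the long-run error rate and signal drift when it rises sharply.
from collections import deque

class DriftMonitor:
    def __init__(self, window=100, threshold=0.2):
        self.recent = deque(maxlen=window)   # sliding window of 0/1 errors
        self.total_errors = 0
        self.total_seen = 0
        self.threshold = threshold

    def update(self, error):
        """Feed a 0/1 prediction error; return True when drift is suspected."""
        self.recent.append(error)
        self.total_errors += error
        self.total_seen += 1
        if len(self.recent) < self.recent.maxlen:
            return False                     # not enough evidence yet
        recent_rate = sum(self.recent) / len(self.recent)
        overall_rate = self.total_errors / self.total_seen
        return recent_rate > overall_rate + self.threshold

# Usage inside a stream loop (model, stream, and fresh_model are hypothetical):
#   monitor = DriftMonitor()
#   for x, y in stream:
#       if monitor.update(int(model.predict_one(x) != y)):
#           model, monitor = fresh_model(), DriftMonitor()   # retrain after drift
#       model.learn_one(x, y)
```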