DOI: 10.1016/j.ins.2016.10.023
Abstracting massive data for lightweight intrusion detection in computer networks

Cited by 63 publications (35 citation statements)
References 22 publications
“…A case study using data from the UCI Machine Learning Repository [97] reduced an initial set of 4764 features to 623 [98]. For instance reduction, Wang et al (2016) employ a framework based on two clustering algorithms, affinity propagation (AP) and k-nearest neighbor (k-NN), to extract "exemplars", or representations of some number of actual data points [99]. A clustering algorithm is employed to cluster the data instances into similar groups; an exemplar is then defined to represent the group.…”
Section: Framework For Data Reductionmentioning
confidence: 99%
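The instance-reduction idea quoted above can be sketched in code. The following is a simplified stand-in, not the AP + k-NN framework of Wang et al. (2016): it uses a basic k-means-style clustering and then picks, for each cluster, the real data point nearest the centroid as that cluster's "exemplar". The function names and the toy dataset are illustrative assumptions.

```python
import math
import random

def dist(a, b):
    """Euclidean distance between two points given as tuples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cluster_exemplars(points, k, iters=20, seed=0):
    """Reduce a dataset to at most k exemplars: cluster the points with a
    simple k-means loop, then return, for each non-empty cluster, the actual
    data point closest to its centroid. Exemplars are real instances from
    the data, not synthetic averages."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: dist(p, centroids[i]))
            groups[j].append(p)
        # Recompute each centroid as the coordinate-wise mean of its group.
        for i, g in enumerate(groups):
            if g:
                centroids[i] = tuple(sum(c) / len(g) for c in zip(*g))
    # Exemplar = the actual point nearest each centroid.
    return [min(g, key=lambda p: dist(p, centroids[i]))
            for i, g in enumerate(groups) if g]

data = [(0.1, 0.2), (0.0, 0.1), (5.0, 5.1), (5.2, 4.9), (0.2, 0.0)]
exemplars = cluster_exemplars(data, k=2)  # a small subset of the original points
```

Downstream detection models are then trained on the exemplars instead of the full dataset, which is what makes the resulting intrusion detection "lightweight".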
“…Even if TSFRESH then filters out 50% of the features, there still could remain many hundreds of features in the model. This could be an excessive number of features that strains the capacity of the analyst to truly grasp what is going on. For instance reduction, Wang et al (2016) employ a framework based on two clustering algorithms, affinity propagation (AP) and k-nearest neighbor (k-NN), to extract "exemplars", or representations of some number of actual data points [99]. A clustering algorithm is employed to cluster the data instances into similar groups; an exemplar is then defined to represent the group.…”
Section: Framework For Data Reductionmentioning
confidence: 99%
“…where H(S) is the information entropy of a sample S and H(S|F) is the conditional entropy of feature F in the sample, which represents the information quantity (i.e., whether F exists or does not exist in the classification system) [26,27].…”
Section: Information Gain Information Gain (Ig)mentioning
confidence: 99%
“…In order to detect anomalies in a network, correlated parameters from different layers should be combined [8]. Some papers focus on building a new hierarchical framework for intrusion detection, as well as data processing based on feature classification and selection [9][10][11].…”
Section: Anomaly Detectionmentioning
confidence: 99%