Intrusion detection is an important area of research. Traditionally, the approach taken to find attacks is to inspect the contents of every packet. However, packet inspection cannot easily be performed at high speeds. Therefore, researchers and operators started investigating alternative approaches, such as flow-based intrusion detection. In that approach the flow of data through the network is analyzed, instead of the contents of each individual packet. The goal of this paper is to provide a survey of current research in the area of flow-based intrusion detection. The survey starts with a motivation why flow-based intrusion detection is needed. The concept of flows is explained, and relevant standards are identified. The paper provides a classification of attacks and defense techniques and shows how flow-based techniques can be used to detect scans, worms, Botnets and Denial of Service (DoS) attacks.
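As a minimal illustration of the flow concept this survey describes, the sketch below aggregates individual packets into flows keyed by their 5-tuple (source/destination IP, source/destination port, protocol). The packet field names are hypothetical, not taken from any specific tool or export format.

```python
from collections import defaultdict

def aggregate_flows(packets):
    """Group packets into flows by their 5-tuple and accumulate
    per-flow packet and byte counters (hypothetical field names)."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for pkt in packets:
        key = (pkt["src_ip"], pkt["dst_ip"],
               pkt["src_port"], pkt["dst_port"], pkt["proto"])
        flows[key]["packets"] += 1
        flows[key]["bytes"] += pkt["length"]
    return dict(flows)
```

A flow-based detector then inspects only these per-flow counters (and timing), rather than the payload of every packet, which is what makes the approach attractive at high speeds.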
Flow monitoring has become a prevalent method for monitoring traffic in high-speed networks. By focusing on the analysis of flows, rather than individual packets, it is often said to be more scalable than traditional packet-based traffic analysis. Flow monitoring embraces the complete chain of packet observation, flow export using protocols such as NetFlow and IPFIX, data collection, and data analysis. In contrast to what is often assumed, all stages of flow monitoring are closely intertwined. Each of these stages therefore has to be thoroughly understood, before being able to perform sound flow measurements. Otherwise, flow data artifacts and data loss can be the consequence, potentially without being observed. This paper is the first of its kind to provide an integrated tutorial on all stages of a flow monitoring setup. As shown throughout this paper, flow monitoring has evolved from the early nineties into a powerful tool, and additional functionality will certainly be added in the future. We show, for example, how the previously opposing approaches of Deep Packet Inspection and flow monitoring have been united into novel monitoring approaches.
In 2012, the Dutch National Research and Education Network, SURFnet, observed a multitude of Distributed Denial of Service (DDoS) attacks against educational institutions. These attacks were effective enough to cause the online exams of hundreds of students to be cancelled. Surprisingly, these attacks were purchased by students from websites, known as Booters. These sites provide DDoS attacks as a paid service (DDoS-as-a-Service) at costs starting from 1 USD. Since this problem was first identified by SURFnet, Booters have been used repeatedly to perform attacks on schools in SURFnet's constituency. Very little is known, however, about the characteristics of Booters, and particularly how their attacks are structured. This is vital information needed to mitigate these attacks. In this paper we analyse the characteristics of 14 distinct Booters based on more than 250 GB of network data from real attacks. Our findings show that Booters pose a real threat that should not be underestimated, especially since our analysis suggests that they can easily increase their firepower based on their current infrastructure.
Flow-based intrusion detection has recently become a promising security mechanism in high-speed networks (1-10 Gbps). Despite the wealth of contributions in this field, benchmarking of flow-based IDS is still an open issue. In this paper, we propose the first publicly available, labeled data set for flow-based intrusion detection. The data set aims to be realistic, i.e., representative of real traffic and complete from a labeling perspective. Our goal is to provide such an enriched data set for tuning, training and evaluating ID systems. Our setup is based on a honeypot running widely deployed services and directly connected to the Internet, ensuring attack exposure. The final data set consists of 14.2M flows, more than 98% of which have been labeled.
The domain name system (DNS) is a core component of the Internet. It performs the vital task of mapping human readable names into machine readable data (such as IP addresses, which hosts handle e-mail, and so on). The content of the DNS reveals a lot about the technical operations of a domain. Thus, studying the state of large parts of the DNS over time reveals valuable information about the evolution of the Internet. We collect a unique long-term data set with daily DNS measurements for all the domains under the main top-level domains (TLDs) on the Internet (including .com, .net, and .org, comprising 50% of the global DNS name space). This paper discusses the challenges of performing such a large-scale active measurement. These challenges include scaling the daily measurement to collect data for the largest TLD (.com, with 123M names) and ensuring that a measurement of this scale does not impose an unacceptable burden on the global DNS infrastructure. The paper discusses the design choices we have made to meet these challenges and documents the design of the measurement system we implemented based on these choices. Two case studies related to cloud e-mail services illustrate the value of measuring the DNS at this scale. The data this system collects is valuable to the network research community. Therefore, we end this paper by discussing how we make the data accessible to other researchers.
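The name-to-data mapping this abstract describes is carried in DNS messages with a fixed wire format. As a minimal, self-contained illustration (not the paper's measurement system), the sketch below builds the query portion of a standard DNS message for an A record, following the message layout defined in RFC 1035:

```python
import struct

def build_dns_query(name: str, qtype: int = 1, txid: int = 0x1234) -> bytes:
    """Build the wire-format bytes of a DNS query (RFC 1035).
    qtype=1 requests an A record; txid is the transaction ID."""
    # Header: ID, flags (RD=1), QDCOUNT=1, ANCOUNT/NSCOUNT/ARCOUNT=0
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME: each label prefixed by its length, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    # Question: QNAME, QTYPE, QCLASS=IN (1)
    return header + qname + struct.pack(">HH", qtype, 1)
```

A large-scale active measurement such as the one described here sends millions of such queries per day, which is why the paper's scaling and load-control challenges arise.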