Adversary thinking is an essential skill for cybersecurity experts, enabling them to understand cyber attacks and set up effective defenses. While this skill is commonly exercised in Capture the Flag games and hands-on activities, we complement these approaches with a key innovation: undergraduate students learn methods of network attack and defense by creating educational games in a cyber range. In this paper, we present the design of two courses, the instruction and assessment techniques, and our observations from the last three semesters. The students reported a unique opportunity to understand the topic deeply and to practice their soft skills as they presented their results at a faculty open day event. Their peers, who played the created games, rated the quality and educational value of the games overwhelmingly positively. Moreover, the open day raised awareness of cybersecurity and of research and development in this field at our faculty. We believe that sharing our teaching experience will be valuable for instructors planning to introduce active learning of cybersecurity and adversary thinking.
The exchange of security alerts is a current trend in network security and incident response. Alerts from network intrusion detection systems are shared among organizations so that it is possible to see the "big picture" of the current security situation. However, the quality and redundancy of the input data seem to be underrated. We present four use cases for aggregating alerts from network intrusion detection systems. Alerts from a sharing platform deployed in the Czech national research and education network were examined in a case study. Volumes of raw and aggregated data are presented, and a rule of thumb is proposed: up to 85% of alerts can be aggregated. Finally, we discuss the practical implications of alert aggregation for network intrusion detection systems, such as the (in)completeness of the alerts and the optimal time windows for aggregation.
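The kind of aggregation the abstract describes can be illustrated with a minimal sketch: alerts that share a source address and category within a fixed time window collapse into one representative record. The alert schema (`timestamp`, `src_ip`, `category`) and the five-minute window are illustrative assumptions, not the format or parameters used in the study.

```python
from collections import defaultdict

def aggregate_alerts(alerts, window_seconds=300):
    """Group IDS alerts sharing a source IP and category within a fixed
    time window, keeping one representative record with a count."""
    groups = defaultdict(list)
    for alert in sorted(alerts, key=lambda a: a["timestamp"]):
        key = (alert["src_ip"], alert["category"],
               alert["timestamp"] // window_seconds)
        groups[key].append(alert)
    # One record per group, annotated with how many raw alerts it covers.
    return [group[0] | {"count": len(group)} for group in groups.values()]

# Example: repeated scan alerts from one host collapse into a single record.
raw = [
    {"timestamp": 10, "src_ip": "203.0.113.5", "category": "scan"},
    {"timestamp": 40, "src_ip": "203.0.113.5", "category": "scan"},
    {"timestamp": 70, "src_ip": "203.0.113.5", "category": "scan"},
    {"timestamp": 20, "src_ip": "198.51.100.7", "category": "bruteforce"},
]
agg = aggregate_alerts(raw)
ratio = 1 - len(agg) / len(raw)
print(len(agg), ratio)  # 2 records remain, i.e. a 50% reduction
```

The reduction ratio computed this way is the quantity behind the "up to 85% of alerts can be aggregated" rule of thumb; its value depends heavily on the chosen window length.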
In this paper, we present an empirical study on vulnerability enumeration in computer networks using common network probing and monitoring tools. We conducted active network scans and passive network monitoring to enumerate software resources and their versions present in the network. Further, we used data from third-party sources, such as the Internet-wide scanner Shodan. We correlated the measurements with the list of recent vulnerabilities obtained from the National Vulnerability Database (NVD), using the Common Platform Enumeration (CPE) as a common identifier across both domains. Subsequently, we compared the approaches in terms of network coverage and precision of system identification. Finally, we present a sample list of vulnerabilities observed in our campus network. Our work helps in approximating the number of vulnerabilities and vulnerable hosts in large networks, where it is often impractical or costly to perform vulnerability scans using specialized tools, and in situations where a quick estimate is more important than a thorough analysis.
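The correlation step can be sketched as a join between the observed software inventory and a vulnerability database, both keyed by CPE. The inventory and the tiny in-memory database below are illustrative assumptions (the CVE identifiers shown are real entries for Apache httpd, but a real pipeline would consume the NVD data feeds):

```python
# Hypothetical inventory of software detected in the network, expressed
# as simplified CPE 2.3-style names.
detected = [
    "cpe:2.3:a:openbsd:openssh:7.4",
    "cpe:2.3:a:apache:http_server:2.4.49",
]

# Tiny stand-in for the NVD: vulnerabilities keyed by affected CPE.
nvd_entries = {
    "cpe:2.3:a:apache:http_server:2.4.49": ["CVE-2021-41773"],
    "cpe:2.3:a:apache:http_server:2.4.50": ["CVE-2021-42013"],
}

def correlate(detected_cpes, vulnerability_db):
    """Join the observed inventory with the vulnerability database on the
    CPE identifier, returning only hosts/software with known CVEs."""
    findings = {}
    for cpe in detected_cpes:
        cves = vulnerability_db.get(cpe)
        if cves:
            findings[cpe] = cves
    return findings

print(correlate(detected, nvd_entries))
# → {'cpe:2.3:a:apache:http_server:2.4.49': ['CVE-2021-41773']}
```

In practice the hard part is the identification precision the abstract evaluates: exact-match joins like this one miss vulnerabilities whenever the scanner reports an imprecise or differently formatted CPE.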
Asset identification plays a vital role in building situational awareness. However, the current trend toward communication encryption and the emergence of new protocols are rendering well-known identification methods obsolete, as they lose the data necessary to work correctly. In this paper, we examine the traffic patterns of the TLS protocol and the changes introduced in version 1.3. We train a machine learning model on TLS handshake parameters to identify the operating system of the client device and compare its results with well-known identification methods. We test the proposed method in a large wireless network. Our results show that precise operating system identification can be achieved in the encrypted traffic of mobile devices and notebooks connected to the wireless network.
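The idea of classifying clients from handshake parameters can be sketched with a toy nearest-neighbour classifier. The feature vector (number of offered cipher suites, number of extensions, presence of GREASE values), the training samples, and the OS labels are all illustrative assumptions standing in for the trained model and features described in the abstract:

```python
import math

# Toy labeled handshakes: (offered cipher suites, TLS extensions,
# 1 if GREASE values are present, else 0). Illustrative values only.
training = [
    ((16, 12, 1), "Android"),
    ((17, 13, 1), "Android"),
    ((26, 11, 0), "Windows"),
    ((27, 10, 0), "Windows"),
]

def classify(features):
    """1-nearest-neighbour over handshake parameter vectors; a minimal
    stand-in for a trained machine learning model."""
    return min(training, key=lambda t: math.dist(t[0], features))[1]

print(classify((15, 12, 1)))  # closest to the Android samples → "Android"
```

A key point the abstract makes is that these handshake parameters remain observable in TLS 1.3, unlike the plaintext fields (e.g., HTTP User-Agent) that older identification methods relied on.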
Modern distributed stream processing systems can potentially be applied to real-time network flow processing. However, differences in performance make some systems more suitable than others for this domain. We propose a novel performance benchmark, based on common security analysis algorithms over NetFlow data, to determine the suitability of distributed stream processing systems. Three of the most widely used distributed stream processing systems are benchmarked, and the results are compared with the challenges and requirements of NetFlow data processing. The benchmark results show that each system reached a sufficient data processing speed in a basic deployment scenario with little to no configuration tuning. Unlike existing benchmarks, ours enables the performance of processing small structured messages to be evaluated on any stream processing system.
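The core measurement of such a benchmark is throughput on small structured messages. A single-process micro-benchmark sketch below illustrates the shape of that measurement; the per-source flow counting task, the synthetic record format, and the record count are assumptions for illustration (the actual benchmark targets distributed systems, not one Python process):

```python
import time

def count_flows_per_source(flows):
    """A minimal stand-in for a security analysis task: count flows
    per source address in a batch of small structured flow records."""
    counts = {}
    for flow in flows:
        src = flow["src"]
        counts[src] = counts.get(src, 0) + 1
    return counts

# Synthetic NetFlow-like records: small dictionaries with a few fields.
records = [{"src": f"10.0.0.{i % 254}", "bytes": 512} for i in range(100_000)]

start = time.perf_counter()
counts = count_flows_per_source(records)
elapsed = time.perf_counter() - start
throughput = len(records) / elapsed
print(f"{throughput:,.0f} records/s")
```

The point of benchmarking with records this small is that per-message overhead (serialization, framing, scheduling) dominates, which is exactly the regime NetFlow processing puts a stream processing system into.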
Identification of a communicating device's operating system is a fundamental part of network situational awareness. However, current networks are large and change often, which calls for a system able to continuously monitor the network and handle changes in the identified operating systems. The aim of this paper is to compare the performance of machine learning methods for OS fingerprinting on real-world data in terms of processing time, memory requirements, and the performance measures of accuracy, precision, and recall.
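The three performance measures named above have standard definitions that a comparison like this relies on. A short sketch, using hypothetical predictions from an OS classifier (the labels and values are invented for illustration):

```python
def scores(y_true, y_pred, positive):
    """Accuracy, and per-class precision and recall, computed from
    true vs. predicted labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall

# Hypothetical classifier output on six observations.
truth = ["Windows", "Linux", "Windows", "Linux", "Windows", "Linux"]
pred  = ["Windows", "Windows", "Windows", "Linux", "Linux", "Linux"]
acc, prec, rec = scores(truth, pred, positive="Windows")
print(acc, prec, rec)
```

In a continuous-monitoring setting, these quality measures trade off against the processing time and memory requirements the paper also compares: a model must keep up with the stream, not just classify well.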