The advent of deep learning has introduced disruptive techniques with unprecedented accuracy across many fields and scenarios. Tasks such as detecting regions of interest and semantic features in images and video sequences are now tackled effectively, thanks to the availability of large, adequately annotated public datasets. This paper describes a use-case scenario in which a stack of deep learning models is used for crowd behaviour analysis. It consists of two main modules preceded by a pre-processing step. The first deep learning module integrates YOLOv5 and DeepSORT to detect and track pedestrians in video sequences from CCTV cameras. The second module ingests each pedestrian's spatial coordinates, velocity, and trajectory to cluster groups of people using the Coherent Neighbor Invariance technique. The method acquires video sequences from cameras overlooking pedestrian areas, such as public parks or squares, in order to detect any unusualness in crowd behaviour. By design, the system first checks whether anomalies are underway at the microscale level; it then returns clusters of people at the mesoscale level based on velocity and trajectories. This work is part of the physical behaviour detection module developed for the S4AllCities H2020 project.
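The clustering step described above can be illustrated with a minimal sketch. The following is not the paper's implementation: it is a simplified reading of the Coherent-Neighbor-Invariance idea, in which pedestrians whose K-nearest-neighbour sets persist between frames and whose velocities are strongly correlated are linked together, and the connected components of that graph form the mesoscale groups. All function names, the choice of K, and the correlation threshold are illustrative assumptions.

```python
import numpy as np

def knn_indices(points, k):
    # For each point, the index set of its k nearest neighbours.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return [set(np.argsort(row)[:k]) for row in d]

def coherent_groups(pos_t, pos_t1, k=3, corr_thresh=0.8):
    """Simplified Coherent-Neighbor-Invariance sketch: link pedestrians
    whose k-NN sets are invariant across two frames AND whose velocities
    are highly correlated; return connected components as groups."""
    vel = pos_t1 - pos_t
    nn_t, nn_t1 = knn_indices(pos_t, k), knn_indices(pos_t1, k)
    n = len(pos_t)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in nn_t[i] & nn_t1[i]:           # invariant neighbours only
            denom = np.linalg.norm(vel[i]) * np.linalg.norm(vel[j])
            if denom > 1e-9 and np.dot(vel[i], vel[j]) / denom > corr_thresh:
                adj[i].append(j)
                adj[j].append(i)
    # Connected components of the coherence graph = pedestrian groups.
    groups, seen = [], set()
    for s in range(n):
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            u = stack.pop()
            if u not in comp:
                comp.add(u)
                stack.extend(adj[u])
        seen |= comp
        groups.append(sorted(int(i) for i in comp))
    return groups
```

With two tight groups of pedestrians moving in opposite directions, the function recovers the two groups; in the real pipeline the input positions would come from the YOLOv5/DeepSORT tracks rather than being given directly.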
The advancement of cyber-physical behaviour detection and understanding, in the context of urban safety and security, has been pursued in the S4AllCities project (S4AllCities, 2020). Specifically, several fundamental artificial intelligence and reasoning concepts have been successfully developed and will subsequently be tested in situ at the S4AllCities pilot sites during 2022 (Sabeur et al., 2021). The detection of anomalies in TCP- and UDP-based communication protocols within urban spaces has been investigated, complemented by the detection of unusualness in crowd physical behaviour in the same spaces. The aim is to combine both modes of detection and behaviour understanding (cyber and physical) in order to advance situation awareness, in the context of native knowledge and reasoning, for efficiently maintaining safety and security across the urban space. Native knowledge concerns the evaluated risks and the mitigation measures for responding to potential cyber-physical attacks on the urban space. In this study, the deployed machine learning techniques achieved good performance in classifying cyber and physical behaviour under various scenarios of potential attacks. Our future work is to exercise the performance, evaluation, and validation of our intelligent algorithms using in situ cyber and physical observation scenarios at the three S4AllCities pilot sites in Europe.

References:
S4AllCities (2020). Safe and Secure Smart Spaces for all Cities. H2020 project ID 883522. https://www.s4allcities.eu/project
Sabeur Z., Angelopoulos C.M., Collick L., Chechina N., Cetinkaya D., Bruno A. (2021). Advanced Cyber and Physical Situation Awareness in Urban Smart Spaces. In: Ayaz H., Asgher U., Paletta L. (eds) Advances in Neuroergonomics and Cognitive Engineering. AHFE 2021. Lecture Notes in Networks and Systems, vol. 259, pp. 428-441. Springer, Cham. https://doi.org/10.1007/978-3-030-80285-1_50
Since it began in 2020, the S4AllCities project has progressed rapidly over the last twelve months towards the development of three distinct digital twins that collectively augment intelligence for cyber and physical security monitoring in smart urban spaces: a) the Distributed Edge Computing IoT (DEC-IoT); b) the Malicious Actions Information Detection System (MAIDS); and c) the Augmented Context Management System (ACMS) (S4AllCities, 2020). These three twins are built under a distributed System of Systems (SoS) architecture. Each acquires real-time observations of both cyber and physical spaces while processing data for the critical extraction of knowledge at its level. The extracted knowledge, represented as "events" at each twin's level, is communicated across the S4AllCities SoS using Apache Kafka client/server communication protocols. Each twin advances situation awareness at its own level: intelligent edge processing of observations in the urban space (DEC-IoT); detection of unusualness and understanding of cyber and human behaviour (MAIDS); and augmentation of all awareness for the final release of threat alerts and proposed regulated responses (ACMS). In this paper, we introduce the overall S4AllCities SoS architecture and the high-level functions of the three twins. We then focus on our development of the MAIDS sub-modules and their functions under the de facto Joint Directors of Laboratories (JDL) data fusion framework. The JDL framework efficiently enables the intelligent monitoring, detection, and interpretation of the potential presence of threats and/or attacks, whether of cyber, physical, or combined malicious nature, in urban spaces.
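The abstract states that knowledge is exchanged between the twins as "events" over Apache Kafka but does not publish the message schema. As a minimal sketch of what such an event might look like on the wire, the snippet below serialises an illustrative event to the byte string a Kafka producer would send; the `S4Event` class and all of its field names are assumptions for illustration, not the project's actual schema.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class S4Event:
    # All field names here are illustrative assumptions.
    source_twin: str    # e.g. "DEC-IoT", "MAIDS", or "ACMS"
    event_type: str     # e.g. "crowd_anomaly" or "tcp_anomaly"
    timestamp: str      # ISO-8601 UTC timestamp
    confidence: float   # detector confidence in [0, 1]
    payload: dict       # twin-specific details

    def to_kafka_value(self) -> bytes:
        # Kafka message values are byte strings; JSON is one common encoding.
        return json.dumps(asdict(self)).encode("utf-8")

evt = S4Event(
    source_twin="MAIDS",
    event_type="crowd_anomaly",
    timestamp="2021-06-01T12:00:00+00:00",
    confidence=0.87,
    payload={"camera_id": "cam-07", "cluster_size": 12},
)
value = evt.to_kafka_value()
```

In a deployment, `value` would be handed to a Kafka producer (e.g. `producer.send(topic, value)` with a client library), and the consuming twin would reverse the process with `json.loads`.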
The well-known Endsley model for the cognitive advancement of situational awareness is mapped onto the JDL framework in the context of critical decision support for cyber-physical surveillance in urban spaces. The JDL framework is better suited to big data processing, machine learning, context knowledge modelling, and augmented situational awareness of the cyber-physical space.