Background: Living systems are associated with social networks — networks made up of nodes, some of which are more important in various respects than others. While different quantitative measures labeled "centralities" have been used in the network-analysis community to identify influential nodes in a network, it is debatable how valid these centrality measures actually are. In other words, the research question that remains unanswered is: how do these measures perform in the real world? For example, if a centrality measure identifies a particular node as important, is the node actually important?

Purpose: The goal of this paper is not simply to perform a traditional social network analysis but to evaluate different centrality measures through an empirical study of how network centralities correlate with data from published multidisciplinary network data sets.

Method: We take standard published network data sets, using a random network to establish a baseline. The data sets include Zachary's Karate Club network, a dolphin social network, and the neural network of the nematode Caenorhabditis elegans. Each data set was analyzed in terms of different centrality measures and compared with existing knowledge from the associated published articles to assess the role of each centrality measure in identifying influential nodes.

Results: Our empirical analysis demonstrates that, in the chosen network data sets, nodes with a high Closeness Centrality also had a high Eccentricity Centrality. Likewise, a high Degree Centrality correlated closely with a high Eigenvector Centrality, whereas Betweenness Centrality varied with network topology and did not demonstrate any noticeable pattern. In terms of identifying key nodes, we found that Eigenvector and Eccentricity Centralities identified important nodes better than the other centrality measures.
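The kind of centrality comparison described above can be reproduced on one of the cited data sets. The sketch below (an illustration, not the authors' own code) computes the five centrality measures on Zachary's Karate Club network, which ships with NetworkX; eccentricity centrality is taken here as the reciprocal of graph eccentricity, one common convention.

```python
# Sketch: comparing centrality measures on Zachary's Karate Club network.
# Uses NetworkX; the reciprocal-eccentricity convention is an assumption.
import networkx as nx

G = nx.karate_club_graph()

degree = nx.degree_centrality(G)
closeness = nx.closeness_centrality(G)
betweenness = nx.betweenness_centrality(G)
eigenvector = nx.eigenvector_centrality(G)
# Eccentricity centrality: nodes with smaller eccentricity rank higher.
eccentricity = {v: 1.0 / e for v, e in nx.eccentricity(G).items()}

for name, cent in [("degree", degree), ("closeness", closeness),
                   ("betweenness", betweenness), ("eigenvector", eigenvector),
                   ("eccentricity", eccentricity)]:
    top = max(cent, key=cent.get)
    print(f"{name:12s} top node: {top} ({cent[top]:.3f})")
```

Ranking the nodes under each measure and inspecting the overlap of the top-ranked sets is one simple way to observe the correlations the abstract reports (e.g., degree vs. eigenvector).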
We live in a time where electronic gadgets and integrated sensors are all around us: from versatile smartphones and tablets to portable PCs, and from indoor temperature regulators to microwave ovens. We live in a new world, a world of "smart", where intelligence and connectivity are added to every conceivable object. The vision of the internet of things (IoT) by Ashton (2009) appears to have manifested itself, albeit in unexpected ways. This emergence of the IoT in our everyday lives has numerous implications, resulting in a very different environment and society. Considering that the IoT concept is itself quite new, it is understandable that it is difficult to model. Researchers from the communication systems area often focus primarily …

Abstract: Sensors, coupled with transceivers, have quickly evolved from technologies purely confined to laboratory test beds to workable solutions used across the globe. These mobile and connected devices form the nuts and bolts required to fulfill the vision of the so-called internet of things (IoT). This idea has evolved as a result of the proliferation of electronic gadgets fitted with sensors, often uniquely identifiable (for example, through technological solutions such as Radio Frequency Identifiers). While there is a growing need for comprehensive modeling paradigms as well as example case studies for the IoT, there is currently no standard methodology for modeling such real-world, complex IoT-based scenarios. Here, using a combination of complex-networks-based and agent-based modeling approaches, we present a novel approach to modeling the IoT. Specifically, the proposed approach uses the Cognitive Agent-Based Computing (CABC) framework to simulate complex IoT networks. We demonstrate modeling of several standard complex network topologies such as lattice, random, small-world, and scale-free networks.
To further demonstrate the effectiveness of the proposed approach, we also present a case study and a novel algorithm for autonomous monitoring of power consumption in networked IoT devices. We also discuss and compare the presented approach with previous approaches to modeling. Extensive simulation experiments using several network configurations demonstrate the effectiveness and viability of the proposed approach.
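The four standard topologies named above can be generated with off-the-shelf graph generators. The sketch below is illustrative only (the parameter values are assumptions, not taken from the paper) and uses NetworkX rather than the CABC framework the authors employ.

```python
# Sketch: the four standard complex-network topologies the paper models.
# Parameter values (n, p, k, m) are illustrative assumptions.
import networkx as nx

n = 100
lattice = nx.grid_2d_graph(10, 10)                          # 2-D lattice
random_net = nx.erdos_renyi_graph(n, p=0.05, seed=1)        # Erdos-Renyi random
small_world = nx.watts_strogatz_graph(n, k=4, p=0.1, seed=1)  # Watts-Strogatz
scale_free = nx.barabasi_albert_graph(n, m=2, seed=1)       # Barabasi-Albert

for name, g in [("lattice", lattice), ("random", random_net),
                ("small-world", small_world), ("scale-free", scale_free)]:
    print(f"{name:12s} nodes={g.number_of_nodes()} edges={g.number_of_edges()}")
```

In an agent-based setting such as CABC, each node of one of these graphs would typically host an agent (e.g., an IoT device) whose neighbors are defined by the chosen topology.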
The number of client-side attacks is increasing day by day. These attacks are launched using various methods such as phishing, drive-by downloads, click fraud, social engineering, scareware, and ransomware. To gain more advantage with less effort and time, attackers focus on clients rather than servers, which are better secured than clients. This makes clients an easy target for attackers on the Internet. The security community has created a number of systems and tools with various functions for detecting client-side attacks. The discovery of malicious servers that launch client-side attacks can be characterized in two ways: first, passive detection, which is often signature-based; second, active detection, often based on dynamic malware analysis. Current systems and tools focus more on identifying malicious servers than on protecting clients from those servers. In this paper, we propose a solution for the detection and prevention of malicious servers that uses the Bro Intrusion Detection System (IDS) and the VirusTotal API 2.0. A detected malicious link is then blocked at the gateway.
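The VirusTotal lookup step of such a pipeline can be sketched as follows. This is a hedged illustration, not the authors' implementation: the helper names are hypothetical, and only the well-documented v2 `url/report` endpoint (parameters `apikey` and `resource`; response fields `response_code` and `positives`) is assumed.

```python
# Hypothetical sketch of the VirusTotal API 2.0 lookup used to classify a URL.
# Helper names are illustrative; no request is actually sent here.
import urllib.parse

VT_URL_REPORT = "https://www.virustotal.com/vtapi/v2/url/report"

def build_report_request(api_key: str, url: str) -> str:
    """Return the full GET request URL for a v2 url/report lookup."""
    params = urllib.parse.urlencode({"apikey": api_key, "resource": url})
    return f"{VT_URL_REPORT}?{params}"

def is_malicious(report: dict, threshold: int = 1) -> bool:
    """Flag a URL when the report exists and at least `threshold` engines hit.

    `response_code == 1` means VirusTotal has a report for the resource;
    `positives` is the number of engines that flagged it.
    """
    return report.get("response_code") == 1 and report.get("positives", 0) >= threshold
```

In the described setup, a positive verdict would then trigger the blocking rule at the gateway (e.g., via the IDS policy layer).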
The risk of malware has increased drastically in recent years due to advances in the IT industry, which have also increased the need for malware analysis and prevention. Hackers inject malicious code through malicious applications. In this research, a framework is proposed to identify malicious Android applications based on repacked malicious code. Sensitive features of Android applications are extracted from their source code. These extracted features are compared with existing malware signatures to identify repacked malicious Android applications. Experiments are performed using 3490 Android-based malware samples belonging to 21 different malware families. A threshold value for malware categorization is defined using fuzzy logic: if the fuzzy comparison match is greater than 40%, the application is malicious; if the match is greater than 10% and less than 40%, the application is suspicious; otherwise it is benign. Furthermore, the proposed framework detects around 74% of the repacked malware, compared with other similar approaches.
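The fuzzy-threshold categorization above reduces to a simple decision rule once the match percentage is computed. The sketch below encodes only the thresholds stated in the abstract; how the match percentage itself is derived is not specified there, and the treatment of the exact 40% boundary is an assumption.

```python
# Sketch of the paper's threshold rule: >40% malicious,
# >10% and <40% suspicious, otherwise benign.
def categorize(match_percent: float) -> str:
    """Map a fuzzy signature-match percentage to a verdict."""
    if match_percent > 40:
        return "malicious"
    if match_percent > 10:
        # Boundary handling at exactly 40% is an assumption; the abstract
        # leaves it unspecified.
        return "suspicious"
    return "benign"
```

For example, `categorize(55.0)` yields `"malicious"` and `categorize(25.0)` yields `"suspicious"`.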