Deploying new optimized routing rules on routers is challenging, owing to the tight coupling of the data and control planes and the lack of global topological information. Because traditional Internet Protocol (IP) networks are distributed, routing rules and policies are disseminated in a decentralized manner, which can cause looping during link failures. Software-defined networking (SDN) makes the network programmable from a central point: data-plane devices only forward packets, while the complexity of the control plane is delegated to the controller, which installs rules and policies from a central location. With this central control, link-failure identification and restoration become more tractable, because the controller has a global view of the network topology; likewise, new optimized rules for link recovery can be deployed from the central point. Herein, we review several schemes for link-failure recovery that leverage SDN, while delineating the drawbacks of traditional networking. We also investigate the open research questions posed by the SDN architecture. Finally, this paper analyzes proactive and reactive recovery schemes in SDN using the OpenDaylight controller and Mininet, simulating application scenarios from tactical and data-center networks.
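The proactive/reactive distinction above can be illustrated with a minimal sketch, assuming a toy topology and plain BFS path computation (the switch names and topology are hypothetical, not from the paper's testbed): a proactive controller pre-installs a backup path disjoint from the primary before any failure, whereas a reactive controller recomputes a path only after the failure is reported.

```python
from collections import deque

# Toy SDN data plane as an adjacency list (hypothetical example).
TOPOLOGY = {
    "s1": ["s2", "s3"],
    "s2": ["s1", "s4"],
    "s3": ["s1", "s4"],
    "s4": ["s2", "s3"],
}

def shortest_path(topo, src, dst, failed=frozenset()):
    """BFS shortest path, skipping failed (undirected) links."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nxt in topo[node]:
            link = frozenset((node, nxt))
            if nxt not in seen and link not in failed:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Proactive: pre-install a backup that avoids every primary link.
primary = shortest_path(TOPOLOGY, "s1", "s4")
primary_links = {frozenset(p) for p in zip(primary, primary[1:])}
backup = shortest_path(TOPOLOGY, "s1", "s4", failed=primary_links)

# Reactive: recompute only after a link failure is reported.
failed = {frozenset(("s2", "s4"))}
recovered = shortest_path(TOPOLOGY, "s1", "s4", failed=failed)
```

Proactive recovery trades extra flow-table entries for near-instant failover; reactive recovery saves table space but pays the controller round-trip on each failure.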
Summary To address the challenging needs of high-performance big data processing, parallel-distributed frameworks such as Hadoop are used extensively. In heterogeneous environments, however, the performance of Hadoop clusters is below par, primarily because blocks are allocated equally to all nodes without regard to the differences in the capabilities of individual nodes, which reduces data locality. Thus, a new data-placement scheme that enhances data locality is required for Hadoop in heterogeneous environments. This article proposes a data-placement scheme that preserves the same degree of data locality in heterogeneous environments as standard Hadoop, with only a small amount of replicated data: only those blocks with the highest probability of being accessed remotely are selected and replicated. Experimental results indicate that the proposed scheme incurs only a 20% disk-space overhead while achieving virtually the same data-locality ratio as standard Hadoop, whose replication factor of three incurs a 200% disk-space overhead.
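The selection idea can be sketched as follows. This is a hedged illustration, not the paper's actual algorithm: it assumes remote-access probability grows as the hosting node's share of total cluster capability shrinks, and replicates only the top-ranked blocks within a fixed budget. All names and numbers are made up.

```python
# Hypothetical sketch of selective replication: replicate only the blocks
# most likely to be read remotely, instead of replicating everything.

def remote_access_probability(node_capacity, total_capacity):
    """Chance a task reading this block is scheduled on some other node,
    taken as proportional to the other nodes' share of capability."""
    return 1.0 - node_capacity / total_capacity

def select_blocks_to_replicate(placement, capacities, budget):
    """placement: {block_id: node}. Return the `budget` blocks with the
    highest remote-access probability (those on the weakest nodes)."""
    total = sum(capacities.values())
    ranked = sorted(
        placement,
        key=lambda b: remote_access_probability(capacities[placement[b]], total),
        reverse=True,
    )
    return ranked[:budget]

capacities = {"fast": 8, "medium": 4, "slow": 2}   # relative task slots
placement = {"b1": "fast", "b2": "slow", "b3": "medium", "b4": "slow"}

# A small disk budget: replicate only 2 of the 4 blocks.
to_replicate = select_blocks_to_replicate(placement, capacities, budget=2)
```

Under this toy model, the blocks on the slowest node rank highest, matching the intuition that tasks are rarely scheduled there and must read those blocks over the network.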
Summary In a cloud-scale publish/subscribe messaging system, it is difficult to partition subscription data among several servers. Without a sophisticated scheme and system architecture, the messaging system would either waste resources or fail to deliver messages on time. In this study, we propose DRDA, a dynamic replication-degree adjustment technique for efficient message delivery. DRDA keeps the number of subscription replicas at a reasonable level by monitoring server status, basing its decisions on the current number of subscription replicas and the frequency of event dissemination. To verify the effectiveness of the proposed scheme and system architecture, we built a prototype of a content-based publish/subscribe system that dynamically adjusts the number of replicas among brokers, and we compared the load balance, resource overhead, and performance of the system with and without DRDA. The experimental results show that DRDA outperforms the other approaches under various parameter configurations. The prototype code is publicly available in a GitHub repository.
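A minimal sketch of a replication-degree adjustment loop, under loudly stated assumptions: the thresholds, rate model, and function names below are illustrative inventions, not DRDA's actual parameters. The idea is only that frequently matched ("hot") subscriptions gain replicas to spread matching load across brokers, while rarely matched ("cold") ones shed replicas to save memory.

```python
# Hedged sketch of dynamic replication-degree adjustment; all names and
# thresholds are hypothetical, not taken from the DRDA paper.

def adjust_replication(current_degree, event_rate, match_rate,
                       min_degree=1, max_degree=5,
                       hot_threshold=100.0, cold_threshold=10.0):
    """Raise the replication degree for hot subscriptions (matched often),
    lower it for cold ones. event_rate is events/second observed by the
    monitor; match_rate is the fraction of events matching the subscription."""
    matched_per_sec = event_rate * match_rate
    if matched_per_sec > hot_threshold and current_degree < max_degree:
        return current_degree + 1
    if matched_per_sec < cold_threshold and current_degree > min_degree:
        return current_degree - 1
    return current_degree

# A hot subscription gains a replica; a cold one sheds one.
hot = adjust_replication(current_degree=2, event_rate=500.0, match_rate=0.5)
cold = adjust_replication(current_degree=3, event_rate=20.0, match_rate=0.1)
```

The min/max bounds keep the degree from collapsing to zero or growing without limit, mirroring the abstract's goal of maintaining replicas "at a reasonable level".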
This paper proposes a deep-model-based entity alignment method for edge-specific knowledge graphs (KGs) to resolve the semantic heterogeneity among edge systems' data. To this end, the paper first analyzes edge-specific KGs to identify their unique characteristics, and the alignment method is designed around them. The proposed method performs entity alignment on a graph that is data-centric rather than topological, reflecting the fact that edge-specific KGs are composed mainly of instance entities rather than conceptual entities. In addition, two deep models are applied for learning: BERT (bidirectional encoder representations from transformers) for the concept entities and a GAN (generative adversarial network) for the instance entities. Because these deep models are neural networks that humans cannot interpret, they also help keep the data on the edge systems secure. The two separately trained models are integrated using a graph-based deep learning model, a graph convolutional network (GCN), and the integrated model is then used to align the entities in the edge-specific KGs. To demonstrate the superiority of the proposed method, we compare it with state-of-the-art entity alignment methods on two experimental datasets built from DBpedia, YAGO, and Wikidata. Under the Hits@k, mean rank (MR), and mean reciprocal rank (MRR) metrics, the proposed method shows the best predictive and generalization performance for KG entity alignment.
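The evaluation step can be made concrete with a small sketch of embedding-based alignment and the Hits@k/MRR metrics the abstract names. The embeddings and entity names below are toy values standing in for the learned BERT/GAN/GCN representations; only the metric definitions (Hits@k and MRR over the rank of the gold counterpart) are standard.

```python
import math

# Toy sketch: align entities across two KGs by nearest-neighbour cosine
# similarity over (pretend) learned embeddings, then score with Hits@1
# and MRR. The vectors here are made up for illustration.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rank_candidates(query, candidates):
    """Candidate ids sorted by descending similarity to the query vector."""
    return sorted(candidates, key=lambda c: cosine(query, candidates[c]), reverse=True)

def hits_at_k_and_mrr(queries, candidates, gold, k=1):
    """Hits@k: fraction of queries whose gold match ranks in the top k.
    MRR: mean of 1/rank of the gold match."""
    hits, rr = 0, 0.0
    for qid, emb in queries.items():
        ranking = rank_candidates(emb, candidates)
        pos = ranking.index(gold[qid]) + 1
        hits += pos <= k
        rr += 1.0 / pos
    n = len(queries)
    return hits / n, rr / n

kg1 = {"Seoul": [1.0, 0.1], "Tokyo": [0.1, 1.0]}
kg2 = {"seoul_city": [0.9, 0.2], "tokyo_city": [0.2, 0.9]}
gold = {"Seoul": "seoul_city", "Tokyo": "tokyo_city"}

hits1, mrr = hits_at_k_and_mrr(kg1, kg2, gold, k=1)
```

In this toy setup every query's gold counterpart ranks first, so both metrics equal 1.0; real alignment benchmarks report Hits@1/10 and MRR well below that.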