The proliferation of wireless networks over the last decade has been one of the main drivers for the increased deployment of mobile ad hoc networks (MANETs) on the battlefield. It is not practical to build a fixed wired network infrastructure on a battlefield, but the mobility of soldiers makes it possible to create a mobile wireless network; a MANET is justified precisely where no fixed infrastructure exists. Group communication applications and multicasting over a MANET can greatly benefit environments such as military and emergency operations, and in such applications the ad hoc network must be reliable and secure.

In recent years, the universal generating function technique (UGFT) has been applied to determine network reliability. The UGFT is based on an approach closely related to the generating functions widely used in probability theory. This work is devoted to assessing MANET reliability using the UGFT. The reliability of a MANET is defined as the probability that a message transmitted from the source can pass successfully through the MANET and reach the target without delay. Two kinds of UGFs are discussed in this work, and an algorithm is proposed to evaluate the system reliability. The UGFT is illustrated with a case study in a battlefield environment.

A multicast (MC) transmits a packet to a group of mobile nodes identified by a single destination MC address and is hence intended for group-oriented computing. The multicast service is employed in collaborative work, for example rescue operations, battlefields, and video conferencing. An MC packet is typically delivered to all members of its destination group with the same reliability as regular unicast packets. Multicast can reduce communication costs and delivery delay.
In addition, it can provide a robust communication mechanism when a receiver's individual address changes. Network reliability is an important part of planning, designing, and controlling a network, and many approaches exist for evaluating it [1-3]. Chaturvedi and Misra [4] proposed a hybrid method to evaluate the reliability of complex networks. Ahmad and Omid [5] calculated all-terminal network reliability using a recursive truncation algorithm. Some authors [6-9] have evaluated
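The universal generating function mentioned above represents each component as a polynomial mapping performance levels to probabilities, and combines components through a composition operator defined by the system's structure function. As a minimal sketch (not the paper's algorithm; component capacities, probabilities, and the series/parallel structure here are illustrative assumptions), the idea can be shown in Python:

```python
from collections import defaultdict
from itertools import product

def compose(ugfs, structure):
    """Combine component UGFs (dicts: performance -> probability)
    using a structure function over component performances."""
    system = defaultdict(float)
    for states in product(*(u.items() for u in ugfs)):
        perfs = [g for g, _ in states]
        prob = 1.0
        for _, p in states:
            prob *= p
        system[structure(perfs)] += prob  # collect like z^g terms
    return dict(system)

def reliability(ugf, demand):
    """P(system performance >= demand)."""
    return sum(p for g, p in ugf.items() if g >= demand)

# Hypothetical example: a link is up (capacity 10) with prob. 0.9.
link = {10: 0.9, 0: 0.1}
# Two links in series form a path: capacity = min of the links.
path = compose([link, link], min)
# Two such paths in parallel: capacities add.
net = compose([path, path], sum)
print(reliability(net, 10))
```

The composition operator is generic: swapping `min`/`sum` for other structure functions covers other topologies, which is what makes the UGFT convenient for network reliability evaluation.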
The scalability of similarity joins is threatened by data skewness, a pervasive characteristic of scientific data. Skewness produces an uneven distribution of attribute values, which can cause a severe load imbalance problem, and the skew is amplified when database join operations are applied to such datasets. The join algorithms developed to date are highly skew sensitive. This paper presents a new approach for handling data skewness in a character-based string similarity join using the MapReduce framework. To the best of our knowledge, no prior work handles data skewness in character-based string similarity joins, although work exists for set-based string similarity joins. The proposed work is divided into three stages, each further divided into mapper and reducer phases dedicated to a specific task. The first stage finds the lengths of the strings in the dataset. In the second stage, the MR-Pass Join framework is used to generate valid candidate pairs. The third stage, itself divided into four MapReduce phases, incorporates MRFA concepts into the string similarity join; the resulting algorithm, named MRFA-SSJ (MapReduce Frequency Adaptive – String Similarity Join), is proposed to handle skewness in the string similarity join. The experiments were implemented on three datasets, namely DBLP, a query log, and a real dataset of IP addresses and cookies, using the Hadoop framework on a 15-node cluster, and the Zipf distribution law is used to analyse the skew factor. The proposed algorithm was compared with three known algorithms; all of these fail when data is highly skewed, whereas the proposed method handles highly skewed data without any problem.
A comparison among the existing and proposed techniques is also presented. The existing techniques survive only up to a Zipf factor of 0.5, whereas the proposed algorithm survives up to a Zipf factor of 1. The proposed algorithm is therefore skew insensitive and ensures scalability with a reasonable query-processing time for string similarity database joins, while also ensuring an even distribution of attributes.
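The candidate-generation and verification idea behind a character-based string similarity join can be sketched in a few lines. The following is a minimal single-machine analogue (not the MRFA-SSJ or MR-Pass Join implementation): the "map" step buckets one relation by string length, and the "reduce" step probes only the length buckets within the edit-distance threshold τ, since |len(r) − len(s)| ≤ τ is a necessary condition for edit_distance(r, s) ≤ τ. The sample strings are illustrative assumptions.

```python
from collections import defaultdict

def edit_distance(a, b):
    # standard dynamic-programming (Levenshtein) edit distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def similarity_join(R, S, tau):
    """Length-filtered similarity join with threshold tau."""
    # "map" phase: bucket S by string length
    by_len = defaultdict(list)
    for s in S:
        by_len[len(s)].append(s)
    # "reduce" phase: probe only the tau-neighbouring length buckets
    out = []
    for r in R:
        for length in range(len(r) - tau, len(r) + tau + 1):
            for s in by_len.get(length, []):
                if edit_distance(r, s) <= tau:
                    out.append((r, s))
    return out

print(similarity_join(["hadoop", "skew"], ["hadop", "skews", "join"], 1))
```

Under skew, a single popular length bucket can overload one reducer, which is exactly the load-imbalance problem the frequency-adaptive partitioning in MRFA-SSJ is designed to address.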