The set covering problem (SCP) is a well-known, classic NP-hard combinatorial problem with practical applications in many fields. Many heuristic, metaheuristic, greedy, and approximation approaches have been proposed in recent years to optimize the SCP objective function. Within swarm intelligence, particle swarm optimization is a nature-inspired optimization technique for continuous problems, while the well-known discrete particle swarm optimization (DPSO) handles discrete problems. A recent DPSO variant aimed at finding the best solutions for discrete problems is jumping particle swarm optimization (JPSO), in which the improved solution results from particles being drawn toward an attractor. In this paper, a new JPSO-based approach is proposed to solve the SCP. The proposed approach works in three phases: selecting the attractor, refining the feasible solution given by the attractor in order to reach optimality, and removing redundancy from the solution. The approach has been tested on benchmark SCP instances and compared with the best-known methods. Computational results show that it produces high-quality solutions in very short running times compared to other algorithms.
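To make the SCP and the redundancy-removal idea above concrete, here is a minimal sketch of a cost-weighted greedy cover followed by a redundancy-removal pass. This is not the paper's JPSO algorithm; the function names, data, and structure are illustrative assumptions only.

```python
# Sketch: greedy weighted set cover plus a redundancy-removal phase.
# Illustrative only -- NOT the JPSO method described in the abstract.

def greedy_cover(universe, subsets, costs):
    """Pick subsets greedily by cost per newly covered element."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # choose the subset with the lowest cost per uncovered element it covers
        best = min(
            (i for i in range(len(subsets)) if subsets[i] & uncovered),
            key=lambda i: costs[i] / len(subsets[i] & uncovered),
        )
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

def remove_redundancy(chosen, universe, subsets):
    """Drop any chosen subset whose elements are all covered by the rest."""
    result = list(chosen)
    for i in sorted(chosen, reverse=True):
        rest = (set().union(*(subsets[j] for j in result if j != i))
                if len(result) > 1 else set())
        if set(universe) <= rest:
            result.remove(i)
    return result

universe = {1, 2, 3, 4, 5}
subsets = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}, {1, 5}]
costs = [5, 10, 3, 1, 2]
sol = remove_redundancy(greedy_cover(universe, subsets, costs),
                        universe, subsets)
assert set().union(*(subsets[i] for i in sol)) == universe
```

The redundancy pass mirrors the third phase named in the abstract: a feasible cover can contain subsets whose every element is already covered elsewhere, and pruning them lowers cost without losing feasibility.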
Web service recommendation has become a critical issue as services become increasingly prevalent on the Web. Some existing methods focus on content-matching techniques, while others rely on QoS measurement. However, the service ecosystem evolves over time, with services being published, thriving, and perishing. Few existing methods consider or exploit this evolution of the service ecosystem for service recommendation. This paper uses a probabilistic approach to predict the popularity of services and thereby improve recommendation performance. A method is presented that extracts service evolution patterns by exploiting latent Dirichlet allocation (LDA) and time series prediction. A time-aware service recommendation framework is established for mashup creation that conducts a joint analysis of temporal information, content description, and historical mashup-service usage in an evolving service ecosystem. Experiments on a real-world service repository, ProgrammableWeb.com, show that by considering temporal information the proposed approach achieves higher accuracy than conventional collaborative filtering and content-matching methods. Keywords: Service recommendation, Mashup, ProgrammableWeb, Service ecosystem, Probabilistic prediction.
I. INTRODUCTION With the wide adoption of service-oriented architecture and cloud computing, the number of web services (nowadays usually web APIs) published on the Web has been growing rapidly [1]. Mashup, a Web application created through service composition, has become a popular way to reuse existing services and shorten the software development cycle [2]. As a result, several web service ecosystems have emerged recently, continuously accumulating web services. Notwithstanding such promising facts, creating a mashup may take an inexperienced developer a great deal of time searching the sea of available services in the repositories for appropriate service components [3]. Therefore, service discovery and recommendation approaches are essential to help mashup developers find desired services [4]. Most existing service recommendation methods rely on content matching, primarily focusing on keyword search [6], [7] and semantics-based approaches [8]. However, keyword search is usually inefficient, while semantics-based approaches are expensive to build in practice. A probabilistic approach for service discovery based on latent Dirichlet allocation (LDA) is proposed in [9] to address this challenge. It extracts features from WSDL documents and uses the LDA model to characterize the latent topics between services and user queries. In contrast to these methods that consider content description, others focus on helping developers find services that meet expected quality-of-service (QoS) criteria. Non-functional properties of services under consideration include reliability, acce...
Data analysis is an important functionality in cloud computing, allowing a huge amount of data to be processed over very large clusters. Hadoop is a software framework for large-scale data analysis. It provides the Hadoop Distributed File System (HDFS), and the analysis and transformation of very large data sets is performed using the MapReduce paradigm. MapReduce is a popular way to handle data in the cloud environment due to its excellent scalability and good fault tolerance, and it is a programming model widely used for processing large data sets; HDFS is designed to stream those data sets. The Hadoop MapReduce system was often unfair in its allocation, and a dramatic improvement is achieved through the proposed Mapper Reducer system. The proposed Mapper Reducer function, using a mean-shift-clustering-based algorithm, allows us to analyze the data set and achieve better job-execution performance by using an optimal configuration of mappers and reducers based on the size of the data sets; it also helps users view job status and locate errors in scheduled jobs. This efficiently utilizes the performance-tuning properties of optimized scheduled jobs. As a result, the efficiency of the system leads to substantially lower system cost, energy usage, and management complexity, and increases the performance of the system.
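The MapReduce model described above can be sketched in a few lines of plain Python: a map phase emits (key, value) pairs, a shuffle groups pairs by key, and a reduce phase aggregates each group. This is a single-process illustration of the programming model, not Hadoop's distributed implementation or the paper's Mapper Reducer system.

```python
# Toy word count in the MapReduce style: map -> shuffle -> reduce.
# Single-process illustration of the programming model only.
from collections import defaultdict

def map_phase(lines):
    """Map: emit (word, 1) for every word in every input line."""
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    """Shuffle: group emitted values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate each key's values (here, by summing)."""
    return {key: sum(values) for key, values in groups.items()}

lines = ["hadoop streams large data sets",
         "MapReduce processes large data"]
counts = reduce_phase(shuffle(map_phase(lines)))
```

In Hadoop, the same three roles are distributed: mappers run near the data blocks in HDFS, the framework performs the shuffle across the network, and reducers aggregate per key, which is why the mapper/reducer configuration discussed above matters for performance.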
Abstract. Harmonic mean labeling was introduced by Sandhya et al. We extend this notion to k-super harmonic mean labeling. In this paper, we investigate the k-super harmonic mean labeling of some snake graphs.
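For readers unfamiliar with the base notion, one common statement of harmonic mean labeling (which may differ in detail from the authors' exact formulation) is the following:

```latex
% One common formulation of harmonic mean labeling (stated here as
% background; the authors' precise definition may differ in detail).
A graph $G(V,E)$ with $q$ edges is a \emph{harmonic mean graph} if there
exists an injective function $f \colon V \to \{1, 2, \ldots, q+1\}$ such
that the induced edge labels
\[
  f^{*}(uv) \;=\; \left\lceil \frac{2\,f(u)\,f(v)}{f(u) + f(v)} \right\rceil
  \quad\text{or}\quad
  \left\lfloor \frac{2\,f(u)\,f(v)}{f(u) + f(v)} \right\rfloor
\]
are all distinct. The function $f$ is then called a
\emph{harmonic mean labeling} of $G$.
```

The edge label is the (rounded) harmonic mean of the two endpoint labels, which is where the labeling takes its name; the k-super variant studied in the paper extends this scheme.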