As an influential technique in swarm evolutionary computing (SEC), the particle swarm optimization (PSO) algorithm has attracted extensive attention. However, how to rationally and effectively use the population's resources to balance exploration and exploitation remains a key open problem. In this paper, we propose a novel PSO algorithm called Chaos Adaptive Particle Swarm Optimization (CAPSO), which adaptively adjusts the inertia weight w and the acceleration coefficients c1 and c2, and introduces a control factor γ based on chaos theory to adaptively adjust the range of the chaotic search. This gives the algorithm favorable adaptability: the particles are less likely to miss the global optimum and have a high probability of escaping local optima. To verify the stability, convergence speed, and accuracy of CAPSO, we conduct extensive experiments on 6 benchmark functions. In addition, to further verify the effectiveness and scalability of CAPSO, comparative experiments are carried out on the CEC2013 test suite. The results show that CAPSO outperforms its peer algorithms and achieves satisfactory performance.
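The abstract does not give the exact CAPSO update rules, but the mechanisms it describes, annealing w, c1, and c2 over the run and searching chaotically around the global best within a radius controlled by γ, can be sketched as follows. The linear schedules and the logistic chaotic map below are illustrative assumptions, not the authors' formulas:

```python
import numpy as np

def adaptive_params(t, t_max, w_max=0.9, w_min=0.4):
    """Anneal inertia weight w and acceleration coefficients c1, c2.

    Illustrative linear schedules: w and the cognitive term c1 shrink
    (less exploration), while the social term c2 grows (more exploitation).
    """
    frac = t / t_max
    w = w_max - (w_max - w_min) * frac
    c1 = 2.5 - 2.0 * frac
    c2 = 0.5 + 2.0 * frac
    return w, c1, c2

def chaotic_search(f, gbest, radius, steps=20, z0=0.7):
    """Logistic-map local search around gbest within +/- radius.

    A control factor gamma (as in the paper) would shrink `radius` each
    generation, e.g. radius = gamma * radius.
    """
    z = z0
    best, f_best = gbest.copy(), f(gbest)
    for _ in range(steps):
        z = 4.0 * z * (1.0 - z)                  # chaotic sequence in (0, 1)
        cand = gbest + radius * (2.0 * z - 1.0)  # perturb into [-r, +r]
        f_cand = f(cand)
        if f_cand < f_best:                      # keep improving candidates
            best, f_best = cand, f_cand
    return best
```

Shrinking the chaotic radius over time is what lets late iterations refine the global best without losing the early ability to jump out of local optima.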
To address the slow convergence and inefficiency of existing adaptive PID controllers, we propose a new adaptive PID controller based on the asynchronous advantage actor-critic (A3C) algorithm. First, the controller trains multiple actor-critic agents in parallel, exploiting the multi-threaded asynchronous learning of the A3C architecture. Second, to achieve the best control effect, each agent uses a multilayer neural network to approximate the policy and value functions and to search for the best parameter-tuning strategy in a continuous action space. Simulation results indicate that the proposed controller achieves faster convergence and stronger adaptability than conventional controllers.
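For context, the discrete-time PID law whose gains such an agent would tune can be sketched as below. The incremental form and the gain names kp, ki, kd are standard; how the A3C actor maps plant states to gains is not specified in the abstract:

```python
class PID:
    """Discrete-time PID controller; an RL agent would set kp, ki, kd."""

    def __init__(self, kp, ki, kd, dt=0.01):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        """Return the control output u(t) for the current tracking error."""
        self.integral += error * self.dt                       # I term state
        derivative = (error - self.prev_error) / self.dt       # D term
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

An A3C-style tuner would periodically replace (kp, ki, kd) with the actor network's continuous action for the current plant state, with the critic estimating the value of that tuning decision.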
The traditional Internet has encountered a bottleneck in allocating network resources to emerging technologies. Network virtualization (NV) is a promising future network architecture, and the virtual network embedding (VNE) algorithms that support it show great potential for solving resource allocation problems. Combined with efficient machine learning (ML) algorithms, a neural network model that closely reflects the substrate network environment can be constructed to train a reinforcement learning agent. This paper proposes a two-stage VNE algorithm based on deep reinforcement learning (TS-DRL-VNE) to address the tendency of existing heuristic algorithms to converge to local optima. Because existing ML-based VNE algorithms often neglect the importance of the substrate network representation and the training mode, a DRL VNE algorithm based on a full attribute matrix (FAM-DRL-VNE) is proposed. Because existing VNE algorithms often ignore changes in underlying resources between virtual network requests, a DRL VNE algorithm based on matrix perturbation theory (MPT-DRL-VNE) is proposed. Experimental results show that these algorithms outperform the compared baselines.
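The "full attribute matrix" idea, stacking the substrate network's per-node attributes into one normalized feature matrix that the DRL agent observes, can be sketched as follows. The particular attributes (CPU, bandwidth, degree) and the min-max normalization are illustrative assumptions, not the paper's exact representation:

```python
import numpy as np

def attribute_matrix(cpu, bandwidth, degree):
    """Stack per-node attributes column-wise; min-max normalize each column."""
    m = np.column_stack([cpu, bandwidth, degree]).astype(float)
    lo, hi = m.min(axis=0), m.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)   # guard against constant columns
    return (m - lo) / span
```

Normalizing per attribute keeps large-valued resources (e.g. bandwidth in Mbps) from dominating small-valued ones (e.g. node degree) in the agent's input.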
The rapid development and deployment of network services has brought a series of challenges to researchers. On the one hand, the needs of Internet end users and applications are increasingly differentiated, and they pursue service quality from different perspectives. On the other hand, with the explosive growth of information in the era of big data, a large amount of private information is stored in the network, so end users and applications naturally pay attention to network security. To meet these requirements of differentiated quality of service (QoS) and security, this paper proposes a virtual network embedding (VNE) algorithm based on deep reinforcement learning (DRL) that considers the CPU, bandwidth, delay, and security attributes of the substrate network. The DRL agent is trained in a network environment constructed from these attributes; its purpose is to derive a mapping probability for each substrate node and to map each virtual node according to this probability. Finally, a breadth-first search (BFS) strategy is used to map the virtual links. In the experiments, the DRL-based algorithm is compared with other representative algorithms on three metrics: long-term average revenue, long-term revenue-consumption ratio, and acceptance rate. The results show that the proposed algorithm performs well, demonstrating that it can effectively satisfy the differentiated QoS and security requirements of end users and applications.
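The node-mapping step described above, scoring each substrate node on its CPU, bandwidth, delay, and security attributes and turning the scores into mapping probabilities, can be sketched as below. The linear scoring and softmax are illustrative assumptions; the abstract does not specify the policy network's architecture:

```python
import numpy as np

def mapping_probs(features, weights):
    """Softmax over per-node scores.

    features: (n_nodes, n_attrs) matrix, e.g. CPU, bandwidth, delay, security.
    weights:  (n_attrs,) scoring vector a trained policy would provide.
    """
    scores = np.asarray(features, dtype=float) @ np.asarray(weights, dtype=float)
    e = np.exp(scores - scores.max())   # subtract max for numerical stability
    return e / e.sum()
```

Training would adjust the scoring so that substrate nodes likely to yield a feasible, high-revenue embedding get higher probability; each virtual node is then mapped by sampling (or taking the argmax) over these probabilities, and BFS maps the links between the chosen nodes.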
Multi-modal retrieval has received widespread attention because it can provide massive amounts of related data to support the development of Computational Social Systems (CSS). However, existing works still face the following challenges: (1) they rely on a tedious manual labeling process when extended to CSS, which not only introduces subjective errors but also consumes considerable time and labor; (2) they train only on strongly aligned data and neglect adjacency information, resulting in poor robustness and an inability to effectively bridge the semantic heterogeneity gap; (3) they map features into real-valued forms, which leads to high storage costs and low retrieval efficiency. To address these issues, we design a web-knowledge-driven multi-modal retrieval framework called Unsupervised and Robust Graph Convolutional Hashing (URGCH). The specific implementations are as follows: first, a "secondary semantic self-fusion" approach is proposed, which extracts semantic-rich features through pre-trained neural networks, constructs a joint semantic matrix through semantic fusion, and eliminates
Service-Oriented Computing achieves its full potential when services interoperate. Current service-oriented computing research is concerned with low-level interoperation among services, such as service discovery and service composition. However, a higher-level research issue, the feature interaction problem, challenges the interoperation of services. Traditional feature interaction methods focus on the service design phase, using formal methods or pragmatic software engineering analysis. The autonomy and distribution of service development and deployment create a need for runtime detection and resolution of feature interactions in SOC. This paper investigates the runtime detection of feature interactions in web services and proposes ESTRIPS, an extended STRIPS operation that ensures conflict-free services are identified to populate business processes, using a combination of OWL-S, SWRL, and runtime data extracted from SOAP messages. First, we define the feature interaction problem in business processes during their execution and then introduce the ESTRIPS method. The implementation of a prototype is illustrated, and a real-world scenario shows the plausibility of our method for detecting feature interactions in business processes.
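To make the STRIPS foundation concrete, a classic operator (precondition, add, and delete lists) and a simple pairwise interaction check might look like the sketch below; the actual ESTRIPS extension and its OWL-S/SWRL integration go beyond what the abstract specifies, and the example operators are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StripsOp:
    """A classic STRIPS operator: preconditions, add list, delete list."""
    name: str
    pre: frozenset
    add: frozenset
    delete: frozenset

def interact(a: StripsOp, b: StripsOp) -> bool:
    """True if one operator deletes facts the other requires or produces."""
    return bool(b.delete & (a.pre | a.add)) or bool(a.delete & (b.pre | b.add))
```

For example, a hypothetical "close account" service that deletes the fact `account_open` interacts with a "debit" service that requires it, and a runtime detector would flag that pair before both are placed in the same business process.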