Abstract: Toward the minimal weighted vertex cover (MWVC) in agent-based networking systems, this paper recasts the problem as a potential game and proposes a distributed learning algorithm based on relaxed greed and finite memory. Using the concept of a convention, we prove that our algorithm converges with probability 1 to Nash equilibria, which serve as the bridge connecting the game and the MWVC. More importantly, an additional degree of freedom is provided for equilibrium refinement, such that increasing memory lengths an…
“…Following this line, Balcan et al. studied a broad family of covering problems in a distributed setting [21], carefully constructing an advice vector with the aid of centralized information. Aiming for fully distributed coordination, Sun et al. [26] addressed the MWVC problem by decomposing the system-level objective into local utilities and proposed a restricted greed and memory-based algorithm (RGMA) within the framework of potential game theory. To the best knowledge of the authors, the FBR and the RGMA provide the state-of-the-art results for the distributed MWVC problem.…”
Section: B. Distributed Methods
“…We use the prefix "1-" or "0-" to describe a node with a_i = 1 or a_i = 0, also referred to as a selected node or an unselected node. The solution space is given by A = {a | a_i ∈ {0, 1}, i ∈ N}, N_i = {j | (i, j) ∈ E} represents the neighborhood of node i, with i ∉ N_i, and the coefficient λ is introduced to penalize uncovered edges [26].…”
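The decomposition of [26] suggests a local cost that each node can evaluate from its own state and its neighbors' states alone. The following is a minimal sketch of such a cost, assuming (as an illustration, not as the paper's exact utility) that a node pays its own weight when selected plus a penalty λ per incident edge left uncovered by both endpoints:

```python
# Hypothetical sketch of a node-level cost for the MWVC game; the
# function name and dict-based graph representation are illustrative.

def local_cost(i, a, weights, neighbors, lam):
    """Cost seen by node i: its own weight if selected, plus a penalty
    lam for every incident edge uncovered by both endpoints."""
    uncovered = sum(1 for j in neighbors[i] if a[i] == 0 and a[j] == 0)
    return weights[i] * a[i] + lam * uncovered

# Toy example: on the path 0-1-2, selecting only node 1 covers all edges.
neighbors = {0: [1], 1: [0, 2], 2: [1]}
weights = {0: 0.2, 1: 0.5, 2: 0.3}
a = {0: 0, 1: 1, 2: 0}
print(local_cost(0, a, weights, neighbors, lam=2.0))  # 0.0: edge (0,1) is covered
```

With λ large enough, leaving an edge uncovered is always costlier for a node than selecting itself, which is what makes covers the stable outcomes.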
Section: A. MWVC Problem
“…Specifically, the BA network is constructed by following the growth and preferential attachment rule of the BA model [46]: starting from a ring network with n = 5 vertices, at each step we add a new vertex with two edges linked to existing vertices. Furthermore, the grid network is used to represent a regular scenario, where each node has the same number of neighbors [26]. As in the literature, the vertex weights are randomly generated and then normalized such that ∑_{i∈N} ω_i = 1.…”
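The growth rule quoted above can be sketched directly. This is an assumed implementation of the described construction (ring of 5 vertices, each new vertex attached to 2 distinct existing vertices with probability proportional to degree); the edge-list representation and helper names are our own:

```python
import random

def build_ba_network(n, n0=5, m=2, seed=0):
    """Grow a BA-style network: start from a ring of n0 vertices, then
    attach each new vertex to m distinct existing vertices, chosen with
    probability proportional to degree (via a degree-weighted pool)."""
    rng = random.Random(seed)
    edges = [(i, (i + 1) % n0) for i in range(n0)]   # initial ring
    pool = [v for e in edges for v in e]             # each vertex appears deg(v) times
    for new in range(n0, n):
        chosen = set()
        while len(chosen) < m:                       # m distinct attachment targets
            chosen.add(rng.choice(pool))
        for t in chosen:
            edges.append((new, t))
            pool += [new, t]
    return edges

edges = build_ba_network(50)
# Random vertex weights, normalized to sum to 1 as in the quoted setup.
rng = random.Random(1)
w = [rng.random() for _ in range(50)]
total = sum(w)
w = [x / total for x in w]
```

A network grown this way has 5 ring edges plus 2 edges per added vertex, i.e. 95 edges for n = 50.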
Section: B. Performance Evaluation on Complex Networks
“…1) RGMA: This algorithm is also designed within the framework of learning in potential games, where each player updates its action according to a restricted greed and unweighted memory rule [26]. Although convergence to Nash equilibria is guaranteed, INEs cannot be avoided.…”
Toward better approximation for the minimum weighted vertex cover (MWVC) problem in multi-agent systems, we present a distributed algorithm from the perspective of learning in games. For self-organized coordination and optimization, we model each vertex as a player in a potential game who makes decisions using local information from itself and its immediate neighbors. The resulting Nash equilibria are classified into two categories, i.e., inferior Nash equilibria (INEs) and dominant Nash equilibria (DNEs). We show that the optimal solution must be a DNE. To achieve better approximation ratios, local rules of perturbation and weighted memory are designed, with the former destroying the stability of an INE and the latter facilitating the refinement of a DNE. By showing the existence of an improvement path from any INE to a DNE, we prove that when the memory length is larger than 1, our algorithm converges in finite time to DNEs, which cannot be improved by exchanging the action of a selected node with all of its unselected neighbors. Moreover, additional freedom for refining solution efficiency is provided by increasing the memory length. Finally, extensive comparison experiments demonstrate the superiority of the presented methodology over the state of the art, in both solution efficiency and computation speed.
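The exchange property that characterizes a DNE admits a simple local test: if a selected node outweighs its currently unselected neighbors, swapping it out and them in keeps all of its edges covered at a lower total weight, so the cover cannot be a DNE. A hedged sketch of that check (names and representation are illustrative, not the paper's code):

```python
# Sketch of the selected-node / unselected-neighbors exchange test
# implied by the abstract's DNE characterization.

def is_exchange_stable(a, weights, neighbors):
    """Return False if some selected node i could be profitably exchanged
    with all of its unselected neighbors, i.e. w_i exceeds their total weight."""
    for i, ai in a.items():
        if ai == 1:
            unsel = [j for j in neighbors[i] if a[j] == 0]
            if weights[i] > sum(weights[j] for j in unsel):
                return False   # swapping i out and unsel in improves the cover
    return True

# Star example: a heavy center alone is a cover, but exchanging it for
# its three light leaves yields a cheaper cover, so it fails the test.
neighbors = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
weights = {0: 0.7, 1: 0.1, 2: 0.1, 3: 0.1}
print(is_exchange_stable({0: 1, 1: 0, 2: 0, 3: 0}, weights, neighbors))  # False
print(is_exchange_stable({0: 0, 1: 1, 2: 1, 3: 1}, weights, neighbors))  # True
```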
“…Definition 2 (Ordinal Potential Game [31]): A game is an ordinal potential game if there exists a potential function Φ : S → R, where S = ∏_i S_i, such that for every i ∈ N, s_i, s_i' ∈ S_i, and s_{-i} ∈ ∏_{j≠i} S_j, the following relation holds:

u_i(s_i, s_{-i}) − u_i(s_i', s_{-i}) > 0 ⟺ Φ(s_i, s_{-i}) − Φ(s_i', s_{-i}) > 0. (6)

Potential games are often applied to spectrum control in wireless networks [32], decentralized optimization in channel selection [33], the vertex cover problem in wireless sensor networks [34], etc. A potential game has the finite improvement property (FIP) and always admits a Nash equilibrium.…”
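Relation (6) can be verified by brute force on small games. The sketch below checks the sign-equivalence of utility and potential differences over all unilateral deviations; the toy games are illustrative (an identical-interest game, where every u_i equals Φ, trivially satisfies the condition):

```python
from itertools import product

def is_ordinal_potential(u, phi, strategies):
    """Check relation (6): for every player i and unilateral deviation
    s_i -> s_i', sign(u_i difference > 0) must match sign(phi difference > 0)."""
    n = len(strategies)
    for i in range(n):
        for s in product(*strategies):
            for si2 in strategies[i]:
                s2 = s[:i] + (si2,) + s[i + 1:]
                du = u[i][s] - u[i][s2]
                dphi = phi[s] - phi[s2]
                if (du > 0) != (dphi > 0):
                    return False
    return True

# Identical-interest coordination game: u_1 = u_2 = phi, so (6) holds.
phi = {(0, 0): 2, (0, 1): 0, (1, 0): 0, (1, 1): 1}
print(is_ordinal_potential([phi, phi], phi, [[0, 1], [0, 1]]))  # True
```

A zero-sum game such as matching pennies fails this test for any candidate Φ restricted to one player's payoffs, consistent with it having no pure NE and hence no potential.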
Crowdsensing high-quality data relies on the effective participation of users. However, existing incentive mechanisms are unable to account for the dual requirements of both the quantity and the quality of users' participation. In this paper, we propose a Crowdsensing Task Selection algorithm and rewards allocation incentive mechanism based on a Reputation Evaluation model (CTSRE), which deploys a reputation-weighted rewards allocation method to effectively encourage users to actively participate in the execution of tasks. In CTSRE, we adopt a game-theoretic approach and apply a best-response-dynamics-based algorithm to maximize users' utilities. We show that the task selection algorithm converges in finite time and meets the fairness requirement. We also design a reputation conversion method and updating rule to improve the incentive and fairness of the mechanism. Through numerical experiments and comparative analysis, we verify that the task selection algorithm meets the convergence requirements. The application of a sigmoid function for reputation conversion improves the fairness of rewards allocation and motivates users to improve their reputation to obtain higher rewards. Experimental results indicate that CTSRE can effectively ensure both the quantity and the quality of users' participation.
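The sigmoid-based reputation conversion mentioned above can be illustrated with a minimal sketch. This is an assumed form, not CTSRE's actual formula: the steepness k and midpoint r0 are our hypothetical parameters, and rewards are split in proportion to the converted weights:

```python
import math

def reputation_weight(r, k=10.0, r0=0.5):
    """Map a reputation r in [0, 1] to a reward weight in (0, 1); the
    S-shape rewards high-reputation users disproportionately."""
    return 1.0 / (1.0 + math.exp(-k * (r - r0)))

def allocate_rewards(budget, reputations):
    """Split a fixed budget in proportion to sigmoid-converted reputations."""
    w = [reputation_weight(r) for r in reputations]
    total = sum(w)
    return [budget * x / total for x in w]
```

With these parameters a user at reputation 0.9 receives far more than one at 0.1, which matches the stated goal of motivating users to raise their reputation.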
This article explores recursive algorithms for parameter identification of Hammerstein output-error systems. The proposed approach includes a key term separation auxiliary model recursive gradient algorithm, which utilizes gradient search and key term separation. To enhance computational efficiency, the system is decomposed into two or three subsystems through the hierarchical identification principle. On this basis, a key term separation based auxiliary model two-stage recursive gradient algorithm and a key term separation based auxiliary model three-stage recursive gradient algorithm are presented. Simulation results verify the validity of the obtained algorithms.