In this paper we present an incentive-compatible distributed optimization method for social choice problems. The method computes and collects VCG taxes in a distributed fashion, which makes it resilient to manipulation by the problem-solving agents. An extension of this method sacrifices Pareto-optimality in favor of budget balance: the chosen solutions are no longer optimal, but the self-interested agents pay the taxes among themselves, producing no tax surplus. This eliminates unwanted incentives for the problem-solving agents, ensuring their faithfulness.
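The pivot-tax idea underlying VCG can be illustrated with a minimal centralised sketch (the paper itself distributes this computation); the agent names and valuations below are invented for illustration:

```python
# Minimal sketch of VCG (Clarke pivot) taxes for a social choice among
# discrete alternatives. Valuations are illustrative, not from the paper.

def vcg(valuations):
    """valuations: dict agent -> dict alternative -> value.
    Returns (chosen alternative, dict agent -> tax paid)."""
    alts = next(iter(valuations.values())).keys()
    total = lambda alt, excl=None: sum(
        v[alt] for a, v in valuations.items() if a != excl)
    # Choose the alternative maximizing total reported value.
    chosen = max(alts, key=total)
    taxes = {}
    for agent in valuations:
        # Welfare the others could achieve if this agent were absent...
        without = max(total(alt, excl=agent) for alt in alts)
        # ...minus their welfare under the actually chosen alternative.
        taxes[agent] = without - total(chosen, excl=agent)
    return chosen, taxes
```

Each agent pays the externality it imposes on the others, which is what makes truthful reporting a dominant strategy; the surplus these taxes create is exactly what the budget-balanced extension above avoids.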
Case-based reasoning (CBR) is an approach to problem solving that emphasizes the role of prior experience during future problem solving (i.e., new problems are solved by reusing and, if necessary, adapting the solutions to similar problems that were solved in the past). It has enjoyed considerable success in a wide variety of problem-solving tasks and domains. Following a brief overview of the traditional problem-solving cycle in CBR, we examine the cognitive science foundations of CBR and its relationship to analogical reasoning. We then review a representative selection of CBR research of the past few decades on aspects of retrieval, reuse, revision, and retention.
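The retrieve and reuse steps of the CBR cycle can be sketched under the simplifying assumption of a flat case base with numeric problem descriptions (the case representation and distance function here are hypothetical):

```python
# Illustrative sketch of the CBR retrieve/reuse steps: nearest-neighbour
# retrieval over a flat case base, then reuse of the retrieved solution.

def retrieve(case_base, problem):
    """Return the stored case whose problem description is closest."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(case_base, key=lambda case: dist(case["problem"], problem))

def solve(case_base, problem, adapt=lambda sol, prob: sol):
    best = retrieve(case_base, problem)
    # Reuse: copy the old solution, optionally revised by `adapt`.
    # The retain step would append the new (problem, solution) pair
    # to the case base after the solution is confirmed.
    return adapt(best["solution"], problem)
```

Real CBR systems replace the flat scan with indexed retrieval and domain-specific similarity measures, but the retrieve/reuse/revise/retain structure is the same.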
Traditional centralised approaches to security are difficult to apply to large, distributed marketplaces in which software agents operate. Developing a notion of trust based on the reputation of agents can provide a softer notion of security that is sufficient for many multi-agent applications. In this paper, we address the issue of incentive-compatibility (i.e., how to make it optimal for agents to share reputation information truthfully) by introducing a side-payment scheme, organised through a set of broker agents, that makes it rational for software agents to truthfully share the reputation information they have acquired in their past experience. We also show how to use a cryptographic mechanism to protect the integrity of reputation information and to achieve a tight binding between the identity and reputation of an agent.
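The abstract does not specify the cryptographic mechanism; as a purely hypothetical illustration of integrity protection, a broker could MAC each (identity, reputation) pair so that a tampered report is detectable. Key management and the actual protocol are out of scope here:

```python
import hashlib
import hmac

# Hypothetical sketch: a broker MACs the (agent id, reputation) pair with
# a secret key, binding the reputation value to the agent's identity.

def sign_report(key, agent_id, reputation):
    msg = f"{agent_id}:{reputation}".encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify_report(key, agent_id, reputation, tag):
    # Constant-time comparison avoids leaking the tag via timing.
    return hmac.compare_digest(sign_report(key, agent_id, reputation), tag)
```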
The proliferation of online news creates a need for filtering interesting articles. Compared to other products, however, recommending news poses specific challenges: news preferences are subject to trends, users do not want to see multiple articles with similar content, and we frequently have insufficient information to profile the reader. In this paper, we introduce a class of news recommendation systems based on context trees. They can provide high-quality news recommendations to anonymous visitors based on present browsing behaviour. Using an unbiased testing methodology, we show that they make accurate and novel recommendations, and that they are sufficiently flexible for the challenges of news recommendation.
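A context tree treats the recent browsing sequence as a variable-order Markov context. A toy sketch of the idea, with illustrative depth and data (the paper's actual model combines expert predictions across contexts rather than backing off to the longest one):

```python
from collections import defaultdict

# Toy context-tree style predictor: count which item follows each suffix
# ("context") of the browsing sequence, then recommend using the longest
# previously seen context. Depth and items are illustrative.

class ContextTree:
    def __init__(self, depth=3):
        self.depth = depth
        self.counts = defaultdict(lambda: defaultdict(int))

    def update(self, sequence):
        for i, item in enumerate(sequence):
            # Register `item` as a follower of every suffix up to `depth`.
            for k in range(min(i, self.depth) + 1):
                context = tuple(sequence[i - k:i])
                self.counts[context][item] += 1

    def recommend(self, recent):
        # Back off from the longest matching context to shorter ones.
        for k in range(min(len(recent), self.depth), -1, -1):
            context = tuple(recent[len(recent) - k:])
            if context in self.counts:
                followers = self.counts[context]
                return max(followers, key=followers.get)
        return None
```

Because prediction depends only on the current session's sequence, the model needs no user profile, which is what makes it suitable for anonymous visitors.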
We consider schemes for obtaining truthful reports on a common but hidden signal from large groups of rational, self-interested agents. One example is online feedback mechanisms, where users provide observations about the quality of a product or service so that other users can form an accurate idea of the quality they can expect. However, (i) providing such feedback is costly, and (ii) there are many motivations for providing incorrect feedback. Both problems can be addressed by reward schemes that (i) cover the cost of obtaining and reporting feedback, and (ii) maximize the expected reward of a rational agent who reports truthfully. We address the design of such incentive-compatible rewards for feedback generated in environments with pure adverse selection. Here, the correlation between an agent's true knowledge and her beliefs regarding the likely reports of other agents can be exploited to make honest reporting a Nash equilibrium. In this paper we extend existing methods for designing incentive-compatible rewards by also considering collusion. We analyze different scenarios in which, for example, some or all of the agents collude. For each scenario we investigate whether a collusion-resistant, incentive-compatible reward scheme exists, and use automated mechanism design to specify an algorithm for deriving an efficient reward mechanism.
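The correlation argument can be made concrete with a peer-prediction style sketch: an agent's report is scored with a proper scoring rule against a reference peer's report, using posteriors derived from a common prior. The prior and likelihoods below are illustrative numbers, not from the paper:

```python
import math

# Hedged sketch of a peer-prediction style reward. An agent's report
# induces (via Bayes over the hidden signal) a posterior over what a
# peer will report; a logarithmic proper scoring rule then rewards the
# agent according to the peer's actual report.

def posterior_over_peer(report, prior, likelihood):
    """Pr[peer reports r | my report], via Bayes over hidden signal s."""
    joint = {s: prior[s] * likelihood[s][report] for s in prior}
    z = sum(joint.values())
    reports = likelihood[next(iter(prior))]
    return {
        r: sum(joint[s] / z * likelihood[s][r] for s in prior)
        for r in reports
    }

def log_score(posterior, reference_report):
    """Logarithmic proper scoring rule: ln Pr[reference report]."""
    return math.log(posterior[reference_report])
```

Because the scoring rule is proper, a truthful report maximizes expected score when peers also report truthfully, which is the Nash equilibrium property described above; the collusion analysis in the paper asks when such schemes survive coordinated misreporting.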
We consider the problem of reinforcing federated learning with formal privacy guarantees. We propose to employ Bayesian differential privacy, a relaxation of differential privacy for similarly distributed data, to provide sharper privacy loss bounds. We adapt the Bayesian privacy accounting method to the federated setting and suggest multiple improvements for more efficient privacy budgeting at different levels. Our experiments show a significant advantage over the state-of-the-art differential privacy bounds for federated learning on image classification tasks, including a medical application, bringing the privacy budget below ε = 1 at the client level, and below ε = 0.1 at the instance level. Lower amounts of noise also benefit the model accuracy and reduce the number of communication rounds.
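The mechanism being accounted for is the standard clip-and-noise step of private federated averaging; the accounting method (Bayesian vs. classical) changes the reported ε, not this step. A sketch with placeholder clipping norm and noise scale:

```python
import math
import random

# Illustrative clip-and-noise step for differentially private federated
# averaging: each client update is norm-clipped, then Gaussian noise
# calibrated to the clipping norm is added to the average.
# max_norm and sigma are placeholder values, not from the paper.

def clip(update, max_norm):
    norm = math.sqrt(sum(x * x for x in update))
    scale = min(1.0, max_norm / norm) if norm > 0 else 1.0
    return [x * scale for x in update]

def private_average(updates, max_norm=1.0, sigma=0.5, rng=random):
    clipped = [clip(u, max_norm) for u in updates]
    n, dim = len(clipped), len(clipped[0])
    return [
        sum(u[i] for u in clipped) / n + rng.gauss(0.0, sigma * max_norm) / n
        for i in range(dim)
    ]
```

Tighter accounting, such as the Bayesian method above, lets the same privacy budget be met with a smaller sigma, which is why it also improves accuracy and reduces communication rounds.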