Reputation systems provide users with reputation values of rated parties. These reputation values, typically aggregations of individual user ratings, should be reliable, i.e. they should enable a realistic assessment of the probability that the rated party behaves as expected in a transaction. For the reputation values to stay reliable, and thus for the reputation system to provide a benefit, the system needs to be resistant against manipulation: by rated parties trying to improve their own reputation values, and even by competitors trying to worsen a reputation value. At the same time, a reputation system should protect the privacy of its users: rated parties must not be able to learn who provided a certain rating. Otherwise, users might not take part in the system because they fear retaliatory feedback for bad ratings, or because they do not want to be linked to certain transactions through the ratings they provided.

In this paper we present a solution that provides both reliability of reputation values and privacy protection for users. In contrast to related work, our solution relies on only a single reputation provider that needs to be trusted (to a certain extent) and does not require any bulletin boards to be present in the system. We use the Paillier cryptosystem to aggregate individual user ratings in such a way that no party can learn which user provided a certain rating.
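The privacy-preserving aggregation rests on Paillier's additive homomorphism: multiplying ciphertexts yields an encryption of the sum of the plaintexts, so the aggregate of all ratings can be decrypted without ever decrypting an individual rating. The following is a minimal textbook sketch of this property (toy key sizes, illustrative only; the paper's actual protocol involves the reputation provider and further steps not shown here):

```python
# Textbook Paillier with g = n + 1; parameters are far too small for real use.
import math
import random

p, q = 293, 433                 # toy demo primes, NOT secure
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)    # Carmichael function of n
mu = pow(lam, -1, n)            # valid because g = n + 1

def encrypt(m: int) -> int:
    """c = (1 + m*n) * r^n mod n^2, with random r coprime to n."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (1 + m * n) * pow(r, n, n2) % n2

def decrypt(c: int) -> int:
    """m = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) // n."""
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Each user encrypts their rating; the provider multiplies the ciphertexts,
# which corresponds to adding the plaintexts. Only the aggregate is decrypted,
# so no individual rating is ever revealed in the clear.
ratings = [5, 3, 4, 1]
aggregate_ct = 1
for rating in ratings:
    aggregate_ct = aggregate_ct * encrypt(rating) % n2

assert decrypt(aggregate_ct) == sum(ratings)
```

Note that the ciphertext product reveals only the sum of the ratings, matching the requirement that no party can attribute a rating to a specific user.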
The gathering of data about oneself (such as running speed, pulse, breathing rate, food consumption, etc.) is rapidly becoming more popular and has led to the catchphrase "Quantified Self" (QS). While this trend creates opportunities for both individuals and society, it also creates risks, due to the data's personal and often sensitive nature. Countering these risks while keeping the benefits of QS services is a task for both the legal system and the technical community; it should also take users' expectations into account. We therefore analyze the legal situation of QS services under European law, together with the privacy policies of some major service providers, to clarify the practical consequences for users. We present the results of a study on users' views on privacy, revealing a conflict between users' expectations and providers' practices. To help resolve this conflict, we discuss how existing and future privacy-enhancing technologies can mitigate the risks associated with QS services.