Items shared through Social Media may affect more than one user's privacy, e.g., photos that depict multiple users, comments that mention multiple users, or events to which multiple users are invited. The lack of multi-party privacy management support in current mainstream Social Media infrastructures leaves users unable to appropriately control who these items are actually shared with. Computational mechanisms that merge the privacy preferences of multiple users into a single policy for an item can help solve this problem. However, merging multiple users' privacy preferences is not an easy task: privacy preferences may conflict, so methods to resolve conflicts are needed. Moreover, these methods need to consider how users would actually reach an agreement about a solution to the conflict, in order to propose solutions that are acceptable to all of the users affected by the item to be shared. Current approaches are either too demanding or only consider fixed ways of aggregating privacy preferences. In this paper, we propose the first computational mechanism to resolve conflicts for multi-party privacy management in Social Media that is able to adapt to different situations by modelling the concessions that users make to reach a solution to the conflicts. We also present the results of a user study in which our proposed mechanism outperformed other existing approaches in terms of how many times each approach matched users' behaviour.
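As a rough, illustrative sketch of merging multi-party preferences and resolving conflicts through concessions (the preference scale, function names, and concession rule below are assumptions made for this example, not the mechanism proposed in the paper):

```python
# Illustrative sketch: merging co-owners' privacy preferences for one shared item.
# The ALLOW/MAYBE/DENY scale and the concession rule are hypothetical simplifications.

ALLOW, MAYBE, DENY = 0, 1, 2  # per-viewer preference, from most to least permissive

def resolve(preferences, willing_to_concede):
    """Merge each co-owner's preference about a single target viewer.

    preferences: dict user -> ALLOW, MAYBE, or DENY
    willing_to_concede: dict user -> bool, whether that user would relax a
        restrictive preference to reach agreement.
    Returns the action applied to the viewer (ALLOW or DENY).
    """
    votes = list(preferences.values())
    if all(v == ALLOW for v in votes):
        return ALLOW
    if all(v == DENY for v in votes):
        return DENY
    # Conflict: only users who insist on denying (and will not concede) block sharing.
    blockers = [u for u, v in preferences.items()
                if v == DENY and not willing_to_concede[u]]
    return DENY if blockers else ALLOW

if __name__ == "__main__":
    prefs = {"alice": ALLOW, "bob": DENY, "carol": MAYBE}
    concede = {"alice": False, "bob": True, "carol": True}
    print("share with viewer?", resolve(prefs, concede) == ALLOW)  # True: bob concedes
```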
Operating at a large scale and impacting large groups of people, automated systems can make consequential and sometimes contestable decisions. Automated decisions can impact a range of phenomena, from credit scores to insurance payouts to health evaluations. These forms of automation become problematic when they place certain groups or people at a systematic disadvantage. These are cases of discrimination, which is legally defined as the unfair or unequal treatment of an individual (or group) based on certain protected characteristics (also known as protected attributes) such as income, education, gender, or ethnicity. When the unfair treatment is caused by automated decisions, usually taken by intelligent agents or other AI-based systems, the topic of digital discrimination arises. Digital discrimination is prevalent in a diverse range of fields, such as risk assessment systems for policing and credit scores. Digital discrimination is becoming a serious problem, as more and more decisions are delegated to systems increasingly based on artificial intelligence (AI) techniques such as machine learning. Although a significant amount of research has been undertaken from different disciplinary angles to understand this challenge, from computer science to law to sociology, none of these fields has been able to resolve the problem on its own terms. For instance, computational methods to verify and certify bias-free data
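As a minimal, hypothetical illustration of what detecting a systematic disadvantage can look like computationally (the data layout and the 0.8 "four-fifths" threshold are assumptions for this example, not a method from the work described here):

```python
# Illustrative sketch: a simple demographic-parity check on automated decisions.
# The dataset layout and the 0.8 threshold are assumptions for this example only.

def parity_ratio(decisions, groups, protected_group):
    """decisions: list of booleans (True = favourable outcome);
    groups: list of group labels aligned with decisions."""
    def rate(group):
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        return sum(outcomes) / len(outcomes) if outcomes else 0.0
    other_rates = [rate(g) for g in set(groups) if g != protected_group]
    best_other = max(other_rates) if other_rates else 0.0
    return rate(protected_group) / best_other if best_other else 1.0

if __name__ == "__main__":
    decisions = [True, True, False, True, False, False]
    groups = ["a", "a", "a", "b", "b", "b"]
    ratio = parity_ratio(decisions, groups, protected_group="b")
    print(ratio, "potentially discriminatory" if ratio < 0.8 else "within threshold")
```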
Argumentation-based debates are mechanisms that a group can use to resolve conflicting opinions and hence reach agreement. They have many potential applications in on-line communities and other open environments. In this paper, we provide computational infrastructure to support argumentation-based debates, focusing in particular on the problem of how participants in a debate can reach agreement about its outcome, given all the statements that have been made. Our approach makes it possible to represent the arguments put forward by the participants in a debate, allows both positive and negative relationships between the arguments to be represented, and lets participants express opinions about both the arguments and the outcome of the debate. Our main contribution is a novel method, indeed the first method, for computing the collective decision that emerges from the combination of a set of arguments and a set of opinions about whether the arguments hold or not. To do this, we carry out a formal investigation of a family of aggregation functions. This family starts with a function that is firmly rooted in the social choice literature and is extended with functions that are more oriented towards the use of argumentation. We prove that, to ensure that the collective decision is coherent, a property that we think is essential, an aggregation function needs to take into account the dependencies between arguments. We also provide an empirical analysis of the performance of our approach to reaching a collective decision, showing that a collective decision can be reached in reasonable time for debates of the size that one currently finds online.
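The sketch below illustrates the general shape of the problem: combining individual opinions about arguments into a collective labelling while respecting attack relations between arguments. The majority rule and the naive coherence repair are simplifications assumed for this example, not the family of aggregation functions studied in the paper.

```python
# Illustrative sketch: aggregating opinions about arguments in a debate.
# The majority rule and the coherence repair below are simplifications.

from collections import Counter

def majority(opinions):
    """opinions: dict argument -> list of booleans (one vote per participant)."""
    return {arg: Counter(votes).most_common(1)[0][0]
            for arg, votes in opinions.items()}

def coherent(labels, attacks):
    """An accepted argument must not be attacked by another accepted argument."""
    return all(not (labels[a] and labels[b]) for a, b in attacks)

def aggregate(opinions, attacks):
    labels = majority(opinions)
    # Naive repair: reject attacked arguments until the labelling is coherent.
    while not coherent(labels, attacks):
        for attacker, target in attacks:
            if labels[attacker] and labels[target]:
                labels[target] = False
    return labels

if __name__ == "__main__":
    opinions = {"a": [True, True, False], "b": [True, False, True]}
    attacks = [("a", "b")]  # argument a attacks argument b
    print(aggregate(opinions, attacks))  # {'a': True, 'b': False}
```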
Many real incidents demonstrate that users of Online Social Networks need mechanisms that help them manage their interactions by increasing their awareness of the different contexts that coexist in Online Social Networks, preventing them from exchanging inappropriate information in those contexts, and preventing the dissemination of sensitive information from some contexts to others. Contextual Integrity is a privacy theory that conceptualises the appropriateness of information sharing based on the contexts in which this information is to be shared. Computational models of Contextual Integrity assume the existence of well-defined contexts, in which individuals enact pre-defined roles and information sharing is governed by an explicit set of norms. However, contexts in Online Social Networks are known to be implicit, unknown a priori, and ever changing; users' relationships are constantly evolving; and the information sharing norms are implicit. This makes current Contextual Integrity models unsuitable for Online Social Networks. In this paper, we propose the first computational model of Implicit Contextual Integrity, presenting an information model and an Information Assistant Agent that uses the information model to learn implicit contexts, relationships, and information sharing norms in order to help users avoid inappropriate information exchanges and undesired information disseminations. Through an experimental evaluation, we validate the properties of Information Assistant Agents, which are shown to: infer the information sharing norms even if only a small proportion of the users follow the norms and in the presence of malicious users; help reduce the exchange of inappropriate information and the dissemination of sensitive information with only a partial view of the system and of the information received and sent by their users; and minimise the burden on users in terms of raising unnecessary alerts. (Authors' version of the paper accepted for publication in the Information Sciences journal: http://www.journals.elsevier.com/information-sciences/)
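As a rough illustration of how an assistant agent might infer an implicit sharing norm from the exchanges it observes (the message layout and the frequency threshold are assumptions for this example, not the information model defined in the paper):

```python
# Illustrative sketch: inferring an implicit sharing norm from observed exchanges.
# The frequency threshold and message layout are assumptions for this example only.

from collections import defaultdict

def infer_norms(observed_messages, threshold=0.8):
    """observed_messages: list of (context, topic, was_shared) tuples.

    A topic is considered appropriate in a context when it was shared in at
    least `threshold` of the observed exchanges for that (context, topic) pair.
    """
    counts = defaultdict(lambda: [0, 0])  # (context, topic) -> [shared, total]
    for context, topic, was_shared in observed_messages:
        counts[(context, topic)][1] += 1
        if was_shared:
            counts[(context, topic)][0] += 1
    return {key: shared / total >= threshold
            for key, (shared, total) in counts.items()}

if __name__ == "__main__":
    log = [("work", "health", False), ("work", "health", False),
           ("friends", "health", True), ("friends", "health", True)]
    print(infer_norms(log))
    # {('work', 'health'): False, ('friends', 'health'): True}
```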