Preserving users’ privacy is important for Web systems. In systems where transactions are managed by a single user, such as e-commerce systems, preserving the privacy of transactions is essentially a matter of access control. However, in online social networks, where each transaction is managed by and affects multiple users, preserving privacy is difficult. In many cases, users’ privacy constraints are distributed, expressed at a high level, and depend on information that only becomes available through interactions with others. Hence, when a content item is shared by a user, the others who might be affected by it should discuss and agree on how it will be shared online so that none of their privacy constraints are violated. To enable this, we model social network users as agents that represent their users’ privacy constraints as semantic rules. Agents argue with each other over the propositions that trigger their privacy rules, generating facts and assumptions from their ontologies. Moreover, agents can seek help from others by requesting new information to enrich their ontologies. Using assumption-based argumentation, the agents decide whether a content item should be shared. We evaluate the applicability of our approach on real-life privacy scenarios, comparing its decisions against user surveys.
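The core decision described above can be sketched as follows. This is a hypothetical minimal illustration, not the authors' actual system: each agent holds semantic privacy rules, and a content item is shared only if no affected agent can support an objecting rule from its facts, assumptions, and the content's properties. All names (`Rule`, `Agent`, `decide_sharing`) are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Rule:
    # A rule objects to sharing when all of its premises hold.
    premises: frozenset

@dataclass
class Agent:
    name: str
    facts: set
    assumptions: set = field(default_factory=set)
    rules: list = field(default_factory=list)

    def objects_to(self, content_tags):
        """True if some rule's premises are all supported by the content's
        tags, the agent's facts, or its (unchallenged) assumptions."""
        support = self.facts | self.assumptions | content_tags
        return any(rule.premises <= support for rule in self.rules)

def decide_sharing(content_tags, agents):
    # Content is shared only when no affected agent raises an objection.
    return all(not a.objects_to(content_tags) for a in agents)

# Example: Bob's rule objects to party photos reaching his colleagues.
bob = Agent(
    name="bob",
    facts={"audience:colleagues"},
    rules=[Rule(premises=frozenset(
        {"type:photo", "context:party", "audience:colleagues"}))],
)
print(decide_sharing({"type:photo", "context:party"}, [bob]))  # False
```

In the actual approach the objection step is an assumption-based argumentation dialogue in which assumptions can be attacked and withdrawn; the sketch collapses that to a single unchallenged check.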
Social network users expect the social networks they use to preserve their privacy. Traditionally, privacy breaches have been understood as the malfunctioning of a given system. In online social networks, however, privacy breaches are not necessarily a malfunction but a byproduct of the system working as intended: users are allowed to create and share content about themselves and others. When multiple entities distribute content without control, information can reach unintended individuals, and inference can reveal more about a user than was ever shared directly. Accordingly, this paper first categorizes the privacy violations that take place in online social networks. Our categorization shows that these violations stem from intricate interactions, and that detecting them requires a semantic understanding of events. Our proposed approach is based on an agent-based representation of a social network, where agents manage users' privacy requirements by creating commitments with the system. The privacy context, including the relations among users and the content types, is captured using description logic. The proposed detection algorithm reasons over the description logic and the commitments at varying depths of the social network. We implement the proposed model and evaluate our approach on real-life social networks.
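The detection idea can be illustrated with a toy sketch (assumed names throughout, not the authors' implementation): a privacy requirement is stored as a commitment from the network to a user, and a violation is flagged when forbidden content reaches the user's circle, computed by following relations up to a given depth. The description-logic reasoning is reduced here to simple set checks over tags and a breadth-first traversal.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Commitment:
    debtor: str      # e.g., the social network
    creditor: str    # the user whose privacy is protected
    forbidden: set   # content tags that must not reach the creditor's circle

def reachable(graph, start, max_depth):
    """Users within `max_depth` relation hops of `start` (BFS)."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        user, depth = frontier.popleft()
        if depth == max_depth:
            continue
        for friend in graph.get(user, ()):
            if friend not in seen:
                seen.add(friend)
                frontier.append((friend, depth + 1))
    return seen

def violates(commitment, post_tags, audience, graph, depth):
    # Violation: forbidden content reaches someone in the creditor's circle.
    circle = reachable(graph, commitment.creditor, depth)
    return bool(commitment.forbidden & post_tags) and bool(circle & audience)

graph = {"alice": ["bob"], "bob": ["carol"]}
c = Commitment(debtor="osn", creditor="alice", forbidden={"location"})
print(violates(c, {"location"}, {"carol"}, graph, depth=2))  # True
```

Varying `depth` mirrors the paper's point that detection cost and coverage depend on how far into the network the reasoner looks.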
Online social networks provide an environment for their users to share content with others, where the user who shares a content item is put in charge, generally ignoring others who might be affected by it. However, content shared by one user can very well violate the privacy of other users. To remedy this, ideally, all users who are related to a content item should get a say in how it is shared. Recent approaches advocate the use of agreement technologies to enable the stakeholders of a post to discuss its privacy configuration, allowing related individuals to express concerns so that privacy violations are avoided up front. Existing techniques try to establish an agreement on a single post. Most of the time, however, agreement should be established over multiple posts, such that a user can tolerate slight breaches of privacy in return for the right to share posts themselves in future interactions. As a result, users can help each other preserve their privacy, viewing this as their social responsibility. This article develops a reciprocity-based negotiation mechanism for reaching privacy agreements among users and introduces a negotiation architecture that combines semantic privacy rules with utility functions. We evaluate our approach in multiagent simulations with software agents that mimic users based on a user study.
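The reciprocity idea over multiple posts can be sketched as follows (a hedged illustration; the class and method names are assumed, and the actual architecture combines this with semantic privacy rules): an agent accepts a post that mildly hurts its privacy when the poster has accumulated "credit" by conceding in earlier negotiations, so tolerance is repaid across future interactions rather than settled per post.

```python
class ReciprocalAgent:
    def __init__(self, name, tolerance=1.0):
        self.name = name
        self.tolerance = tolerance   # base breach utility absorbed without credit
        self.credit = {}             # per-partner reciprocity balance

    def accept(self, poster, breach_cost):
        """Accept when the breach is small enough to be covered by the
        poster's accumulated credit plus the agent's base tolerance."""
        balance = self.credit.get(poster, 0.0)
        if breach_cost <= balance + self.tolerance:
            self.credit[poster] = balance - breach_cost   # spend the credit
            return True
        return False

    def record_concession(self, partner, cost):
        # The partner conceded to us earlier; remember it as credit owed back.
        self.credit[partner] = self.credit.get(partner, 0.0) + cost

bob = ReciprocalAgent("bob", tolerance=0.5)
bob.record_concession("alice", 1.0)  # alice tolerated bob's post before
print(bob.accept("alice", 1.2))      # True: 1.2 <= 1.0 credit + 0.5 tolerance
print(bob.accept("alice", 1.2))      # False: credit is now spent
```

The design choice this highlights is that utilities carry state between negotiations, which is what lets agreement span multiple posts instead of being recomputed from scratch each time.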
People are increasingly employing artificial intelligence as the basis for decision-support systems (DSSs) to assist them in making well-informed decisions. Adoption of DSSs is challenging when such systems lack support, or evidence, for justifying their recommendations. DSSs are widely applied in the medical domain due to its complexity and the sheer volume of data, which render manual processing difficult. This paper proposes a metalevel argumentation-based decision-support system that can reason with heterogeneous data (e.g., body measurements, electronic health records, clinical guidelines) while incorporating the preferences of the human beneficiaries of those decisions. The system constructs template-based explanations for the recommendations it makes. The proposed framework has been implemented in a system that supports stroke patients, and its functionality has been tested in a pilot study. User feedback shows that the system can run effectively over an extended period.
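The explanation step can be sketched as below. This is an illustrative reduction under stated assumptions: the argumentation machinery is collapsed to "keep options with supporting evidence," preferences are a simple score map, and the template, function names, and example data are all hypothetical rather than taken from the deployed system.

```python
TEMPLATE = ("We recommend {option} because {evidence}, "
            "and it matches your stated preference for {preference}.")

def explain(option, evidence, preference):
    # Fill a fixed explanation template with the decisive ingredients.
    return TEMPLATE.format(option=option, evidence=evidence,
                           preference=preference)

def recommend(options, preferences):
    """Pick the evidence-backed option ranked highest by the patient's
    preferences, then build its template-based explanation."""
    supported = [o for o in options if o["evidence"]]
    best = max(supported, key=lambda o: preferences.get(o["name"], 0))
    top_pref = max(preferences, key=preferences.get)
    return explain(best["name"], best["evidence"], top_pref)

options = [
    {"name": "home exercise plan",
     "evidence": ("the stroke rehabilitation guideline supports "
                  "regular low-intensity activity")},
    {"name": "clinic visits", "evidence": ""},  # no supporting evidence
]
print(recommend(options, {"home exercise plan": 2, "clinic visits": 1}))
```

The point of the template is that every recommendation is emitted together with the evidence and preference that justified it, which is what the abstract identifies as the barrier to DSS adoption.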