Multilateral security considers the different and possibly conflicting security requirements of different parties and strives to balance them. This paper introduces the concept of multilateral security, giving example problems and solutions. It focuses on a personal reachability and security management system that was developed to resolve the caller-ID conflict. The prototype and its relation to multilateral security are described, followed by a discussion of several major real-world assessments of the prototype and the experiences gained. The paper concludes with a collection of technical design strategies for multilateral security that were considered important to the project's success, and some remarks on further challenges.
Today’s environment of data-driven business models relies heavily on collecting as much personal data as possible. Besides being protected by governmental regulation, internet users can also try to protect their privacy on an individual basis. One of the best-known ways to accomplish this is to use privacy-enhancing technologies (PETs). However, the number of users is particularly important for the anonymity set of such a service: the more users a service has, the more difficult it is to trace an individual user. Much research has examined the technical properties of PETs such as Tor or JonDonym, but users' use behavior is rarely considered, although it is a decisive factor for the acceptance of a PET and thus an important driver for increasing the user base. We undertake a first step towards understanding the use behavior of PETs employing a mixed-method approach. We conducted an online survey with 265 users of the anonymity services Tor and JonDonym (124 users of Tor and 141 users of JonDonym). We use the technology acceptance model as a theoretical starting point and extend it with the constructs perceived anonymity and trust in the service in order to account for the specific nature of PETs. Our model explains almost half of the variance of the behavioral intention to use the two PETs. The results indicate that both newly added variables are highly relevant factors in the path model. We augment these insights with a qualitative analysis of answers to open questions about the users' concerns, the circumstances under which they would pay money and choose a paid premium tariff (only for JonDonym), features they would like to have, and why they would or would not recommend Tor/JonDonym. Thereby, we provide additional insights about the users' attitudes and perceptions of the services and propose new use factors not covered by our model for future research.
Smartphone apps have the power to monitor most of people's private lives. Apps can permeate private spaces, access and map social relationships, monitor whereabouts, and chart people's activities in the digital and/or real world. We are therefore interested in how much information a particular app can, and intends to, retrieve from a smartphone. Privacy-friendliness of smartphone apps is typically measured based on single-source analyses, which do not provide a comprehensive measurement of the actual privacy risks of apps. This paper presents a multi-source method for privacy analysis and data extraction transparency of Android apps. We describe how we generate several data sets derived from privacy policies, app manifests, user reviews, and actual app profiling at run time. To evaluate our method, we present results from a case study carried out on ten popular fitness and exercise apps. Our results revealed interesting differences concerning the potential privacy impact of apps, with some of the apps in the test set violating critical privacy principles. The case study shows large differences between apps that can help users make informed app choices.
Popular smartphone apps may receive several thousand user reviews containing statements about apps' functionality, interface, user-friendliness, etc. They sometimes also comprise privacy-relevant information that can be extremely helpful for app developers to better understand why users complain about certain privacy aspects of their apps. However, due to the complicated and sometimes vague nature of reviews, it is quite tough and time-consuming for developers to go through all these reviews to extract information about the privacy aspects of apps. Furthermore, previous studies confirmed that bad privacy practices sometimes happen due to app developers' lack of knowledge of API definition and usage. In addition, such information can be useful for mobile users, as the lack of privacy indicators in smartphone ecosystems prevents them from comparing apps in terms of privacy and from making informed privacy decisions when selecting apps. Therefore, in this paper we propose Mobile App Reviews Summarization (MARS) to overcome the aforementioned difficulties. We exploit user reviews on the Google Play Store as a relevant source in order to extract and quantify privacy-relevant claims associated with apps. Based on Machine Learning (ML), Natural Language Processing (NLP), and sentiment analysis techniques, MARS detects privacy-relevant reviews and categorizes them into a pre-identified list of privacy threats in the context of mobile apps. The combination of these concepts provides developers with specific knowledge about the privacy threats and behavior of apps based on user-generated reports that are otherwise difficult to detect. Not only developers but also users can benefit from such a mechanism to compare apps in terms of privacy. To this end, we complement MARS with a novel app behavior monitoring tool that further enhances the reliability of the results generated by MARS.
Our results demonstrate the applicability of our approach, which achieves precision, recall, and F-score of 94.84%, 91.30%, and 92.79%, respectively. We also obtained interesting findings concerning the quantity and quality of privacy-relevant information published in user reviews and its relation to the apps' actual behavior, indicating that user reviews are an important and valuable source of information regarding the privacy behavior of mobile apps.
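The abstract describes MARS's pipeline only at a high level (ML, NLP, and sentiment analysis over reviews). As a purely illustrative, stdlib-only sketch — not the authors' implementation — the following Python snippet shows the general idea of flagging privacy-relevant reviews and mapping them to threat categories; the category names and keyword lists here are hypothetical placeholders, where a real system would use trained classifiers:

```python
# Illustrative sketch only: a keyword-matching stand-in for the
# ML/NLP classification step described in the abstract. Category
# names and keyword lists are hypothetical, not taken from MARS.
PRIVACY_THREATS = {
    "data_collection": {"collect", "tracking", "tracker", "harvest"},
    "location": {"location", "gps", "whereabouts"},
    "permissions": {"permission", "contacts", "microphone", "camera"},
    "data_sharing": {"share", "sell", "third party", "advertiser"},
}

def categorize_review(review: str) -> list[str]:
    """Return the privacy-threat categories whose keywords appear in
    the review text; an empty list means 'not privacy relevant'."""
    text = review.lower()
    return [cat for cat, keywords in PRIVACY_THREATS.items()
            if any(kw in text for kw in keywords)]

reviews = [
    "Great workout timer, love the interface!",
    "Why does this app need my location and microphone permission?",
    "They sell your data to advertisers, uninstalled.",
]
for r in reviews:
    print(categorize_review(r))
```

Aggregating such per-review labels across thousands of reviews is what would let an app's privacy complaints be quantified and compared, as the paper proposes.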
Since the ruling of the European Court of Justice, the right to be forgotten has provided more informational self-determination to users, while raising new questions about Google's role as arbiter of online content and its power to rewrite history. We investigated the debate that unfolded on Twitter around #righttobeforgotten through social network analysis. The results revealed that latent topics, namely Google's role as an authority, alternated in popularity with rising and fading flare topics. The public sphere, or Öffentlichkeit, that we observed resembles the traditional one, with elite players such as news portals, experts, and corporations participating, but it also differs significantly in terms of the underlying mechanisms and means of information diffusion. Experts are critical for commenting on, relaying, and making sense of information. We discuss the implications for theories of the public sphere and examine why social media do not serve as a democratising tool for ordinary citizens.