Background: The recent proliferation of self-tracking technologies has allowed individuals to generate significant quantities of data about their lifestyle. These data can be used to support health interventions and monitor outcomes. However, these data are often stored and processed by vendors who have commercial motivations, and thus, they may not be treated with the sensitivity with which other medical data are treated. As sensors and apps that enable self-tracking continue to become more sophisticated, the privacy implications become more severe in turn. However, methods for systematically identifying privacy issues in such apps are currently lacking.

Objective: The objective of our study was to understand how current mass-market apps perform with respect to privacy. We did this by introducing a set of heuristics for evaluating privacy characteristics of self-tracking services.

Methods: Using our heuristics, we conducted an analysis of 64 popular self-tracking services to determine the extent to which the services satisfy various dimensions of privacy. We then used descriptive statistics and statistical models to explore whether any particular categories of an app perform better than others in terms of privacy.

Results: We found that the majority of services examined failed to provide users with full access to their own data, did not acquire sufficient consent for the use of the data, or inadequately extended controls over disclosures to third parties. Furthermore, the type of app, in terms of the category of data collected, was not a useful predictor of its privacy. However, we found that apps that collected health-related data (eg, exercise and weight) performed worse for privacy than those designed for other types of self-tracking.

Conclusions: Our study draws attention to the poor performance of current self-tracking technologies in terms of privacy, motivating the need for standards that can ensure that future self-tracking apps are stronger with respect to upholding users' privacy. Our heuristic evaluation method supports the retrospective evaluation of privacy in self-tracking apps and can be used as a prescriptive framework to achieve privacy-by-design in future apps.
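The evaluation method described above can be illustrated with a minimal sketch: each service is rated against a checklist of privacy heuristics, and per-service scores are then aggregated for comparison across categories. The heuristic names, rating scale, and service names below are illustrative assumptions, not the authors' actual instrument.

```python
# Hypothetical sketch of heuristic-based privacy scoring.
# Heuristic names and ratings are illustrative, not the study's instrument.
from statistics import mean

# Three heuristics loosely echoing the abstract's findings: data access,
# consent, and third-party disclosure controls. Ratings are in [0, 1].
HEURISTICS = ["data_access", "informed_consent", "third_party_controls"]

def privacy_score(ratings: dict) -> float:
    """Mean of the per-heuristic ratings; higher means better privacy."""
    return mean(ratings[h] for h in HEURISTICS)

# Hypothetical rated services.
services = {
    "step_counter": {"data_access": 1.0, "informed_consent": 0.5,
                     "third_party_controls": 0.0},
    "sleep_logger": {"data_access": 0.5, "informed_consent": 0.0,
                     "third_party_controls": 0.5},
}

scores = {name: privacy_score(r) for name, r in services.items()}
```

Scores computed this way can then feed descriptive statistics or a regression against app category, as the abstract describes.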
Internet of Things (IoT) systems are designed and developed either as standalone applications from the ground up or with the help of IoT middleware platforms. They are designed to support different kinds of scenarios, such as smart homes and smart cities. Thus far, privacy concerns have not been explicitly considered by IoT applications and middleware platforms. This is partly due to the lack of systematic methods for designing privacy that can guide the software development process in IoT. In this paper, we propose a set of guidelines, a privacy-by-design framework, that can be used to assess privacy capabilities and gaps of existing IoT applications as well as middleware platforms. We have evaluated two open source IoT middleware platforms, namely OpenIoT and Eclipse SmartHome, to demonstrate how our framework can be used in this way.
It is argued that the technique increases the researcher's awareness of ethical concerns and enables her or him to respond more sensitively. The laddered-question technique is evaluated with reference to my own research into the negotiation of student support among nurse distance learners.
It is by no means clear that the diagnosis 'altered body image' is sufficiently clear to enable the nurse to plan and deliver a package of body-image care. This paper proposes a model of body-image care based upon five central concepts and the accepted metaparadigms of nursing. It is suggested that the nurse is uniquely positioned to deliver the best body-image care, and that this model will assist her in assessing body-image needs and then acting on the altered body-image diagnosis more effectively.
This paper presents the findings from a small-scale experiment investigating the presentation of a synchronous remote electronic examination. It discusses the students' experiences of taking such an examination. The study confirms that the majority of participants found the experience at least as good as a conventional written examination. In addition, typing answers does not prevent students from producing answers in the time available. However, the pressure of time continues to be a major cause of anxiety for students. The paper discusses technical issues, particularly those related to the loss of communications during the 3-hour duration of the exam. Although software processes were available to save and restore students' answers throughout the examination, problems still occurred and more robust software is required.
Internet of Things (IoT) applications typically collect and analyse personal data that can be used to derive sensitive information about individuals. However, thus far, privacy concerns have not been explicitly considered in software engineering processes when designing IoT applications. With the advent of behaviour-driven security mechanisms, failing to address privacy concerns in the design of IoT applications can have security implications. In this paper, we explore how a Privacy-by-Design (PbD) framework, formulated as a set of guidelines, can help software engineers integrate data privacy considerations into the design of IoT applications. We assessed the utility of this PbD framework by studying how software engineers use it to design IoT applications. We also explore the challenges in using the set of guidelines to influence the IoT application design process. In addition to highlighting the benefits of having a PbD framework to make privacy features explicit during the design of IoT applications, our studies also surfaced a number of challenges associated with the approach. A key finding of our research is that the PbD framework significantly increases both novice and expert software engineers' ability to design privacy into IoT applications.
As with all the major advances in information and communication technology, ubiquitous computing (ubicomp) introduces new risks to individual privacy. Our analysis of privacy protection in ubicomp has identified four layers through which users must navigate: the regulatory regime they are currently in, the type of ubicomp service required, the type of data being disclosed, and their personal privacy policy. We illustrate and compare the protection afforded by regulation and by some major models for user control of privacy. We identify the shortcomings of each and propose a model which allows user control of privacy levels in a ubicomp environment. Our model balances the user's privacy preferences against the applicable privacy regulations and incorporates five types of user controlled "noise" to protect location privacy by introducing ambiguities. We also incorporate an economics-based approach to assist users in balancing the trade-offs between giving up privacy and receiving ubicomp services. We conclude with a scenario and heuristic evaluation which suggests that regulation can have both positive and negative influences on privacy interfaces in ubicomp and that social translucence is an important heuristic for ubicomp privacy interface functionality.