A number of wearable 'lifelogging' camera devices have been released recently, allowing consumers to capture images and other sensor data continuously from a first-person perspective. Unlike traditional cameras that are used deliberately and sporadically, lifelogging devices are always 'on' and automatically capturing images. Such features may challenge users' (and bystanders') expectations about privacy and control of image gathering and dissemination. While lifelogging cameras are growing in popularity, little is known about privacy perceptions of these devices or what kinds of privacy challenges they are likely to create. To explore how people manage privacy in the context of lifelogging cameras, as well as which kinds of first-person images people consider 'sensitive,' we conducted an in situ user study (N = 36) in which participants wore a lifelogging device for a week, answered questionnaires about the collected images, and participated in an exit interview. Our findings indicate that: 1) some people may prefer to manage privacy through in situ physical control of image collection in order to avoid a burdensome later review of all collected images; 2) a combination of factors, including time, location, and the objects and people appearing in the photo, determines its 'sensitivity;' and 3) people are concerned about the privacy of bystanders, even though bystanders expressed almost no opposition or concern over the course of the study.
Cameras are now commonplace in our social and computing landscapes and embedded in consumer devices like smartphones and tablets. A new generation of wearable devices (such as Google Glass) will soon make 'first-person' cameras nearly ubiquitous, capturing vast amounts of imagery without deliberate human action. 'Lifelogging' devices and applications will record and share images from people's daily lives with their social networks. Because these devices automatically capture images in the background, they raise serious privacy concerns: they are likely to capture deeply private information. Users of these devices need ways to identify and prevent the sharing of sensitive images. As a first step, we introduce PlaceAvoider, a technique that lets owners of first-person cameras 'blacklist' sensitive spaces (like bathrooms and bedrooms). PlaceAvoider recognizes images captured in these spaces and flags them for review before the images are made available to applications. PlaceAvoider performs novel image analysis using both fine-grained image features (like specific objects) and coarse-grained, scene-level features (like colors and textures) to classify where a photo was taken. PlaceAvoider combines these features in a probabilistic framework that jointly labels streams of images in order to improve accuracy. We test the technique on five realistic first-person image datasets and show it is robust to blurriness, motion, and occlusion.
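The abstract only sketches the pipeline, so the following is a minimal illustrative sketch rather than PlaceAvoider's actual implementation: per-image class probabilities from a local-feature matcher and a scene-level classifier are fused, and a Viterbi pass over a simple HMM jointly labels the image stream, assuming consecutive first-person photos usually come from the same place. All names, weights, and probabilities below are invented for illustration.

```python
# Hypothetical sketch of PlaceAvoider-style stream labeling (names and numbers invented).
# Each image gets per-class scores from two classifiers: a local-feature matcher
# (specific objects) and a global scene classifier (colors/textures). The two are
# fused per image, then a Viterbi pass jointly labels the whole stream, exploiting
# the fact that consecutive first-person images tend to share a location.
import numpy as np

PLACES = ["bedroom", "bathroom", "kitchen", "office"]   # enrolled spaces
SENSITIVE = {"bedroom", "bathroom"}                      # owner's blacklist

def fuse(local_probs, scene_probs, w=0.6):
    """Weighted fusion of the two per-image classifiers (weight is an assumption)."""
    p = w * local_probs + (1 - w) * scene_probs
    return p / p.sum()

def viterbi(per_image_probs, stay=0.9):
    """Jointly label a stream: consecutive photos are likely from the same place."""
    n_img, n_cls = per_image_probs.shape
    trans = np.full((n_cls, n_cls), (1 - stay) / (n_cls - 1))
    np.fill_diagonal(trans, stay)
    log_p = np.log(per_image_probs + 1e-12)
    log_t = np.log(trans)
    dp = np.zeros((n_img, n_cls))
    back = np.zeros((n_img, n_cls), dtype=int)
    dp[0] = log_p[0]
    for t in range(1, n_img):
        scores = dp[t - 1][:, None] + log_t          # rows: previous place, cols: current
        back[t] = scores.argmax(axis=0)
        dp[t] = scores.max(axis=0) + log_p[t]
    path = [dp[-1].argmax()]
    for t in range(n_img - 1, 0, -1):
        path.append(back[t, path[-1]])
    return [PLACES[i] for i in reversed(path)]

# Toy stream: random fused probabilities stand in for real classifier outputs.
rng = np.random.default_rng(0)
stream = np.array([fuse(rng.dirichlet(np.ones(4)), rng.dirichlet(np.ones(4)))
                   for _ in range(8)])
labels = viterbi(stream)
flagged = [i for i, lab in enumerate(labels) if lab in SENSITIVE]
print("labels:", labels)
print("flag for review before sharing:", flagged)
```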
Online social network providers have become treasure troves of information for marketers and researchers. To profit from their data while honoring the privacy of their customers, social networking services share 'anonymized' social network datasets in which, for example, the identities of users are removed from the social network graph. However, researchers have shown how such datasets can be de-anonymized using external information such as a reference social graph (from the same network or another network with similar users). These approaches use 'network alignment' techniques to map nodes from the reference graph onto the anonymized graph and are often sensitive to large network sizes, the number of seeds, and noise, which may be added to preserve privacy. We propose a divide-and-conquer approach to strengthen the power of such algorithms. Our approach partitions the networks into 'communities' and performs a two-stage mapping: first at the community level, and then for the entire network. Through extensive simulation on real-world social network datasets, we show how such community-aware network alignment improves de-anonymization performance under high levels of noise, large network sizes, and a low number of seeds. Even when nodes cannot be explicitly mapped, the community structure can be mapped between the two networks, thus reducing the anonymity of users. For example, on our (real-world) Twitter dataset with 90,000 nodes, 20% noise, and 16 seeds, the state-of-the-art technique reduces anonymity by 0 bits, whereas our approach reduces anonymity by 9.71 bits (with 40% of nodes mapped).
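To make the two-stage idea concrete, here is a minimal, hedged sketch in Python using networkx. The community detector (greedy modularity), the seed-overlap community matcher, and the neighbor-counting propagation step are illustrative stand-ins for the paper's actual algorithm, and the toy 'anonymized' graph is simply a relabelled copy of the reference graph.

```python
# Illustrative two-stage, community-aware alignment (not the paper's exact method).
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def match_communities(ref_comms, anon_comms, seeds):
    """Stage 1: pair communities by how many seed mappings they share."""
    pairs = []
    for rc in ref_comms:
        best, best_overlap = None, 0
        for ac in anon_comms:
            overlap = sum(1 for r, a in seeds.items() if r in rc and a in ac)
            if overlap > best_overlap:
                best, best_overlap = ac, overlap
        if best is not None:
            pairs.append((rc, best))
    return pairs

def propagate(G_ref, G_anon, rc, ac, mapping):
    """Stage 2: greedily extend the mapping inside one community pair by
    counting how many already-mapped neighbors two candidate nodes share."""
    used = set(mapping.values())
    changed = True
    while changed:
        changed = False
        for r in rc:
            if r in mapping:
                continue
            ref_mapped = {mapping[n] for n in G_ref.neighbors(r) if n in mapping}
            best, best_score = None, 0
            for a in ac:
                if a in used:
                    continue
                score = len(ref_mapped & set(G_anon.neighbors(a)))
                if score > best_score:
                    best, best_score = a, score
            if best is not None:
                mapping[r] = best
                used.add(best)
                changed = True
    return mapping

# Toy demo: the "anonymized" graph is a relabelled copy of the reference graph,
# and two known users serve as seeds. Communities without a seed stay unmapped.
G_ref = nx.karate_club_graph()
G_anon = nx.relabel_nodes(G_ref, {v: f"anon_{v}" for v in G_ref})
seeds = {0: "anon_0", 33: "anon_33"}

ref_comms = list(greedy_modularity_communities(G_ref))
anon_comms = list(greedy_modularity_communities(G_anon))
mapping = dict(seeds)
for rc, ac in match_communities(ref_comms, anon_comms, seeds):
    propagate(G_ref, G_anon, rc, ac, mapping)
print(f"mapped {len(mapping)} of {G_ref.number_of_nodes()} nodes")
```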