Increases in routine data collection and surveillance in recent years have created an ongoing tension between citizens' privacy concerns, their perceived need for government surveillance, and their acceptance of surveillance policies. We address the lack of Australia-focused research through an online survey of 100 Australian residents. Data was analysed using partial least squares (PLS), revealing that privacy concerns around collection influence acceptance of surveillance but do not influence enactment of privacy protections. Conversely, respondents' concerns about secondary use of data were unrelated to their levels of acceptance, yet were a significant determinant of privacy protections. These findings suggest that respondents conflate surveillance with the collection of data and may not consider subsequent secondary use. This highlights the multi-dimensional nature of privacy, which must be studied at a sufficiently granular level to draw meaningful conclusions. Our research also considers the role of trust in government and the perceived need for surveillance; these findings are discussed together with their implications.
Though there is a tension between citizens' privacy concerns and their acceptance of government surveillance, there is little systematic research in this space, and less still in a cross-cultural context. We address this research gap by modeling the factors that drive public acceptance of government surveillance, and by exploring the influence of national culture. The research involved an online survey of 242 Australian and Sri Lankan residents. Data was analyzed using PLS, revealing that privacy concerns around the initial collection of citizens' data influenced levels of acceptance of surveillance in Australia but not Sri Lanka, whereas concerns about secondary use of data did not influence levels of acceptance in either country. These findings suggest that respondents conflate surveillance with the collection of data and may not consider subsequent secondary use. We also investigate cultural differences, finding that societal collectivism and power distance significantly affect the strength of the relationships between privacy concerns, on the one hand, and acceptance of surveillance and adoption of privacy protections, on the other. Our research also considers the role of trust in government and the perceived need for surveillance. Findings are discussed together with their implications for theory and practice.
This paper considers the so-called 'right to be forgotten', in the context of the 2014 decision of the European Court of Justice (ECJ) in the case of Google Spain SL, Google Inc. v Agencia Española de Protección de Datos (AEPD), Mario Costeja González. It also considers the 'right of erasure' contained in the current EU Data Protection Directive, as well as the proposal for a new right of erasure to be included in the new EU data protection framework. The paper proposes a particular way of understanding the right to be forgotten and suggests a broad definition of it. It examines claims that the ECJ's decision in Google Spain 'invented' a right to be forgotten. It also considers whether individuals have a right to be forgotten under the current EU Directive, and whether they will have such a right when the new data protection regulation becomes law. More generally, the paper considers whether a right to be forgotten has been recognised as an aspect of a broader right to privacy, and whether the Google Spain decision moves us closer to an understanding of privacy as the right to an appropriate flow of information, in line with Nissenbaum's framework of contextual integrity.
The unwanted distribution of images of children and young people is an issue of concern for many, including young people themselves and those who advocate for them. This article draws on research into child development, self-presentation, and the developmental implications of computer-mediated communication to suggest that the distribution of images of children and young people, where unwanted by the image subject, can have implications for development, particularly the development of self-esteem. It is suggested that an awareness of these implications can and should inform the legal response to the problem, and that the discussion of unwanted distribution of images needs to move beyond its traditional realm of personality rights and the discourse on privacy. A take-down scheme relating to images of children is proposed as one response to the problem of unwanted image sharing, particularly in the online realm.
Facebook has recently been subject to scrutiny by privacy regulators in Europe, as well as by the US Federal Trade Commission, in relation to the introduction of its 'tag suggest' feature. This feature uses face recognition technology to create a biometric template of users' faces and was introduced to Facebook users as a default (opt-out) setting. One outcome of the recent scrutiny has been the temporary deactivation of the tag suggest feature; however, there is every indication that Facebook intends to re-introduce it in the not-too-distant future. This article canvasses some of the privacy implications of face recognition technology, particularly as it is used by Facebook and in the private sector generally. The legal implications of Facebook's use of biometric templates, and of the generation and use of biometric information, are considered by reference to the Privacy Act 1988 (Cth), as recently amended by the Privacy Amendment (Enhancing Privacy Protection) Act 2012 (Cth). In particular, the article considers the threshold issue of whether Australia's federal information privacy laws apply to overseas organisations that have no presence in Australia and no servers in the country. Definitional issues around the fundamental terms 'collect' and 'receive', as used in the amended Privacy Act, are also discussed, along with an overview of possible compliance risks for Facebook arising from Australia's information privacy regime. Finally, the article offers some reflections on the efficacy of Australian information privacy laws in regulating the creation and use of biometric face templates and associated information in the social media context.