In the early 2000s, we surveyed and analyzed the global repertoire of policy instruments deployed to protect personal data in "The Governance of Privacy." In this article, we explore how those instruments have changed as a result of 15 years of fundamental transformations in information technologies, and the new digital economy that they have produced. We review the contemporary range of transnational, regulatory, self-regulatory and technical instruments according to the same framework, and conclude that the types of policy instrument have remained remarkably stable, even though they are now deployed on a global scale, rather than in association with particular legal or administrative traditions. While the labels remain the same, however, the conceptual foundations for their legitimation and justification are shifting as a greater emphasis on accountability, risk, ethics and the social/political value of privacy has gained purchase in the policy community. Our exercise in self-reflection demonstrates both continuity and change within the governance of privacy, and displays how we would have tackled the same research project today. As a broader case study of regulation, it also highlights the importance of going beyond the technical and instrumental labels. Neither the change nor the stability of policy instruments takes place in isolation from the wider conceptualizations that shape their meaning, purpose and effect.
Anonymisation of personal data has a long history stemming from the expansion of the types of data products routinely provided by National Statistical Institutes. Variants on anonymisation have received serious criticism reinforced by much-publicised apparent failures. We argue that both the operators of such schemes and their critics have become confused by being overly focused on the properties of the data themselves. We claim that one cannot determine whether data are anonymous (and therefore non-personal) by looking at the data alone: any anonymisation technique worthy of the name must take account of not only the data but also their environment. This paper proposes an alternative formulation called functional anonymisation that focuses on the relationship between the data and the environment within which the data exist (their data environment). We provide a formulation for describing the relationship between the data and their environment that links the legal notion of personal data with the statistical notion of disclosure control. Anonymisation, properly conceived and effectively conducted, can be a critical part of the toolkit of the privacy-respecting data controller and the wider remit of providing accurate and usable data.
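To make the intuition behind functional anonymisation concrete, the following minimal sketch (our illustration, not the authors' formal model) treats disclosure risk as a property of the data plus their environment: only attributes an adversary can also observe in auxiliary sources act as quasi-identifiers. k-anonymity is used here purely as a stand-in risk measure, and all function names and the toy data are hypothetical.

```python
# Illustrative sketch: disclosure risk as a function of data *and* environment.
# k-anonymity is only a stand-in measure here; the paper's formal notion of
# functional anonymisation is richer. Names and data are hypothetical.
from collections import Counter

Record = dict  # one row of a released dataset

def equivalence_class_sizes(records: list[Record], quasi_identifiers: list[str]) -> Counter:
    """Count how many records share each combination of quasi-identifier values."""
    return Counter(tuple(r[q] for q in quasi_identifiers) for r in records)

def min_k(records: list[Record], environment_attributes: set[str]) -> int:
    """k-anonymity of a release *relative to an environment*: only attributes
    an adversary can also observe elsewhere function as quasi-identifiers."""
    quasi = [a for a in records[0] if a in environment_attributes]
    if not quasi:
        return len(records)  # nothing is linkable, so no record is singled out
    return min(equivalence_class_sizes(records, quasi).values())

release = [
    {"age": 34, "postcode": "N1", "diagnosis": "flu"},
    {"age": 34, "postcode": "N1", "diagnosis": "asthma"},
    {"age": 34, "postcode": "E2", "diagnosis": "flu"},
]
# Environment 1: only ages are available to link against -> k = 3 (low risk).
print(min_k(release, {"age"}))
# Environment 2: postcodes also circulate -> k = 1 (someone is re-identifiable).
print(min_k(release, {"age", "postcode"}))
```

The same release scores k = 3 in an environment where only ages can be linked, but k = 1 once postcodes circulate as well, illustrating the paper's claim that anonymity cannot be judged from the data alone.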
In recent years, there has been growing concern in the UK that local services aimed at risky or vulnerable people are ineffective, because of persistent failure to share information about their clients. Despite considerable national policy effort to encourage better information-sharing, previous research indicates that there are many cases where information is still not shared when it should be, or where it is shared when it should not be, with potentially devastating results. This article uses data from the largest empirical study of local information-sharing yet undertaken to examine four policy sectors where multi-agency working has come to the fore. It shows that variations in their information-sharing and confidentiality practices can be explained by neo-Durkheimian institutional theory and uses insights from this theory to argue that current policy tools, emphasising formal regulation, are unlikely to lead to consistent and acceptable outcomes, not least because of unresolved conflicts in values and aims.
It is commonly accepted that the use of personal information in business and government puts individual privacy at risk. However, little is known about these risks: for instance, whether and how they can be measured, and how they vary across social groups and the sectors in which personal data are used. Unless we can gain a purchase on such issues, our knowledge of the societal effects of information technology and systems will remain deficient, and the ability to make and implement better policies for privacy protection, and perhaps for a more equitable distribution of risk and protection, will remain impaired. The article explores this topic, examining conventional paradigms in data protection, including the one-dimensional view of the 'data subject', that inhibit better conceptualizations and practices. It looks at some comparative survey evidence that casts light on the question of the distribution of privacy risks and concerns. It examines theoretical issues in the literature on risk, raising questions about the objectivity and perception of the risk of privacy invasion.
The protection of privacy is predicated on the individual's right to privacy and stipulates a number of principles that are primarily focused on information privacy or data protection and, as such, are insufficient to apply to other types of privacy and to the protection of entities beyond the individual. This article identifies additional privacy principles that would apply to other types of privacy and would enhance the consideration of the risks or harms to the individual, to groups and to society as a whole that arise when those principles are violated. They also relate to the way privacy impact assessment (PIA) may be conducted. There are important reasons for generating consideration of and debate about these principles. First, they help to recalibrate the European focus on data protection, which has come at the relative neglect of other types of privacy. Second, such debate is of critical importance at a time when PIA (renamed 'data protection impact assessment', or DPIA) may become mandatory under the European Commission's proposed Data Protection Regulation. Such assessment is an important instrument for identifying and mitigating privacy risks, but it should address all types of privacy. Third, one can construct an indicative table identifying harms or risks to these additional privacy principles, which can serve as an important tool or instrument for a broader PIA that addresses other types of privacy.
This article addresses the anticipated use and users of smart energy technologies and the contribution of these technologies to energy sustainability. It focuses on smart grids and smart energy meters. Qualitative accounts given by European technology developers and experts reveal how they understand the final use and social impacts of these technologies. The article analyzes these accounts and compares the UK’s smart meter rollout with experiences from other European countries, especially Finland, to provide insights into the later adoption stages of smart energy and how its impacts have evolved. The analysis highlights significant differences in the likely intensity and manner of user engagement with smart grids and meters: depending first on whether we are considering existing technologies or smart technologies that are expected to mature sometime in the next decade, and second on whether the ‘user’ is the user of smart meters or the user of an entire layer of new energy services and applications. By deploying the strategic approach developed in the article, smart-grid developers and experts can give more explicit attention to recognizing the descriptions of ‘users’ in smart-grid projects and to the feasibility of these expectations of ‘use’ in comparison to the possibilities and limits of energy services and applications in different country contexts. The examination of user representations can also point out the need for further technology and service development if some of the envisioned user profiles and user actions appear unrealistic for presently available technologies.
The tension between the goals of integrated, seamless public services, requiring more extensive data sharing, and of privacy protection now represents a major challenge for UK policy-makers, regulators and service managers. In Part I of this article (see Public Administration volume 83, number 1, pp. 111-33), we showed that attempts to manage this tension are being made at two levels. First, a settlement is being attempted at the level of general data protection law and the rules that govern data-sharing practices across the public sector. We refer to this as the horizontal dimension of the governance of data sharing and privacy. Secondly, settlements are also being attempted within particular fields of public policy and service delivery; this we refer to as the vertical dimension. In this second part, we enquire whether risks to privacy are greater in some policy sectors than others. We do this, first, by showing how the Labour Government's policy agenda is producing stronger imperatives towards data sharing than was the case under previous administrations in three fields of public policy and services, and by examining the safeguards introduced in these fields. We then compare the settlements emerging from differing practices within each of these policy sectors, before briefly assessing which, if any, principles of data protection seem to be most at risk and in which policy contexts. Four strategies for the governance of data sharing and privacy are recapitulated: seeking to make the two commitments consistent or even mutually reinforcing; mitigating the tensions with safeguards such as detailed guidelines; allowing privacy to take precedence over integration; and allowing data sharing to take precedence over privacy. We argue that the UK government has increasingly sought to pursue the second strategy and that the vertical dimension is, in practice, much more important in defining the settlement between data sharing and privacy than is the horizontal dimension. This strategy is, however, potentially unstable and may not be sustainable. The conclusion proposes a radical recasting of the way in which the idea of a 'balance' between privacy and data-sharing imperatives is conceived.