Although "privacy by design" (PBD), the practice of embedding privacy protections into products during design rather than retroactively, uses the term "design" to recognize how technical design choices implement and settle policy, design approaches and methodologies are largely absent from PBD conversations. Critical, speculative, and value-centered design approaches can be used to elicit reflection on relevant social values early in product development; they are a natural fit for PBD and necessary to achieve its goals. Bringing these together, we present a case study using a design workbook of speculative design fictions as a values elicitation tool. Originally used as a reflective tool within a research group, the workbook was transformed into artifacts shared as values elicitation tools in interviews with graduate students training to become technology professionals. We discuss how these design artifacts surface contextual, socially oriented understandings of privacy, and their potential utility in relation to other values levers.
In this chapter we analyze the rhetorical work of speculative design methods in advancing third-wave agendas in HCI. We contrast the history of speculative design most often cited in HCI papers from the mid-2000s onward, which frames speculative design as a critical methodological intervention linked to radical art practice and critical theory, with the history of how speculative design was introduced to HCI publications: through corporate design research initiatives from the RED group at Xerox PARC. Our argument is that third-wave, critically oriented speculative design "works" in HCI because it is highly compatible with other forms of conventional corporate speculation (e.g., concept videos and scenario planning). This reading re-centers "criticality" from the method itself to its ability to advance agendas that challenge dominant practices in technology design. We examine how practitioners trade on the rhetorical ambiguity of future-oriented design practices to introduce these ideas in contexts where they might not otherwise gain much purchase. The chapter concludes with a call for critically oriented practitioners in this space to share their experiences navigating speculative design's ambiguity and to document the disciplinary history of the method's development.
Design futuring approaches, such as speculative design, design fiction, and others, seek to (re)envision futures and explore alternatives. As design futuring becomes established in HCI design research, there is an opportunity to expand and develop these approaches. To that end, by reflecting on our own research and examining related work, we contribute five modes of reflection. These modes concern formgiving, temporality, researcher positionality, real-world engagement, and knowledge production. We illustrate the value of each mode through careful analysis of selected design exemplars and provide questions to interrogate the practice of design futuring. Each reflective mode offers productive resources for design practitioners and researchers to articulate their work, generate new directions for it, and analyze their own and others' work.
Importance: In patients with severe aortic valve stenosis at intermediate surgical risk, transcatheter aortic valve replacement (TAVR) with a self-expanding supra-annular valve was noninferior to surgery for all-cause mortality or disabling stroke at 2 years. Comparisons of longer-term clinical and hemodynamic outcomes in these patients are limited.
Objective: To report prespecified secondary 5-year outcomes from the Symptomatic Aortic Stenosis in Intermediate Risk Subjects Who Need Aortic Valve Replacement (SURTAVI) randomized clinical trial.
Design, Setting, and Participants: SURTAVI is a prospective, randomized, unblinded clinical trial. Randomization was stratified by investigational site and by the need for revascularization as determined by the local heart teams. Patients with severe aortic valve stenosis deemed to be at intermediate risk of 30-day surgical mortality were enrolled at 87 centers in Europe and North America from June 19, 2012, to June 30, 2016. Analysis took place between August and October 2021.
Intervention: Patients were randomized to TAVR with a self-expanding, supra-annular transcatheter bioprosthesis or to a surgical bioprosthesis.
Main Outcomes and Measures: The prespecified secondary end points were death or disabling stroke, other adverse events, and hemodynamic findings at 5 years. An independent clinical event committee adjudicated all serious adverse events, and an independent echocardiographic core laboratory evaluated all echocardiograms at 5 years.
Results: A total of 1660 individuals underwent an attempted TAVR (n = 864) or surgical (n = 796) procedure. The mean (SD) age was 79.8 (6.2) years, 724 (43.6%) were female, and the mean (SD) Society of Thoracic Surgeons Predicted Risk of Mortality score was 4.5% (1.6%). At 5 years, the rates of death or disabling stroke were similar (TAVR, 31.3% vs surgery, 30.8%; hazard ratio, 1.02 [95% CI, 0.85-1.22]; P = .85).
Transprosthetic gradients remained lower (mean [SD], 8.6 [5.5] mm Hg vs 11.2 [6.0] mm Hg; P < .001) and aortic valve areas were higher (mean [SD], 2.2 [0.7] cm2 vs 1.8 [0.6] cm2; P < .001) with TAVR vs surgery. More patients had moderate/severe paravalvular leak with TAVR than with surgery (11 [3.0%] vs 2 [0.7%]; risk difference, 2.37% [95% CI, 0.17%-4.85%]; P = .05). New pacemaker implantation rates were higher for TAVR than surgery at 5 years (289 [39.1%] vs 94 [15.1%]; hazard ratio, 3.30 [95% CI, 2.61-4.17]; log-rank P < .001), as were valve reintervention rates (27 [3.5%] vs 11 [1.9%]; hazard ratio, 2.21 [95% CI, 1.10-4.45]; log-rank P = .02), although between 2 and 5 years only 6 patients who underwent TAVR and 7 who underwent surgery required a reintervention.
Conclusions and Relevance: Among intermediate-risk patients with symptomatic severe aortic stenosis, major clinical outcomes at 5 years were similar for TAVR and surgery. TAVR was associated with superior hemodynamic valve performance but also with more paravalvular leak and valve reinterventions.
The explosion in the use of software in important sociotechnical systems has renewed focus on the study of the way technical constructs reflect policies, norms, and human values. This effort requires the engagement of scholars and practitioners from many disciplines. Yet these disciplines often conceptualize the operative values very differently while referring to them with the same vocabulary. The resulting conflation of ideas confuses discussions about values in technology at disciplinary boundaries. To improve this situation, this paper examines the value of shared vocabularies, analytics, and other tools that facilitate conversations about values in light of these discipline-specific conceptualizations and the role such tools play in furthering research and practice; outlines different conceptions of "fairness" deployed in discussions about computer systems; and provides an analytic tool for interdisciplinary discussions and collaborations around the concept of fairness. We use a case study of risk assessments in criminal justice applications both to motivate our effort, describing how conflation of different concepts under the banner of "fairness" led to unproductive confusion, and to illustrate the value of the fairness analytic by demonstrating how the rigorous analysis it enables can help identify key areas of theoretical, political, and practical misunderstanding or disagreement and, where desired, support alignment or collaboration in the absence of consensus. Because discussion of the terms we consider here is at the early stages of formation, now is the time to attend to the infrastructure necessary to support its development.
BACKGROUND: Particularly with the rise of machine learning as a core technical tool for building computer systems, and the concomitant sense that these systems were not adequately reflecting the values and goals of their designers and creators, the question of how to build values into software systems has gained significant traction in recent years. One focal point for the community has been the rise of the FAT scholarly meetings: FAT/ML, the Workshop on Fairness, Accountability, and Transparency in Machine Learning, held annually since 2014 at a major machine learning conference, and FAT*, the (now ACM) Conference on Fairness, Accountability, and Transparency, which aims to build community beyond research with a machine learning focus. In a similar vein, prior research within Computer Supported Cooperative Work (CSCW) and related fields, such as Human-Computer Interaction (HCI) and Science & Technology Studies, has paid attention to the ways in which technical practices and computational artifacts of all kinds embed or promote a range of social values (e.g., [78, 110, 137, 138, 151]). Research programs developed to focus on values and technology, such as Value Sensitive Design and "values in design," include forms of both analysis to identify and critique values associated with systems and methods for incorporating values into the processes of engineering and design [48, 4...
In calls for privacy by design (PBD), regulators and privacy scholars have investigated the richness of the concept of "privacy." In contrast, "design" in HCI comprises rich and complex concepts and practices but has received much less attention in the PBD context. Through a literature review of HCI publications discussing privacy and design, this paper articulates a set of dimensions along which design relates to privacy, including the purpose of design, which actors do design work in these settings, and the envisioned beneficiaries of design work. We suggest new roles for HCI and design in PBD research and practice: utilizing values- and critically-oriented design approaches to foreground social values and help define privacy problem spaces. We argue that such approaches, in addition to current "design to solve privacy problems" efforts, are essential to the full realization of PBD, while noting the politics involved in choosing design to address privacy. CCS CONCEPTS: • Security and privacy → Human and societal aspects of security and privacy; • Social and professional topics → Computing / technology policy; • Human-centered computing → HCI design and evaluation methods.