Many people express concern that Facebook’s users are overly connected. This article examines responses to survey questions asked after a large random sample of American Facebook users had been paid to deactivate Facebook. We find a recurring discourse of mind, including, for example, references to mindfulness. Using iterative qualitative coding, we ask what meanings and practices are invoked in this discourse. Furthermore, we critically assess the potential of what respondents describe to address the problems of overconnection. We find explicit awareness of the automaticity of use, the value and content of Facebook, and how it makes users feel. We find that users came to practice disconnection at many nested levels of vernacular affordances. Ultimately, we argue that Facebook has become a landscape trap, altering daily life such that individual practices, such as mindful scrolling, cannot overcome the overconnection problems it may create. Mindfulness in this discourse may be power, but it is power to avoid elements of Facebook, not power to transform it.
Machines, from artificially intelligent digital assistants to embodied robots, are becoming more pervasive in everyday life. Drawing on feminist science and technology studies (STS) perspectives, we demonstrate how machine designers are not just crafting neutral objects, but relationships between machines and humans that are entangled in human social issues such as gender and power dynamics. Thus, in order to create a more ethical and just future, the dominant assumptions currently underpinning the design of these human-machine relations must be challenged and reoriented toward relations of justice and inclusivity. This paper contributes the "social machine" as a model for technology designers who seek to recognize the importance, diversity, and complexity of the social in their work, and to engage with the agential power of machines. In our model, the social machine is imagined as a potentially equitable relationship partner that has agency and as an "other" that is distinct from, yet related to, humans, objects, and animals. We critically examine and contrast our model with tendencies in robotics that consider robots as tools, human companions, animals or creatures, and/or slaves. In doing so, we demonstrate ingrained dominant assumptions about human-machine relations and reveal the challenges of radical thinking in the social machine design space. Finally, we present two design challenges based on non-anthropomorphic figuration and mutuality, and call for experimentation, unlearning dominant tendencies, and reimagining of sociotechnical futures.
Often hated during its lifespan as a product (1996-2006), Clippy, Microsoft’s Office Assistant, became a pop-culture icon in its afterlife. Delving into the plethora of memes featuring Clippy, we ask: why should a questionable character from a software program that has been out of use for well over a decade have so vibrant an afterlife? If Clippy has become a rhetorical resource, what is it being used to do? We propose that Clippy’s dual status as the original natural-language digital assistant, one that fell critically short in its ability to actually assist, makes it an ideal vehicle for critique of today’s ubiquitous assistants. An analysis of 1,148 meme instances collected from five sites led to a twofold argument: First, Clippy humor relies on the contrast between types of intelligence; Clippy is often too good at one kind, while lacking in another. In particular, Clippy lacks interpersonal intelligence: it serves as a disruptive mediator between its user and the world, as well as other human beings. Yet this failure in “knowing its limits” and adapting to its environment is also what gives Clippy character. This suggests that digital assistants must attend to multiple kinds of intelligences; attending to any one over others may create an endearing character but not an effective digital assistant. Furthermore, the fact that the unbending yet personality-filled character of Clippy remains ungendered or male gives us insight into the pliant and empty characters of the female-gendered Alexa, Siri, and Cortana.
Communication technologies, from social media to video conferencing, are used by billions of people globally and contribute to shaping relationships between people. As these technologies become increasingly ubiquitous, the tech workers building them are increasingly making product decisions that can have far-reaching interpersonal ramifications. At the same time, few workplace tools and little support exist to help tech workers understand and navigate these potential ramifications, and tech worker perspectives on such tools are not fully understood. In this work, we explore the needs, challenges, and opportunities encountered by tech workers in thinking through the interpersonal implications of their products. To do this, we ran a semi-structured interview study with 10 diverse tech workers. To ground the discussion, study participants interacted with a design probe prototype, InterAct, which provides research-grounded information about the interpersonal implications of product features. Our findings suggest that tech workers want to consider the social implications of the technologies they build, and that structured tooling could help provide the required knowledge and build organizational support. Based on these findings, we provide design considerations for creating future workplace tools that support thinking about the social implications of technologies.