The coda retraces the genealogy of the algorithm to consider our future prospects for achieving the twinned desires embedded in the heart of effective computability: the quest for universal knowledge and perfect self-knowledge. Central to this is the question of algorithmic imagination, particularly given the startling advances in the field of machine learning. The metaphors we use to access and influence the complexity and processes of computational systems will ultimately determine our prospects for true collaboration with intelligent machines. These questions are particularly vital for the humanities, and the chapter argues for a new mode of scholarly and public engagement with computation: the experimental humanities. This is how we can begin to understand the figure of the algorithm as a new territory for cultural imagination and become true collaborators with culture machines rather than their worshippers or, worse, their pets.
This paper addresses the gap between familiar popular narratives about Artificial Intelligence (AI), such as the trope of the killer robot, and the realistic near-future implications of machine intelligence and automation for technology policy and society. The authors conducted a series of interviews with technologists, science fiction writers, and other experts, as well as a workshop, to identify a set of key themes relevant to the near future of AI. In parallel, they analyzed almost 100 recent science fiction stories with AI themes to develop a preliminary taxonomy of AI in science fiction. These activities informed the commissioning of six original works of science fiction and non-fiction response essays on the themes of “intelligence” and “justice,” published as part of the Slate Future Tense Fiction series in 2019 and 2020. The findings indicate that artificial intelligence remains deeply ambiguous in both policy and cultural contexts: we struggle to define the boundaries and agency of machine intelligence, and consequently find it difficult to govern or interact with such systems. However, the findings also suggest more productive avenues of inquiry and framing that could foster both better policy and better narratives around AI.
As one of the best-known science narratives about the consequences of creating life, Mary Shelley's Frankenstein; or, The Modern Prometheus (1818) is an enduring tale that people know and understand with an almost instinctive familiarity. It has become a myth reflecting people's ambivalent feelings about emerging science: they are curious about science, but they are also afraid of what it can do to them. In this essay, we argue that the Frankenstein myth has evolved into a stigma attached to scientists, one that focalizes both the public's and the scientific community's negative reactions towards certain sciences and scientific practices. This stigma produces ambivalent reactions towards scientific artifacts and carries negative connotations because it implies that some sciences are dangerous and harmful. We argue that understanding the Frankenstein stigma can empower scientists by helping them revisit their own biases and respond effectively to people's expectations for, and attitudes towards, scientists and scientific artifacts. Debunking the Frankenstein stigma could also allow scientists to reshape their professional identities so that they can better show the public what ethical and moral values guide their research enterprises.