This paper explores the history of ELIZA, a computer programme approximating a Rogerian therapist, developed by Joseph Weizenbaum at MIT in the 1960s as an early AI experiment. ELIZA's reception provoked Weizenbaum to re-appraise the relationship between 'computer power and human reason' and to attack the 'powerful delusional thinking' about computers and their intelligence that he understood to be widespread among the general public and also amongst experts. The root issue for Weizenbaum was whether human thought could be 'entirely computable' (reducible to logical formalism). This also provoked him to re-consider the nature of machine intelligence and to question the instantiation of its logics in the social world, which would come to operate, he said, as a 'slow acting poison'. Exploring Weizenbaum's twentieth-century apostasy, in the light of ELIZA, illustrates ways in which contemporary anxieties and debates over machine smartness connect to earlier formations. In particular, this article argues that it is in its designation as a computational therapist that ELIZA is most significant today. ELIZA points towards a form of human–machine relationship now pervasive, a precursor of the 'machinic therapeutic' condition we find ourselves in, and thus speaks very directly to currently arising questions concerning modulation, autonomy, and the new behaviourism.
Big Data promises informational abundance – something that might be useful to cultures and communities in times of austerity. However, many local organizations lack the skills needed to develop expertise in new forms of computation, or the desire to develop them; Big Data is often viewed as the terrain of Big Business and Big Government. Drawing on issues arising from action research into Big Data and community in Brighton, England, this article explores questions of technological expertise in relation to Big Data, everyday life and critical practice – the latter understood as something that may be undertaken not only as a theoretical but also as an operational endeavour. The outcome of the article is thus not a prescription for training but a series of questions concerning desirable forms of co-constitution: how should expertise be shared between humans and machines?
This article discusses the cyclical nature of automation anxiety and examines ways of thinking about the recurrence of automation debates in culture, particularly with reference to the 1950s, 1960s and today. It draws on the concept of topos, developed by Erkki Huhtamo, to explore the return of automation anxieties (and fevers) and the relationship between material formations and technological imaginaries. We focus in particular on recent left thinking in which automation is used to invoke a postcapitalist utopia. Examples include Nick Srnicek and Alex Williams's Inventing the Future: Postcapitalism and a World Without Work (2015) and Aaron Bastani's Fully Automated Luxury Communism: A Manifesto (2018). This strand of contemporary thinking is re-framed through our return to early automation scares emerging in the late 1960s. We explore engagements between labour, civil rights, left public intellectuals, and emerging industrial figures over questions of automation and work. We pay particular attention to the question of 'who benefits and when?', which is germane to the question of utopian futures, or non-reformist reformism, as it recurs today. What interests us here is the concept of revived salience: not only how the tropes evident in these debates are revived and re-embedded today, but also how they find their force, and what they imply.