We live in an emerging hyper-connected era in which people are in contact and interact with an increasing number of other people and devices. Increasingly, modern IT systems form networks of humans and machines that interact with one another. As machines take a more active role in such networks, they exert a growing level of influence on other participants. We review the existing literature on agency and propose a definition of agency that is practical for describing the capabilities and impact that human and machine actors may have in a human-machine network. On this basis, we discuss and demonstrate the impact and trust implications of machine actors in human-machine networks for emergency decision support, healthcare, and future smart homes. We maintain that machine agency facilitates not only human-to-machine trust but also interpersonal trust, and that trust must be developed if the full potential of future technology is to be seized.
In the current hyper-connected era, modern Information and Communication Technology (ICT) systems form sophisticated networks in which not only do people interact with other people, but machines also take an increasingly visible and participatory role. Such Human-Machine Networks (HMNs) are embedded in people's daily lives, both for personal and professional use, and can have a significant impact by producing synergy and innovation. The challenge in designing successful HMNs is that they cannot be developed and implemented in the same manner as networks of machine nodes alone, nor by following a wholly human-centric view of the network; the problem requires an interdisciplinary approach. Here, we review current research of relevance to HMNs across many disciplines. Extending the previous theoretical concepts of sociotechnical systems, actor-network theory, cyber-physical-social systems, and social machines, we concentrate on the interactions among humans and between humans and machines. We identify eight types of HMNs: public-resource computing, crowdsourcing, web search engines, crowdsensing, online markets, social media, multiplayer online games and virtual worlds, and mass collaboration. We systematically select literature on each of these types and review it with a focus on implications for designing HMNs. Moreover, we discuss the risks associated with HMNs and identify emerging design and development trends.
This article presents a part of the ongoing Economic and Social Research Council (ESRC)-funded project “FloraGuard: Tackling the illegal trade in endangered plants” that relies on cross-disciplinary approaches to analyze online marketplaces for the illegal trade in endangered plants, and explores strategies to develop digital resources to assist law enforcement in countering and disrupting this criminal market. This contribution focuses on how the project brought together computer science, criminology, conservation science, and law enforcement expertise to create a tool for the automatic gathering of relevant online information to be used for research, intelligence, and investigative purposes. The article also discusses the ethical standards applied and proposes the concept of “artificial intelligence (AI) review” to provide a sociotechnical solution that builds trustworthiness in the AI approaches used for this type of cross-disciplinary information and communications technology (ICT)-enabled methodology.
Abstract. In this paper we outline an initial typology and framework for the purpose of profiling human-machine networks, that is, collective structures in which humans and machines interact to produce synergistic effects. Profiling a human-machine network along the dimensions of the typology is intended to facilitate access to relevant design knowledge and experience. In this way, the profiling of an envisioned or existing human-machine network will both facilitate relevant design discussions and, more importantly, serve to identify the network type. We present experiences and results from two case trials: a crisis management system and a peer-to-peer reselling network. Based on the lessons learnt from the case trials, we outline potential benefits and challenges, and point out needed future work.
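The idea of profiling a network along typology dimensions, and of using profile similarity to locate transferable design experience, can be sketched as a small data structure. The dimension names, the 1–4 scoring scale, and the scores assigned to the two case trials below are illustrative assumptions, not the typology actually proposed in the paper.

```python
from dataclasses import dataclass

# Hypothetical profiling dimensions; the paper's actual typology may differ.
DIMENSIONS = ("human_agency", "machine_agency", "tie_strength", "network_size")

@dataclass(frozen=True)
class HMNProfile:
    """Profile of a human-machine network; each dimension scored 1 (low) to 4 (high)."""
    human_agency: int
    machine_agency: int
    tie_strength: int
    network_size: int

    def distance(self, other: "HMNProfile") -> int:
        """Sum of per-dimension differences: a small distance suggests similar
        networks whose design knowledge and experience may transfer."""
        return sum(abs(getattr(self, d) - getattr(other, d)) for d in DIMENSIONS)

# Invented scores for the two case trials mentioned above, for illustration only.
crisis_mgmt = HMNProfile(human_agency=3, machine_agency=2, tie_strength=2, network_size=3)
p2p_resale = HMNProfile(human_agency=3, machine_agency=1, tie_strength=1, network_size=4)

similarity_gap = crisis_mgmt.distance(p2p_resale)  # small gap -> comparable designs
```

A real profiling instrument would of course need validated dimensions and scoring guidance; the sketch only shows how a profile makes network types comparable.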
Abstract. Human-machine networks affect many aspects of our lives: from sharing experiences with family and friends, knowledge creation and distance learning, and managing utility bills or providing feedback on retail items, to more specialised networks providing decision support to human operators and the delivery of health care via a network of clinicians, family, friends, and both physical and virtual social robots. Such networks rely on increasingly sophisticated machine algorithms, e.g., to recommend friends or purchases, to track our online activities in order to optimise the services available, and to assess risk to help maintain or even enhance people's health. Through these networks, machines offer users ever-increasing power and reach, supporting them in achieving goals such as maintaining contact, making better decisions, and monitoring their health. As such, this comes down to a synergy between human and machine agency in which each depends in complex ways on the other. With that agency, questions arise about trust, risk, and regulation, as well as social influence and the potential for computer-mediated self-efficacy. In this paper, we explore these constructs and their relationships, and present a model, based on a review of the literature, which seeks to identify the various dependencies between them.

Introduction

A definition of agency based on the notion of non-deterministic behaviours [1] fails to recognise the increasing variety and complexity of human-machine networks (HMNs) [2] (in the following we use "human-machine network" and "network" interchangeably), the intention of technology designers [3], and active intervention by bots within social networks [4,5]. The concept of agency is particularly problematic in human-machine interactions [6]. Machine or material agency may be seen as automation, which originally required some tolerance from human agents [7]. But this is no longer true: technology can actively support human activity [8] and manifests increasingly complex interaction types [9]. Machine and human agency may not be the same and yet be equally valid [10]; machine agency may be just "perceived autonomy" [11]; and it certainly enables human agency [12]. Indeed, agency may well be becoming a social and group construct in which both humans and machines play a part [13,14]; and, used effectively, agency may even lead to innovative review of working practice [15]. The enabling contribution of machine agents within a network may have an effect on self-efficacy. Bandura's original definition of self-efficacy as an individual's belief in their ability to achieve a given objective [16][17][18] has also been applied to technology [19,20] and its acceptance [21]. There are, however, constraints on the support and positive contribution of technology to human self-efficacy, not least in terms of anxiety and suspicion around technology use [22,23]. This may be further exacerbated by increasing machine animism: it may not always be obvious what machines ar...
Efficient human-machine networks require productive interaction between human and machine actors. In this study, we address how a strengthening of machine agency, for example through increasing levels of automation, affects the human actors of the networks. Findings from case studies within air traffic management, crisis management, and crowd evacuation are presented, exemplifying how automation may strengthen the agency of human actors in the network through responsibility sharing and task allocation, and serve as a necessary prerequisite for innovation and change.
With ever-increasing technology complexity, there is a need to consider how technology integrates within typical and specific environments. Empirical work with technology acceptance models has to date focused largely on the perceived or expected ease of use of the technology, along with its perceived or expected usefulness. These constructs have been examined extensively via quantitative methods; other factors have received less attention. There is some evidence, for instance, that technology adoption may depend on how technology contributes to self-efficacy and agency. As such aspects are perhaps less accessible to standard quantitative instruments, it is time to consider a mixed-methods approach to examining them. For this exploratory study, we have begun to evaluate a security modeller tool within a healthcare setting. We asked IT professionals working in hospital environments in Italy and Spain to work with the technology as part of a limited ethnographic study, and to complete a standard ease-of-use questionnaire. Comparing the results, we found the quantitative measures to be poor predictors of a willingness to explore the affordances presented by the technology. Although limited at this time, we maintain that a more nuanced picture of technology adoption must allow potential adopters to respond creatively to how they believe the technology could be exploited in their environment.
On 16 July 2020, the Court of Justice of the European Union issued their decision in the Schrems II case concerning Facebook’s transfers of personal data from the EU to the US. The decision may have significant effects on the legitimate transfer of personal data for health research purposes from the EU. This article aims: (i) to outline the consequences of the Schrems II decision for the sharing of personal data for health research between the EU and third countries, particularly in the context of the COVID-19 pandemic; and, (ii) to consider certain options available to address the consequences of the decision and to facilitate international data exchange for health research moving forward.