In this paper, we foreground some of the key research challenges that arise in the design of trustworthy human-AI partnerships. In particular, we focus on the challenges that need to be addressed to help humans and organizations trust their machine counterparts, whether individually or as a collective (e.g., as robot teams or groups of software agents). We also aim to identify the risks associated with human-AI partnerships and the measures needed to mitigate those risks. In so doing, we hope to trigger new avenues of research that address the key barriers to the wider adoption of AI-based systems in our daily lives and in industry.
INTRODUCTION

Recent advances in Artificial Intelligence (AI), Machine Learning (ML), and Robotics have significantly enhanced the capabilities of machines. Machine intelligence is now able to support human decision making, augment human capabilities, and, in some cases, take over control from humans and act fully autonomously. Real-world autonomous systems such as self-driving vehicles, recommender systems, facial recognition systems, and automated trading are only just beginning to demonstrate the value machine intelligence can deliver to society and the wider economy. Unfortunately, however, there have also been failures in the deployment of such autonomous systems that have resulted in fatal car crashes, plane accidents, and stock market failures (Brynjolfsson and McAfee, 2014; Daugherty and Wilson, 2018). Such failures are often attributed to a poor understanding of how to weave AI systems into our societal and industrial fabric.

Given this, we believe that the next big advance for AI and ML systems will involve them being far more tightly embedded into systems alongside humans, interacting with and influencing each other in a number of ways. Such human-AI partnerships are a new form of socio-technical system in which the potential synergies between humans and machines are much more fully exploited. To achieve this, AI systems will need to leave their currently solipsistic nature behind and be able to cooperate, coordinate, and compete with one another and with their human interlocutors. Such partnerships will combine complementary skills and capabilities to make the best use of the distinctive strengths of humans and machines (Licklider, 1960), while also acknowledging their potentially diverging preferences, purposes, and objectives, which may give rise to conflict or cause them to attempt to influence (intentionally or not) each other's decision-making.
Likewise, humans will likely be challenged to work and live with AI systems as fully autonomous partners, rather than merely as tools that they can manipulate or query. The modalities through which they engage with AI systems will also vary greatly, shifting away from typical screen-based or tactile interfaces toward voice- or brain-controlled ones, opening new opportunities and risks for interactions between humans and AI systems. These elements of the human-AI partnersh...