This paper explores the tradeoffs between different types of mixed reality robotic communication under different levels of user workload. We present the results of a within-subjects experiment in which we systematically and jointly vary robot communication style alongside the level and type of cognitive load, and measure the subsequent impacts on accuracy, reaction time, and perceived workload and effectiveness. Our preliminary results suggest that although humans may not notice differences, the type of load a user is under and the communication style used by the robot they interact with do in fact interact to determine their task effectiveness.
End-to-end machine learning (ML) in Internet of Things (IoT) Cloud systems consists of multiple processes, covering data, model, and service engineering, and involves multiple stakeholders. Therefore, to be able to explain ML to relevant stakeholders, it is important to identify explainability requirements in a holistic manner. In this paper, we present our methodology for addressing explainability requirements for end-to-end ML when developing ML services to be deployed within IoT Cloud systems. We identify and classify explainability requirements through (i) involvement of relevant stakeholders, (ii) end-to-end data, model, and service engineering processes, and (iii) multiple explainability aspects. We present our work with a case study of predictive maintenance for Base Transceiver Stations (BTS) in the telco domain.
Virtual, Augmented, and Mixed Reality for Human-Robot Interaction (VAM-HRI) has been gaining considerable attention in HRI research in recent years. However, the HRI community lacks a set of shared terminology and a framework for characterizing aspects of mixed reality interfaces, presenting serious problems for future research. Therefore, it is important to have a common set of terms and concepts that can be used to precisely describe and organize the diverse array of work being done within the field. In this paper, we present a novel taxonomic framework for different types of VAM-HRI interfaces, composed of four main categories of virtual design elements (VDEs). We present and justify our taxonomy and explain how its elements have developed over the last 30 years, as well as the directions VAM-HRI is headed in the coming decade.
CCS Concepts: • Human-centered computing → Virtual reality; Mixed / augmented reality; User interface design; • Computer systems organization → External interfaces for robotics.