Abstract: Human multi-robot interaction exploits both the human operator's high-level decision-making skills and the robotic agents' powerful computing and motion abilities. While controlling multi-robot teams, an operator's attention must constantly shift between individual robots to maintain sufficient situation awareness. To conserve an operator's attentional resources, a robot able to self-reflect on its abnormal status can help the operator focus her attention on emergent tasks rather than unneeded routine c…
“…While our model of confidence could drive overt feedback to the operator or be applied only to internal processes of the robot, the implementation presented here is directed at minimally intrusive adjustment of physical behavior to mitigate the challenges of human interaction with multiple mobile robots. The online application distinguishes this work from others which estimated operator attention offline [52] or used human eye gaze for training [128].…”
Section: Discussion
“…General approaches to addressing operator overload due to multitasking include redesigning tasks and interfaces to reduce demands, training operators to develop automaticity and improve attention management, and automating tasks and task management [51]. Research toward interaction with multiple semiautonomous robots includes task switching and operator attention allocation [52][53][54], such as identifying where an operator should focus and influencing the operator's behavior accordingly via visual cues in a graphical user interface [55]. Other work includes determining which aspects of a given task are most suitable for automation [16], measuring and influencing operator trust in team autonomy [19], using intelligent agents to help human operators manage a team of multiple robots [13], and augmented reality interfaces that integrate information from multiple sources and project it into a view of the real world using a common frame of reference [35,56,57].…”
Section: Human Interaction With Multiple Robots
“…Their main recommendation is that a robot learns specific policies based on examples. Chien, Lin, and Lee [52] proposed a hidden Markov model (HMM) to examine operator intent, and performed offline HMM analysis of multirobot interaction queuing mechanisms. Several groups [59][60][61] (including our own) have used eye-gaze tracking to determine the user intent for zooming the camera.…”
Section: Understanding the User's Intent
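As a concrete illustration of the kind of offline HMM analysis of operator intent mentioned above, the forward algorithm computes the likelihood of an observation sequence (e.g. discretized operator actions) under a fitted model. This is only a generic sketch of the standard algorithm, not the cited authors' implementation; the two-state model and all probabilities below are invented for illustration.

```python
def hmm_forward(pi, A, B, obs):
    """Forward algorithm: P(obs | model) for a discrete HMM.
    pi: initial state probabilities, A: state transition matrix,
    B: emission matrix, obs: list of observation symbol indices."""
    n = len(pi)
    # initialize with the first observation
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    # propagate forward through the rest of the sequence
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return sum(alpha)

# toy 2-state model (hidden states might stand for "engaged" vs.
# "distracted"; all values here are illustrative only)
pi = [0.6, 0.4]
A  = [[0.7, 0.3], [0.4, 0.6]]
B  = [[0.9, 0.1], [0.2, 0.8]]
likelihood = hmm_forward(pi, A, B, [0, 1, 0])
```

In an offline analysis, sequences of logged operator commands would be scored this way against competing intent models, attributing each sequence to the most likely intent.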
There is considerable interest in multirobot systems capable of performing spatially distributed, hazardous, and complex tasks as a team leveraging the unique abilities of humans and automated machines working alongside each other. The limitations of human perception and cognition affect operators’ ability to integrate information from multiple mobile robots, switch between their spatial frames of reference, and divide attention among many sensory inputs and command outputs. Automation is necessary to help the operator manage increasing demands as the number of robots (and humans) scales up. However, more automation does not necessarily equate to better performance. A generalized robot confidence model was developed, which transforms key operator attention indicators to a robot confidence value for each robot to enable the robots’ adaptive behaviors. This model was implemented in a multirobot test platform with the operator commanding robot trajectories using a computer mouse and an eye tracker providing gaze data used to estimate dynamic operator attention. The human-attention-based robot confidence model dynamically adapted the behavior of individual robots in response to operator attention. The model was successfully evaluated to reveal evidence linking average robot confidence to multirobot search task performance and efficiency. The contributions of this work provide essential steps toward effective human operation of multiple unmanned vehicles to perform spatially distributed and hazardous tasks in complex environments for space exploration, defense, homeland security, search and rescue, and other real-world applications.
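The abstract above describes transforming operator attention indicators into a per-robot confidence value without giving the formula. A minimal sketch of the general idea, under the assumption that confidence grows while a robot receives operator attention (e.g. gaze dwell) and decays while it is neglected, might look like the following; the class name and the gain/decay constants are illustrative assumptions, not the authors' model.

```python
# Illustrative sketch (not the published model): each robot keeps a
# confidence value in [0, 1] that rises while it receives operator
# attention and decays toward zero otherwise.
class RobotConfidence:
    def __init__(self, gain=0.5, decay=0.1):
        self.conf = 0.0      # current confidence in [0, 1]
        self.gain = gain     # growth rate while attended
        self.decay = decay   # decay rate while neglected

    def update(self, attended: bool, dt: float) -> float:
        if attended:
            # move toward 1 at a rate proportional to the remaining gap
            self.conf += self.gain * (1.0 - self.conf) * dt
        else:
            # exponential decay toward 0 when unattended
            self.conf -= self.decay * self.conf * dt
        self.conf = min(1.0, max(0.0, self.conf))
        return self.conf
```

A robot could then adapt its behavior from this value, for example slowing down or tightening its trajectory as confidence falls, which matches the paper's theme of minimally intrusive behavioral adaptation.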
“…Pilots often must deal with time-critical situations, so it is important that they can distribute their attention effectively between the raw data and its relevant modes, as failure to manage a high-priority task in a timely manner could lead to potentially disastrous consequences (Bybee et al., 2011). Therefore, it is important to apply cognitive assistance to support pilots' attentional resources on the flight deck (Chien et al., 2018).…”
There have been many aviation accidents and incidents related to mode confusion on the flight deck. The aim of this research is to evaluate human-computer interaction with a newly designed augmented visualization Primary Flight Display (PFD) compared with the traditional PFD design. Based on statistical analysis of 20 participants' interactions with the system, there are significant differences in pilots' pupil dilation, fixation duration, fixation counts, and mental demand between the traditional PFD design and the augmented PFD. The results demonstrated that the augmented visualization PFD, which uses a green border around the "raw data" of airspeed, altitude, or heading indications to highlight activated mode changes, can significantly enhance pilots' situation awareness and decrease perceived workload. Pilots can identify the status of flight modes more easily, rapidly, and accurately than with the traditional PFD, thus shortening the response time for cognitive information processing. This could also be why fixation durations on the augmented PFD were significantly shorter than on the traditional PFD. The augmented visualization in the flight deck improves pilots' situation awareness, as indicated by increased fixation counts related to attention distribution. Simply highlighting the parameters on the PFD with a green border in association with relevant flight mode changes greatly reduces pilots' perceived workload and increases situation awareness. Flight deck design must focus on methods that provide pilots with enhanced situation awareness, decreasing cognitive processing requirements by providing intuitive understanding in time-limited situations.
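The comparison above rests on within-subject statistics over eye-tracking metrics. As a sketch of how such a paired comparison works, the paired t statistic for matched samples can be computed directly; the per-pilot fixation durations below are hypothetical numbers for illustration, not data from the study.

```python
import math

def paired_t(x, y):
    """Paired t statistic for two matched samples (e.g. each pilot's
    mean fixation duration on the traditional vs. augmented PFD)."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# hypothetical per-pilot mean fixation durations (ms): longer on the
# traditional PFD, shorter on the augmented one, as the study reports
traditional = [420, 455, 390, 470, 430]
augmented   = [350, 380, 340, 400, 365]
t = paired_t(traditional, augmented)  # large positive t suggests
                                      # shorter fixations on augmented
```

Pairing each pilot with themselves removes between-pilot variability, which is why this design can detect effects with only 20 participants.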
“…Multi-agent systems are used for implementing HEMSs, illustrating the communication of agents among devices for energy sources in sequence [43][44][45][46][47][48][49][50]. Presently, no interacting low-cost, highly autonomous robots are found in greenhouses. In particular, sensors are placed and data manipulation is done manually or with the help of web interfaces.…”
Nowadays, smart farming integrates advanced technologies, incorporating low-cost robots to gather the required knowledge and maintain plant health. Technologies like precision agriculture are also used to optimize resources based on field conditions. The Internet of Green Things is another such technology for integrating and sharing information between people and farm things; it provides information such as soil moisture, temperature, humidity, and nutrient level by means of the respective sensors. Monitoring and information gathering in greenhouses by robots is a tedious and expensive process. Accordingly, information is shared among low-cost robots so that the current state of a plant is available to the other robots, enabling well-organized greenhouse monitoring. In this article, a Flask-based framework running on a Raspberry Pi is proposed for interoperability among low-cost ESP8266 robots. Data gathering is performed by smart robots whose readings are accessible through Message Queuing Telemetry Transport (MQTT) subscriptions exposed via a Representational State Transfer (REST) Application Programming Interface. A cloud-like database server is provided to store the data. Integrating robotics with the Internet of Green Things is particularly advantageous for gathering spatial information connected with irrigation. Visualization techniques and Internet of Green Things perspectives for precision agriculture in the field of farming are highlighted.
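The abstract names Flask, MQTT, and a REST API but gives no code. Framework aside, the core of such a data-gathering pipeline is validating a JSON sensor reading published by a robot and storing it keyed by robot id. The sketch below is framework-agnostic and illustrative only; the field names, robot id format, and in-memory store are assumptions, and in the described system a Flask route or an MQTT on_message callback would delegate to a function like this with the database server in place of the dict.

```python
import json
import time

# in-memory stand-in for the article's "cloud-like database server"
store = {}

def ingest_reading(payload: str) -> dict:
    """Validate one JSON sensor reading from a low-cost robot
    (e.g. received over MQTT) and store it by robot id."""
    reading = json.loads(payload)
    # the sensor fields follow the abstract; names are assumptions
    for field in ("robot_id", "soil_moisture", "temperature", "humidity"):
        if field not in reading:
            raise ValueError(f"missing field: {field}")
    reading["received_at"] = time.time()
    store.setdefault(reading["robot_id"], []).append(reading)
    return reading

# example message as an ESP8266 robot might publish it
msg = ('{"robot_id": "esp8266-03", "soil_moisture": 41.2, '
       '"temperature": 24.5, "humidity": 63.0}')
ingest_reading(msg)
```

Keeping validation and storage in one plain function, separate from the transport layer, is what lets the same logic serve both the MQTT subscriber and the REST endpoint.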
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.