Typically, to a roboticist, a plan is the outcome of other work: a synthesized object that realizes ends defined by some problem; plans qua plans are seldom treated as first-class objects of study. Plans designate functionality: a plan can be viewed as defining a robot's behavior throughout its execution. This informs and reveals many other aspects of the robot's design, including necessary sensors, action choices, history, state, task structure, and how to define progress. Interrogating sets of plans helps reveal how differing executions influence the interrelationships between these aspects. Revisiting Erdmann's theory of action-based sensors, a classical approach for characterizing fundamental information requirements, we show how plans, in their role of designating behavior, influence sensing requirements. Using an algorithm for enumerating plans, we examine how some plans for which no action-based sensor exists can be transformed into sets of sensors by identifying and handling the features that preclude the existence of action-based sensors. To our knowledge, these obstructing features have not been previously identified. Action-based sensors may be treated as standalone reactive plans; we relate them to the set of all possible plans through a lattice structure. This lattice reveals a boundary between plans with action-based sensors and those without. Some plans, specifically those that are not reactive plans and require some notion of internal state, can never have associated action-based sensors. Even so, action-based sensors can serve as a framework to explore and interpret how such plans make use of state.
Robotic assistive devices are popular in research and medical fields for their potential to automate tasks and to improve the quality of life of disabled users. They may be used for physical therapy, as exoskeletons, as teleoperation devices, or to assist users with tasks in their homes. Common methods for controlling these devices rely on coarse, difficult-to-sustain gestures or on peripherals such as mouse pointers and joysticks that were not designed for the tasks required of them. In addition, these devices are often not adaptive to the user and can only be minimally customized. This article proposes a fusion of infrared camera data for stress detection with Kinect body tracking to develop a customizable control method for a robotic limb. Devices such as the Microsoft Kinect have seen use in physical therapy applications but only limited use in teleoperation. In addition, studies have shown potential in using infrared imaging to detect human stress. The objectives of this study are to design and build an interactive interface and adaptive control system and to evaluate its performance. Infrared stress detection was tested using a Compix 222 camera and neural networks to categorize emotional states. Kinect v2 accuracy and reliability were tested by comparing joint positions to detected angles and perceived output angles. Our work suggests that infrared imaging and the Kinect v2 show potential for a real-time adaptive system in which a control program can adapt its output when it detects stress in its user.
In studying robots and planning problems, a basic question is: what is the minimal information a robot must obtain to guarantee task completion? Erdmann's theory of action-based sensors is a classical approach to characterizing fundamental information requirements. That approach uses a plan to derive a type of virtual sensor which prescribes actions that make progress toward a goal. We show that the established theory is incomplete: the previous method for obtaining such sensors, using backchained plans, overlooks some sensors. Furthermore, there are plans that are guaranteed to achieve goals but for which the existing methods are unable to provide any action-based sensor. We identify the underlying feature common to all such plans. Then, we show how to produce action-based sensors even for plans where the existing treatment is inadequate, although these cases have no single canonical sensor. Consequently, the approach is generalized to produce sets of sensors. Finally, we also show that this is a complete characterization of action-based sensors for planning problems and discuss how an action-based sensor translates into the traditional conception of a sensor.
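The idea of deriving a virtual sensor from a plan can be illustrated with a toy sketch (this is an illustration in the spirit of the abstract, not the paper's algorithm; the grid world, state space, and function names are invented). An action-based sensor, intuitively, reports only which action makes progress, partitioning the state space by prescribed action and discarding all other state information.

```python
# Hypothetical 1-D world: states 0..4, goal at state 2.
STATES = [0, 1, 2, 3, 4]
GOAL = 2

def plan(state):
    """A plan prescribing one action per state, moving toward GOAL."""
    if state < GOAL:
        return "right"
    if state > GOAL:
        return "left"
    return "stop"

def action_based_sensor(state):
    """Derived virtual sensor: reports only the prescribed action,
    i.e., which progress-making class the state belongs to."""
    return plan(state)

# The sensor induces a partition of the state space by action:
partition = {}
for s in STATES:
    partition.setdefault(action_based_sensor(s), []).append(s)

print(partition)  # {'right': [0, 1], 'stop': [2], 'left': [3, 4]}
```

A robot equipped only with this sensor can reach the goal without knowing its exact state, which is the sense in which the sensor captures the minimal information the plan requires.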