Robots are becoming interactive and robust enough to be adopted outside laboratories, both in industrial scenarios and in social activities involving humans. However, the design of engaging robot-based applications requires usable, flexible, and accessible development frameworks that can be adopted and mastered by researchers and practitioners in the social sciences and by adult end users more broadly. This paper surveys Visual Programming Environments aimed at enabling a paradigm that fosters the so-called End-User Development of applications involving robots with social capabilities. The focus of this article is on those Visual Programming Environments designed to support social research goals and to cater to the professional needs of people not trained in more traditional text-based programming languages. This survey excludes interfaces aimed at supporting expert programmers, at allowing industrial robots to perform typical industrial tasks (such as pick-and-place operations), or at teaching children how to code. Following a systematic search, sixteen programming environments were included in this survey. Our goal is two-fold: first, to present these software tools, their technical features, and their Artificial Intelligence authoring and modeling approaches; and second, to present open challenges in the development of Visual Programming Environments for end users and social researchers, which can be informative and valuable to the community. The results show that the most recent such tools adopt distributed, Component-Based Software Engineering approaches and web technologies. However, few of them have been designed to make end users independent of high-tech scribes. Moreover, the findings indicate the need for (i) more objective and comparative evaluations, as well as usability and user experience studies with real end users; and (ii) validations of these tools for designing applications aimed at working "in the wild" rather than only in laboratories and structured settings.
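To make the End-User Development paradigm concrete, here is a minimal, purely illustrative sketch of a pattern common to many such tools: the user's block diagram is serialized into a simple declarative program that a runtime maps onto robot primitives. The block vocabulary ("say", "gesture", "wait_for_face") and the ConsoleRobot stub are invented for illustration and do not come from any specific tool in the survey.

```python
# Illustrative sketch: a visual program serialized as a list of blocks,
# interpreted by a tiny runtime. All names here are hypothetical.

visual_program = [
    {"block": "wait_for_face", "timeout_s": 30},
    {"block": "say", "text": "Hello! Welcome to the lab."},
    {"block": "gesture", "name": "wave"},
]

class ConsoleRobot:
    """Stand-in robot so the sketch runs without hardware."""
    def say(self, text): print(f"[say] {text}")
    def gesture(self, name): print(f"[gesture] {name}")
    def wait_for_face(self, timeout): print(f"[wait_for_face] up to {timeout}s")

def run(program, robot):
    """Tiny interpreter: each block name dispatches to a robot primitive."""
    for block in program:
        kind = block["block"]
        if kind == "say":
            robot.say(block["text"])
        elif kind == "gesture":
            robot.gesture(block["name"])
        elif kind == "wait_for_face":
            robot.wait_for_face(timeout=block.get("timeout_s", 10))
        else:
            raise ValueError(f"Unknown block: {kind}")

run(visual_program, ConsoleRobot())
```

The point of the sketch is the separation of concerns: the end user edits only the declarative block list, while the mapping to robot capabilities lives in the runtime maintained by developers.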
We address the problem of executing tool-using manipulation skills in scenarios where the objects to be used may vary. We assume that point clouds of the tool and target object can be obtained, but no interpretation or further knowledge about these objects is provided. The system must interpret the point clouds and decide how to use the tool on the target object, adjusting its motion trajectories appropriately to complete the task. We tackle three everyday manipulations: scraping material from a tool into a container, cutting, and scooping from a container. Our solution encodes these manipulation skills in a generic way, with parameters that can be filled in at runtime via queries to a robot perception module; the perception module abstracts the functional parts of the tool and extracts key parameters needed for the task. The approach is evaluated in simulation and with selected examples on a PR2 robot.
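A minimal sketch of this idea, under stated assumptions: a skill is a parameterized trajectory template, and a perception module fills in its free parameters from point clouds at runtime. The names (ToolParams, perceive_tool, scrape_skill) and the crude geometric "perception" are hypothetical stand-ins, not the authors' actual system.

```python
# Sketch: generic skill + runtime perception queries. All APIs are invented.
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float, float]

@dataclass
class ToolParams:
    """Functional abstraction of a tool extracted from its point cloud."""
    tip: Point    # working edge used for scraping/cutting/scooping
    grasp: Point  # where the gripper holds the tool

def perceive_tool(cloud: List[Point]) -> ToolParams:
    """Stand-in for the perception module: here we just take geometric
    extremes of the cloud as a crude functional abstraction."""
    tip = min(cloud, key=lambda p: p[2])    # lowest point as the tip
    grasp = max(cloud, key=lambda p: p[2])  # highest point as the handle
    return ToolParams(tip=tip, grasp=grasp)

def scrape_skill(tool: ToolParams, rim: Point) -> List[Point]:
    """Generic scraping skill: a template trajectory whose waypoints are
    computed from runtime tool/target parameters rather than hard-coded."""
    # Gripper waypoints account for this tool's tip-to-grasp offset.
    ox, oy, oz = (g - t for g, t in zip(tool.grasp, tool.tip))
    x, y, z = rim
    return [
        (x + ox, y + oy, z + oz + 0.10),         # hover above the rim
        (x + ox, y + oy, z + oz + 0.01),         # lower tool tip to the rim
        (x + ox + 0.05, y + oy, z + oz + 0.01),  # drag along the rim to scrape
        (x + ox + 0.05, y + oy, z + oz + 0.10),  # retreat
    ]

# Usage: parameters come from perception, not from the skill definition.
cloud = [(0.0, 0.0, 0.0), (0.0, 0.0, 0.2), (0.05, 0.0, 0.1)]
trajectory = scrape_skill(perceive_tool(cloud), rim=(0.3, 0.0, 0.15))
```

Because the skill only consumes abstract parameters (tip, grasp, rim), swapping in a different spatula or container changes the waypoints without changing the skill definition.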
This paper proposes an end-to-end deep-learning based method for text-independent writer identification. In this approach, convolutional neural networks (CNNs) are first trained to extract local features that capture the characteristics of individual handwriting in whole character images and their sub-regions. We randomly sample tuples of images from the training set to train the CNNs and aggregate the local features extracted from each tuple to form global features. By varying the images used to form the tuples, we generate the large number of training patterns required both by text-independent writer identification and by the CNN training process. Experiments on the JEITA-HP database of offline handwritten Japanese character patterns show that this approach overcomes the difficulty of gathering handwritten character patterns in the same categories as the writer's specimens. With 200 characters available, the method achieves 99.97% accuracy in classifying 100 writers. Even with only 50 characters, it achieves 92.8% accuracy, demonstrating that accuracy is largely retained as the number of writers grows or the number of training characters shrinks. We also conducted experiments on the Firemaker and IAM databases of offline handwritten English text. Using one page per writer for training, the method exceeds 91.5% accuracy in classifying 900 writers. This result, along with results under other conditions, surpasses the previously published best result based on handcrafted features and clustering algorithms, confirming the method's effectiveness for handwritten English text as well.
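The tuple-sampling and feature-aggregation step can be sketched as follows. This is a minimal illustration under assumptions: extract_local() is a placeholder for the trained CNN, mean pooling is assumed as the aggregation, and nearest-centroid matching stands in for the actual classifier.

```python
# Sketch of tuple sampling + local-to-global feature aggregation.
# extract_local(), mean pooling, and nearest-centroid matching are
# illustrative assumptions, not the paper's exact pipeline.
import numpy as np

rng = np.random.default_rng(0)

def extract_local(img: np.ndarray) -> np.ndarray:
    """Placeholder for the CNN: maps one character image to a local feature.
    In the paper this is a trained convolutional network."""
    return np.array([img.mean(), img.std(), np.abs(np.diff(img, axis=0)).mean()])

def sample_tuple(images: list, k: int = 5) -> list:
    """Randomly sample k character images of one writer. Re-sampling tuples
    multiplies the number of training patterns, which is the key trick
    enabling text-independent training."""
    idx = rng.choice(len(images), size=k, replace=False)
    return [images[i] for i in idx]

def global_feature(tup: list) -> np.ndarray:
    """Aggregate the local features of a tuple into one global descriptor
    (mean pooling here, as an assumption)."""
    return np.mean([extract_local(im) for im in tup], axis=0)

# Toy usage: two synthetic "writers", identified by nearest centroid.
writers = {w: [rng.normal(loc=w, size=(32, 32)) for _ in range(50)] for w in (0, 1)}
centroids = {w: global_feature(sample_tuple(imgs, k=20)) for w, imgs in writers.items()}
probe = global_feature(sample_tuple(writers[1], k=20))
pred = min(centroids, key=lambda w: np.linalg.norm(centroids[w] - probe))
```

The accuracy figures quoted above make sense in this light: more characters per tuple yield a more stable global feature, which is why accuracy degrades gracefully from 200 to 50 characters.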
Telling lies and faking emotions are quite common in human-human interactions: though there are risks, in many situations such behaviours provide social benefits. In recent years, many social robots and chatbots have been built that fake emotions or behave deceptively with their users. In this paper, I present a few examples of such robots and chatbots and analyze their ethical aspects. Three scenarios are presented in which some kind of lying or deceptive behaviour might be justified. Then five approaches to deceptive behaviour (no deception, blatant deception, tactful deception, nudging, and self-deception) are discussed and their implications analyzed. I conclude by arguing that we need to develop localized and culture-specific solutions for incorporating deception in social robots and chatbots.