This paper proposes the design and application of an immersive virtual reality system to train and improve the emotional skills of students with autism spectrum disorder. The system is designed for primary school students between the ages of 7 and 12, all of whom have a confirmed diagnosis of autism spectrum disorder. The immersive environment allows the student to rehearse different social situations in a structured, visual and continuous manner. A computer vision system is proposed to automatically determine the child's emotional state. This system was created with two goals in mind: first, to update the social situations taking the student's emotional state into account, and second, to confirm automatically whether the child's behavior is appropriate for the represented social situation. The results described in this paper show a significant improvement in the children's emotional competences compared with the results obtained so far with earlier virtual reality systems.
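The abstract does not detail the recognition pipeline. A minimal sketch of what such a loop could look like, assuming OpenCV for face detection and a hypothetical classify_emotion model (the paper's actual classifier and scenario hook are not specified), is:

```python
import cv2

def classify_emotion(face_img):
    """Hypothetical placeholder: any facial-expression model returning
    a label such as 'calm' or 'anxious' could be plugged in here."""
    ...

def update_social_scenario(emotion):
    """Hypothetical hook into the VR scenario logic."""
    print("estimated emotional state:", emotion)

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)  # camera observing the student

for _ in range(300):  # process a bounded number of frames
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        emotion = classify_emotion(gray[y:y + h, x:x + w])
        update_social_scenario(emotion)
```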
In this paper, a new approach for fusing visual and force information is presented. First, a new method for tracking trajectories, called the movement flow-based visual servoing system, which exhibits correct behavior both in the image and in three-dimensional space, is described. The information obtained from this system is fused with that obtained from a force control system in unstructured environments. To do so, a new method for recognizing the contact surface and a system for fusing visual and force information are described. The latter method employs variable weights for each sensor system, depending on a criterion based on the detection of changes in the interaction forces processed by a Kalman filter.
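Although the abstract does not give the weighting law, a minimal sketch of the idea, assuming a scalar Kalman filter on the measured force whose innovation drives the sensor weights (the gain k and the noise covariances are assumed values, not the authors' exact formulation), could be:

```python
import numpy as np

q, r = 1e-4, 1e-2   # assumed process / measurement noise covariances
x, p = 0.0, 1.0     # filtered contact-force estimate and its variance

def fuse(force_meas, v_visual, v_force, k=5.0):
    """Blend visual and force control commands; a large Kalman innovation
    signals a change in the interaction forces, shifting weight to force."""
    global x, p
    p_pred = p + q                    # predict (constant-force model)
    innov = force_meas - x            # innovation: measurement surprise
    gain = p_pred / (p_pred + r)
    x += gain * innov                 # update the state estimate
    p = (1.0 - gain) * p_pred
    w_force = 1.0 - np.exp(-k * abs(innov))   # variable sensor weight
    return (1.0 - w_force) * v_visual + w_force * v_force
```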
Tactile sensors play an important role in robotic manipulation for performing dexterous and complex tasks. This paper presents a novel control framework for dexterous manipulation with multi-fingered robotic hands using feedback from tactile and visual sensors. The framework permits the definition of new visual controllers that track the object's motion along a path while taking into account both the dynamic model of the robot hand and the grasping force at the fingertips, under a hybrid control scheme. In addition, the proposed general method employs optimal control to obtain the desired behaviour in the joint space of the fingers, based on a specified cost function that determines how the control effort is distributed over the joints of the robotic hand. Finally, the authors present experimental verification, on a real robotic manipulation system, of several of the controllers derived from the control framework.
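The exact cost function is not reproduced in the abstract; a common choice consistent with this description is a weighted quadratic effort term, whose closed-form weighted least-norm solution can be sketched as follows (the Jacobian and weight matrix below are illustrative, not the paper's values):

```python
import numpy as np

def distribute_effort(J, f_des, R):
    """Joint torques realizing the fingertip force f_des while minimizing
    the weighted effort u^T R u (standard weighted least-norm solution)."""
    R_inv = np.linalg.inv(R)
    return R_inv @ J.T @ np.linalg.solve(J @ R_inv @ J.T, f_des)

# Illustrative 3-joint finger exerting a planar fingertip force
J = np.array([[0.10, 0.07, 0.03],
              [0.00, 0.05, 0.04]])   # assumed fingertip Jacobian
R = np.diag([1.0, 2.0, 4.0])         # heavier penalty on distal joints
tau = distribute_effort(J, np.array([1.0, 0.5]), R)
```

Increasing an entry of R pushes effort away from the corresponding joint, which is how such a cost function shapes the distribution over the hand.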
The current trend in the evolution of sensor systems seeks to provide greater accuracy and resolution while decreasing size and power consumption. Field Programmable Gate Arrays (FPGAs) provide reprogrammable hardware that can be exploited to obtain a reconfigurable sensor system. This adaptation capability enables the implementation of complex applications using partial reconfiguration at very low power consumption. For highly demanding tasks, FPGAs have been favored due to the high efficiency afforded by their architectural flexibility (parallelism, on-chip memory, etc.), their reconfigurability, and their performance in the implementation of algorithms. FPGAs have improved the performance of sensor systems and have triggered a clear increase in their use in new fields of application. A new generation of smarter, reconfigurable and lower-power sensors based on FPGAs is being developed in Spain. In this paper, a review of these developments is presented, describing the FPGA technologies employed by the different research groups and providing an overview of future research within this field.
Free and open hardware platforms have become very important in engineering education in recent years. Among these platforms, Arduino stands out, characterized by its versatility, popularity and low price. This paper describes the implementation of four laboratory experiments for Automatic Control and Robotics courses at the University of Alicante, developed using Arduino and other existing equipment. The results were evaluated taking into account the views of the students, concluding that the proposed experiments were attractive to them and that they acquired the intended knowledge of hardware configuration and programming.
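As an illustration of the kind of host-side tooling such experiments typically involve, the following sketch reads sensor samples from an Arduino over USB serial with pyserial; the port name, baud rate and line protocol are assumptions, not the course's actual setup:

```python
import serial  # pyserial

# Assumed protocol: the Arduino streams one numeric sensor reading per
# line and accepts a PWM set-point written as a plain integer.
with serial.Serial("/dev/ttyACM0", 9600, timeout=1) as port:
    port.write(b"128\n")                     # command ~50% PWM duty cycle
    for _ in range(100):
        line = port.readline().decode().strip()
        if line:
            print("sensor reading:", float(line))
```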
Flexible multisensor systems are very important in today's industry when disassembly and recycling tasks have to be performed. These tasks can be carried out by a human operator or by a robotic system. In this paper, a robotic system that performs the required tasks is presented. The system considers the distribution of the tasks necessary to disassemble a component among several robots working in parallel or cooperatively. The proposed task-distribution algorithm takes into account the characteristics of each task and the sequence that must be followed to disassemble the product. Furthermore, this paper presents a disassembly system based on a sensorized cooperative-robot interaction framework for planning movements and detecting objects in the disassembly tasks. To determine the disassembly sequence of some products, a new strategy for distributing a set of tasks among robots is presented. Subsequently, the visual detection system used for detecting targets and features is described. To carry out this detection, different well-known strategies are applied, such as template matching, polygonal approximation and edge detection. Finally, a visual-force control system has been implemented to track disassembly trajectories. An important aspect of this system is the processing of the sensory information to guarantee coherence, which allows the visual and force sensors to be applied to disassembly tasks in a coordinated way. The proposed system is validated by experiments on several types of components, such as battery covers and electronic circuits from toys, and drives and screws from PCs.
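Of the detection strategies mentioned, template matching is the most self-contained to sketch; a minimal version with OpenCV (the file names and the 0.8 threshold are placeholders, not the paper's parameters) is:

```python
import cv2
import numpy as np

# Locate instances of a known component (e.g. a screw head) in the scene.
scene = cv2.imread("pc_cover.png", cv2.IMREAD_GRAYSCALE)
templ = cv2.imread("screw_template.png", cv2.IMREAD_GRAYSCALE)

res = cv2.matchTemplate(scene, templ, cv2.TM_CCOEFF_NORMED)
h, w = templ.shape
ys, xs = np.where(res >= 0.8)        # keep strong correlations only
detections = [(x, y, w, h) for x, y in zip(xs, ys)]
```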
In this article, a personal computer disassembly cell is presented. With this cell, a certain degree of automation is achieved for the non-destructive disassembly process and for the recycling of these kinds of mass-produced electronic products. Each component of the product can be separated. The disassembly cell is composed of several sub-systems, each of which is dedicated to the planning and execution of one type of task. A computer vision system is employed for the recognition and localisation of the product and of each of its components. The proposed disassembly system also includes a modelling system for the product and each of its components, which provides the information necessary for planning tasks, generating the disassembly sequence, and planning the disassembly movements. These sub-systems co-operate with each other to achieve a semi-automatic disassembly of the product.
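One way to picture the sequence-generation step is as a precedence graph over components, where any topological order is a valid disassembly sequence; the component names below are illustrative examples, not the paper's actual product model:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Each component lists the parts that must be removed before it.
precedence = {
    "cover":       [],
    "drive_bay":   ["cover"],
    "hard_drive":  ["drive_bay"],
    "motherboard": ["cover", "drive_bay"],
}
sequence = list(TopologicalSorter(precedence).static_order())
print(sequence)   # e.g. ['cover', 'drive_bay', 'hard_drive', 'motherboard']
```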