In recent years, deep learning algorithms have become increasingly prominent for their unparalleled ability to automatically learn discriminant features from large amounts of data. However, within the field of electromyography-based gesture recognition, deep learning algorithms are seldom employed, as they require an unreasonable amount of effort from a single person to generate tens of thousands of examples. This work's hypothesis is that general, informative features can be learned from the large amounts of data generated by aggregating the signals of multiple users, thus reducing the recording burden while enhancing gesture recognition. Consequently, this paper proposes applying transfer learning on aggregated data from multiple users, while leveraging the capacity of deep learning algorithms to learn discriminant features from large datasets. Two datasets comprising 19 and 17 able-bodied participants, respectively (the first one is employed for pre-training), were recorded for this work using the Myo Armband. A third Myo Armband dataset was taken from the NinaPro database and comprises 10 able-bodied participants. Three different deep learning networks employing three different modalities as input (raw EMG, spectrograms, and Continuous Wavelet Transform (CWT)) are tested on the second and third datasets. The proposed transfer learning scheme is shown to systematically and significantly enhance the performance of all three networks on the two datasets, achieving an offline accuracy of 98.31% for 7 gestures over 17 participants for the CWT-based ConvNet and 68.98% for 18 gestures over 10 participants for the raw EMG-based ConvNet. Finally, a use-case study employing eight able-bodied participants suggests that real-time feedback allows users to adapt their muscle activation strategy, which reduces the degradation in accuracy normally experienced over time.
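The time-frequency inputs mentioned above (spectrograms, CWT) turn each EMG channel into a 2-D map that a ConvNet can consume like an image. As an illustrative sketch only (synthetic noise in place of real EMG; window sizes and the 200 Hz rate, which matches the Myo's EMG sampling frequency, are assumptions, not the paper's exact preprocessing), a per-channel spectrogram can be computed like this:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 200  # assumed sampling rate (the Myo Armband streams EMG at 200 Hz)
rng = np.random.default_rng(0)
emg = rng.standard_normal((8, fs))  # 8 channels, 1 s of synthetic "EMG"

# One spectrogram per channel, computed along the time axis; the resulting
# stack of 2-D time-frequency maps can be fed to a ConvNet like a
# multi-channel image. Window parameters here are illustrative.
f, t, Sxx = spectrogram(emg, fs=fs, nperseg=32, noverlap=16)
# Sxx has shape (channels, frequency bins, time frames)
```

A CWT-based pipeline would replace the short-time Fourier windows with wavelet scales but yield the same kind of per-channel 2-D input.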
Abstract. Novel computing systems are increasingly being composed of large numbers of heterogeneous components, each with potentially different goals or local perspectives, and connected in networks which change over time. Management of such systems quickly becomes infeasible for humans. As such, future computing systems should be able to achieve advanced levels of autonomous behaviour. In this context, the system's ability to be self-aware and to self-express becomes important. This paper surveys definitions and current understanding of self-awareness and self-expression in biology and cognitive science. Subsequently, previous efforts to apply these concepts to computing systems are described. This has enabled the development of novel working definitions for self-awareness and self-expression within the context of computing systems.
The use of robots in health care has increased dramatically over the last decade. One area of research has been to use robots to conduct ultrasound examinations, either controlled by a physician or autonomously. This paper examines the possibility of using the commercial robot UR5 from Universal Robots to build a tele-operated robotic ultrasound system. Physicians diagnosing patients using ultrasound probes are prone to repetitive strain injuries, as they are required to hold the probe in uncomfortable positions and exert significant static force. The main application for the system is to relieve the physician of this strain by letting them control a robot that holds the probe. A set of requirements for the system is derived from the state-of-the-art systems found in the research literature. The system is developed through a low-level interface for the robot, effectively building a new software framework for controlling it. Compliance force control and forward flow haptic control of the robot were implemented. Experiments are conducted to quantify the performance of the two control schemes. The force control is estimated to have a bandwidth of 16.6 Hz, while the haptic control is estimated to have a bandwidth of 65.4 Hz for the position control of the slave and 13.4 Hz for the force control of the master. Overall, the system meets the derived requirements, and the main conclusion is that it is feasible to use the UR5 robot for robotic ultrasound applications.
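The compliance force control described above regulates contact force rather than position alone. A minimal sketch of one cycle of a generic admittance-style compliance law, assuming illustrative gain and names (this is a textbook formulation, not the paper's implementation):

```python
def admittance_step(pos, f_meas, f_ref, dt, compliance=0.002):
    """One control cycle of a simple admittance (compliance) law along
    the probe axis: the commanded velocity is proportional to the force
    error, so the end effector yields when contact force is too high and
    advances when it is too light. Gain and units are illustrative."""
    v = compliance * (f_ref - f_meas)  # m/s per N of force error
    return pos + v * dt                # next commanded position (m)

# Excess contact force (6 N measured vs. 5 N reference) -> retract slightly
next_pos = admittance_step(0.0, f_meas=6.0, f_ref=5.0, dt=0.008)
```

In a real tele-operation loop this step would run at the robot's servo rate, with the force reference supplied by the physician's haptic master.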
We present a study on morphological traits of evolved modular robots. We note that the evolutionary search space (the set of obtainable morphologies) depends on the given representation and reproduction operators, and we propose a framework to assess morphological traits in this search space regardless of a specific environment and/or task. To this end, we present eight quantifiable morphological descriptors and a generic novelty search algorithm to produce a diverse set of morphologies for any given representation. With this machinery, we perform a comparison between a direct encoding and a generative encoding. The results demonstrate that our framework makes it possible to find a very diverse set of bodies, enabling an investigation of morphological diversity. Furthermore, the analysis showed that despite the high levels of diversity, a bias toward certain traits in the population was detected. Surprisingly, the two encoding methods showed no significant difference in the diversity levels of the evolved morphologies or their morphological traits.
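The core of a generic novelty search, as referenced above, is to score each individual not by fitness but by how far its behavioural (here, morphological) descriptor lies from its nearest neighbours in the population and an archive of past individuals. A minimal sketch, assuming descriptor vectors like the paper's eight morphological descriptors and an illustrative distance/neighbour scheme:

```python
import numpy as np

def novelty_scores(descriptors, archive, k=3):
    """Novelty of each individual = mean Euclidean distance to its k
    nearest neighbours among the current population plus the archive.
    High scores mark morphologies unlike anything seen so far."""
    pool = np.vstack([descriptors, archive]) if len(archive) else descriptors
    scores = []
    for d in descriptors:
        dists = np.sort(np.linalg.norm(pool - d, axis=1))
        scores.append(dists[1:k + 1].mean())  # dists[0] is self-distance 0
    return np.array(scores)
```

In the full algorithm, the most novel individuals are added to the archive each generation, so the search is continually pushed toward unexplored regions of morphology space.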
Advanced computing systems generally contain many heterogeneous subsystems, each with a local perspective and goal set, which interconnect in changing network topologies. The subsystems must interact with each other and with humans in ways that are difficult to understand and predict, while robustly maintaining performance, reliability, and security even with unforeseen dynamics, such as system failures or changing goals. To meet these stringent requirements, computational systems, ranging from robot swarms and personal music devices to Web services and sensor networks, must achieve sophisticated autonomous behavior by adapting themselves at runtime and through learning processes that enable ongoing self-change. Managing tradeoffs among conflicting local and global goals at runtime requires considerable awareness of both the system's current state and its environment. Yet researchers have only recently begun to understand the implications of self-awareness principles and how to translate them into system engineering. Consequently, there is no general methodology for architecting self-aware systems or for comparing their self-awareness capabilities. To address this need, we examined how human self-awareness can serve as a source of inspiration for a new notion of computational self-awareness and associated self-expression, and we developed a general framework for describing a computing system's self-awareness properties. As part of this work, we created a reference architecture, which we used to derive architectural patterns.
For robots to handle the numerous factors that can affect them in the real world, they must adapt to changes and unexpected events. Evolutionary robotics tries to solve some of these issues by automatically optimizing a robot for a specific environment. Most of the research in this field, however, uses simplified representations of the robotic system in software simulations. The large gap between performance in simulation and the real world makes it challenging to transfer the resulting robots to the real world. In this paper, we apply real-world multi-objective evolutionary optimization to optimize both the control and morphology of a four-legged, mammal-inspired robot. We change the supply voltage of the system, reducing the available torque and speed of all joints, and study how this affects the fitness, as well as the morphology and control of the solutions. In addition to demonstrating that this real-world evolutionary scheme for morphology and control is indeed feasible with relatively few evaluations, we show that evolution under the different hardware limitations results in comparable performance for low and moderate speeds, and that the search achieves this by adapting both the control and the morphology of the robot.
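Multi-objective evolutionary optimization, as used above, selects individuals by Pareto dominance over objective vectors (e.g., speed versus stability) rather than by a single fitness value. A minimal sketch of the dominance test at the heart of such algorithms (formulation is generic, not the paper's specific optimizer):

```python
def dominates(a, b):
    """True iff objective vector a Pareto-dominates b (maximisation):
    a is at least as good in every objective and strictly better in
    at least one. Non-dominated individuals form the Pareto front."""
    return all(x >= y for x, y in zip(a, b)) and \
           any(x > y for x, y in zip(a, b))
```

Selection then favours non-dominated individuals, so the search maintains a whole front of trade-off solutions instead of collapsing to one compromise.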
Abstract. Creating gaits for physical robots is a longstanding and open challenge. Recently, the HyperNEAT generative encoding was shown to automatically discover a variety of gait regularities, producing fast, coordinated gaits, but only for simulated robots. A follow-up study found that HyperNEAT did not produce impressive gaits when they were evolved directly on a physical robot. A simpler encoding hand-tuned to produce regular gaits was tried on the same robot and outperformed HyperNEAT, but these gaits were first evolved in simulation before being transferred to the robot. In this paper, we tested the hypothesis that the beneficial properties of HyperNEAT would outperform the simpler encoding if HyperNEAT gaits are likewise first evolved in simulation before being transferred to reality. That hypothesis was confirmed, resulting in the fastest gaits yet observed for this robot, including those produced by nine different algorithms from three previous papers describing gait-generating techniques for this robot. This result is important because it confirms that the early promise shown by generative encodings, specifically HyperNEAT, is not limited to simulation, but extends to challenging real-world engineering problems such as evolving gaits for real robots.