In this work we explore the use of latent representations obtained from multiple input sensory modalities (such as images or sounds) to allow an agent to learn and exploit policies over different subsets of input modalities. We propose a three-stage architecture that allows a reinforcement learning agent trained over a given sensory modality to execute its task on a different sensory modality: for example, learning a visual policy over image inputs and then executing that policy when only sound inputs are available. We show that the generalized policies achieve better out-of-the-box performance than several baselines. Moreover, we show this holds across different OpenAI Gym and video game environments, even when using different multimodal generative models and reinforcement learning algorithms.
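The idea of executing a policy across modalities can be sketched as follows. This is a minimal illustration, not the paper's three-stage architecture: it assumes hypothetical toy encoders (`encode_image`, `encode_sound`) that map each modality into the same shared latent space, so that a single policy trained on latents from one modality also works on latents from the other.

```python
# Minimal sketch (not the paper's implementation): modality-specific
# encoders map each input to a shared latent space, and a single policy
# acts on that latent regardless of which modality produced it.
# All names, features, and thresholds here are illustrative assumptions.

def encode_image(pixels):
    # Toy "image encoder": mean intensity and contrast as a 2-D latent.
    mean = sum(pixels) / len(pixels)
    spread = max(pixels) - min(pixels)
    return (mean, spread)

def encode_sound(samples):
    # Toy "sound encoder": maps audio features into the SAME latent space,
    # so the policy below works unchanged on sound-only inputs.
    energy = sum(abs(s) for s in samples) / len(samples)
    dynamic_range = max(samples) - min(samples)
    return (energy, dynamic_range)

def policy(latent):
    # A policy defined over the shared latent space: because both encoders
    # target that space, it transfers between modalities out of the box.
    return "go" if latent[0] > 0.5 else "stop"

# The same decision rule applied across modalities:
print(policy(encode_image([0.9, 0.8, 0.7])))    # latent from images -> go
print(policy(encode_sound([0.1, -0.1, 0.05])))  # latent from sound -> stop
```

In the paper's setting the shared latent space would come from a multimodal generative model rather than hand-crafted features, but the transfer mechanism is the same: the policy never sees raw inputs, only latents.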
Humans interact in rich and diverse ways with the environment. However, the representation of such behavior by artificial agents is often limited. In this work we present motion concepts, a novel multimodal representation of human actions in a household environment. A motion concept encompasses a probabilistic description of the kinematics of the action along with its contextual background, namely the location and the objects held during the performance. Furthermore, we present Online Motion Concept Learning (OMCL), a new algorithm which learns novel motion concepts from action demonstrations and recognizes previously learned motion concepts. The algorithm is evaluated on a virtual-reality household environment featuring a human avatar. OMCL outperforms standard motion recognition algorithms on a one-shot recognition task, attesting to its potential for sample-efficient recognition of human actions.
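A heavily simplified sketch of the one-shot setting, not OMCL itself: each concept is stored from a single demonstration as a kinematic summary plus its context (location, held object), and a new observation is matched to the nearest stored concept. The feature choice, distance, and threshold are all assumptions for illustration.

```python
# Hedged sketch of one-shot motion-concept recognition (not OMCL):
# a concept = one demonstration summarized as (kinematics, context).
# Unmatched observations return None and could seed a new concept.

def features(trajectory, location, held_object):
    # Crude kinematic summary (net displacement) plus contextual tags.
    dx = trajectory[-1][0] - trajectory[0][0]
    dy = trajectory[-1][1] - trajectory[0][1]
    return (dx, dy, location, held_object)

class ConceptMemory:
    def __init__(self, threshold=1.0):
        self.concepts = {}          # name -> stored feature tuple
        self.threshold = threshold  # illustrative distance cutoff

    def learn(self, name, feat):
        self.concepts[name] = feat  # one-shot: a single demonstration

    def recognize(self, feat):
        best, best_d = None, float("inf")
        for name, ref in self.concepts.items():
            if ref[2:] != feat[2:]:  # contextual background must match
                continue
            d = ((ref[0] - feat[0]) ** 2 + (ref[1] - feat[1]) ** 2) ** 0.5
            if d < best_d:
                best, best_d = name, d
        return best if best_d <= self.threshold else None

memory = ConceptMemory()
memory.learn("pour", features([(0, 0), (0, -1)], "kitchen", "cup"))
print(memory.recognize(features([(0, 0), (0.2, -0.9)], "kitchen", "cup")))
```

The actual representation in the paper is probabilistic over full kinematics; this nearest-demonstration variant only conveys why a single example can suffice when context constrains the match.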
In this work, a new tool, the MORIA program, was developed to readily translate Rutherford backscattering spectrometry (RBS) output data into visual information, creating a display of the distribution of elements in a true three-dimensional (3D) environment. The program methodology is illustrated with the analysis of yeast Saccharomyces cerevisiae cells exposed to copper oxide nanoparticles (CuO-NP) and of HeLa cells in the presence of gold nanoparticles (Au-NP), using different beam species, energies and nuclear microscopy systems. Results demonstrate that for both cell types, the NP internalization can be clearly perceived. The 3D models of the distribution of CuO-NP in S. cerevisiae cells indicate a nonuniform distribution of NP in the cellular environment and a relevant confinement of CuO-NP to the cell wall, suggesting that certain cellular organelles or compartments are impenetrable to NP. By contrast, using a high-resolution ion beam system, discretized agglomerates of Au-NP were visualized inside the HeLa cell, consistent with these NP entering the cellular space by endocytosis, enclosed in endosomal vesicles. This approach shows RBS to be a powerful imaging technique, giving nuclear microscopy unparalleled potential to assess nanoparticle distribution inside the cellular volume.
Revealing the internal workings of a robot can help a human better understand the robot's behaviors. How to reveal such workings, e.g., via explanation generation, remains a significant challenge, and it becomes even more complex when the explanations are targeted at children. We therefore propose a search-based approach to generate contrastive explanations using optimal and sub-optimal plans, and implement it in a scenario for children. In this scenario, the child and the robot learn together how to play a zero-sum game that requires logical and mathematical thinking. We report results from our explanation generation system, which was successfully deployed among seven-year-old children. Our results show trends indicating that the generated explanations positively affected the children's perceived difficulty in learning the zero-sum game.
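The core of a contrastive explanation from plan comparison can be sketched in a few lines. This is an illustration under assumed names and a toy additive cost model, not the deployed system: given the optimal plan and a sub-optimal alternative (the "foil"), point at the first step where they diverge and contrast their total costs.

```python
# Minimal sketch of a contrastive "why A rather than B?" explanation
# built from an optimal plan and a sub-optimal alternative.
# Function names and the cost model are illustrative assumptions.

def plan_cost(plan, step_costs):
    # Toy additive cost model: a plan's cost is the sum of its action costs.
    return sum(step_costs[action] for action in plan)

def contrastive_explanation(optimal, alternative, step_costs):
    # Find the first action where the two plans diverge and contrast
    # the resulting total costs.
    for i, (a, b) in enumerate(zip(optimal, alternative)):
        if a != b:
            return (f"At step {i + 1}, '{a}' is chosen instead of '{b}' "
                    f"because the full plan then costs "
                    f"{plan_cost(optimal, step_costs)} instead of "
                    f"{plan_cost(alternative, step_costs)}.")
    return "Both plans are equally good."

costs = {"take-1": 1, "take-2": 1, "take-3": 3}
print(contrastive_explanation(["take-2", "take-1"],
                              ["take-3", "take-1"], costs))
```

In the child-facing setting, the generated sentence would of course be phrased in age-appropriate language; the sketch only shows where the contrast between the optimal and sub-optimal plan enters the explanation.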