We present neural models of one of the most astonishing human cognitive feats: the ability to interpret linguistic instructions and perform a novel task after only a few practice trials. Models are trained on a set of commonly studied psychophysical tasks and receive linguistic instructions embedded by transformer architectures pretrained on natural-language data. Our best-performing models achieve 80% correct on average on an unseen task based solely on its linguistic instructions (i.e., zero-shot learning), and 90% after three learning updates. We find that the resulting neural representations capture the semantic structure of interrelated tasks, even novel ones, allowing practiced skills to be composed in unseen settings. Finally, we demonstrate how such a model can generate a linguistic description of a task it has identified through motor feedback; when this description is communicated to another network, it leads to near-perfect performance (95%). To our knowledge, this is the first experimentally testable model of how language can structure sensorimotor representations to allow for task compositionality.
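To make the described architecture concrete, the following is a minimal sketch of a sensorimotor recurrent network conditioned on instruction embeddings from a frozen pretrained language model. It is not the authors' implementation: the class name `InstructedRNN`, the input/output dimensions, the choice of a BERT encoder, and the mean-pooling of token embeddings are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of an instruction-conditioned
# sensorimotor network. A frozen pretrained transformer embeds the
# instruction; the embedding conditions a GRU that maps sensory input
# to motor output at each timestep. Names and dimensions are hypothetical.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class InstructedRNN(nn.Module):
    def __init__(self, sensory_dim=65, motor_dim=33, hidden_dim=256,
                 lm_name="bert-base-uncased"):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(lm_name)
        self.lm = AutoModel.from_pretrained(lm_name)
        for p in self.lm.parameters():  # keep the language model frozen
            p.requires_grad = False
        self.proj = nn.Linear(self.lm.config.hidden_size, hidden_dim)
        self.rnn = nn.GRU(sensory_dim + hidden_dim, hidden_dim,
                          batch_first=True)
        self.motor = nn.Linear(hidden_dim, motor_dim)  # motor readout

    def forward(self, sensory, instructions):
        # sensory: (batch, time, sensory_dim); instructions: list of strings
        tokens = self.tokenizer(instructions, return_tensors="pt",
                                padding=True, truncation=True)
        with torch.no_grad():
            # mean-pool token states into one embedding per instruction
            emb = self.lm(**tokens).last_hidden_state.mean(dim=1)
        # broadcast the instruction embedding across all timesteps
        ctx = self.proj(emb).unsqueeze(1).expand(-1, sensory.size(1), -1)
        h, _ = self.rnn(torch.cat([sensory, ctx], dim=-1))
        return self.motor(h)  # time-resolved motor output

# Zero-shot use: evaluate on a held-out task from its instruction alone.
model = InstructedRNN()
x = torch.randn(4, 100, 65)  # 4 trials, 100 timesteps (placeholder data)
out = model(x, ["respond in the direction of the stronger stimulus"] * 4)
```

Freezing the language model here reflects the paper's setup, in which linguistic knowledge comes from pretraining rather than from the sensorimotor training itself; few-shot adaptation would then update only the recurrent and readout parameters.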