Huge language models (LMs) have ushered in a new era for AI, serving as a gateway to natural-language-based knowledge tasks. Although an essential element of modern AI, LMs are also inherently limited in a number of ways. We discuss these limitations and how they can be avoided by adopting a systems approach. Conceptualizing the challenge as one that involves knowledge and reasoning in addition to linguistic processing, we define a flexible architecture with multiple neural models, complemented by discrete knowledge and reasoning modules. We describe this neuro-symbolic architecture, dubbed the Modular Reasoning, Knowledge and Language (MRKL, pronounced "miracle") system, some of the technical challenges in implementing it, and Jurassic-X, AI21 Labs' MRKL system implementation.
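The core MRKL idea — a router that dispatches each input either to a discrete expert module or to a neural LM — can be illustrated with a toy sketch. The routing logic, module names, and the regex-based dispatch below are all illustrative assumptions, not the actual Jurassic-X implementation (the abstract does not specify how its router works); a real system would use a learned router and genuine expert modules.

```python
import re

def calculator(expr: str) -> str:
    # Discrete arithmetic module: returns an exact answer,
    # unlike an LM's statistical guess. Assumes the expression
    # was already validated by the router's pattern check.
    return str(eval(expr, {"__builtins__": {}}))

def language_model(prompt: str) -> str:
    # Placeholder for a call to a neural LM (e.g., an API request).
    return f"<LM answer to: {prompt!r}>"

def mrkl_route(query: str) -> str:
    # Toy router: a pattern match stands in for the learned router
    # that decides which expert module should handle the query.
    if re.fullmatch(r"[\d\s+\-*/().]+", query):
        return calculator(query)
    return language_model(query)

print(mrkl_route("12 * (3 + 4)"))      # routed to the calculator module
print(mrkl_route("Who wrote Hamlet?")) # falls back to the LM
```

The point of the design is that questions with a single verifiable answer (arithmetic, lookups) bypass the LM entirely, avoiding the limitations the abstract refers to.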
Neurons are characterized by elaborate tree-like dendritic structures that support local computations by integrating multiple inputs from upstream presynaptic neurons. It is less clear whether simple neurons, consisting of a few or even a single neurite, may perform local computations as well. To address this question, we focused on the compact neural network of Caenorhabditis elegans, for which the full wiring diagram is available, including the coordinates of individual synapses. We find that the positions of the chemical synapses along the neurites are not randomly distributed, nor can they be explained by anatomical constraints. Instead, synapses tend to form clusters, an organization that supports local compartmentalized computations. In mutually synapsing neurons, connections of opposite polarity cluster separately, suggesting that positive and negative feedback dynamics may be implemented in discrete compartmentalized regions along neurites. In triple-neuron circuits, the nonrandom synaptic organization may facilitate local functional roles, such as signal integration and coordinated activation of functionally related downstream neurons. These clustered synaptic topologies emerge as a guiding principle in the network, presumably to facilitate distinct parallel functions along a single neurite, effectively increasing the computational capacity of the neural network.
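The claim that synapse positions are clustered rather than random can be tested with a standard Monte Carlo approach: compare the observed mean nearest-neighbour distance along a neurite against the same statistic under a uniform-random null. The statistic, the null model, and the example positions below are illustrative assumptions for a one-dimensional sketch, not the specific analysis used in the study.

```python
import random

def mean_nn_distance(positions):
    # Mean distance from each synapse to its nearest neighbour
    # along a one-dimensional neurite.
    pts = sorted(positions)
    dists = []
    for i, p in enumerate(pts):
        gaps = []
        if i > 0:
            gaps.append(p - pts[i - 1])
        if i < len(pts) - 1:
            gaps.append(pts[i + 1] - p)
        dists.append(min(gaps))
    return sum(dists) / len(dists)

def clustering_p_value(observed, neurite_length, n_null=2000, seed=0):
    # Monte Carlo test: clustered synapses show a smaller mean
    # nearest-neighbour distance than uniformly placed ones, so the
    # p-value is the fraction of null draws at least as extreme.
    rng = random.Random(seed)
    obs = mean_nn_distance(observed)
    n = len(observed)
    hits = sum(
        mean_nn_distance([rng.uniform(0, neurite_length) for _ in range(n)]) <= obs
        for _ in range(n_null)
    )
    return (hits + 1) / (n_null + 1)

# Hypothetical synapse positions forming two tight clusters on a
# 100-unit neurite; a small p-value rejects the uniform-random null.
clustered = [10.0, 10.5, 11.0, 11.2, 80.0, 80.3, 80.9, 81.5]
print(clustering_p_value(clustered, 100.0))
```

A full analysis would additionally control for anatomical constraints, as the abstract notes, by building the null model from physically admissible synapse sites rather than the whole neurite.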