2022
DOI: 10.48550/arxiv.2205.00445
Preprint

MRKL Systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning

Abstract: Huge language models (LMs) have ushered in a new era for AI, serving as a gateway to natural-language-based knowledge tasks. Although an essential element of modern AI, LMs are also inherently limited in a number of ways. We discuss these limitations and how they can be avoided by adopting a systems approach. Conceptualizing the challenge as one that involves knowledge and reasoning in addition to linguistic processing, we define a flexible architecture with multiple neural models, complemented by discrete kno…

Cited by 8 publications (10 citation statements)
References 6 publications
“…To collaborate with the agent, users begin the process by typing their editing objectives. The agent interprets the user's objectives and formulates an action plan to fulfill them [31,60,62,81]. The agent operates in two modes: Planning and Executing.…”
Section: 3.1
Mentioning confidence: 99%
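
The plan-then-execute pattern described in the excerpt above can be illustrated with a minimal sketch. The action names, the fixed plan, and the example document are hypothetical stand-ins for what an LLM would produce from the user's typed objective; this is not the cited system's implementation.

```python
# Minimal sketch of a two-mode (Planning / Executing) editing agent.
# Assumption: a fixed plan stands in for the LLM's planning call.

from typing import Callable, Dict, List

# Toy registry of executable editing actions (hypothetical names).
ACTIONS: Dict[str, Callable[[str], str]] = {
    "uppercase_title": lambda text: text.replace("my draft", "MY DRAFT"),
    "append_signature": lambda text: text + "\n-- Alice",
}


def plan(objective: str) -> List[str]:
    # Planning mode: an LLM would turn the objective into an ordered list of
    # action names; here a fixed plan stands in for that call.
    return ["uppercase_title", "append_signature"]


def execute(document: str, steps: List[str]) -> str:
    # Executing mode: apply each planned action to the document in order.
    for step in steps:
        document = ACTIONS[step](document)
    return document


if __name__ == "__main__":
    doc = "my draft about MRKL systems"
    print(execute(doc, plan("make the title stand out and sign the document")))
```

In the cited setup, the planning step would be generated by the agent from the user's objective, and the executing step would apply the resulting actions to the shared document.
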
“…This information is then passed to an external symbolic planner, which efficiently determines the optimal sequence of actions from the current state to the target state. MRKL [71] is a modular, neural-symbolic AI architecture in which LLMs process the input text, route it to the appropriate experts, and then pass the experts' outputs back through the LLMs. CO-LLM [156] considers that LLMs are good at generating high-level plans, but not at low-level control.…”
Section: Planning Module
Mentioning confidence: 99%
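
The routing behaviour attributed to MRKL in the excerpt above can be sketched as follows. The expert names and the keyword heuristic that stands in for the LLM router are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of MRKL-style routing: a router picks a discrete expert,
# the expert computes an exact result, and that result would be handed back
# to the language model to phrase the final answer.

from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Expert:
    name: str
    description: str           # used by the router to decide relevance
    run: Callable[[str], str]  # discrete module, e.g. calculator or KB lookup


def calculator(query: str) -> str:
    # Toy arithmetic expert: evaluate a bare expression such as "2+2*3".
    return str(eval(query, {"__builtins__": {}}, {}))


def currency_kb(query: str) -> str:
    # Toy knowledge-base expert with a fixed lookup table.
    table = {"USD->EUR": "0.92"}
    return table.get(query, "unknown")


EXPERTS: Dict[str, Expert] = {
    "calc": Expert("calc", "arithmetic expressions", calculator),
    "fx": Expert("fx", "currency conversion rates", currency_kb),
}


def route(user_query: str) -> str:
    # In a real MRKL-style system an LLM would choose the expert and extract
    # its input; a keyword heuristic stands in for that call here.
    if any(ch.isdigit() for ch in user_query) and any(op in user_query for op in "+-*/"):
        name, expert_input = "calc", user_query
    else:
        name, expert_input = "fx", "USD->EUR"
    result = EXPERTS[name].run(expert_input)
    # The expert's output would normally be passed back through the LLM;
    # here we just format it.
    return f"[{name}] {result}"


if __name__ == "__main__":
    print(route("17 * 4 + 3"))                     # routed to the calculator
    print(route("What is the USD to EUR rate?"))   # routed to the KB expert
```

The design point is that the experts are discrete modules whose exact outputs are returned to the language model rather than generated by it.
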
“…ChemCrow [8] presents an LLM-based chemical agent aimed at accomplishing tasks in the fields of organic synthesis, drug discovery, and material design, with the help of seventeen expert-designed tools. MRKL Systems [71] and OpenAGI [51] incorporate various expert systems, such as knowledge bases and planners, invoking them to access domain-specific information in a systematic manner. (3) Language Models.…”
Section: Action Module
Mentioning confidence: 99%
“…Understanding the tradeoffs around training a specialized FM versus utilizing an existing off-the-shelf FM is a critical research question. Beyond academic efforts, AI21 Labs recently released MRKL, a system for augmenting FMs with external knowledge sources and symbolic reasoning experts, demonstrating how FMs can be utilized to ingest, reason and operate over varied, domain-specific data stores [39].…”
Section: Domain-specific Data
Mentioning confidence: 99%
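
The "operate over domain-specific data stores" idea in the excerpt above amounts to retrieve-then-prompt. The sketch below assumes a toy in-memory document list, a word-overlap retriever, and a placeholder where the foundation-model call would go; none of these names come from the cited systems.

```python
# Minimal sketch of augmenting a foundation model with an external knowledge
# source: retrieve relevant text, then prepend it to the model's prompt.

from typing import List

DOCS: List[str] = [
    "MRKL routes queries from an LLM to discrete expert modules.",
    "The expert's result is passed back to the LLM to compose the answer.",
]


def retrieve(query: str, k: int = 1) -> List[str]:
    # Toy lexical retriever: rank documents by word overlap with the query.
    q = set(query.lower().split())
    scored = sorted(DOCS, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]


def answer(query: str) -> str:
    # Retrieved text is prepended to the prompt that would be sent to the FM.
    context = "\n".join(retrieve(query))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return prompt  # in practice: return foundation_model.generate(prompt)


if __name__ == "__main__":
    print(answer("How does MRKL route queries?"))
```
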