2012
DOI: 10.1098/rstb.2012.0077
Formal language theory: refining the Chomsky hierarchy

Abstract: The first part of this article gives a brief overview of the four levels of the Chomsky hierarchy, with a special emphasis on context-free and regular languages. It then recapitulates the arguments that neither regular nor context-free grammars are sufficiently expressive to capture all phenomena of natural language syntax. In the second part, two refinements of the Chomsky hierarchy are reviewed, which are both relevant to the extant research in cognitive science: the mildly context-sensitive languages (whic…

Cited by 122 publications (138 citation statements)
References 26 publications
“…Recent research using artificial grammar learning to probe the specific types of rules that organisms are capable of learning has offered a clearer picture (see ten Cate, 2016), and the use of formal language theory provides a useful metric and common description language for analyzing these rule types in terms of computational complexity (Jäger & Rogers, 2012). The most important distinction at present appears to be between the levels of regular or “finite state” grammars, some types of which are accessible to multiple animal species, and supra-regular grammars that go beyond this (including both context-sensitive and context-free grammars, sometimes termed “phrase structure grammars”).…”
Section: Hierarchical Syntax
confidence: 99%
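The regular/supra-regular split described in this excerpt can be made concrete with a small sketch (not from the cited paper; the pattern names (AB)ⁿ and AⁿBⁿ are the standard textbook examples): a two-state finite automaton suffices for the regular pattern (AB)ⁿ, while AⁿBⁿ requires unbounded counting, which no finite-state device can provide.

```python
# Sketch: the regular vs supra-regular distinction in miniature.

def accepts_ABn(s):
    """(AB)^n is regular: two states of finite memory suffice."""
    state = 0                       # 0 = expecting "A", 1 = expecting "B"
    for ch in s:
        if state == 0 and ch == "A":
            state = 1
        elif state == 1 and ch == "B":
            state = 0
        else:
            return False
    return state == 0               # accept only complete AB pairs

def accepts_AnBn(s):
    """A^n B^n is context-free but not regular: recognition needs an
    unbounded counter (an explicit count here; a pushdown stack in general)."""
    n = s.count("A")
    return len(s) == 2 * n and s == "A" * n + "B" * n
```

A finite-state recognizer can check local AB alternation but cannot verify that the number of As equals the number of Bs for arbitrary n, which is why AⁿBⁿ is the classic probe for supra-regular processing in artificial grammar learning.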
“…However, because temporal abstraction is defined in logical time, a word is always less temporally abstract than a sentence containing the word. The temporal abstraction view on sequence processing is a novel computational or semiformal approach to artificial grammars, natural language, and other cognitive domains with sequential structure (see Jäger & Rogers [64] for more extended formal approaches to sequence processing and an explanation of the rationale). We will now focus on a possible temporal abstraction gradient within the LIFG.…”
Section: A Rostro-Caudal Abstraction Gradient in Lateral Prefrontal Cortex
confidence: 99%
“…A grammar G is roughly a finite set of rules that specifies how items in a lexicon (alphabet) are combined into well-formed sequences, thus generating a formal language L(G) [39,62–64]. The sequence set L(G) is called G's weak generative capacity and two grammars G1 and…”
Section: Recursion, Competence Grammars and Performance Models
confidence: 99%
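The excerpt's definition of a grammar as a finite rule set generating a string set L(G) can be illustrated with a toy sketch (not from the cited paper; the grammar S → AB | ASB for AⁿBⁿ is an assumed, standard example) that enumerates part of L(G) by exhaustive rewriting:

```python
# Sketch: a finite rule set G and a bounded enumeration of its language L(G).
# Grammar G: S -> AB | ASB, generating A^n B^n for n >= 1.
RULES = {"S": [["A", "B"], ["A", "S", "B"]]}

def expand(symbols, depth):
    """Yield every all-terminal string derivable from `symbols`
    using at most `depth` rule applications."""
    if all(s not in RULES for s in symbols):
        yield "".join(symbols)      # no nonterminals left: a sentence of L(G)
        return
    if depth == 0:
        return                      # derivation budget exhausted
    i = next(i for i, s in enumerate(symbols) if s in RULES)
    for rhs in RULES[symbols[i]]:   # rewrite the leftmost nonterminal
        yield from expand(symbols[:i] + rhs + symbols[i + 1:], depth - 1)

language = sorted(set(expand(["S"], 4)), key=len)
```

The resulting `language` is a finite prefix of L(G) ("AB", "AABB", "AAABBB", …), which is exactly the grammar's weak generative capacity restricted to short derivations.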
“…Two grammars G1 and G2 are strongly equivalent if their sets of generated structural descriptions are equal, SD(G1) = SD(G2). Many classes of grammars are described in the literature (see [63] for a review of some normal forms and the (extended) Chomsky hierarchy; grammar/language formalisms are however not restricted to these, [64,66,67]). Some important types of grammars generate classes of sequence (or string) sets that can be placed in a class hierarchy, the (extended) Chomsky hierarchy.…”
Section: Recursion, Competence Grammars and Performance Models
confidence: 99%
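The weak/strong equivalence distinction in this excerpt can be sketched with two assumed toy grammars (not from the cited paper): G1 with S → a S | a and G2 with S → S a | a both generate {aⁿ : n ≥ 1}, so they are weakly equivalent, but their derivation trees branch in opposite directions, so SD(G1) ≠ SD(G2).

```python
# Sketch: weakly equivalent but not strongly equivalent grammars.
# G1: S -> a S | a   (right-branching)
# G2: S -> S a | a   (left-branching)

def strings_G1(n_max):
    """Strings of length 1..n_max generated by G1: {a^n}."""
    return {"a" * n for n in range(1, n_max + 1)}

def strings_G2(n_max):
    """Strings of length 1..n_max generated by G2: also {a^n}."""
    return {"a" * n for n in range(1, n_max + 1)}

def tree_G1(n):
    """Right-branching derivation tree for a^n, as nested tuples."""
    return ("S", "a") if n == 1 else ("S", "a", tree_G1(n - 1))

def tree_G2(n):
    """Left-branching derivation tree for a^n."""
    return ("S", "a") if n == 1 else ("S", tree_G2(n - 1), "a")

assert strings_G1(5) == strings_G2(5)   # same weak generative capacity
assert tree_G1(3) != tree_G2(3)         # different structural descriptions
```

This is why weak generative capacity alone (the string sets of the Chomsky hierarchy) underdetermines a grammar: two grammars can agree on every sentence yet assign them different structures.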