How could language have evolved? What is the key innovation underlying the evolution of human language? This Essay argues that the ability to “merge” two syntactic elements uniquely explains the recentness and stability of language.
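The core operation the Essay refers to can be illustrated with a toy formalization (my own sketch, not code from the Essay): Merge takes two syntactic objects and forms an unordered set containing them, and because the output can itself be an input, repeated application yields unboundedly deep hierarchical structure from a finite lexicon.

```python
def merge(x, y):
    """Merge two syntactic objects into an unordered set {x, y}.

    frozenset is used so that merged objects are hashable and can
    themselves be merged again, giving recursion "for free".
    """
    return frozenset([x, y])


# "read" merged with "books" forms a phrase; merging that phrase with
# "will" embeds it, producing hierarchy rather than a flat string.
vp = merge("read", "books")   # {read, books}
tp = merge("will", vp)        # {will, {read, books}}
```

Note that the result is set-like: `merge("read", "books")` and `merge("books", "read")` are identical, reflecting the idea that Merge builds hierarchy without imposing linear order (order being a matter of externalization).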
The present dissertation is a study of language development in children. From a biological perspective, the development of language, like that of any other organic system, is an interaction between internal and external factors; specifically, between the child's internal knowledge of linguistic structures and the external linguistic experience he receives. Drawing insights from the study of biological evolution, we put forth a quantitative model of language acquisition that makes this interaction precise, by embedding a theory of knowledge, the Universal Grammar, into a theory of learning from experience. In particular, we advance the idea that language acquisition should be modeled as a population of grammatical hypotheses, competing to match the external linguistic experiences, much as in a natural selection process. We present evidence (conceptual, mathematical, and empirical, and from a number of independent areas of linguistic research, including the acquisition of syntax and morphophonology, and historical language change) to demonstrate the model's correctness and utility.
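The "population of competing grammars" idea can be sketched as a simple reward-penalty learner (a minimal illustration of the general scheme, not the dissertation's actual implementation; the grammar labels and parse predicates are hypothetical stand-ins). Each candidate grammar carries a probability weight; on each input sentence a grammar is sampled in proportion to its weight, rewarded if it parses the sentence, and penalized otherwise, so grammars that fit the experience come to dominate the population.

```python
import random


def variational_learner(grammars, parses, sentences, rate=0.02, seed=0):
    """Competition among candidate grammars via linear reward-penalty updates.

    grammars  -- list of grammar labels
    parses    -- dict mapping each label to a predicate: parses[g](s) is
                 True iff grammar g can analyze sentence s
    sentences -- iterable of input sentences (the "linguistic experience")
    rate      -- learning rate controlling how fast weights shift
    """
    rng = random.Random(seed)
    weights = {g: 1.0 / len(grammars) for g in grammars}
    for s in sentences:
        # Sample a grammar with probability proportional to its weight.
        g = rng.choices(grammars, weights=[weights[h] for h in grammars])[0]
        if parses[g](s):
            # Reward: shift probability mass toward the successful grammar.
            for h in grammars:
                weights[h] = (1 - rate) * weights[h] + (rate if h == g else 0.0)
        else:
            # Penalty: redistribute mass away from the failed grammar.
            n_others = len(grammars) - 1
            for h in grammars:
                bonus = 0.0 if h == g else rate / n_others
                weights[h] = (1 - rate) * weights[h] + bonus
    return weights
```

With two hypothetical grammars, one compatible with all the input and one with none, the weight of the compatible grammar converges toward 1, mirroring selection in a population.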
Language serves as a cornerstone of human cognition. However, our knowledge about its neural basis is still a matter of debate, partly because ‘language’ is often ill-defined. Rather than equating language with ‘speech’ or ‘communication’, we propose that language is best described as a biologically determined computational cognitive mechanism that yields an unbounded array of hierarchically structured expressions. The results of recent brain imaging studies are consistent with this view of language as an autonomous cognitive mechanism, leading to a view of its neural organization whereby language involves dynamic interactions of syntactic and semantic aspects represented in neural networks that connect the inferior frontal and superior temporal cortices functionally and structurally.

Our conceptions of the neural mechanisms of language have developed in tandem with our understanding of the nature of the language faculty as a cognitive system. Initially, research focused on frontal and temporal cortical regions as being involved in vocal production and speech perception, respectively. Since speech is the main medium of language used for communication, it may seem natural to equate language with speech or even ‘acoustic communication’1. This view, however, is too narrow. Speech is just one possible way of externalizing language (with sign or writing being other examples), ancillary to the internal computational system. In addition, ‘communication’ is merely a possible function of the language faculty, and cannot be equated with it.

We argue that language is a species- and domain-specific human cognitive capacity (Box 1)2,3,4,5,6. In essence, language is an internal computational mechanism that yields an unbounded array of structured phrases and sentences. These must be minimally interpreted at two interfaces—that is, internal thoughts on the one hand, and externalization via sounds, writing or signs on the other (Box 1)4,5,7,8.
Neurolinguistics focuses on the study of the neural substrates underlying the computational cognitive mechanism that lies at the core of human language. From a theoretical linguistic standpoint—that of generative grammar—language is posited to be a process described at a formal level, divided into functionally separable or autonomous components, such as syntax, morphology, and so on. The immediate question of interest that then arises is whether the formal representations exploited in generative grammar correspond to actual brain architecture. We will discuss independent lines of research converging on the result that syntactic processes are in fact independently computed in the brain.
A central goal of modern generative grammar has been to discover invariant properties of human languages that reflect “the innate schematism of mind that is applied to the data of experience” and that “might reasonably be attributed to the organism itself as its contribution to the task of the acquisition of knowledge” (Chomsky, 1971). Candidates for such invariances include the structure dependence of grammatical rules, and in particular, certain constraints on question formation. Various “poverty of stimulus” (POS) arguments suggest that these invariances reflect an innate human endowment, as opposed to common experience: such experience warrants selection of the grammars acquired only if humans assume, a priori, that selectable grammars respect substantive constraints. Recently, several researchers have tried to rebut these POS arguments. In response, we illustrate why POS arguments remain an important source of support for appeal to a priori structure-dependent constraints on the grammars that humans naturally acquire.
Understanding the evolution of language requires evidence regarding origins and processes that led to change. In the last 40 years, there has been an explosion of research on this problem as well as a sense that considerable progress has been made. We argue instead that the richness of ideas is accompanied by a poverty of evidence, with essentially no explanation of how and why our linguistic computations and representations evolved. We show that, to date, (1) studies of nonhuman animals provide virtually no relevant parallels to human linguistic communication, and none to the underlying biological capacity; (2) the fossil and archaeological evidence does not inform our understanding of the computations and representations of our earliest ancestors, leaving details of origins and selective pressure unresolved; (3) our understanding of the genetics of language is so impoverished that there is little hope of connecting genes to linguistic processes any time soon; (4) all modeling attempts have made unfounded assumptions, and have provided no empirical tests, thus leaving any insights into language's origins unverifiable. Based on the current state of evidence, we submit that the most fundamental questions about the origins and evolution of our linguistic capacity remain as mysterious as ever, with considerable uncertainty about the discovery of either relevant or conclusive evidence that can adjudicate among the many open hypotheses. We conclude by presenting some suggestions about possible paths forward.