Sprouse, Wagers, & Phillips (in press) carried out two experiments in which they measured individual differences in memory to test processing accounts of island effects. They found that these individual differences failed to predict the magnitude of island effects and construed these findings as counterevidence to processing-based accounts of island effects. Here, we take up several problems with their methods, their findings, and their conclusions. First, the arguments against processing accounts are based on null results using tasks that may be ineffective or inappropriate measures of working memory (the n-back and serial recall tasks). The authors provide no evidence that these two measures predict judgments for other constructions that are difficult to process and yet are clearly grammatical. They assume that other measures of working memory would have yielded the same result, but provide no justification that they should. We further show that whether a working memory measure relates to judgments of grammatical, hard-to-process sentences depends on how difficult the sentences are. In this light, the stimuli used by the authors present processing difficulties other than the island violations under investigation and may have been particularly hard to process. Second, the Sprouse et al. results are statistically in line with the hypothesis that island sensitivity varies with working memory. Three out of the four island types in their Experiment 1 show a significant relation between memory scores and island sensitivity, but the authors discount these findings on the grounds that the variance accounted for is too small to have much import. This interpretation, however, runs counter to standard practices in linguistics, psycholinguistics, and psychology.
People often accommodate to each other's speech by aligning their linguistic production with their partner's. According to an influential theory, the Interactive Alignment Model, alignment is the result of priming. When people perceive an utterance, the corresponding linguistic representations are primed and become easier to produce. Here we tested this theory by investigating whether pitch (F0) alignment shows two characteristic signatures of priming: dose dependence and persistence. In a virtual reality experiment, we manipulated the pitch of a virtual interlocutor's speech to find out (1) whether participants accommodated to the agent's F0, (2) whether the amount of accommodation increased with increasing exposure to the agent's speech, and (3) whether changes to participants' F0 persisted beyond the conversation. Participants accommodated to the virtual interlocutor, but accommodation did not increase in strength over the conversation and disappeared immediately after the conversation ended. Results argue against a priming-based account of F0 accommodation and indicate that an alternative mechanism is needed to explain alignment along continuous dimensions of language such as speech rate and pitch.
Linguistic acceptability judgments are widely agreed to reflect constraints on real-time language processing. Nonetheless, very little is known about how processing costs affect acceptability judgments. In this paper, we explore how processing limitations are manifested in acceptability judgment data. In a series of experiments, we consider how two factors relate to judgments for sentences with varying degrees of complexity: (1) the way constraints combine (i.e., additively or super-additively), and (2) the way a comprehender's memory resources influence acceptability judgments. Results indicate that multiple sources of processing difficulty can combine to produce super-additive effects, and that there is a positive linear relationship between reading span scores and judgments for sentences whose unacceptability is attributable to processing costs. These patterns do not hold for sentences whose unacceptability is attributable to factors other than processing costs, e.g. grammatical constraints. We conclude that tests of (super-)additivity and of relationships to reading span scores can help to identify the effects of processing difficulty on acceptability judgments, although these tests cannot be used in contexts of extreme processing difficulty.