• Three eye-tracking experiments explored limits to prediction in language processing.
• Speech rates, visual context preview time, and participant instructions were manipulated.
• A normal speech rate only afforded prediction if participants had an extensive preview.
• Even explicit instructions to predict led only to a small anticipation effect with a normal speech rate and short preview.
• These findings are problematic for theoretical proposals that assume that prediction pervades cognition.
A large body of evidence has shown that visual context information can rapidly modulate language comprehension for concrete sentences and when it is mediated by a referential or a lexical-semantic link. What has not yet been examined is whether visual context can also modulate comprehension of abstract sentences incrementally when it is neither referenced by, nor lexically associated with, the sentence. Three eye-tracking reading experiments examined the effects of spatial distance between words (Experiment 1) and objects (Experiments 2 and 3) on participants' reading times for sentences that convey similarity or difference between two abstract nouns (e.g., 'Peace and war are certainly different...'). Before reading the sentence, participants inspected a visual context with two playing cards that moved either far apart or close together. In Experiment 1, the cards turned and showed the first two nouns of the sentence (e.g., 'peace', 'war'). In Experiments 2 and 3, they turned but remained blank. Participants' reading times at the adjective (Experiment 1: first-pass reading time; Experiment 2: total times) and at the second noun phrase (Experiment 3: first-pass times) were faster for sentences that expressed similarity when the preceding words/objects were close together (vs. far apart) and for sentences that expressed dissimilarity when the preceding words/objects were far apart (vs. close together). Thus, spatial distance between words or entirely unrelated objects can rapidly and incrementally modulate the semantic interpretation of abstract sentences.
Over the past two decades, 'visually situated' language comprehension (the interplay between language comprehension, attention, and non-linguistic visual context) has emerged as an increasingly active area of research. One important result in this area is that both linguistic and world knowledge, as well as visual cues, can rapidly inform the unfolding interpretation, as reflected by comprehenders' eye movements to objects during spoken language comprehension. However, upon closer inspection, temporal delays of object-directed gaze are not infrequent and emerge during the processing of non-canonical (vs. canonical) structures, of scalar implicatures, and of recently learned world-language associations. While it may further be tempting to assume that the different knowledge sources and visual cues are on a par in guiding visual attention, comprehenders' eye movements in many instances reveal a robust referential priority (more looks go to the referent of a word than to other objects). Should this priority be taken as a trivial observation? In the present article, we argue that the tension between this referential priority and other world-language relations constitutes an important constraint on the linking hypotheses and mechanisms implicated in situated language comprehension, and should be considered when conceptualizing models and accounts of visually situated language comprehension.
This paper presents the development of an easy-to-deploy, smart IoT monitoring system that uses vibration measurement devices to assess the real-time condition of bulldozers, power shovels, and backhoes in non-stationary mining operations. According to operating experience data and the type of mining machine, total loss failure rates per machine fleet can reach up to 30%. Vibration analysis techniques are commonly used for condition monitoring and early detection of unforeseen failures to generate predictive maintenance plans for heavy machinery. However, this maintenance strategy is applied intensively only to stationary machines and/or mobile machinery in stationary operations; today, there is a lack of proper solutions to detect and prevent critical failures in non-stationary machinery. This paper proposes a cost-effective solution: a vibration sensor network with wireless communication and data-driven machine learning capabilities for condition monitoring of non-stationary heavy machinery in mining operations. During machine operation, 3-axis accelerations were measured using two sensors deployed across the machine. The machine accelerations (amplitudes and frequencies) are measured in two different frequency ranges to improve the time resolution at each sensing location. Multiple machine learning algorithms use this data to assess condition according to manufacturer recommendations and operational benchmarks. The proposed data-driven machine learning models classify the machine condition into states according to the ISO 2372 standard for vibration severity: Good, Acceptable, Unsatisfactory, or Unacceptable. In field tests with bulldozers and backhoes from different manufacturers, the machine learning algorithms classified machine health status with an accuracy between 85% and 95%.
Moreover, the system allows early detection of "Unacceptable" states between 120 and 170 hours prior to critical failure. These results demonstrate that the proposed system will collect relevant data to generate predictive maintenance plans and avoid unplanned downtimes.
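The abstract above does not give implementation details, but the final classification step it describes (mapping measured vibration severity onto the four ISO 2372 states) can be sketched minimally. The threshold values below are assumptions chosen for illustration of a heavy-machine class; the actual boundaries depend on the machine class defined in the ISO 2372 / ISO 10816 tables, and the paper's models additionally use learned features rather than fixed thresholds.

```python
def iso2372_state(rms_velocity_mm_s: float) -> str:
    """Map an RMS vibration velocity (mm/s) to one of the four ISO 2372
    severity states named in the abstract.

    The boundary values are illustrative assumptions for a large-machine
    class; consult the ISO 2372 / ISO 10816 severity tables for the
    class that matches the actual equipment.
    """
    if rms_velocity_mm_s <= 1.8:
        return "Good"
    elif rms_velocity_mm_s <= 4.5:
        return "Acceptable"
    elif rms_velocity_mm_s <= 11.2:
        return "Unsatisfactory"
    else:
        return "Unacceptable"


# Example: a reading of 3.2 mm/s falls in the second band.
print(iso2372_state(3.2))
```

In the deployed system, a rule like this would be the label source (or sanity check) for training the classifiers, while the learned models operate on richer features from the two accelerometer channels.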
During the fourth age, marked physiological deterioration and critical points of dysfunction are observed, during which cognitive performance exhibits a marked decline in certain skills (fluid intelligence) but relative preservation of others (crystallized intelligence). Experimental evidence describes important constraints on word production during old age, accompanied by a relative stabilization of speech comprehension. However, cognitive changes associated with advanced aging could also affect comprehension, particularly word recognition. The present study examines how the visual recognition of words is affected during the fourth age when tasks involving different cognitive loads are applied. Through linear regression models, performance was compared between two third-age groups and a fourth-age group on reaction time (RT) and accuracy in naming, priming, and lexical decision experiments. The fourth-age group showed a significant RT increase in all experiments. In contrast, accuracy was good when the task involved a low cognitive demand (Experiments 1 and 2); however, when a decisional cognitive factor was included (Experiment 3), the fourth-age group performed significantly worse than the younger third-age group. We argue that the behavior observed among fourth-age individuals is consistent with an unbalanced cognitive configuration, in which the fluid intelligence deficit significantly reduces the speed necessary to recognize words, independent of the cognitive load associated with the test. In contrast, the maintenance of crystallized intelligence improves the accuracy of the process, strengthening linguistic functionality in the advanced stages of old age.