This paper describes a computer simulation of reading that is strongly driven by eye fixation data from human readers. The simulation, READER, is a natural language understanding system that reads a text word by word and whose processing cycles on each word have some correspondence with the human gaze duration on that word. READER operates within a newly developed information-processing architecture, a Collaborative, Activation-based Production System (CAPS), that permits modeling the temporal properties of human comprehension. CAPS allows concurrent, collaborative execution of processes operating at different levels of analysis. As READER encounters each successive word, the word is operated on by processes at the levels of word encoding, lexical access, syntactic and semantic analysis, and referential and schema-level processing. Like human readers, READER uses a strategy of immediacy of comprehension, attempting to interpret each word as soon as it is encountered rather than unnecessarily buffering information. A major contribution of this simulation is its use of human performance characteristics to constrain and determine the model's mechanisms.

The data consist of readers' gaze durations on each word of a text. People reading a text at a rate of 250 words per minute could spend a quarter of a second on each word. But they don't. Instead, their time on different words shows considerable systematic variation.
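The CAPS control structure described above can be illustrated as a parallel, activation-based production cycle in which the number of cycles needed to bring a goal element to threshold stands in for gaze duration. The following is a minimal, hypothetical sketch; the names (`caps_cycles`, `THRESHOLD`) and the toy productions are assumptions for illustration, not the original READER code:

```python
# Hypothetical sketch of a CAPS-style cycle: all productions whose condition
# element exceeds threshold fire in parallel on each cycle, each adding
# activation to its action element. Harder words need more cycles, loosely
# mirroring longer gaze durations.

THRESHOLD = 1.0

def caps_cycles(productions, memory, goal, max_cycles=50):
    """Run parallel production firings until `goal` reaches threshold.

    productions: list of (condition, action, weight) triples.
    memory: dict mapping elements to activation levels.
    Returns the number of cycles taken, a stand-in for gaze duration.
    """
    for cycle in range(1, max_cycles + 1):
        # Collect all firings against the *current* memory state, so
        # productions fire concurrently rather than in sequence.
        updates = {}
        for cond, action, weight in productions:
            if memory.get(cond, 0.0) >= THRESHOLD:
                updates[action] = updates.get(action, 0.0) + weight
        for elem, delta in updates.items():
            memory[elem] = memory.get(elem, 0.0) + delta
        if memory.get(goal, 0.0) >= THRESHOLD:
            return cycle
    return max_cycles

# A two-stage toy chain (percept -> word -> meaning): with weaker
# productions (weight 0.5) the goal takes more cycles to reach threshold
# than with stronger ones (weight 1.0).
slow = [("percept", "word", 0.5), ("word", "meaning", 0.5)]
fast = [("percept", "word", 1.0), ("word", "meaning", 1.0)]
slow_cycles = caps_cycles(slow, {"percept": 1.0}, "meaning")
fast_cycles = caps_cycles(fast, {"percept": 1.0}, "meaning")
```

The point of the sketch is only the control structure: processes at different levels fire concurrently whenever their inputs are active enough, and processing time emerges from how many cycles activation propagation takes.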
When subjects attend to one auditory message, they have no permanent memory for a second auditory message received simultaneously. It has generally been argued that a similar effect would occur cross-modally. This hypothesis was tested in the present experiment for messages presented to the visual and auditory modalities. All subjects were tested for recognition of information presented either while shadowing or while hearing but not shadowing a passage of prose presented to one ear. One group heard a list of concrete nouns in their other ear. Three other groups received (1) printed words, (2) pictures of objects easily labeled, or (3) pictures of objects difficult to label. The shadowing task produced a decrement in recognition scores for the first three groups but not for the group receiving pictures of objects difficult to label. Further, the shadowing task interfered more with information received auditorily than with any form of visual information. These results suggest that information received visually is stored in a long-term modality-specific memory that may operate independently of the auditory modality.
We have developed an innovative ray-tracing simulation algorithm to describe Relativistic Effects in SpaceTime ("REST"). Our algorithm, called REST frame, models light rays, which have assumed infinite speed in conventional ray tracing, as having a finite speed in spacetime, and uses the non-Newtonian Lorentz transformation to relate measurements of a single event in different inertial coordinate systems (inertial frames). Our earlier work [5][6][7] explored the power of REST frame as an experimentation tool to study the rich visual properties of a natural world modeled by special relativity. Non-intuitive images of the anisotropic deformation ("warping") of space, the intensity concentration/spreading of light sources in spacetime, and the relativistic Doppler shift were visualized from our simulations.

REST frame simulations are computationally expensive. Several hours of CPU time may be needed to generate one intricate image on a relatively powerful DECStation 3100. This high simulation cost of REST frame precludes its application in interactive, real-time graphics environments.

In this paper, we report a scanline-based REST frame rendering method that provides a faster alternative to the original ray-tracing-based REST frame implementation. This new method operates in the spirit of the classical Z-buffer in computer graphics [2] and the inter-inertial-frame point-mapping method investigated in physics in the early 1960s [14][12], and determines the visibility of points in spacetime by their spatial and temporal visibility. Specifically, all spacetime event points that are potentially visible from the viewpoint at the imaging time are geometrically projected in three-dimensional (3D) space to the image-plane pixel buffer. Multiple points with the same pixel affiliation are sorted by their time distance
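The two ingredients the abstract names, the Lorentz transformation between inertial frames and Z-buffer-style per-pixel selection by time distance, can be sketched as follows. This is a minimal illustration in natural units (c = 1) with one spatial dimension; the function names and data layout are assumptions for illustration, not the authors' implementation:

```python
import math

C = 1.0  # speed of light in natural units (assumption for this sketch)

def lorentz_boost(t, x, v):
    """Transform event coordinates (t, x) into a frame moving at
    velocity v along x, using the standard Lorentz transformation:
    t' = gamma * (t - v*x/c^2), x' = gamma * (x - v*t)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    t_prime = gamma * (t - v * x / C**2)
    x_prime = gamma * (x - v * t)
    return t_prime, x_prime

def resolve_visibility(projected_events):
    """Z-buffer-style selection, but keyed on time distance rather than
    depth: among all events projected to the same pixel, keep the one
    nearest in time (a 'time buffer').

    projected_events: iterable of (pixel, time_distance, color) triples.
    Returns a dict pixel -> (time_distance, color)."""
    buffer = {}
    for pixel, t_dist, color in projected_events:
        if pixel not in buffer or t_dist < buffer[pixel][0]:
            buffer[pixel] = (t_dist, color)
    return buffer

# Example: an event one time unit after the origin, viewed from a frame
# moving at 0.6c (gamma = 1.25), and two events competing for one pixel.
boosted = lorentz_boost(1.0, 0.0, 0.6)
visible = resolve_visibility([((0, 0), 2.0, "red"), ((0, 0), 1.0, "blue")])
```

The sketch only shows the selection rule; the actual method also projects each event geometrically to the image plane and restricts the candidate set to events visible at the imaging time.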
This paper concerns the visual perception of actions that are discretely conceptualized. The intent is to develop a vision system that produces causal or intentional descriptions of actions, thus providing the conceptual underpinnings of natural language descriptions. The computational theory is developed by linking a “point of action definition” analysis to an analysis of how physical events elicit appropriate verbal descriptions. Out of this theory of direct computational linkages between physical events, points of action definition, and verbal descriptions comes a theory of perception that provides some insight into how to go about constructing systems that can watch the world and report on what they are watching.