Are movement sequences executed in a hierarchically controlled fashion? We first state explicitly what such control would entail, and we observe that if a movement sequence is planned hierarchically, that does not imply that its execution is hierarchical. To find evidence for hierarchically controlled execution, we require subjects to perform memorized sequences of finger responses like those used in playing the piano. The error data we obtain are consistent with a model in which planning, as well as execution, is hierarchical, but the interresponse-time data provide strong support for a hierarchical execution model. We consider three alternatives to the hierarchical execution model and reject them. We also consider the implications of our results for the role of timing in motor programs, the characteristics of motor buffers, and the relations between memory for symbolic and motor information.
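The core interresponse-time prediction of a hierarchical execution model can be sketched as follows. Assuming the memorized sequence is stored as a balanced binary tree whose leaves are the individual responses, the time between successive responses should grow with the number of tree edges traversed from one leaf to the next. This is an illustrative reconstruction of the general idea, not the paper's exact model or parameters:

```python
# Sketch of a hierarchical (tree-traversal) execution model: the pause
# between successive responses is taken to be proportional to the number
# of edges climbed to the lowest common ancestor and back down.
def traversal_distance(i, j):
    """Tree edges between leaf i and leaf j in a balanced binary tree."""
    up = 0
    while i != j:        # climb until both leaves share an ancestor
        i //= 2
        j //= 2
        up += 1
    return 2 * up        # path up plus path back down

# For an 8-response sequence, the predicted interresponse-time profile:
irts = [traversal_distance(i, i + 1) for i in range(7)]
print(irts)  # [2, 4, 2, 6, 2, 4, 2]
```

The characteristic signature, on this reading, is that the longest pause falls at the sequence midpoint and shorter pauses at lower-level boundaries, which is the kind of pattern the interresponse-time data are said to support.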
Most techniques for relating textual information rely on intellectually created links such as author-chosen keywords and titles, authority indexing terms, or bibliographic citations. Similarity of the semantic content of whole documents, rather than just titles, abstracts, or overlap of keywords, offers an attractive alternative. Latent semantic analysis provides an effective dimension reduction method for the purpose that reflects synonymy and the sense of arbitrary word combinations. However, latent semantic analysis correlations with human text-to-text similarity judgments are often empirically highest at ≈300 dimensions. Thus, two- or three-dimensional visualizations are severely limited in what they can show, and the first and/or second automatically discovered principal component, or any three such for that matter, rarely capture all of the relations that might be of interest. It is our conjecture that linguistic meaning is intrinsically and irreducibly very high dimensional. Thus, some method to explore a high-dimensional similarity space is needed. But the 2.7 × 10^7 projections and infinite rotations of, for example, a 300-dimensional pattern are impossible to examine. We suggest, however, that the use of a high-dimensional dynamic viewer with an effective projection pursuit routine and user control, coupled with the exquisite abilities of the human visual system to extract information about objects and from moving patterns, can often succeed in discovering multiple revealing views that are missed by current computational algorithms. We show some examples of the use of latent semantic analysis to support such visualizations and offer views on future needs.

Most techniques for relating textual information rely on intellectually created links such as author-chosen keywords and titles, authority indexing terms, or bibliographic citations (1).
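The 2.7 × 10^7 figure can be reproduced by counting ordered triples of distinct axes in a 300-dimensional space, i.e. ordered three-axis projections; that reading of the count is our assumption, but the arithmetic matches:

```python
# Ordered triples of distinct axes from a 300-dimensional space
# (assumed reading of the 2.7e7 projection count in the text).
d = 300
ordered_triples = d * (d - 1) * (d - 2)
print(ordered_triples)          # 26730600, i.e. ~2.7e7

# For comparison, unordered axis pairs (plain 2-D scatterplots):
axis_pairs = d * (d - 1) // 2
print(axis_pairs)               # 44850
```

Either way of counting makes the point: exhaustive inspection of axis-aligned views is hopeless, which motivates the dynamic, user-guided projection pursuit the abstract proposes.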
Similarity of the semantic content of whole documents, rather than just titles, abstracts, or an overlap of keywords, offers an attractive alternative. Latent semantic analysis (LSA) provides an effective dimension reduction method for the purpose that reflects synonymy and the sense of arbitrary word combinations (2, 3).

Latent Semantic Analysis

LSA is one of a growing number of corpus-based techniques that employ statistical machine learning in text analysis. Other techniques include the generative models of Griffiths and Steyvers (4) and Erosheva et al. (5), the string-edit-based method of S. Dennis (6), and several new computational realizations of LSA. Unfortunately, to date none of the other methods scales to text databases of the size often desired for visualization of domain knowledge. The linear singular value decomposition (SVD) technique described here has been applied to collections of as many as a half billion documents containing 750,000 unique word types, all of which are used in measuring the similarity of two documents. LSA presumes that the overall semantic content of a passage, such as a paragraph, abstract, or full coherent document, can be useful...
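The SVD-based dimension reduction at the heart of LSA can be illustrated on a toy term-document matrix with plain NumPy. This is a minimal sketch of the technique, not the production pipeline described above, and real LSA applies a term-weighting step (e.g. log-entropy) that is omitted here:

```python
import numpy as np

# Toy term-document count matrix: rows = terms, columns = documents.
X = np.array([
    [2, 0, 1, 0],   # term appearing in documents 0 and 2
    [1, 0, 2, 0],
    [0, 3, 0, 2],   # term appearing in documents 1 and 3
    [0, 2, 0, 3],
], dtype=float)

# Truncated SVD: keep the k largest singular triplets.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T   # documents in k-dim latent space

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Documents 1 and 3 share vocabulary, so their latent vectors align:
print(cosine(doc_vecs[1], doc_vecs[3]))
```

The same latent document vectors are what a high-dimensional viewer would project and rotate; at realistic scale k is on the order of the ≈300 dimensions mentioned above, which is exactly why static 2-D or 3-D plots fall short.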
A model was quantified to describe the integration of vowel duration, fricative duration, and fundamental frequency (F0) contour as cues to final position fricatives differing in voicing. The basic assumptions are that perceived vowel duration and perceived frication duration are cues to the identity of final position fricatives and that both F0 contour and vowel duration influence perceived vowel duration. Binary choice and rating responses to synthetic stimuli varying independently along the three dimensions were collected. The results were consistent with the assumption that F0 contour operates by modifying perceived vowel duration, which is a direct cue. Unfortunately, the nature of the modification appears to be very similar in form to that which results from the integration of two independent cues in syllable identification. Therefore, the results do not allow a rejection of the idea that the perception of F0 contour may directly cue the identity of final position fricatives.
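The structure of the mediated-cue hypothesis can be sketched in a few lines: F0 contour does not cue voicing directly but shifts perceived vowel duration, which, together with frication duration, drives the voicing judgment. The functional form and every parameter value below are hypothetical and purely illustrative, not the paper's fitted model:

```python
import math

def p_voiced(vowel_ms, frication_ms, f0_falls,
             alpha=0.04, f0_boost_ms=15.0):
    """Illustrative probability of a 'voiced' response.

    f0_falls      -- whether the F0 contour falls at vowel offset
    alpha         -- hypothetical sensitivity parameter
    f0_boost_ms   -- hypothetical lengthening of perceived vowel duration
    """
    # Assumed mediation: falling F0 lengthens perceived vowel duration.
    perceived_vowel = vowel_ms + (f0_boost_ms if f0_falls else 0.0)
    # Longer vowels and shorter frication favor a "voiced" response.
    x = alpha * (perceived_vowel - frication_ms)
    return 1.0 / (1.0 + math.exp(-x))

# Same physical stimulus, different F0 contour:
print(p_voiced(200, 180, f0_falls=True))
print(p_voiced(200, 180, f0_falls=False))
```

The abstract's difficulty is visible in this sketch: an additive shift inside the decision variable is hard to distinguish behaviorally from treating F0 contour as a third, independently integrated cue.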
We describe the design and implementation of the Glue-Nail deductive database system. Nail is a purely declarative query language; Glue is a procedural language used for non-query activities. The two languages combined are sufficient to write a complete application. Nail and Glue code are both compiled into the target language IGlue. The Nail compiler uses variants of the magic sets algorithm and supports well-founded models. The Glue compiler's static optimizer uses peephole techniques and data flow analysis to improve code. The IGlue interpreter features a run-time adaptive optimizer that reoptimizes queries and automatically selects indexes. We also describe the Glue-Nail benchmark suite, a set of applications developed to evaluate the Glue-Nail language and to measure the performance of the system.
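The flavor of the bottom-up evaluation a deductive engine performs for a declarative program can be sketched with semi-naive evaluation of transitive closure. This is a generic illustration of deductive-database evaluation in Python, not Glue-Nail syntax or the actual IGlue implementation:

```python
# Semi-naive bottom-up evaluation of the Datalog program
#   path(X, Y) :- edge(X, Y).
#   path(X, Z) :- path(X, Y), edge(Y, Z).
# Only facts derived in the previous round (`delta`) are joined with
# `edge`, avoiding rederivation of known facts each iteration.
edge = {(1, 2), (2, 3), (3, 4)}

path = set(edge)          # first rule: every edge is a path
delta = set(edge)
while delta:
    new = {(x, z) for (x, y) in delta for (y2, z) in edge if y == y2}
    delta = new - path    # keep only genuinely new facts
    path |= delta

print(sorted(path))
```

Techniques such as magic sets, mentioned above, go further by rewriting the rules so that bottom-up evaluation derives only facts relevant to a particular query.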
While team tasks provide a wealth of data on individual and team performance, techniques for modeling team communication can be quite effortful and time-consuming. Automated techniques for analyzing team discourse offer the promise of quickly judging team performance and permitting feedback to teams both in training and in operations. In previous research, techniques using Latent Semantic Analysis (LSA) have proven successful for analyzing team transcripts. However, converting the audio discourse into transcripts often requires hand transcription. In this work, we describe applying automated speech recognition (ASR) to team speech and using the output of the ASR to predict overall team performance. Results indicate that ASR can be used in conjunction with semantic methods of modeling team communication to provide accurate predictions of performance. The work has potential for assisting operators in the performance of their tasks because it can "listen" and, in real time, evaluate free-form verbal communication from a variety of sources.
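One common way to turn LSA-style transcript vectors into a performance prediction is nearest-neighbor estimation: score a new team by averaging the known scores of the most semantically similar past transcripts. The vectors and scores below are hypothetical, and the exact method the authors used may differ; this is a sketch of the general approach:

```python
import numpy as np

# Hypothetical latent-space vectors for past team transcripts, with
# their measured performance scores.
train_vecs = np.array([[0.9, 0.1], [0.8, 0.3], [0.1, 0.9], [0.2, 0.8]])
train_scores = np.array([85.0, 80.0, 40.0, 45.0])

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def predict_score(vec, k=2):
    # Average the scores of the k most similar past transcripts.
    sims = np.array([cosine(vec, v) for v in train_vecs])
    top = np.argsort(sims)[-k:]
    return train_scores[top].mean()

# An ASR-derived transcript resembling the high-performing teams:
print(predict_score(np.array([0.85, 0.2])))   # 82.5
```

A useful property of this scheme for the ASR setting is robustness: because similarity is computed over whole-passage semantics rather than exact words, moderate recognition errors tend to perturb the latent vector only slightly.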