The study examined whether words are misperceived during natural fluent reading and the extent to which contextual and lexical properties bias perception. Target words were pairs of orthographic neighbors that differed in frequency. Pretarget context was neutral (Experiment 1) or biased toward the higher frequency member of the pair (Experiments 2 and 3), and posttarget context was neutral, congruent, or incongruent. Critically, incongruent context was constructed so that it was congruent with the target's neighbor. First-pass viewing showed only effects of target frequency. During silent reading (Experiments 1 and 2), rereading measures showed that the target frequency effect was smaller in the incongruent posttarget context condition than in the neutral and congruent conditions, and this occurred irrespective of prior context. Presumably, lower frequency words were less impeded by incongruent context because they were often misperceived as a congruent higher frequency neighbor. An oral reading task (Experiment 3) showed that the lower frequency target was more often misread than the higher frequency neighbor, and this proneness to error was influenced by posttarget context. Although target frequency influenced proneness to error, biased prior sentence context appeared to influence the construal of sentence meaning to accommodate incongruent targets and posttarget context.
Spoken word recognition models incorporate the temporal unfolding of word information by assuming that positional match constrains lexical activation. Recent findings challenge this linearity constraint. In the visual world paradigm, Toscano, Anderson, and McMurray observed that listeners preferentially viewed a picture of a target word's anadrome competitor (e.g., competitor bus for target sub) compared with phonologically unrelated distractors (e.g., well) or competitors sharing an overlapping vowel (e.g., sun). Toscano et al. concluded that spoken word recognition relies on coarse-grained spectral similarity for mapping spoken input to a lexical representation. Our experiments aimed to replicate the anadrome effect and to test the coarse-grained similarity account using competitors without vowel position overlap (e.g., competitor leaf for target flea). The results confirmed the original effect: anadrome competitor fixation curves diverged from unrelated distractors approximately 275 ms after the onset of the target word. In contrast, the no-vowel-position-overlap competitor did not show an increase in fixations compared with the unrelated distractors. The contrasting results for the anadrome and no-vowel-position-overlap items are discussed in terms of the theoretical implications of sequential-match versus coarse-grained similarity accounts of spoken word recognition. We also discuss design issues (repetition of stimulus materials and display parameters) concerning the use of the visual world paradigm in making inferences about online spoken word recognition.
Assessing performance in the workplace typically relies on subjective evaluations, such as peer ratings, supervisor ratings, and self-assessments, which are manual, burdensome, and potentially biased. We use objective mobile sensing data from phones, wearables, and beacons to study workplace performance and offer new insights into behavioral patterns that distinguish higher and lower performers when considering roles in companies (i.e., supervisors and non-supervisors) and different types of companies (i.e., high tech and consultancy). We present initial results from an ongoing year-long study of N=554 information workers collected over a period ranging from 2 to 8.5 months. We train a gradient boosting classifier that can classify workers as higher or lower performers with an AUROC of 0.83. Our work opens the way to new forms of passive, objective assessment and feedback to workers, potentially providing week-by-week or quarter-by-quarter guidance in the workplace.
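The classification step described above can be sketched as follows. This is a minimal illustration, not the study's pipeline: the feature names and data are synthetic stand-ins for the aggregated sensing features, and the AUROC obtained here is not the paper's 0.83.

```python
# Hedged sketch: train a gradient boosting classifier to label workers as
# higher vs. lower performers. All features and labels below are synthetic
# placeholders, not the study's actual phone/wearable/beacon data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 554  # workers, matching the study's N

# Hypothetical per-worker aggregates (e.g., sleep, phone use, desk time, mobility).
X = rng.normal(size=(n, 4))
# Synthetic binary performance label driven by the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auroc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"held-out AUROC: {auroc:.2f}")
```

AUROC is used rather than accuracy because it is insensitive to the threshold chosen for splitting workers into "higher" and "lower" performers.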
We know that reading involves coordination between textual characteristics and visual attention, but research linking eye movements during reading and comprehension assessed after reading is surprisingly limited, especially for reading long connected texts. We tested two competing possibilities: (a) the weak-association hypothesis, that links between eye movements and comprehension are weak and short-lived, versus (b) the strong-association hypothesis, that the two are robustly linked, even after a delay. Using a predictive modeling approach, we trained regression models to predict comprehension scores from global eye movement features, using participant-level cross-validation to ensure that the models generalize across participants. We used data from three studies in which readers (Ns = 104, 130, 147) answered multiple-choice comprehension questions 30 min after reading a 6,500-word text, or after reading up to eight 1,000-word texts. The models generated accurate predictions of participants' text comprehension scores (correlations between observed and predicted comprehension: 0.384, 0.362, 0.372, ps < .001), in line with the strong-association hypothesis. We found that making more, but shorter, fixations consistently predicted comprehension across all studies. Furthermore, models trained on one study's data could successfully predict comprehension on the others, suggesting generalizability across studies. Collectively, these findings suggest that there is a robust link between eye movements and subsequent comprehension of a long connected text, thereby connecting theories of low-level eye movements with those of higher-order text processing during reading.
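The predictive-modeling approach above can be sketched in a few lines. This is an illustrative assumption-laden mock-up: the features (fixation count, mean fixation duration, etc.) and data are synthetic, and the model family is a plain linear regression rather than whatever the studies actually fit. The key idea shown is participant-level cross-validation: each participant's comprehension score is predicted by a model that never saw that participant's data.

```python
# Hedged sketch: predict comprehension from global eye-movement features
# with participant-level cross-validation, then correlate observed and
# predicted scores. Data and features are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(1)
n = 130  # participants; one row of global features per participant

# Hypothetical global features: fixation count, mean fixation duration, ...
X = rng.normal(size=(n, 3))
# Synthetic comprehension scores with a modest linear signal plus noise.
comprehension = 0.4 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(scale=1.0, size=n)

# Each fold's participants are predicted by a model trained on the others,
# so the correlation below reflects generalization across participants.
pred = cross_val_predict(LinearRegression(), X, comprehension,
                         cv=KFold(n_splits=5, shuffle=True, random_state=1))
r = np.corrcoef(comprehension, pred)[0, 1]
print(f"observed-predicted correlation: r = {r:.3f}")
```

Because each row is one participant, plain KFold already gives participant-level splits here; with multiple rows per participant, a grouped splitter (e.g., GroupKFold keyed on participant ID) would be needed instead.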
Several psychologists posit that performance is not only a function of personality but also of situational contexts, such as day-level activities. Yet in practice, personality assessments alone are typically used to infer job performance, providing a limited perspective that ignores activity. However, multi-modal sensing has the potential to characterize these daily activities. This paper illustrates how empirically measured activity data complements traditional effects of personality to explain a worker's performance. We leverage sensors in commodity devices to quantify the activity context of 603 information workers. By applying classical clustering methods to this multi-sensor data, we take a person-centered approach to describe workers in terms of both personality and activity. We encapsulate both these facets into an analytical framework that we call organizational personas. On interpreting these organizational personas, we find empirical evidence that, independent of a worker's personality, their activity is associated with job performance. While the effects of personality are consistent with the literature, we find that activity is equally effective in explaining organizational citizenship behavior and less effective, though still significant, for task proficiency and deviant behaviors. Specifically, personas that exhibit a daily-activity pattern with fewer location visits, batched phone use, shorter desk sessions, and longer sleep duration tend to perform better on all three performance metrics. Organizational personas are a descriptive framework for identifying testable hypotheses that can disentangle the role of malleable aspects like activity in determining the performance of a worker population.
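The persona-derivation step can be sketched as a standard clustering pipeline. This is a minimal sketch under stated assumptions: the four activity features mirror those named in the abstract (location visits, phone use, desk sessions, sleep duration), but the data, the choice of k-means, and the number of personas are illustrative, not the paper's actual method.

```python
# Hedged sketch: derive "organizational personas" by clustering per-worker
# activity summaries. Features and data are synthetic placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n = 603  # workers, matching the study's N

# Hypothetical daily-activity summaries per worker:
# location visits, phone-use sessions, desk-session length, sleep duration.
activity = rng.normal(size=(n, 4))

# Standardize so no single feature dominates the distance metric,
# then cluster workers into a small number of personas.
Z = StandardScaler().fit_transform(activity)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=2).fit(Z)
personas = kmeans.labels_
print("workers per persona:", np.bincount(personas))
```

Each persona would then be characterized by its cluster centroid (e.g., "fewer location visits, longer sleep") and related to the performance metrics in a separate analysis.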