Systems Factorial Technology is a powerful framework for investigating fundamental properties of human information processing, such as architecture (i.e., serial or parallel processing) and capacity (how processing efficiency is affected by increased workload). The Survivor Interaction Contrast (SIC) and the Capacity Coefficient are effective measures for identifying these underlying properties from response-time data. Each architecture, under the assumption of independent processing, predicts a specific form of the SIC along with a particular range of capacity. In this study, we explored SIC predictions of discrete-state (Markov process) and continuous-state (linear dynamic) models that allow for certain types of cross-channel interaction. The interaction can be facilitatory or inhibitory: one channel can either speed up or slow down processing in its counterpart. Despite the relative generality of these models, combining the architecture-oriented and capacity-oriented analyses provides for precise identification of the underlying system.
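As an illustrative sketch (not taken from the paper), the SIC can be estimated from the empirical survivor functions of the four double-factorial conditions, SIC(t) = [S_LL(t) - S_LH(t)] - [S_HL(t) - S_HH(t)]. The function names and the simulated parallel-OR data below are assumptions for demonstration only:

```python
import numpy as np

def empirical_survivor(rts, t_grid):
    """Empirical survivor function S(t) = P(T > t) evaluated on a time grid."""
    rts = np.asarray(rts)
    return np.array([(rts > t).mean() for t in t_grid])

def sic(rt_ll, rt_lh, rt_hl, rt_hh, t_grid):
    """Survivor Interaction Contrast:
    SIC(t) = [S_LL(t) - S_LH(t)] - [S_HL(t) - S_HH(t)],
    where L/H index low/high salience on the two channels."""
    s = {k: empirical_survivor(v, t_grid)
         for k, v in dict(ll=rt_ll, lh=rt_lh, hl=rt_hl, hh=rt_hh).items()}
    return (s['ll'] - s['lh']) - (s['hl'] - s['hh'])

# Simulated data: independent parallel-OR (first-terminating) processing,
# with each channel's exponential rate set by its salience (high = faster).
rng = np.random.default_rng(0)
n = 5000
def channel(rate):
    return rng.exponential(1.0 / rate, n)  # exponential finishing times

t_grid = np.linspace(0, 3, 200)
# Parallel-OR: the response fires when the first channel finishes.
rt = {'ll': np.minimum(channel(1.0), channel(1.0)),
      'lh': np.minimum(channel(1.0), channel(2.0)),
      'hl': np.minimum(channel(2.0), channel(1.0)),
      'hh': np.minimum(channel(2.0), channel(2.0))}
curve = sic(rt['ll'], rt['lh'], rt['hl'], rt['hh'], t_grid)
# Theory: parallel-OR with independent channels predicts SIC(t) >= 0 for all t.
```

With independent exponential channels the theoretical SIC here is x²(1−x)² with x = e^(−t), which is nonnegative everywhere, so the estimated curve should hover at or above zero up to sampling noise.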
Systems Factorial Technology (SFT) comprises a set of powerful nonparametric models and measures, together with a theory-driven experimental methodology termed the Double Factorial Paradigm (DFP), for assessing the cognitive mechanisms that support processing multiple sources of information in a given task (Townsend & Nozawa, 1995). We provide an overview of the model-based measures of SFT, together with a tutorial on designing a DFP experiment that takes advantage of all SFT measures in a single study. Illustrative examples highlight the breadth of applicability of these techniques across psychology. We further introduce and demonstrate a new package for performing SFT analyses in R for Statistical Computing.
A critical component of how we understand a mental process is given by measuring the effect of varying the workload. The capacity coefficient (Townsend & Nozawa, 1995; Townsend & Wenger, 2004) is a response-time measure for quantifying changes in performance due to workload. Despite its precise mathematical foundation, rigorous statistical tests have until now been lacking. In this paper, we establish statistical properties of the components of the capacity measure and propose a significance test for comparing a capacity coefficient to a baseline measure, or two capacity coefficients to each other.
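The OR-task capacity coefficient is conventionally defined as C(t) = H_AB(t) / (H_A(t) + H_B(t)), where H is the integrated hazard function of the double-target and single-target response times. A minimal sketch, estimating H(t) as −log S(t) from the empirical survivor function (the function names and simulated baseline data are illustrative assumptions):

```python
import numpy as np

def cumulative_hazard(rts, t_grid):
    """Integrated hazard H(t) = -log S(t) from the empirical survivor function."""
    rts = np.asarray(rts)
    s = np.array([(rts > t).mean() for t in t_grid])
    s = np.clip(s, 1e-12, 1.0)  # avoid log(0) in the tail
    return -np.log(s)

def capacity_or(rt_ab, rt_a, rt_b, t_grid):
    """OR-task capacity coefficient C(t) = H_AB(t) / (H_A(t) + H_B(t)).
    C(t) = 1: unlimited capacity; C(t) < 1: limited; C(t) > 1: super capacity."""
    h_ab = cumulative_hazard(rt_ab, t_grid)
    denom = cumulative_hazard(rt_a, t_grid) + cumulative_hazard(rt_b, t_grid)
    with np.errstate(invalid='ignore', divide='ignore'):
        return np.where(denom > 0, h_ab / denom, np.nan)

# Illustrative check: independent parallel exponential channels form the
# unlimited-capacity baseline, so C(t) should stay near 1.
rng = np.random.default_rng(1)
n = 20000
rt_a = rng.exponential(1.0, n)
rt_b = rng.exponential(0.8, n)
rt_ab = np.minimum(rng.exponential(1.0, n), rng.exponential(0.8, n))
t_grid = np.linspace(0.1, 2.0, 50)
c = capacity_or(rt_ab, rt_a, rt_b, t_grid)
```

For these simulated channels H_A(t) = t, H_B(t) = 1.25t, and H_AB(t) = 2.25t, so the true C(t) is exactly 1 at every t; deviations in the estimate reflect only sampling noise.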
Is there a point within a self-report questionnaire where participants will start responding carelessly? If so, after how many items do participants reach that point? And what can researchers do to encourage participants to remain careful throughout the entirety of a questionnaire? We conducted two studies (Study 1 N = 358; Study 2 N = 129) to address these questions. We found (a) consistent evidence that participants responded more carelessly as they progressed further into a questionnaire, (b) mixed evidence that participants who were warned that carelessness would be punished displayed smaller increases in carelessness, and (c) mixed evidence that increases in carelessness were greater in an unproctored online study (Study 1) than in a proctored laboratory study (Study 2). These findings help clarify when and why careless responding is likely to occur, and they suggest effective preventive strategies.