Standard approaches for identifying task-completion strategies, such as precrastination and procrastination, reduce behavior to single markers that oversimplify the process of task completion. To illustrate this point, we consider three task-completion strategies and introduce a new method to identify their use. This approach was tested using an archival data set (N = 8,655) of the available electronic records of research participation at Kansas State University. The approach outperformed standard diagnostic approaches and yielded an interesting finding: Several strategies were associated with negative outcomes. Specifically, both procrastinators and precrastinators struggled to finish tasks on time. Together, these findings underscore the importance of using holistic approaches to determine the relationship among task characteristics, individual differences, and task completion.
Objective: We used this experiment to determine the degree to which cues to difficulty are used to make judgments of difficulty (JODs). Background: Traditional approaches involve seeking to standardize the information people use to evaluate subjective workload; however, it is likely that conscious and unconscious cues underlie people’s JODs. Method: We designed a video game task that tested the degree to which time-on-task, performance-based feedback, and central cues to difficulty informed JODs. These relationships were modeled along five continuous dimensions of difficulty. Results: Central cues most strongly contributed to JODs; judgments were supplemented by peripheral cues (performance-based feedback and time-on-task) even though these cues were not always valid. In addition, participants became more likely to rate the task as “easier” over time. Conclusion: Although central cues are strong predictors of task difficulty, people confuse task difficulty (central cues), effort allocation and skill (performance-based feedback), and proxy cues to difficulty (time) when making JODs. Application: Identifying the functional relationships between cues to difficulty and JODs will provide valuable insight regarding the information that people use to evaluate tasks and to make decisions.
Procrastination is a chronic and widespread problem; however, emerging work raises questions regarding the strength of the relationship between self-reported procrastination and behavioral measures of task engagement. This study assessed the internal reliability, concurrent validity, predictive validity, and psychometric properties of 10 self-report procrastination assessments using responses collected from 242 students. Participants’ scores on each self-report instrument were compared to each other using correlations and cluster analysis. Lasso estimation was used to test the self-report scores’ ability to predict two behavioral measures of delay (days to study completion; pacing style). The self-report instruments exhibited strong internal reliability and moderate levels of concurrent validity. Some self-report measures were predictive of days to study completion. No self-report measures were predictive of deadline action pacing, the pacing style most commonly associated with procrastination. Many of the self-report measures of procrastination exhibited poor fit. These results suggest that researchers should exercise caution in selecting self-report measures and that further study is necessary to determine the factors that drive misalignment between self-reports and behavioral measures of delay.
While it is easy to assume that university students who wait until the last minute to complete surveys for their class research requirements provide low-quality data, this issue has not been empirically examined. The goal of the present study was to examine the relation between student research procrastination and two important data quality issues: careless responding and measurement noninvariance. Data gathered from university students across two semesters tentatively indicated that procrastination is related to low-quality survey data. Procrastination was slightly more problematic for certain data quality issues (measurement noninvariance) than others (careless responding). These relations, however, were small and contingent on how procrastination and careless responding were measured. Accordingly, it seems more beneficial for researchers to select a sampling window that supports their research goals and statistical power requirements than to select a sampling window that attempts to minimize careless survey responding or other measurement issues.