Integrative complexity broadly measures the structural complexity of statements. This breadth, although beneficial in multiple ways, can potentially hamper the development of specific theories. In response, the authors developed a model of complex thinking, focusing on 2 different ways that people can be complex within the integrative complexity system, and subsequently developed measurements of each of these 2 routes: Dialectical complexity focuses on a dialectical tension between 2 or more competing perspectives, whereas elaborative complexity focuses on complexly elaborating on 1 singular perspective. The authors posit that many variables have different effects on these 2 forms of complexity and subsequently test this idea in 2 different theoretical domains. In Studies 1a, 1b, and 2, the authors demonstrate that variables related to attitude strength (e.g., domain importance, extremism, domain accessibility) decrease dialectical complexity but increase elaborative complexity. In Study 3, the authors show that counterattitudinal lying decreases dialectical complexity but increases elaborative complexity, implicating a strategic (as opposed to a cognitive strain) view of the lying-complexity relationship. The authors argue that this dual demonstration across 2 different theoretical domains helps establish the utility of the new model and measurements and offers the potential to reconcile apparent conflicts in the area of cognitive complexity.
Integrative complexity is a conceptually unique and very popular measurement of the complexity of human thought. We believe, however, that it is currently being underutilized because it is time-consuming to score by hand. The more time-efficient computer-based measurements of complexity currently available are correlated with integrative complexity at fairly low levels. To help fill this gap, we developed a novel automated integrative complexity (IC) system designed specifically from the integrative complexity theoretical framework. This new automated IC system achieved an alpha of .72 on the standard integrative complexity coding test. In addition, across nine datasets covering over 1,300 paragraphs, the new automated system consistently showed modest relationships with human-scored integrative complexity (average alpha = .62; average r = .46). Further analyses revealed that this relationship consistently remained significant when controlling for superficial markers of complexity, and that the new system accounted for both the differentiation and integration components of integrative complexity. Although the overlap between the automated and human-scored systems is only modest (and thus suggests the continued usefulness of human scoring), the new system nonetheless provides the best automated integrative complexity measurement to date.
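The validation strategy described above, correlating automated scores with human-scored IC and then checking that the relationship survives controlling for a superficial marker such as word count, can be sketched as follows. This is a minimal illustration, not the authors' actual code; the scores and word counts below are fabricated for demonstration.

```python
# Sketch of validating an automated complexity scorer against human ratings:
# (1) a zero-order Pearson correlation, and (2) a partial correlation that
# removes the influence of a superficial marker (here, word count).
# All data are invented for illustration only.
import math

def pearson_r(x, y):
    """Zero-order Pearson product-moment correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def partial_r(x, y, z):
    """Correlation of x and y with the covariate z partialled out."""
    rxy, rxz, ryz = pearson_r(x, y), pearson_r(x, z), pearson_r(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

human = [1, 2, 3, 3, 4, 5, 2, 1, 4, 3]               # human-scored IC (1-7 scale)
auto_ = [1, 3, 3, 2, 5, 4, 2, 2, 4, 3]               # hypothetical automated scores
words = [40, 80, 90, 60, 120, 110, 70, 50, 100, 85]  # paragraph word counts

print(f"zero-order r = {pearson_r(human, auto_):.2f}")
print(f"partial r (controlling word count) = {partial_r(human, auto_, words):.2f}")
```

If the partial correlation stays substantial, the automated system is tracking more than superficial verbosity, which is the logic of the control analysis the abstract reports.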
Methods for assessing non-daily smoking are of concern because biochemical measures cannot verify self-reports beyond 7 days. This study compares two self-reported smoking measures for non-daily smokers. A total of 389 college students (48% female, 96% white, mean age 19), smoking between 1 and 29 of the past 30 days, completed computer assessments in three cohorts, with the order of administration of the measures counterbalanced. Values from the two measures were highly correlated. Comparisons of the Timeline Follow-Back (TLFB) with the global questions for the total sample of non-daily smokers yielded statistically significant, albeit small, differences between measures (p < .001), with the TLFB yielding on average 2.38 more total cigarettes smoked in the past 30 days, 0.46 fewer smoking days, and 0.21 more cigarettes smoked per day. Analyses by level of smoking showed that the discordance between the measures differed by frequency of smoking. Global questions about days smoked frequently produced reports in multiples of five days, suggesting digit bias. Overall, the two measures of smoking were highly correlated and equally effective for identifying any smoking in a 30-day period among non-daily smokers. Keywords: Assessment; Timeline Follow-Back; tobacco; smoking; college students.
Methods to assess non-daily smoking have been a topic of concern and debate among researchers, particularly because there are no biochemical methods to verify self-reported smoking over a 30-day period (Mermelstein et al., 2002). Similar to younger students and an increasing proportion of adult smokers, many college students smoke irregularly and infrequently. Of the college students who smoked at least once in the past 30 days, only about one in four smoked every day (Wetter et al., 2004). Researchers have used multiple methods to assess self-reported smoking among college students, including the use of single-item global questions. Single-item global questions have focused on establishing any smoking in the past 30 days (Substance Abuse and Mental Health Services Administration, 2007) and assessing days of smoking (Core Institute, 1999). Single-item questions that conflate smoking days and number of cigarettes smoked in one question (Wechsle...
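The kind of measure comparison described above, discordance between TLFB and a global question, plus a heaping check for digit bias, can be sketched in a few lines. This is an illustrative toy, not the study's code; all counts below are fabricated.

```python
# Toy comparison of two self-report measures of smoking days, plus a
# digit-bias (heaping) check on the global-question reports.
# Data are invented for demonstration only.

tlfb_days   = [3, 7, 12, 4, 20, 9, 15, 6]    # days smoked per TLFB calendar
global_days = [5, 5, 10, 5, 20, 10, 15, 5]   # days smoked per global question

# Mean discordance (positive = TLFB reports more days)
diffs = [t - g for t, g in zip(tlfb_days, global_days)]
mean_diff = sum(diffs) / len(diffs)
print(f"TLFB minus global, mean days: {mean_diff:+.2f}")

# Digit bias: share of global-question reports heaped on multiples of five
heaped = sum(1 for g in global_days if g % 5 == 0) / len(global_days)
print(f"Global reports on multiples of five: {heaped:.0%}")
```

A high heaping proportion in the global question, relative to the day-by-day TLFB calendar, is the pattern the abstract interprets as digit bias.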
Computer algorithms that analyze language (natural language processing systems) have seen a great increase in usage recently. While use of these systems to score key constructs in social and political psychology has many advantages, it is also dangerous if we do not fully evaluate the validity of these systems. In the present article, we evaluate a natural language processing system for one particular construct that has implications for solving key societal issues: Integrative complexity. We first review the growing body of evidence for the validity of the Automated Integrative Complexity (AutoIC) method for computer-scoring integrative complexity. We then provide five new validity tests: AutoIC successfully distinguished fourteen classic philosophic works from a large sample of both lay populations and political leaders (Test 1) and further distinguished classic philosophic works from the rhetoric of Donald Trump at higher rates than an alternative system (Test 2). Additionally, AutoIC successfully replicated key findings from the hand-scored IC literature on smoking cessation (Test 3), U.S. Presidents’ State of the Union Speeches (Test 4), and the ideology-complexity relationship (Test 5). Taken in total, this large body of evidence not only suggests that AutoIC is a valid system for scoring integrative complexity, but it also reveals important theory-building insights into key issues at the intersection of social and political psychology (health, leadership, and ideology). We close by discussing the broader contributions of the present validity tests to our understanding of issues vital to natural language processing.