Language is a psychologically rich medium for human expression and communication. While it is often used in moral psychology as an intermediary between researcher and participant, much of the human experience that occurs through language — our relationships, conversations, and, in general, the everyday transmission of our thoughts — has yet to be studied in association with moral concerns. To understand how moral concerns relate to observed language use, we paired Facebook status updates (N = 107,798) from English-speaking participants (n = 2,691) with their responses on the Moral Foundations Questionnaire, which measures Care, Fairness, Loyalty, Authority, and Purity concerns. Overall, we found consistent evidence that participants' self-reported moral concerns can be predicted from their language, though the magnitude of this effect varied considerably among concerns. Across a diverse selection of Natural Language Processing methods, cross-validated R2 values ranged from 0.04 for predicting Fairness concerns to 0.21 for predicting Purity concerns. In follow-up analyses, each moral concern was found to be related to distinct patterns of relational, emotional, and social language. Our results are the first to relate internally valid measures of moral concerns to observations of naturally occurring language, motivating several new avenues for investigating how the moral domain intersects with language use.
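The cross-validated prediction setup described above can be illustrated with a minimal sketch: text features are extracted from each participant's language and a regularized regression is scored with out-of-sample R2. The toy texts, scores, and choice of TF-IDF features with ridge regression are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch (assumed pipeline, not the study's): predict a self-reported
# moral concern score from text via TF-IDF features + cross-validated ridge.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Toy stand-ins for participants' concatenated status updates and their
# (hypothetical) questionnaire scores for one moral concern.
texts = [
    "grateful for my family and friends tonight",
    "so disgusted by the mess in politics lately",
    "blessed and thankful for this community",
    "nothing is sacred anymore it is disgusting",
    "great game with friends this weekend",
    "keep your body pure and your heart clean",
]
scores = [2.1, 4.0, 3.2, 4.5, 1.8, 4.8]

model = make_pipeline(TfidfVectorizer(), Ridge(alpha=1.0))
# Out-of-sample R2, one value per fold.
r2_scores = cross_val_score(model, texts, scores, cv=3, scoring="r2")
print(r2_scores)
```

With only six toy documents the fold-wise R2 values are noisy (and can be negative); the point is the shape of the evaluation, not the numbers.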
Technological innovations have become a key driver of societal advancements. Nowhere is this more evident than in the field of machine learning (ML), which has developed algorithmic models that shape our decisions, behaviors, and outcomes. These tools have widespread use, in part, because they can synthesize massive amounts of data to make seemingly objective recommendations. Yet, in the past few years, the ML community has been raising the alarm about why we should be cautious in interpreting and using these models: they are created by humans, from data generated by humans, whose psychology allows for various biases that affect how the models are developed, trained, tested, and interpreted. As psychologists, we thus face a fork in the road. Down the first path, we can continue to use these models without examining and addressing these critical flaws, relying on computer scientists to try to mitigate them. Down the second path, we can turn our expertise in bias toward this growing field, collaborating with computer scientists to mitigate the deleterious outcomes associated with these models. This paper serves to light the way down the second path by identifying how extant psychological research can help examine and mitigate bias in ML models.
Over the past decades, text-analysis methods have gradually been integrated into the toolbox used to reliably measure psychological constructs. Yet, many of the existing computational methods in psychological text analysis remain atheoretical and lack the interpretability that the social sciences are accustomed to and desire. Here, we introduce a novel method for theory-driven text analysis that bridges the power of contextual language models and common psychometric scales. The new technique, which we call Contextualized Construct Representation (CCR), retains high levels of interpretability and top-down flexibility while making use of state-of-the-art language models developed in natural language processing (NLP). CCR is a flexible technique that can adapt to the continuously progressing set of tools for language modeling. We discuss how the proposed technique quantifies psychological information in textual data, and demonstrate in two studies (N = 2,996) that CCR outperforms other top-down methods (i.e., word-counting and word-embedding representations) in predicting an array of psychological outcomes common in social and personality psychology, including moral values, the need for cognition, political ideology, strength of norms, and cultural orientation. We provide an accompanying R package and Python library, and develop an interface for researchers to conveniently use CCR in their research.
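The core CCR idea — embed the questionnaire items of a scale, embed a participant's text with the same model, and score the participant by similarity to the construct representation — can be sketched as follows. A real implementation would use a contextual language model (e.g., a sentence encoder); here a toy word-count embedding stands in so the sketch is self-contained, and the vocabulary, items, and function names are illustrative assumptions.

```python
# Illustrative sketch of the CCR scoring idea. The toy_embed function is a
# stand-in for a contextual sentence encoder; everything else is the same
# shape as the technique: mean item embedding -> cosine similarity to text.
import numpy as np

def toy_embed(text, vocab):
    # Stand-in embedding: L2-normalized word-count vector over a fixed vocabulary.
    words = text.lower().split()
    vec = np.array([words.count(w) for w in vocab], dtype=float)
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def ccr_score(scale_items, participant_text, vocab):
    # The mean embedding of the scale items represents the construct.
    item_vecs = np.stack([toy_embed(item, vocab) for item in scale_items])
    construct = item_vecs.mean(axis=0)
    text_vec = toy_embed(participant_text, vocab)
    denom = np.linalg.norm(construct) * np.linalg.norm(text_vec)
    # Cosine similarity between construct and text is the CCR score.
    return float(construct @ text_vec / denom) if denom > 0 else 0.0

# Hypothetical scale items and vocabulary for a Purity-like construct.
vocab = ["pure", "clean", "disgust", "sacred", "money", "sports"]
items = ["purity and clean living are sacred", "disgust at impurity"]
print(ccr_score(items, "I value clean and sacred traditions", vocab))
```

In this toy setup, text that shares the construct's vocabulary scores higher than unrelated text, which is the behavior a contextual encoder delivers without requiring exact word overlap.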
This study aimed to identify the best genotypes using the genotype × yield × trait (GYT) method. Relationships between yield × trait combinations were investigated in four regions (Karaj, Birjand, Shiraz, and Arak) over two cropping years, using a randomized complete block design (RCBD) with three replications. The average grain yield across the four regions and two years of the experiment was 5966 kg/ha, and GYT values were obtained by multiplying grain yield by the different traits. Comparison of the average genotype × year effects across environments showed that the KSC703 and KSC707 hybrids were among the most productive of the studied genotypes for grain yield. Examination of the correlation coefficients between yield × trait combinations in the tested areas showed that Y × TWG with Y × GW, Y × NRE, Y × NGR, and Y × EL; Y × ED with Y × NGR; Y × NRE with Y × GW; and Y × GW with Y × GL had positive and significant correlations in all regions. Correlation diagrams drawn from the evaluated areas' data showed that most combinations, except Y × GT, were correlated with one another. Principal component analysis showed that the first three components explained the greatest share of the variation in the population; they were named the ear grain profile component, the grain thickness component, and the plant height profile component.
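The GYT computation described above — multiply grain yield by each trait, standardize the resulting yield × trait columns, and rank genotypes by their mean standardized value — can be sketched as follows. The genotype names other than KSC703/KSC707, and all numeric values, are illustrative assumptions, not the study's data.

```python
# Hedged sketch of the genotype x yield x trait (GYT) calculation.
# Yields and trait values below are made up for illustration.
import numpy as np

genotypes = ["KSC703", "KSC707", "G3"]
yield_kg_ha = np.array([6500.0, 6300.0, 5100.0])
# Columns: two hypothetical traits (e.g., grain weight, ear length).
traits = np.array([
    [32.0, 18.5],
    [32.0, 17.9],
    [28.0, 16.0],
])

gyt = yield_kg_ha[:, None] * traits               # yield x trait combinations
z = (gyt - gyt.mean(axis=0)) / gyt.std(axis=0)    # standardize each column
superiority = z.mean(axis=1)                      # mean across Y x trait columns
ranking = [genotypes[i] for i in np.argsort(-superiority)]
print(ranking)  # genotypes ordered best-first by the GYT superiority index
```

Standardizing each yield × trait column puts the combinations on a common scale before averaging, so no single trait dominates the ranking purely through its units.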