The psychological burden of the COVID-19 pandemic may have a lasting effect on the emotional well-being of healthcare workers. Medical personnel working during the pandemic may experience elevated occupational stress due to the uncontrollability of the virus, the high perceived risk of infection, poor understanding of the novel virus's transmission routes, and the unavailability of effective antiviral agents. This study used path analysis to examine the relationships between stress and alexithymia, emotional processing, and negative/positive affect in healthcare workers. The sample included 167 nurses, 65 physicians, and 53 paramedics; sixty-two respondents (21.75%) worked in COVID-19-designated hospitals. Respondents completed the Toronto Alexithymia Scale-20, Cohen's Perceived Stress Scale, the Emotional Processing Scale, and the Positive and Negative Affect Schedule. The model showed excellent fit indices (χ2(2) = 2.642, p = 0.267; CFI = 0.999, RMSEA = 0.034, SRMR = 0.015). Multiple-group path analysis demonstrated that physicians differed from nurses and paramedics at the model level (χ2diff(7) = 14.155, p < 0.05 and χ2diff(7) = 18.642, p < 0.01, respectively). The relationship between alexithymia and emotional processing was stronger in nurses than in physicians (difference in β = 0.27; p < 0.05). Individual path χ2 tests also revealed significantly different paths across these groups. The results of the study may be used to develop evidence-based intervention programs promoting healthcare workers' mental health and well-being.
In 2011, Diederik Stapel’s fraud was discovered. It turned out not only that Stapel had forged data but also that journals had failed to notice many obvious errors and had encouraged distortions (e.g., not reporting studies with non-significant results). Around the same time, Simmons et al. (2011) published an article on questionable research practices that can significantly inflate the number of false-positive results through arbitrary decisions about data analysis and presentation. Shortly afterwards, studies appeared suggesting that a large number of researchers admit to such practices and that these practices are, in fact, commonly accepted. These events sparked a wide debate about the reliability of data in psychology. The author of the present paper discusses the most important points of this debate, showing how the low level of theoretical maturity, the lack of consensus on the rules for applying research techniques and interpreting results, and the unrealistic demands of editors of empirical journals may have contributed to this crisis.
The article presents the results of a study on the role of social participation (Reinders) in shaping the identity (Luyckx et al.) of people with mild intellectual disability in late adolescence and emerging adulthood, compared to peers within the intellectual norm (N = 127). Three waves of measurement were carried out at semi-annual intervals, using the Dimensions of Identity Development Scale (DIDS/PL-1) and the Social Participation Questionnaire (SPQ-S). In all waves, people with intellectual disability showed a higher level of moratorium orientation, and at Wave 3 they also showed a higher level of transitive orientation. Differences in the levels of identity dimensions were observed in only one wave, and only for exploration in depth. The type of social participation proved to be a factor differentiating the levels of identity dimensions, especially commitment making and identification with commitment, whose highest levels were observed in people with the integration and assimilation types. The study responds to the need, expressed in the literature, to focus on specific groups in identity development studies.