Background: Mental health concerns of university students have been gaining more attention since the emergence of the coronavirus disease. Consequently, scholars in education, health and psychology-related fields have attributed the dwindling subjective well-being (SWB) of students to their low levels of digital health literacy (DHL). However, little attention has been paid to an important variable, pocket money (PM), which might serve as a buffer against reduced levels of SWB. In this study, we explored the dynamics of PM and its linkage with DHL and SWB among university students in Ghana. Methods: Using a cross-sectional design, a convenience sample of 1160 students was obtained from the University of Education, Winneba, Ghana. The COVID-DHL and WHO-5 Well-being instruments were used to collect data over a two-month period (February–March 2021). Chi-square tests, multivariate regression, simple linear regression, and PROCESS mediation analyses were performed using SPSS software version 25. Results: The study found that while most of the students were financially supported by their parents (n = 715, 61.6%), a large proportion reported that their PM was either less than sufficient or not sufficient (n = 550; 76.9%). Findings revealed a positive relationship between PM and SWB (B = −36.419, p < 0.001; B = −13.146, p = 0.012; B = −10.930, p = 0.043), with this relationship mediated by DHL (B = −1.139, confidence interval [CI] [−2.073, −0.263] vs. −2.300, CI [−4.290, −0.532] vs. −8.366, CI [−14.863, −1.908]). Conclusions: Students with little or insufficient PM were vulnerable to mental health problems, although this could be buffered by high DHL levels. In practical terms, not only should the PM of university students be increased, but the sources of PM should also be complemented, since the sufficiency level of PM was associated with the source of finance.
More importantly, parents should be empowered through job creation so that sufficient levels of PM can be provided to university students.
There is no point in testing the knowledge, attributes, traits, behaviours or abilities of an individual if the information obtained from the test is inaccurate. However, by and large, the estimation of the psychometric properties of test items in classrooms has been largely ignored, or is dying slowly, in most testing environments. In the quest to obtain sound and efficient test results, it is imperative that assessors rely on psychometric properties to make informed classroom decisions. These psychometric properties can be estimated using the Kuder-Richardson 20 (KR-20) formula. In this study, 30 multiple-choice items were administered and used for the study. The strength of each item was analysed by examining its difficulty level and how well it discriminated among the students. Reliability tests were also conducted, in addition to the item analysis, to gauge the quality of the test as a whole. KR-20 was used to estimate the psychometric properties of the set of 30 integrated science test items (scored dichotomously) to serve as a primer for assessors in higher institutions. The procedure produced a coefficient of 0.6915, approximately 0.7, implying that the reliability of the test was high. The procedure used to arrive at this coefficient is outlined extensively in the paper. We concluded that the suggested procedure (KR-20) for estimating psychometric properties may bring about a paradigm shift in classroom testing situations by communicating to teachers the efficiency of teacher-made tests and the process behind them. In essence, this could enhance the quest to capture the real knowledge, attributes, traits, behaviours or abilities of students by using test items that are reliable and dependable.
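The KR-20 coefficient the abstract reports can be computed directly from a 0/1 response matrix. The sketch below uses simulated responses (the study's actual data are not available here); the formula itself is the standard KR-20: (k/(k−1)) × (1 − Σpq / σ²), where p and q are the item proportions correct and incorrect and σ² is the variance of total scores.

```python
import numpy as np

def kr20(scores: np.ndarray) -> float:
    """KR-20 reliability for dichotomously scored items.
    scores: (n_examinees, n_items) array of 0/1 responses."""
    k = scores.shape[1]
    p = scores.mean(axis=0)                      # proportion correct per item
    q = 1 - p                                    # proportion incorrect per item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of examinees' total scores
    return (k / (k - 1)) * (1 - (p * q).sum() / total_var)

# Hypothetical 30-item test taken by 100 examinees, with ability-driven responses
rng = np.random.default_rng(0)
ability = rng.normal(size=(100, 1))
difficulty = rng.normal(size=(1, 30))
prob = 1 / (1 + np.exp(-(ability - difficulty)))
responses = (rng.random((100, 30)) < prob).astype(int)

print(round(kr20(responses), 4))
```

A coefficient near 0.7, as the abstract reports, would indicate acceptable internal consistency for a classroom test of this length.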
There is seeming unanimity among consumers of educational research that multivariate analysis of variance (MANOVA) is one of the most predominant and commonly used statistical models for the analysis of data in the discipline of education. Despite the wide use of MANOVA in research studies, most practitioners within the educational research tradition still face difficulties in reporting and meaningfully interpreting MANOVA results. In analytical terms, MANOVA tests one or more independent variables against two or more dependent variables. In practice, the one-way MANOVA is used to determine whether there are any differences between independent groups on more than one continuous dependent variable. This paper provides a data-driven example of reporting and interpreting MANOVA for consumers of research, focusing on how results can be reported and interpreted using the APA format. Throughout the paper, a single example data set is used to illustrate how MANOVA is reported and interpreted in educational studies. The paper emphasised that if researchers do not correctly test the statistical assumptions underlying MANOVA, the accrued results may not be valid. This could have structural effects on the conclusions and implications drawn from the analysis. The paper concludes with remarks that throw more light on the relevant implications of using MANOVA in educational studies.
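A one-way MANOVA of the kind the abstract describes can be run outside SPSS as well; below is a minimal sketch using statsmodels on hypothetical data (the grouping variable `method` and the two dependent variables are invented for illustration, not taken from the paper).

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(1)
n = 60  # 20 examinees per group

# Hypothetical data: one IV (teaching method, 3 groups), two continuous DVs
df = pd.DataFrame({
    "method": np.repeat(["A", "B", "C"], n // 3),
    "achievement": rng.normal(50, 10, n) + np.repeat([0, 6, 12], n // 3),
    "motivation": rng.normal(30, 5, n) + np.repeat([0, 3, 6], n // 3),
})

# One-way MANOVA: both DVs on the left of the formula
mv = MANOVA.from_formula("achievement + motivation ~ method", data=df)
res = mv.mv_test()

# Wilks' lambda, Pillai's trace, etc. for the 'method' effect --
# these are the statistics reported in APA style
print(res.results["method"]["stat"])
```

For APA-style reporting, Wilks' lambda (or Pillai's trace when assumptions are shaky) is typically quoted with its F approximation, degrees of freedom and p-value, followed by univariate follow-up tests for each dependent variable.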
Viewed through the psychometric models of modern assessment, it is imperative to note that assessment has been undergoing major constructive modifications. One of the main modifications is the transition from classical test theory (CTT) to modern test models based on item response theory (IRT), as well as in the methods used in test construction. That is, the pendulum in test construction techniques has swung from CTT to IRT. Clearly, the properties of IRT offer several benefits over CTT in the areas of test design and item banking. This paper estimated the IRT psychometric properties (difficulty, 1PL; discrimination, 2PL; and pseudo-guessing, 3PL) of the West African Examinations Council (WAEC) 2020 core mathematics objective test examination in Ghana. The study revealed that, on average, the 2020 core mathematics objective test items were within the difficulty level (1PL) of the examinees. In terms of discrimination (2PL), the psychometric properties indicated that most of the items discriminated among the examinees. However, on the pseudo-guessing parameter (3PL), it was found that most items were subject to pseudo-guessing. Theoretically and practically, the way the items were constructed could have influenced this outcome. We therefore strongly recommend that item writers for examination bodies such as the West African Examinations Council (WAEC) be rigorously trained to be more conscious of the theoretical and practical implications of the guessing parameter. This will help in avoiding test items that are prone to pseudo-guessing.
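The three parameters the abstract names map onto the standard 3PL item response function, where the 1PL and 2PL models are the special cases with c = 0 (and, for 1PL, a fixed). A small sketch (the parameter values are illustrative, not estimates from the WAEC data):

```python
import numpy as np

def p_correct(theta, a=1.0, b=0.0, c=0.0):
    """3PL item response function: probability of a correct response.
    theta: examinee ability
    a: discrimination (the 2PL parameter)
    b: difficulty (the 1PL parameter)
    c: pseudo-guessing lower asymptote (the 3PL parameter)"""
    return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 7)
# An item prone to pseudo-guessing: even very low-ability examinees
# answer correctly about 25% of the time (e.g. a 4-option multiple-choice item)
print(p_correct(theta, a=1.2, b=0.5, c=0.25).round(3))
```

A non-trivial c, as the study found for most items, means the item never fully separates low-ability examinees from chance responders, which is why the abstract urges item writers to attend to the guessing parameter.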
Background: Test score pollution describes how multifaceted factors affect the truthfulness of test score interpretations. The pressure to raise test scores has resulted in practices that pollute the inferences we make from these scores. Issues of accurate testing remain relevant in any testing environment, in Ghana and beyond. This study explored three sources of test score pollution: test preparation practices (teacher factor), test administration situations (testing environment), and external factors (parents and community pressure). Methods: The study adopted a quantitative approach using a descriptive survey. Basic school teachers (n = 353) and parents (n = 123) were selected from three districts (South, North and Central Tongu) in the Volta region, with the sample size determined using G*Power software. A validated and standardized instrument (alpha coefficient = .783; correlation coefficient = .823) was used to obtain the data. The data were analysed using SPSS v.25 and interpreted with linear multiple regression after all the required assumptions had been met. Findings: The results revealed that all the predictive factors, that is, test preparation practices (t = 4.73, Sig. = .007, 95% CI), test administration situations (t = 4.20, Sig. = .006, 95% CI) and parents and community pressure (t = 2.69, Sig. = .000, 95% CI), predicted test score pollution in the selected districts. However, among the predictors, test administration situations (testing environment or conditions) had the greatest influence on test score pollution in the districts (R² = .652, 65.2%, Sig. = .000, β = .616, 95% CI). Conclusion: The study concluded that, due to test score pollution, most testing practices in Ghana are not at their optimal best. Clearly, the demand and pressure to raise test scores pollute and contaminate the interpretations, inferences and decisions made from these scores.
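The linear multiple regression design the abstract describes, with three predictors and one outcome, can be sketched as follows. The data here are simulated so that the test-administration factor dominates, mirroring the study's finding; the variable names and coefficients are hypothetical, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 400

# Hypothetical standardized scores on the three predictive factors
df = pd.DataFrame({
    "test_prep": rng.normal(size=n),          # test preparation practices
    "test_admin": rng.normal(size=n),         # test administration situations
    "external_pressure": rng.normal(size=n),  # parents and community pressure
})
# Simulate pollution with test administration as the strongest predictor
df["pollution"] = (0.3 * df["test_prep"] + 0.6 * df["test_admin"]
                   + 0.2 * df["external_pressure"]
                   + rng.normal(scale=0.5, size=n))

model = smf.ols("pollution ~ test_prep + test_admin + external_pressure",
                data=df).fit()
print(f"R-squared = {model.rsquared:.3f}")
print(model.params.round(3))
```

With standardized predictors, the fitted coefficients play the role of the β weights reported in the abstract, and R² gives the proportion of variance in pollution the three factors explain jointly.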