Massive Open Online Courses (MOOCs) appear to suffer from high attrition rates, involve students in peer assessment that is prone to patriotic bias, and largely serve people who are already well educated. This paper proposes a formative assessment model that takes these issues into consideration, focusing specifically on the assessment of open-format questions in MOOCs. It describes the current assessment methods used in MOOCs and argues that, for essays in xMOOCs, self-assessment should replace peer assessment as the sole form of formative assessment.
This paper presents the results of a large-scale survey of undergraduates in England concerning their envisaged career choices and how they made them. This gives a more complete account of those who do and do not want to be teachers than is usual in the existing literature, which is based primarily on accounts from prospective or existing teachers. The paper examines the issue of teacher shortages and the reasons why people might be deterred from teaching, then summarises the methods used in our new study, followed by the results. The results cover descriptive analyses and comparisons of responses from those who did or did not consider being a teacher, those who did or did not apply to train as a teacher, and those intending to teach. These results are brought together in two logistic regression models, one predicting/explaining who considered teaching and the second explaining who then intends to become a teacher. Conclusions are drawn in the final section.
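The kind of model described above can be illustrated with a minimal sketch: a logistic regression predicting a binary outcome ("considered teaching": 1/0) from survey predictors. The respondents, predictor names, and plain gradient-descent fit below are all hypothetical stand-ins, not the study's data or software.

```python
# Minimal logistic-regression sketch (hypothetical data, stdlib only).
import math

# Hypothetical respondents: (enjoyed_school, pay_concern), both scaled 0-1,
# with outcome 1 = considered teaching, 0 = did not.
X = [(0.9, 0.2), (0.8, 0.3), (0.7, 0.4), (0.3, 0.8), (0.2, 0.9), (0.4, 0.7)]
y = [1, 1, 1, 0, 0, 0]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit coefficients (intercept, b1, b2) by gradient descent on the log-loss;
# a real analysis would use a statistics package instead.
w = [0.0, 0.0, 0.0]
lr = 0.5
for _ in range(2000):
    grad = [0.0, 0.0, 0.0]
    for (x1, x2), target in zip(X, y):
        p = sigmoid(w[0] + w[1] * x1 + w[2] * x2)
        err = p - target
        grad[0] += err
        grad[1] += err * x1
        grad[2] += err * x2
    w = [wi - lr * g / len(X) for wi, g in zip(w, grad)]

def predict(x1, x2):
    """Predicted probability of having considered teaching."""
    return sigmoid(w[0] + w[1] * x1 + w[2] * x2)

print(round(predict(0.9, 0.2), 2))  # high predicted probability
print(round(predict(0.2, 0.9), 2))  # low predicted probability
```

The fitted coefficients play the explanatory role the abstract describes: each one indicates how a predictor shifts the odds of the outcome, holding the others constant.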
The aim of this paper is the validation of measurement tools that assess critical thinking and creativity as general constructs rather than subject-specific skills. Specifically, this research examined whether there is convergent and discriminant (or divergent) validity between measurement tools for creativity and critical thinking. For this purpose, the multitrait-multimethod matrix proposed by Campbell and Fiske (1959) was used. This matrix presents the correlations between the scores students obtain on different assessments, in order to reveal whether the assessments measure the same or different constructs. Specifically, the two methods used were written and oral exams, and the two traits measured were critical thinking and creativity. For the validation of the assessments, 30 secondary-school students in Greece and 21 in England completed the assessments. The samples in both countries provided similar results. The critical thinking tools demonstrated convergent validity when compared with each other and discriminant validity with the creativity assessments. Furthermore, creativity assessments measuring the same aspect of creativity demonstrated convergent validity. To conclude, this research provided indicators that critical thinking and creativity as general constructs can be measured in a valid way. However, since the sample was small, further investigation of the validation of the assessment tools with a larger sample is recommended.