Hypothesis testing is a prevalent method of inference used to test a claim about a population parameter based on sample data, and it is a central concept in many introductory statistics courses. At the same time, the use of hypothesis testing to interpret experimental data has raised concerns due to common misunderstandings by both scientists and students. With statistics education reform on the rise, as well as an increasing number of students enrolling in introductory statistics courses each year, there is a need for research to investigate students' understanding of hypothesis testing. In this study we used APOS Theory to investigate twelve introductory statistics students' reasoning about one-sample population hypothesis testing while working two real-world problems. Data were analyzed and compared against a preliminary genetic decomposition, which is a conjecture for how an individual might construct an understanding of a concept. This report presents examples of Actions, Processes, and Objects in the context of one-sample hypothesis testing as exhibited through students' reasoning. Our results suggest that the concepts involved in hypothesis testing are related through the construction of higher-order, coordinated Processes operating on Objects. As a result of our data analysis, we propose refinements to our genetic decomposition and offer suggestions for instruction of one-sample population hypothesis testing. We conclude with appendices containing a comprehensive revised genetic decomposition along with a set of guided questions that are designed to help students make the constructions called for by the genetic decomposition.
The goal of this paper is to propose a new method to generate multiple-choice items that can make creating quality assessments faster and more efficient, solving a practical issue that many instructors face. There are currently no systematic, efficient methods available to generate quality distractors (plausible but incorrect options that students choose), which are necessary for multiple-choice assessments that accurately assess students' knowledge. We propose two methods to use technology to generate quality multiple-choice assessments: (1) manipulating the mathematical problem to emulate common student misconceptions or errors and (2) disguising options to protect the integrity of multiple-choice tests. By linking options to common student misconceptions and errors, instructors can potentially use multiple-choice assessments as personalized diagnostic tools that can target and modify underlying misconceptions. Moreover, using technology to generate these quality distractors would allow assessments to be developed efficiently, in terms of both time and resources. The method for disguising the generated options would have the added benefit of preventing students from working backwards from options to solution and thus would protect the integrity of the assessment. Preliminary results are included to exhibit the effectiveness of the proposed methods.
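The abstract's first method, deriving distractors by manipulating the problem to emulate common student errors, can be illustrated with a minimal sketch. The item type, the specific error patterns, and all function names below are hypothetical examples, not the authors' actual implementation; the sketch only shows the general idea of computing each distractor from a named misconception so that a chosen option can diagnose the error behind it.

```python
import random

def linear_item(a, b, c):
    """Build a hypothetical multiple-choice item for solving a*x + b = c.

    Each distractor is computed by applying a common student error to the
    same problem, so the chosen option identifies the likely misconception.
    """
    correct = (c - b) / a
    # Illustrative error patterns; a real item bank would draw on
    # documented misconceptions for the topic being assessed.
    distractors = {
        "sign error: adds b instead of subtracting": (c + b) / a,
        "forgets to divide by the coefficient a": float(c - b),
        "divides by a before subtracting b": c / a - b,
    }
    options = [correct] + list(distractors.values())
    random.shuffle(options)
    return {
        "stem": f"Solve {a}x + {b} = {c}.",
        "options": options,
        "answer": correct,
        "diagnoses": distractors,
    }

item = linear_item(2, 6, 14)  # correct answer: x = 4.0
```

Because every distractor is generated from the item's own parameters, new parallel forms (the second method's disguising of options across versions) follow by re-running the generator with fresh values of `a`, `b`, and `c`.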
The design and facilitation of asynchronous online courses can have notable impacts on students related to persistence, performance, and perspectives. This case study presents current conditions for cognitive load and Community of Inquiry (CoI) presences in an asynchronous online introductory undergraduate STEM course. Researchers present the novel use of a Python script to clean and organize data and a simplification of the instructional efficiency calculation for use with anonymous data. Key relationships between cognitive load and CoI presences are found through validated use of the NASA-TLX instrument and transcript analysis of discussion posts. The data show that student presences are not consistent throughout a course but are consistent across sections. Instructor presences are not consistent throughout a course or across sections. The study also explored predominant factors within each presence, confirming previous reports of low cognitive presence in discussions. The highest extraneous cognitive load was reported for understanding expectations and preparing an initial post. These results provide support for improvements to course design and instructor professional development to promote Community of Inquiry and reduce extraneous cognitive load.
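The abstract mentions an instructional efficiency calculation without giving its form. The standard measure, due to Paas and van Merriënboer, combines standardized performance and mental-effort scores as E = (z_P − z_E) / √2; the study's simplification for anonymous data is not described, so the sketch below shows only the conventional formula. The sample data and function names are hypothetical.

```python
from statistics import mean, stdev

def zscores(values):
    """Standardize a list of scores: (x - mean) / sample standard deviation."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

def instructional_efficiency(performance, mental_effort):
    """Per-student Paas-van Merrienboer efficiency: E = (zP - zE) / sqrt(2).

    Positive E: performance above average relative to the effort invested;
    negative E: high effort for below-average performance.
    """
    z_p = zscores(performance)
    z_e = zscores(mental_effort)
    return [(p - e) / 2 ** 0.5 for p, e in zip(z_p, z_e)]

# Hypothetical scores for three students (performance %, effort rating)
efficiency = instructional_efficiency([80, 90, 70], [30, 20, 40])
```

With these illustrative inputs, the second student scores high with low effort (E = √2) and the third low with high effort (E = −√2), which is the kind of contrast the measure is designed to surface.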