Objective To evaluate the impact – on diagnosis and treatment of malaria – of introducing rapid diagnostic tests to drug shops in eastern Uganda. Methods Overall, 2193 households in 79 study villages with at least one licensed drug shop were enrolled and monitored for 12 months. After 3 months of monitoring, drug shop vendors in 67 villages randomly selected for the intervention were offered training in the use of malaria rapid diagnostic tests and – if trained – offered access to such tests at a subsidized price. The remaining 12 study villages served as controls. A difference-in-differences regression model was used to estimate the impact of the intervention. Findings Vendors from 92 drug shops successfully completed training, and vendors in 50 of these shops actively stocked and performed the rapid tests. Over 9 months, trained vendors performed an average of 146 tests per shop. Households reported 22 697 episodes of febrile illness. The availability of rapid tests at local drug shops significantly increased the probability of any febrile illness being tested for malaria by 23.15% (P = 0.015) and being treated with an antimalarial drug by 8.84% (P = 0.056). The probability that artemisinin combination therapy was bought increased by a statistically insignificant 5.48% (P = 0.574). Conclusion In our study area, training drug shop vendors in the use of rapid tests and providing them access to such tests at a subsidized price increased testing for malaria. Additional interventions may be needed to achieve a higher coverage of testing and a higher rate of appropriate responses to test results.
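The difference-in-differences estimator mentioned in the abstract can be sketched in a few lines. This is a minimal illustration of the estimator's logic only, using invented testing rates, not the study's actual data or its full regression specification:

```python
# Difference-in-differences: the intervention effect is estimated as the
# change in the treated group's outcome minus the change in the control
# group's outcome over the same period.

def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Return the difference-in-differences estimate from four group means."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Made-up fractions of febrile episodes tested for malaria, before and
# after the intervention (NOT the study's figures).
effect = diff_in_diff(treat_pre=0.30, treat_post=0.55,
                      ctrl_pre=0.32, ctrl_post=0.34)
print(round(effect, 2))  # 0.23
```

The subtraction of the control group's change is what nets out background trends (for example, seasonal variation in fevers) that affect treated and control villages alike.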
Given the complex relationships between patients' demographics, underlying health needs, and outcomes, establishing the causal effects of health policy and delivery interventions on health outcomes is often empirically challenging. The single interrupted time series (SITS) design has become a popular evaluation method in contexts where a randomized controlled trial is not feasible. In this paper, we formalize the structure and assumptions underlying the SITS design and show that it is significantly more vulnerable to confounding than is often acknowledged and, as a result, can produce misleading results. We illustrate this empirically using the Oregon Health Insurance Experiment, showing that an evaluation using a SITS design instead of the randomized controlled trial would have produced large and statistically significant results of the wrong sign. We discuss the pitfalls of the SITS design, and suggest circumstances in which it is and is not likely to be reliable.
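The SITS logic the abstract critiques can be sketched as follows: fit a trend to the pre-interruption period, extrapolate it forward as the counterfactual, and read the post-period gap as the "effect". The data here are invented for illustration; the abstract's point is that any concurrent shock at the interruption also lands in that gap:

```python
# Single interrupted time series (SITS) sketch: the "effect" is the mean
# deviation of post-interruption outcomes from the extrapolated pre-trend.
import numpy as np

def sits_effect(y, interruption):
    """Estimate a level shift at `interruption` from a linear pre-trend."""
    t_pre = np.arange(interruption)
    slope, intercept = np.polyfit(t_pre, y[:interruption], 1)
    t_post = np.arange(interruption, len(y))
    counterfactual = intercept + slope * t_post
    return float(np.mean(y[interruption:] - counterfactual))

# Outcome rising by 1 per period, then a persistent jump of 1 at t = 4.
y = np.array([1.0, 2.0, 3.0, 4.0, 6.0, 7.0])
print(round(sits_effect(y, interruption=4), 2))  # 1.0
```

Because there is no control series, this estimator attributes the entire deviation from the pre-trend to the intervention, which is exactly the vulnerability the paper formalizes.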
Two-stage examinations consist of a first stage in which students work individually as they typically do in examinations (stage 1), followed by a second stage in which they work in groups to complete another examination (stage 2), which typically consists of a subset of the questions from the first examination. Data from two-stage midterm and final examinations are used to assess the extent to which individuals improve their performance when collaborating with other students. On average, the group (stage 2) score was about one standard deviation above the individual (stage 1) score. While this difference cannot be interpreted as the causal effect of two-stage examinations on learning, it suggests that individuals experienced substantial performance gains when working in groups in an examination. This average performance gain was comparable with the average difference between the top performer of the group in stage 1 and the group's stage 1 average, and was equivalent to about two-thirds of the difference between the "super student" score (i.e. the sum of the maximum score for each question in stage 1) and the group's stage 1 average. This last result suggests that group collaboration takes substantial (albeit partial) advantage of the aggregate knowledge and skills of the group's individual members. Student feedback about their experience with two-stage examinations reveals that these types of examinations are generally perceived to be more helpful for learning and less stressful than traditional examinations. Finally, using data on group gender compositions, we investigate the potential role of gender dynamics on group efficiency.
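The "super student" benchmark defined in the abstract, the sum over questions of the best stage-1 score achieved by any group member, can be computed directly. The scores below are invented for illustration:

```python
# "Super student" score: for each question, take the maximum stage-1 score
# achieved by any member of the group, then sum across questions.

def super_student_score(scores):
    """scores[i][q] = stage-1 score of group member i on question q."""
    n_questions = len(scores[0])
    return sum(max(member[q] for member in scores) for q in range(n_questions))

group = [
    [3, 5, 2],  # member A's stage-1 scores on questions 1-3
    [4, 2, 4],  # member B
    [2, 4, 5],  # member C
]
print(super_student_score(group))  # 14 (= 4 + 5 + 5)
```

By construction this score is at least as high as any single member's total, which is why the abstract uses it as the ceiling against which the group's stage-2 gain is measured.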