Background
Despite the important role of testing as a measure against the COVID-19 pandemic, user perspectives on SARS-CoV-2 tests remain scarce, inhibiting improvements to testing approaches. As the world enters the third year of the pandemic, more nuanced perspectives on testing, and opportunities to expand testing in a feasible and affordable manner, merit consideration.

Methods
Conducted amid the second pandemic wave (late 2020 to early 2021), during and after a multi-arm trial evaluating SARS-CoV-2 surveillance strategies in the federal state of Baden-Württemberg, Germany, this qualitative sub-study aimed to gain a deeper understanding of how test users and test rejectors perceived mail-in SARS-CoV-2 gargle tests. We conducted 67 semi-structured in-depth interviews (mean duration: 60 min) via telephone or video call. Interviews were audio-recorded, transcribed verbatim, and analyzed inductively using thematic analysis. The Consolidated Framework for Implementation Research guided the presentation of the findings.

Results
Respondents generally described gargle sampling as simple and comfortable. However, individual perceptions of the testing method and its feasibility varied widely, from disgusting and complicated to simple and brilliant. Self-sampling was appreciated for lowering infection risks during testing, but was also considered more complex. Gargle sampling increased participants' self-efficacy to sample correctly. Communication (first contact, quantity and content of information, reminders, support system) and trust (in the study, its institutional affiliation, and the test method) decisively influenced the intervention's acceptability.

Conclusion
User-driven insights on how to streamline testing include: treat communication, first impressions of tests, and information as key to successful mail-in testing; pay attention to the role of mutual trust between those taking and those administering tests; implement gargle self-sampling as a pleasant alternative to swab testing; and offer multiple test methods to increase test uptake.
Background
e-Learning for health professionals in many low- and middle-income countries (LMICs) is still in its infancy, but with the advent of COVID-19, digital learning has expanded significantly. Asynchronous e-learning can be grouped into interactive (user-influenceable content) and noninteractive (static material) e-learning. Studies conducted in high-income countries suggest that interactive e-learning is more effective than noninteractive e-learning in increasing learner satisfaction and knowledge; however, there is a gap in our understanding of whether this also holds true in LMICs.

Objective
This study aims to test this hypothesis in a resource-constrained, real-life setting, to better understand e-learning quality and delivery, by comparing user satisfaction, usability, and knowledge gain between interactive and noninteractive e-learning in a new medical university in Zambia.

Methods
We conducted a web-based, mixed methods randomized controlled trial at the Levy Mwanawasa Medical University (LMMU) in Lusaka, Zambia, between April and July 2021. We recruited medical licentiate students (second, third, and fourth study years) via email. Participants were randomized to asynchronous e-learning with either an interactive or a noninteractive module on chronic obstructive pulmonary disease and were informally blinded to their group allocation. The interactive module included interactive interfaces, quizzes, and a virtual patient, whereas the noninteractive module consisted of PowerPoint slides. Both modules covered the same content. The primary outcome was learner satisfaction. The secondary outcomes were usability, short- and long-term knowledge gain, and barriers to e-learning. The mixed methods study followed an explanatory sequential design in which rating conferences provided further insight into the quantitative findings, which were collected through web-based questionnaires.

Results
Of the 94 participants initially enrolled, 41 (44%; 18 intervention and 23 control) remained in the study and were analyzed. There were no significant differences in satisfaction (intervention: median 33.5, first quartile 31.3, third quartile 35; control: median 33, first quartile 30, third quartile 37.5; P=.66), usability, or knowledge gain between the intervention and control groups. Challenges in accessing both e-learning modules led to many dropouts. Qualitative data suggested that the content of the interactive module was harder to access because of technical difficulties and individual factors (eg, limited experience with interactive e-learning).

Conclusions
We did not observe an increase in user satisfaction with interactive e-learning. However, this finding may not generalize to other low-resource settings, because the post hoc power was low and the e-learning system at LMMU has not yet reached its full potential. Technical and individual barriers to accessing e-learning may therefore have affected the results, mainly because the interactive module was considered more difficult to access and use. Nevertheless, qualitative data showed high motivation and interest in e-learning. Future studies should minimize technical barriers to e-learning to further evaluate interactive e-learning in LMICs.
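The satisfaction comparison above is reported as a median with first and third quartiles per group. As a minimal sketch of how such summary statistics are computed (using hypothetical scores for illustration; the abstract reports only summaries, not the study's raw data), Python's standard library suffices:

```python
from statistics import median, quantiles

# Hypothetical satisfaction scores for one group; NOT the study's data.
intervention = [31, 32, 33, 34, 34, 35, 36]

# quantiles(..., n=4) returns the three quartile cut points (Q1, Q2, Q3).
q1, q2, q3 = quantiles(intervention, n=4)
print(f"median={median(intervention)}, Q1={q1}, Q3={q3}")
# → median=34, Q1=32.0, Q3=35.0
```

Note that statistics.quantiles defaults to the "exclusive" interpolation method; the abstract does not state which quartile convention the study used, so values from other statistical software may differ slightly.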