38.3%) were tested on days 9 to 14. Of the 134 student contacts tested on day 3, 14 (10.4%) were positive for SARS-CoV-2 infection. Of the 839 student contacts tested on days 9 to 14, 40 (4.8%) were positive for SARS-CoV-2 infection. Of the 388 high school student contacts who were tested, 32 (8.2%) were positive for SARS-CoV-2 infection on days 9 to 14, compared with 8 (1.8%) of 451 elementary and middle school student contacts (P < .001; Table).

Among 799 student contacts of confirmed COVID-19 cases with a negative test result on days 9 to 14, only 1 student became symptomatic after returning to school, with a positive SARS-CoV-2 test result on day 14 after an initial negative result on day 9. The virus from this student was genetically distinct from the virus isolated from the confirmed COVID-19 case to which the student had been exposed (GenBank confirmed case: MW307809; GenBank 9-day student contact: MW308137). The 9-day testing protocol reduced loss of instruction by 3649 days (8097 days missed) compared with a theoretical 14-day quarantine without testing (11 746 days missed).

Discussion | In this study of a 9-day testing protocol for student contacts of confirmed COVID-19 cases in 1 Florida county, the protocol reduced loss of instructional time relative to a 14-day quarantine. There was no evidence that an earlier return to school with a negative test result was linked with subsequent symptomatic illness. Had students returned to school before day 14 without testing on day 9 or thereafter, 8.2% of high school contacts would have returned to school with SARS-CoV-2 infection. These findings should be considered when evaluating the December 2020 CDC recommendation for a 10-day quarantine without testing or a 7-day quarantine with testing. 5

Limitations of this study include (1) contact testing ranging from days 9 to 14; (2) lack of testing for students who quarantined for 14 days; and (3) reliance on symptomatic illness alone for follow-up of negative test results.
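As a rough check of the reported figures, the high school versus elementary/middle school comparison can be reproduced with a Pearson chi-square test on the 2×2 table (a minimal sketch; the letter does not state which variant of the test was used, so the uncorrected Pearson statistic is assumed here):

```python
import math

# Counts reported in the letter (days 9-14 testing window)
hs_pos, hs_total = 32, 388   # high school contacts
em_pos, em_total = 8, 451    # elementary/middle school contacts

print(f"High school positivity: {hs_pos / hs_total:.1%}")   # 8.2%
print(f"Elem/middle positivity: {em_pos / em_total:.1%}")   # 1.8%

# Pearson chi-square for the 2x2 table (no continuity correction)
observed = [[hs_pos, hs_total - hs_pos], [em_pos, em_total - em_pos]]
n = hs_total + em_total
col_tot = [sum(row[j] for row in observed) for j in (0, 1)]
chi2 = 0.0
for row in observed:
    row_tot = sum(row)
    for j, obs in enumerate(row):
        expected = row_tot * col_tot[j] / n
        chi2 += (obs - expected) ** 2 / expected
p = math.erfc(math.sqrt(chi2 / 2))  # upper-tail P for chi-square with 1 df
print(f"chi2 = {chi2:.1f}, P = {p:.1e}")  # consistent with the reported P < .001

# Instructional days saved by the 9-day protocol vs a 14-day quarantine
print(11746 - 8097)  # 3649 days
```

The chi-square statistic comes out near 19 with P on the order of 10⁻⁵, agreeing with the letter's reported P < .001.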
REGULATORY ENVIRONMENT Both the US FDA and the EU, largely through the European Medicines Agency (EMA), have long recognized the importance of software's role in diagnostic and therapeutic devices. 17,18 More recently, to address the increasing importance of digital health,
In Search of a Few Good Apps

mHealth apps are mobile device applications intended to improve health outcomes, deliver health care services, or enable health research. 1 The number of apps has increased substantially, and more than 40 000 health, fitness, and medical apps are currently available on the market. 2 Because apps can be used to inexpensively promote wellness and manage chronic diseases, their appeal has grown with health reform and the increasing focus on value. The bewildering diversity of available apps has made it difficult for clinicians and the public to discern which apps are the safest or most effective.

The US Food and Drug Administration (FDA) has paid close attention to mHealth apps because it has regulatory authority over their safety. The agency recently clarified that mHealth apps acting as medical devices or as accessories to medical devices will require FDA approval, whereas apps that provide users with the ability to log life events, retrieve medical content, or communicate with clinicians or health centers will not be regulated under its jurisdiction. 3 For example, an app that tracks glucose levels and suggests insulin dosages would be regulated, whereas an app that tracks a patient's weight and makes general suggestions about exercise would not. In general, apps that provide precise treatment recommendations and diagnostic information will receive more regulatory attention. Although the FDA has focused on safety, it has largely left the review and certification of apps to the marketplace.

The currently available reviews of mHealth apps have largely focused on personal impressions rather than evidence-based, unbiased assessments of clinical performance and data security. Although evidence-based reviews are not extensively available for mHealth apps, they are available for other categories of health information technology software. 
For instance, KLAS has successfully made a business out of producing report cards on the quality of health information technology vendors and enterprise software packages, presumably simplifying the lives of hospital leaders. 4 This model has worked for enterprise software because users of expensive software are seemingly willing to fund unbiased reviews. However, this approach appears unlikely to work for mHealth apps because users of free and inexpensive apps are less financially invested in their decisions than hospitals are. Furthermore, certification may be problematic in mHealth because certification companies ordinarily aim to generate revenue by charging the app developers they evaluate, an inherent conflict of interest. Thus, there is a need for alternative models of app review and certification that are sustainable and free of conflicts of interest.

However, given the sheer number of mHealth apps, it is unlikely that all will ever be meaningfully reviewed by a single organization. As a start, an organization could
With rising smartphone ownership, mobile health applications (mHealth apps) have the potential to support high-need, high-cost populations in managing their health. While the number of available mHealth apps has grown substantially, no clear strategy has emerged on how providers should evaluate and recommend such apps to patients. Key stakeholders, including medical professional societies, insurers, and policy makers, have largely avoided formally recommending apps, which forces patients to obtain recommendations from other sources. To help stakeholders overcome barriers to reviewing and recommending apps, we evaluated 137 patient-facing mHealth apps (those intended for use by patients to manage their health) that were highly rated by consumers, recommended by experts, and targeted at high-need, high-cost populations. We found that there is a wide variety of apps in the marketplace but that few apps address the needs of the patients who could benefit the most. We also found that consumers' ratings were poor indicators of apps' clinical utility or usability and that most apps did not respond appropriately when a user entered potentially dangerous health information. Going forward, data privacy and security will continue to be major concerns in the dissemination of mHealth apps.
The proposed conceptual framework supports the integration of available evidence in considering the full range of effects from e-prescribing design alternatives. More research is needed into the effects of specific e-prescribing functional alternatives. Until more is known, e-prescribing initiatives should include provisions to monitor for unintended hazards.
Mucocutaneous reactions, such as pruritus, urticaria, and angioedema, may occur after COVID-19 messenger RNA (mRNA) vaccination. To our knowledge, the incidence of these reactions and their recurrence with subsequent vaccination has not been described. Cutaneous reactions may contribute to unnecessary avoidance of future vaccine doses.

Methods | We prospectively studied Mass General Brigham employees who received an mRNA COVID-19 vaccine (first dose: December 16, 2020, to January 20, 2021; follow-up through February 26, 2021; eMethods in the Supplement). Institutional review board approval was provided by the Mass General Brigham human research committee with a waiver of informed consent. For 3 days after vaccination, employees completed daily symptom surveys through a multipronged approach, including email, text message, phone, and smartphone application links. Cutaneous reactions included rash or itching (other than at the injection site), hives, and/or swelling of the lips, tongue, eyes, or face (eAppendix in the Supplement).

We calculated the number and frequency of self-reported cutaneous reactions, with 95% confidence intervals, using symptom survey respondents by dose as the denominator. We compared frequencies using χ2 tests. Statistical analyses were conducted using SAS, version 9.4 (SAS Institute), and statistical significance was set at P < .05.
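The 95% confidence interval for a reaction frequency of this kind can be sketched in a few lines. The letter does not specify the interval method, so the Wilson score interval is assumed here, and the counts below are purely illustrative, not the study's data:

```python
import math

def wilson_ci(events, n, z=1.96):
    """95% Wilson score interval for a proportion (events / n)."""
    p = events / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return center - half, center + half

# Hypothetical example: 40 self-reported reactions among 2000 survey respondents
lo, hi = wilson_ci(40, 2000)
print(f"frequency = {40 / 2000:.1%}, 95% CI {lo:.1%} to {hi:.1%}")
```

The Wilson interval behaves better than the simple Wald interval when the event proportion is small, which is typical for rare cutaneous reactions.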
Background: There are over 165,000 mHealth apps currently available to patients, but few have undergone an external quality review. Furthermore, no standardized review method exists, and little has been done to examine the consistency of the evaluation systems themselves.

Objective: We sought to determine which measures for evaluating the quality of mHealth apps have the greatest interrater reliability.

Methods: We identified 22 measures for evaluating the quality of apps from the literature. A panel of 6 reviewers reviewed the top 10 depression apps and 10 smoking cessation apps from the Apple iTunes App Store on these measures. Krippendorff’s alpha was calculated for each of the measures and reported by app category and in aggregate.

Results: The measure for interactiveness and feedback was found to have the greatest overall interrater reliability (alpha=.69). Presence of password protection (alpha=.65), whether the app was uploaded by a health care agency (alpha=.63), the number of consumer ratings (alpha=.59), and several other measures had moderate interrater reliability (alphas > .5). There was the least agreement over whether apps had errors or performance issues (alpha=.15), stated advertising policies (alpha=.16), and were easy to use (alpha=.18). There were substantial differences in the interrater reliabilities of a number of measures when they were applied to depression versus smoking apps.

Conclusions: We found wide variation in the interrater reliability of measures used to evaluate apps, and some measures are more robust across categories of apps than others. The measures with the highest degree of interrater reliability tended to be those that involved the least rater discretion. Clinical quality measures such as effectiveness, ease of use, and performance had relatively poor interrater reliability. Subsequent research is needed to determine consistent means for evaluating the performance of apps. 
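Krippendorff's alpha, the agreement statistic used in this study, is defined as 1 minus the ratio of observed to expected disagreement. A minimal sketch for nominal ratings with complete data follows (illustrative only; the study's actual computation may have handled missing ratings and other measurement levels):

```python
from collections import Counter

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal ratings.
    `units` is a list of rating lists, one list per rated unit (no missing values)."""
    units = [u for u in units if len(u) >= 2]  # only units with pairable values count
    n = sum(len(u) for u in units)             # total pairable values
    totals = Counter(v for u in units for v in u)
    # Observed disagreement: mismatched pairs within each unit
    d_o = 0.0
    for u in units:
        m = len(u)
        counts = Counter(u)
        d_o += sum(c * (m - c) for c in counts.values()) / (m - 1)
    d_o /= n
    # Expected disagreement: mismatched pairs expected by chance
    d_e = sum(c * (n - c) for c in totals.values()) / (n * (n - 1))
    return 1 - d_o / d_e

# Three units, two raters each, perfect agreement -> alpha = 1
print(krippendorff_alpha_nominal([[1, 1], [0, 0], [1, 1]]))  # 1.0
```

Unlike simple percentage agreement, alpha corrects for the agreement expected by chance, which is why low-discretion measures (such as presence of password protection) tend to score higher on it.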
Patients and clinicians should consider conducting their own assessments of apps, in conjunction with evaluating information from reviews.