Background: The spread of technology and the dissemination of knowledge across the World Wide Web have prompted the development of apps for American Sign Language (ASL) translation, interpretation, and syntax recognition. There is limited literature on the quality, effectiveness, and appropriateness of mobile health (mHealth) apps that purport to aid the deaf and hard-of-hearing (DHOH) in their everyday communication and activities. Apart from the star-rating system, which offers minimal comment on quality, the metrics used to rate mobile apps are largely subjective.

Objective: This study aimed to evaluate the quality and effectiveness of DHOH apps using a standardized scale, to identify content-specific criteria that improve the evaluation process, and to engage a content expert to more accurately evaluate apps and features supporting the DHOH.

Methods: A list of potential apps for evaluation was generated after a preliminary screening for apps related to the DHOH. Inclusion and exclusion criteria were developed to refine this master list. The study modified a standardized rating scale with additional content-specific criteria applicable to the DHOH population; these criteria were designed with the input of a DHOH content expert.

Results: Of the 217 apps obtained from the search criteria, 21 met the inclusion and exclusion criteria. Mobile App Rating Scale (MARS) scores showed a clear distinction among apps in the study's three categories: ASL translators (highest score=3.72), speech-to-text (highest score=3.60), and hard-of-hearing assistants (highest score=3.90). Furthermore, the limited consideration of measures specific to the target population, along with a high app turnover rate, suggests opportunities for improved app effectiveness and evaluation.
Conclusions: As more mHealth apps enter the market for the DHOH population, criteria-based evaluation is needed to ensure the safety and appropriateness of these apps for their intended users. Evaluation of population-specific mHealth apps can benefit from content-specific measurement criteria developed by a content expert in the field.
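The MARS scores reported above are derived by averaging item scores within each subscale and then averaging the subscale means. The following sketch illustrates that calculation; the item scores shown are illustrative placeholders, not data from this study.

```python
# Illustrative sketch of MARS-style scoring: each subscale's items are
# rated on a 1-5 scale, and the overall app quality score is the mean of
# the subscale means. The example item scores below are made up.

def mars_overall(subscales):
    """Overall score = mean of each subscale's mean item score."""
    subscale_means = [sum(items) / len(items) for items in subscales.values()]
    return sum(subscale_means) / len(subscale_means)

# Hypothetical ratings for one app across the four objective MARS subscales.
example = {
    "engagement":    [4, 3, 4, 3, 4],
    "functionality": [4, 4, 3, 4],
    "aesthetics":    [3, 4, 3],
    "information":   [4, 3, 3, 4],
}
print(round(mars_overall(example), 2))  # overall score on the 1-5 scale
```

Content-specific criteria of the kind developed with the study's DHOH expert could be scored the same way and appended as an additional subscale.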