Tasks that precede a recognition probe induce a more liberal response criterion than probes without preceding tasks, a phenomenon known as the "revelation effect." For example, participants are more likely to claim that a stimulus is familiar directly after solving an anagram, relative to a condition without an anagram. Revelation effect hypotheses disagree about whether hard preceding tasks should produce a larger revelation effect than easy preceding tasks. Although some studies have shown that hard tasks increase the revelation effect as compared with easy tasks, these studies suffered from a confound between task difficulty and task presence. Conversely, other studies have shown that the revelation effect is independent of task difficulty. In the present study, we used new task difficulty manipulations to test whether hard tasks produce larger revelation effects than easy tasks. Participants (N = 464) completed hard or easy preceding tasks, including anagrams (Exps. 1 and 2) and the typing of specific arrow-key sequences (Exps. 3-6). With sample sizes typical of revelation effect experiments, the effect sizes of task difficulty on the revelation effect varied considerably across experiments. Despite this variability, a consistent data pattern emerged: hard tasks produced larger revelation effects than easy tasks. Although the present study falsifies certain revelation effect hypotheses, the general vagueness of such hypotheses remains.
Background: As training programs implement competency-based models of training oriented around entrustable professional activities (EPAs), the role of traditional assessment tools remains unclear. While rating scales remain emphasized, few empirical studies have explored the utility of narrative comments across assessment methods and models of training. Objective: To compare the quality of narrative comments between in-training evaluation reports (ITERs) and workplace-based assessments (WBAs) of EPAs before and after the formal implementation of a competency-based model of training. Methods: Retrospective analysis of assessment data from 77 residents in the core Internal Medicine (IM) residency program at the University of Calgary between 2015 and 2020, including data collected during a 2-year pilot of WBAs before the official launch of Competence by Design on July 1, 2019. The quality of narrative comments from 2,928 EPAs and 3,608 ITERs was analyzed using the standardized Completed Clinical Evaluation Report Rating (CCERR). Results: CCERR scores were higher on EPAs than on ITERs [F(26,213) = 210, MSE = 4,541, p < 0.001, η² = 0.064]. CCERR scores for EPAs decreased slightly upon formal implementation of Competence by Design but remained higher than the CCERR scores for ITERs completed during that period. Conclusions: The quality of narrative comments may be higher on EPAs than on traditional ITER evaluations. While programmatic assessment requires the use of multiple tools and methods, programs must consider whether such methods lead to complementarity or redundancy.
Background Digital health promises numerous value-creating outcomes. These include improved health, reduced costs, and the creation of lucrative markets, which, in turn, provide high-quality employment, productivity growth, and a climate that attracts investment. This value creation and capture requires coordination of the activities of a diverse set of stakeholders within a digital health ecosystem. However, the antecedents of the coordination needed for an effective digital health ecosystem are not well understood. Objective The purpose of this study was to investigate the systemic conditions of the digital health ecosystem in Alberta, Canada, as critical antecedents to ecosystem coordination. The study takes the perspective of the authors as applicants to an innovative digital health funding program embedded within the larger ecosystem of innovators or entrepreneurs, health system leaders, support partners, and funders. Methods We employed a qualitative embedded case study of the systemic conditions within the digital health ecosystem in Alberta, Canada (main case), using semistructured interviews with 36 stakeholders representing innovators or entrepreneurs, health system leaders, support partners, and funders (subcases). The interviews were conducted over a 2-month period between May 26 and July 22, 2021. Data were coded for key themes and synthesized around 5 propositions developed from academic publications and policy reports. Results The findings indicated varying levels of support for each proposition, with moderate support for accessing real problems, data, training, and space for evaluations. However, the most fundamental gap appears to be in ecosystem navigation, in particular, the absence of intermediaries (eg, individuals, organizations, and technology) to provide guidance on the available support services and on dependencies among the various ecosystem actors and programs.
Conclusions Navigating the systemic conditions of the digital health ecosystem is extremely challenging for entrepreneurs, especially those without prior health care experience, and this remains an issue even for those with such experience. Policy interventions aimed at increasing collaboration among ecosystem support providers, along with tools and incentives to ensure coordination, are essential as the ecosystem and those dependent on it grow.