Background Since the outbreak of COVID-19, the development of dashboards as dynamic, visual tools for communicating COVID-19 data has surged worldwide. Dashboards can inform decision-making and support behavior change. To do so, they must be actionable. The features that constitute an actionable dashboard in the context of the COVID-19 pandemic have not been rigorously assessed. Objective The aim of this study is to explore the characteristics of public web-based COVID-19 dashboards by assessing their purpose and users (“why”), content and data (“what”), and analyses and displays (“how” they communicate COVID-19 data), and ultimately to appraise the common features of highly actionable dashboards. Methods We conducted a descriptive assessment and scoring using nominal group technique with an international panel of experts (n=17) on a global sample of COVID-19 dashboards in July 2020. The sequence of steps included multimethod sampling of dashboards; development and piloting of an assessment tool; data extraction and an initial round of actionability scoring; a workshop based on a preliminary analysis of the results; and reconsideration of actionability scores followed by joint determination of common features of highly actionable dashboards. We used descriptive statistics and thematic analysis to explore the findings by research question. Results A total of 158 dashboards from 53 countries were assessed. Dashboards were predominantly developed by government authorities (100/158, 63.0%) and were national (93/158, 58.9%) in scope. We found that only 20 of the 158 dashboards (12.7%) stated both their primary purpose and intended audience. Nearly all dashboards reported epidemiological indicators (155/158, 98.1%), followed by health system management indicators (85/158, 53.8%), whereas indicators on social and economic impact and behavioral insights were the least reported (7/158, 4.4% and 2/158, 1.3%, respectively).
Approximately a quarter of the dashboards (39/158, 24.7%) did not report their data sources. The dashboards predominantly reported time trends and disaggregated data by two geographic levels and by age and sex. The dashboards used an average of 2.2 types of displays (SD 0.86); these were mostly graphs and maps, followed by tables. To support data interpretation, color-coding was common (93/158, 89.4%), although only one-fifth of the dashboards (31/158, 19.6%) included text explaining the quality and meaning of the data. In total, 20/158 dashboards (12.7%) were appraised as highly actionable, and seven common features were identified among them. Actionable COVID-19 dashboards (1) know their audience and information needs; (2) manage the type, volume, and flow of displayed information; (3) report data sources and methods clearly; (4) link time trends to policy decisions; (5) provide data that are “close to home”; (6) break down the population into relevant subgroups; and (7) use storytelling and visual cues. Conclusions COVID-19 dashboards are diverse in the why, what, and how by which they communicate insights on the pandemic and support data-driven decision-making. To leverage their full potential, dashboard developers should consider adopting the seven actionability features identified.
Background: The COVID-19 pandemic is a complex global public health crisis presenting clinical, organisational and system-wide challenges. Different research perspectives on health are needed to manage and monitor this crisis. Performance intelligence is an approach that emphasises the need for different research perspectives in supporting health systems' decision-makers to determine policies based on well-informed choices. In this paper, we present the viewpoint of the Innovative Training Network for Healthcare Performance Intelligence Professionals (HealthPros) on how performance intelligence can be used during and after the COVID-19 pandemic. Discussion: A lack of standardised information, paired with limited discussion and alignment between countries, contributes to uncertainty in decision-making in all countries. Consequently, a plethora of different non-data-driven and uncoordinated approaches to address the outbreak are noted worldwide. Comparative health system research is needed to help countries shape their response models in social care, public health, primary care, hospital care and long-term care through the different phases of the pandemic. There is a need in each phase to compare context-specific bundles of measures where the impact on health outcomes can be modelled using targeted data and advanced statistical methods. Performance intelligence can be pursued to compare data, construct indicators and identify optimal strategies. Embracing a system perspective will allow countries to take coordinated strategic decisions while mitigating the risk of system collapse. A framework for the development and implementation of performance intelligence has been outlined by the HealthPros Network and is pertinent here. Health systems need better and more timely data to govern through a pandemic-induced transition period where tensions between care needs, demand and capacity are exceptionally high worldwide.
Health systems are challenged to ensure essential levels of healthcare for all patients, including those who need routine assistance.
The objective of this study was to better understand the use of performance data for evidence-based decision-making by managers in hospitals and other healthcare organisations in Europe in 2019. To explore why, what and how performance data is collected, reported and used, we conducted a cross-sectional study based on a self-reported online questionnaire and a follow-up interactive workshop. Our study population comprised participants of a pan-European professional Exchange Programme and their hosts (n = 125), mostly mid-level hospital managers. We found that a substantial amount of performance data is collected and reported, but it could be utilised better for decision-making purposes. Motivation to collect and report performance data is equally internal and external, for improvement as well as for accountability purposes. Benchmarking between organisations is recognised as being important but is still underused. A plethora of different data sources are used, but more should be done on conceptualising, collecting, reporting and using patient-reported data. Managers working for privately owned organisations reported greater use of performance data than those working for public ones. Strategic levels of management use performance data more for justifying their decisions, while managers on operational and clinical levels use it more for day-to-day decision-making. Our study showed that, despite the substantial and increasing use of performance data for evidence-based management, there is room and need to further explore and expand its role in strategic decision-making and in supporting a shift in healthcare from organisational accountability towards the model of learning organisations.
Background National health information (HI) systems provide data on population health, the determinants of health and health system performance within countries. The evaluation of these systems has traditionally focused on statistical practices and procedures, and not on data use or reuse for policy and practice. This limits the capacity to assess the impact of HI systems on healthcare provision, management and policy-making. On the other hand, the field of Knowledge Translation (KT) has developed frameworks to guide evidence into practice. Methods We conducted a scoping review of the KT literature to identify the essential mechanisms and determinants of KT that could help monitor the impact of HI systems. Results We examined 79 publications and identified over 100 different KT frameworks, but none of these focused on HI systems per se. There were specific recommendations on disseminating evidence to stakeholders at the institutional and organizational level, and on sustaining the use of evidence in practice and the broader community setting. Conclusions We developed a new model, the HI-Impact framework, in which four domains are essential for mapping the impact of national HI systems: (i) HI Evidence Quality, (ii) HI System Responsiveness, (iii) Stakeholder Engagement and (iv) Knowledge Integration. A comprehensive impact assessment of HI systems requires addressing the use of HI in public health decision-making, health service delivery and other sectors which might not have been considered previously. Monitoring Stakeholder Engagement and Knowledge Integration ensures that the use of HI in all policies is an explicit point of assessment.
Background Governments across the World Health Organization (WHO) European Region have prioritised dashboards for reporting COVID-19 data. The ubiquitous use of dashboards for public reporting is a novel phenomenon. Objective This study explores the development of COVID-19 dashboards during the first year of the pandemic and identifies common barriers, enablers and lessons from the experiences of teams responsible for their development. Methods We applied multiple methods to identify and recruit COVID-19 dashboard teams, using a purposive, quota sampling approach. Semi-structured group interviews were conducted from April to June 2021. Using elaborative coding and thematic analysis, we derived descriptive and explanatory themes from the interview data. A validation workshop was held with study participants in June 2021. Results Eighty informants participated, representing 33 national COVID-19 dashboard teams across the WHO European Region. Most dashboards were launched swiftly during the first months of the pandemic, February to May 2020. The urgency, intense workload, limited human resources, data and privacy constraints and public scrutiny were common challenges in the initial development stage. Themes related to barriers or enablers were identified, pertaining to the pre-pandemic context; the pandemic itself; people and processes; and software, data and users. Lessons emerged around the themes of simplicity, trust, partnership, software and data, and change. Conclusions COVID-19 dashboards were developed in a learning-by-doing approach. The experiences of teams reveal that initial underpreparedness was offset by high-level political endorsement, the professionalism of teams, accelerated data improvements and immediate support with commercial software solutions. To leverage the full potential of dashboards for health data reporting, investments are needed at the team, national and pan-European levels.
Background Public web-based COVID-19 dashboards are in use worldwide to communicate pandemic-related information. Actionability of dashboards, as a predictor of their potential use for data-driven decision-making, was assessed in a global study during the early stages of the pandemic. It revealed a widespread lack of features needed to support actionability. In view of the inherently dynamic nature of dashboards and their unprecedented speed of creation, the evolution of dashboards and changes to their actionability merit exploration. Objective We aimed to explore how COVID-19 dashboards evolved in the Canadian context during 2020 and whether the presence of actionability features changed over time. Methods We conducted a descriptive assessment of a pan-Canadian sample of COVID-19 dashboards (N=26), followed by an appraisal of changes to their actionability by a panel of expert scorers (N=8). Scorers assessed the dashboards at two points in time, July and November 2020, using an assessment tool informed by communication theory and health care performance intelligence. Applying the nominal group technique, scorers were grouped in panels of three, and evaluated the presence of the seven defined features of highly actionable dashboards at each time point. Results Improvements had been made to the dashboards over time. These predominantly involved data provision (specificity of geographic breakdowns, range of indicators reported, and explanations of data sources or calculations) and advancements enabled by the technologies employed (customization of time trends and interactive or visual chart elements). Further improvements in actionability were noted especially in features involving local-level data provision, time-trend reporting, and indicator management. No improvements were found in communicative elements (clarity of purpose and audience), while the use of storytelling techniques to narrate trends remained largely absent from the dashboards. 
Conclusions Improvements to COVID-19 dashboards in the Canadian context during 2020 were seen mostly in data availability and dashboard technology. Further improving the actionability of dashboards for public reporting will require attention to both technical and organizational aspects of dashboard development. Such efforts would include better skill-mixing across disciplines, continued investment in data standards, and clearer mandates for their developers to ensure accountability and the development of purpose-driven dashboards.