This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.
Public web-based COVID-19 dashboards are in use worldwide to communicate pandemic-related information. Actionability of dashboards, as a predictor of their potential use for data-driven decision-making, was assessed in a global study during the early stages of the pandemic. It revealed a widespread lack of features needed to support actionability. In view of the inherently dynamic nature of dashboards and their unprecedented speed of creation, the evolution of dashboards and changes to their actionability merit exploration.
We aimed to explore how COVID-19 dashboards evolved in the Canadian context during 2020 and whether the presence of actionability features changed over time.
We conducted a descriptive assessment of a pan-Canadian sample of COVID-19 dashboards (N=26), followed by an appraisal of changes to their actionability by a panel of expert scorers (N=8). Scorers assessed the dashboards at two points in time, July and November 2020, using an assessment tool informed by communication theory and health care performance intelligence. Applying the nominal group technique, scorers were grouped into panels of three and evaluated the presence of the seven defined features of highly actionable dashboards at each time point.
Improvements had been made to the dashboards over time. These predominantly involved data provision (specificity of geographic breakdowns, range of indicators reported, and explanations of data sources or calculations) and advancements enabled by the technologies employed (customization of time trends and interactive or visual chart elements). Further improvements in actionability were noted especially in features involving local-level data provision, time-trend reporting, and indicator management. No improvements were found in communicative elements (clarity of purpose and audience), while the use of storytelling techniques to narrate trends remained largely absent from the dashboards.
Improvements to COVID-19 dashboards in the Canadian context during 2020 were seen mostly in data availability and dashboard technology. Further improving the actionability of dashboards for public reporting will require attention to both technical and organizational aspects of dashboard development. Such efforts would include better skill-mixing across disciplines, continued investment in data standards, and clearer mandates for their developers to ensure accountability and the development of purpose-driven dashboards.
The public reporting of data during a pandemic is a core government function to protect population health and safety [
Public web-based COVID-19 dashboards, as a dynamic means to visually display information at a glance [
In the first half of 2020, our international research network of European and Canadian professionals in health care performance intelligence [
Due to the speed at which the dashboards were first launched, traditional technical and organizational aspects of development cycles were cut short [
Canada provides a relevant context for further investigating the evolution of COVID-19 dashboards for several reasons. First, public health is the remit of federal, provincial or territorial (PT), and local health authorities [
Second, Canada’s experience with COVID-19 intensified in the course of 2020, with an initial peak in early May (about 2500 daily cases) and a second peak in November (about 8000 daily cases) [
Third, Canadian dashboards were criticized early on for possible information blind spots, including a failure to report race-based data and other social determinants [
This study explores (1) how public web-based COVID-19 dashboards in the Canadian context evolved in 2020 and (2) whether dashboard actionability increased over time.
Our study adheres to the Standards for Reporting Qualitative Research [
Data collection was conducted by a panel of eight scorers (EB, DI, SW, KJG, MP, CW, NL, and VB). The panel (four women and four men) aligned with the scorers assembled by Ivanković et al [
An assessment tool developed, piloted, and validated by Ivanković et al [
We operationalized the appraisal of a dashboard’s actionability by drawing on the seven features of highly actionable COVID-19 dashboards, as identified in the study by Ivanković et al [
Overview of considerations by the method applied.

Descriptive assessment — assessment tool^a. Considerations assessed, with guiding questions:
- Purpose and audience: Are the purpose and the audience mentioned?
- Indicator themes: What indicators are reported on?
- Data: Are data sources and metadata specified?
- Types of analysis: Does the analysis include time trends and geographic and population breakdowns?
- Presentation: How are data visualized, interpreted, simplified, and interacted with?

Expert appraisal — seven features of highly actionable dashboards scoring tool^b. Features scored, with guiding statements:
- Know the audience and their information needs: The intended audience and their information needs are known and responded to.
- Manage the type, volume, and flow of information: The type, volume, and flow of information on the dashboard are well managed.
- Report data sources and methods clearly: The data sources and methods for calculating values are made clear.
- Link time trends to policy decisions: Information is reported over time and contextualized with policy decisions made.
- Provide data “close to home”: Data are reported at relevant geographic breakdowns.
- Break down the population to relevant subgroups: Data are reported by relevant population subgroups.
- Use storytelling and visual cues: Brief narratives and visual cues are used to explain the meaning of data.
^a Refer to the study by Ivanković et al [
^b Refer to
COVID-19 dashboards were eligible for inclusion on the basis of the following three criteria: (1) the reporting of key performance indicators related to COVID-19; (2) the use of some form of visualization; and (3) availability in an openly accessible web-based format. Accordingly, password-protected COVID-19 dashboards intended for internal use by public authorities were excluded from this study. No restrictions were imposed in terms of a dashboard’s primary level of reporting (eg, national, regional, and local) or the type of organization responsible for its development (eg, government, academia, news or media, industry, and private initiative). Sampling was conducted from May 19 to June 30, 2020, and involved searches of COVID-19 policy monitoring platforms (eg, the North American COVID-19 Policy Response Monitor [
The final sample (N=26) included dashboards reporting at the national level (n=6), PT level (n=16) (including at least one from each of Canada’s 13 provinces and territories), and municipal level (n=4), capturing reporting from the capital (Ottawa) and the three largest cities (Montreal, Toronto, and Vancouver).
Distribution of COVID-19 dashboards sampled and types of organizations responsible for their development. Circles denote municipal-level dashboards included in the sample, and the colors denote the respective organization types. These dashboards are counted in the tally shown per jurisdiction. The Public Health Agency of Canada’s COVID-19 dashboard is hosted on the federal Government of Canada webpage. In other instances, dashboards developed by public health authorities are hosted on dedicated webpages.
Each dashboard was assessed in English or French. The assessments were limited to a dashboard’s main page and to content accessible within one interaction (click). This approach was designed to increase consistency in the content evaluated, and it enabled us to gauge the dashboard’s prioritization and hierarchy of content. Archives were generated to create a record of each dashboard on the date reviewed (see
To assess the presence of the seven defined features of highly actionable COVID-19 dashboards, we organized a series of three-person panels, involving the original scorer of each dashboard joined by two other experts (the first authors or another panel member), in December 2020. Prior to the start of the appraisal by each panel, a workshop with the scorers was organized to calibrate the approach to scoring.
Scoring was informed by the original data records and archives generated in the two descriptive assessments (July and November 2020). Importantly, each of the seven actionability features was appraised with consideration to the dashboard’s stated or inferred purpose and audience. That is, the appraisal of each feature differentiated between the intended use of the dashboard by national, PT, or municipal general public audiences, unless further specified. In line with the nominal group technique approach [
Prior to those discussions, partial or full agreement (two- or three-way consensus) had been reached on 83.5% (304/364) of the items scored, with full three-way agreement on 50.0% (182/364) (see
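The two- and three-way consensus tallies described above can be sketched as follows (a minimal illustration only; the items and ratings below are hypothetical, not the study data):

```python
from collections import Counter

def agreement(scores):
    """Classify one item's three scorer ratings as full, partial, or no consensus."""
    top = Counter(scores).most_common(1)[0][1]  # size of the largest rating bloc
    return {3: "full", 2: "partial"}.get(top, "none")

# Hypothetical items rated on the study's 3-point scale.
items = [
    ("present", "present", "present"),               # full three-way agreement
    ("present", "somewhat present", "present"),      # two-way (partial) agreement
    ("present", "somewhat present", "not present"),  # no consensus
]
levels = [agreement(s) for s in items]
full = levels.count("full")
partial_or_full = sum(level != "none" for level in levels)
print(full, partial_or_full)  # counts from which the reported percentages follow
```

Dividing these counts by the total number of items scored yields the full- and partial-agreement percentages reported.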
We used descriptive statistics to analyze the data at the two time points. We first determined the number and percentage of dashboards in which each item (ie, each consideration) of the descriptive assessment had been recorded as present in the July or November assessment or both. The net change for each item was calculated as the change in the total number of dashboards and the direction of that change between time points. To analyze score changes for the actionability features, we calculated feature-by-feature totals in both July and November, applying a 3-point ordinal scale (not present, somewhat present, and present). Using the same approach applied to analyze changes over time in the descriptive assessments, we calculated the net change per feature as the change in the total number of positively scored dashboards, noting the direction of that change.
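The net-change calculation described above amounts to a signed difference of totals per item; a minimal sketch (using hypothetical counts, not the study's figures):

```python
# Hypothetical presence counts (out of 26 dashboards) at the two time points.
july = {"Deaths": 20, "Active cases": 12, "Testing rates": 10}
november = {"Deaths": 21, "Active cases": 12, "Testing rates": 15}

def net_change(before, after):
    """Signed change in the total number of dashboards per item."""
    return {item: after[item] - before[item] for item in before}

changes = net_change(july, november)
print(changes)  # {'Deaths': 1, 'Active cases': 0, 'Testing rates': 5}
```

Because only totals are compared, a net change of 0 can mask offsetting changes, that is, the same number of dashboards adding and removing an item between time points.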
For free-text fields in the descriptive assessment tool, we used both deductive and inductive thematic analysis to identify themes [
This study involved the analysis of publicly available COVID-19 dashboards. Ethics approval was not required.
The 26 Canadian COVID-19 dashboards were assessed in the time frames July 7 to July 20 and November 23 to December 2, 2020, with an average of 135 days between assessments (range 132-140). All dashboards remained active, with regular, typically daily updating, aside from one (City of Vancouver), which was still accessible but last updated in August 2020. As expected, given the wide differences in population size and density across Canadian provinces and territories, the cumulative number of COVID-19 cases reported by the dashboards for their respective geographic areas ranged from 0 cases in Nunavut to more than 55,000 in Quebec in July, and from 15 cases in Northwest Territories to more than 140,000 in Quebec in November. Cumulative numbers of COVID-19 cases and deaths on the assessment dates are reported in
Description of changes to Canadian COVID-19 dashboards (N=26) over time in 2020.
| Consideration and description | July value, n (%) | November value, n (%) | Net change^a |
| --- | --- | --- | --- |
| Purpose and audience | | | |
| Purpose: Purpose of use of the dashboard stated | 10 (38%) | 10 (38%) | 0 |
| Audience: Intended audience (user) stated | 3 (12%) | 4 (15%) | +1 |
| Indicator themes | | | |
| Cases (all confirmed cases) | 25 (96%) | 25 (96%) | 0 |
| Deaths | 20 (77%) | 21 (81%) | +1 |
| Recovered (healed, cured) | 17 (65%) | 18 (69%) | +1 |
| Active cases | 12 (46%) | 12 (46%) | 0 |
| Mortality rate (case fatality rate) | 4 (15%) | 4 (15%) | 0 |
| Reproduction rate (attack rate) | 1 (4%) | 5 (19%) | +4 |
| Testing (total number tested, PCR^b tests) | 17 (65%) | 19 (73%) | +2 |
| Testing rates (positivity, negative tests) | 10 (38%) | 15 (58%) | +5 |
| Tests pending results | 4 (15%) | 2 (8%) | −2 |
| Testing turnaround | 0 (0%) | 3 (12%) | +3 |
| Self-quarantine (isolation notices) | 1 (4%) | 1 (4%) | 0 |
| Contact tracing | 2 (8%) | 2 (8%) | 0 |
| Hospitalized (admissions, discharges) | 16 (62%) | 15 (58%) | −1 |
| Admitted to the ICU^c (critical condition) | 10 (38%) | 12 (46%) | +2 |
| On a ventilator | 3 (12%) | 3 (12%) | 0 |
| Hospital bed capacity (availability) | 2 (8%) | 2 (8%) | 0 |
| ICU bed capacity | 3 (12%) | 2 (8%) | −1 |
| Ventilator capacity (available ventilators) | 3 (12%) | 2 (8%) | −1 |
| Non-COVID-19 service usage | 1 (4%) | 1 (4%) | 0 |
| Personal protective equipment stock | 1 (4%) | 1 (4%) | 0 |
| Employment and hardship relief | 4 (15%) | 4 (15%) | 0 |
| Transport, trade, and international travel | 2 (8%) | 3 (12%) | +1 |
| Behavioral: Public risk perception/restriction adherence | 5 (19%) | 3 (12%) | −2 |
| Future projections (modelling) | 1 (4%) | 1 (4%) | 0 |
| Risk-level/current phase (composite score) | 2 (8%) | 4 (15%) | +2 |
| Data | | | |
| Sources: Data sources are noted | 18 (69%) | 18 (69%) | 0 |
| Metadata: Metadata are specified | 11 (42%) | 14 (54%) | +3 |
| Types of analysis | | | |
| Time trend analysis available | 21 (81%) | 23 (88%) | +2 |
| Customizable time trend | 4 (15%) | 10 (38%) | +6 |
| Geographic levels reported: 1 level | 6 (23%) | 3 (12%) | −3 |
| Geographic levels reported: 2 levels | 14 (54%) | 15 (58%) | +1 |
| Geographic levels reported: 3 or more levels | 6 (23%) | 8 (31%) | +2 |
| Geographic breakdown: International | 3 (12%) | 3 (12%) | 0 |
| Geographic breakdown: National | 9 (35%) | 8 (31%) | −1 |
| Geographic breakdown: Regional (province/territory) | 22 (85%) | 22 (85%) | 0 |
| Geographic breakdown: Health regions | 10 (38%) | 15 (58%) | +5 |
| Geographic breakdown: Municipal (city) | 8 (31%) | 8 (31%) | 0 |
| Geographic breakdown: Neighborhood (postcode) | 3 (12%) | 2 (8%) | −1 |
| Population breakdown: Age | 18 (69%) | 17 (65%) | −1 |
| Population breakdown: Sex | 14 (54%) | 15 (58%) | +1 |
| Population breakdown: Mode of transmission | 5 (19%) | 6 (23%) | +1 |
| Population breakdown: Long-term care facilities | 5 (19%) | 5 (19%) | 0 |
| Population breakdown: Schools | 2 (8%) | 5 (19%) | +3 |
| Population breakdown: Ethnicity | 0 (0%) | 2 (8%) | +2 |
| Population breakdown: Race | 0 (0%) | 2 (8%) | +2 |
| Population breakdown: Comorbidities | 1 (4%) | 1 (4%) | 0 |
| Population breakdown: Socioeconomic status | 1 (4%) | 1 (4%) | 0 |
| Population breakdown: Health workers | 3 (12%) | 1 (4%) | −2 |
| Presentation | | | |
| Chart type: Table | 20 (77%) | 25 (96%) | +5 |
| Chart type: Graph/chart | 21 (81%) | 22 (85%) | +1 |
| Chart type: Map | 15 (58%) | 18 (69%) | +3 |
| Explanatory text: Yes, to clarify the quality of the data | 13 (50%) | 18 (69%) | +5 |
| Explanatory text: Yes, to clarify the meaning of the data | 12 (46%) | 11 (42%) | −1 |
| Visual cue: Use of color coding | 15 (58%) | 15 (58%) | 0 |
| Visual cue: Size variation | 3 (12%) | 4 (15%) | +1 |
| Visual cue: Icons | 3 (12%) | 7 (27%) | +4 |
| Interactivity: More information | 18 (69%) | 18 (69%) | 0 |
| Interactivity: Change of information | 7 (27%) | 10 (38%) | +3 |
| Interactivity: Change of display | 5 (19%) | 6 (23%) | +1 |
^a Net change refers to the change in the total number of dashboards between time points and the direction of that change. Importantly, a net change of 0 can mean either that nothing changed or that equal numbers of dashboards added and removed the specific consideration.
^b PCR: polymerase chain reaction.
^c ICU: intensive care unit.
There was no change in the extent to which dashboards stated their purpose of reporting, with just over one-third (10/26, 38%) doing so in both July and November. Where stated, the most frequent specific aims of dashboards were to provide simplified information in an “easy-to-digest, actionable way” [
Across the dashboards, public health and epidemiological indicators, followed by health system management indicators, were the most frequently reported indicators at both time points. Behavioral and socioeconomic indicators were rare. An average of seven indicator themes were reported per dashboard in November (range 2-17), compared with six in July (range 2-15). Several indicators became more prevalent in November, including viral reproduction rates, testing rates, testing turnaround times, and composite scores. Six dashboards (6/26, 23%) reduced the number of indicator themes reported, most often removing indicators on active cases. In some instances, indicators had been moved from the dashboard to new tabs or pages, as in Ottawa [
A third (8/26, 31%) of the dashboards, all government developed, did not explicitly report data sources in July or November. Dashboards typically drew data from jurisdiction-specific health services and public health authorities, hospital databases, and, for comparisons with other countries, the Johns Hopkins University Coronavirus Resource Center dashboard. Dashboards reporting metadata (supplementary details on the calculation of the indicators) increased to more than 50% (14/26, 54%) by November (from 11/26, 42%, in July). Notably, the COVID-19 in Canada dashboard published a detailed technical report on its data set produced by the COVID-19 Canada Open Data Working Group initiative [
A slight increase in the number of dashboards reporting time-trend data was observed between July and November (from 21/26, 81% to 23/26, 88%). Improvements were also made to the availability of customizable time scales, allowing users to zoom in on specific time frames of interest (from 4/26, 15% to 10/26, 38%).
Modifications were made to report subregional geographic breakdowns of data, with more than half (15/26, 58%) of the dashboards including breakdowns by health regions in November, as compared with 10 (10/26, 38%) in July. Age and sex remained the most common population breakdowns in November (17/26, 65%, as against 15/26, 58%, in July), followed by mode of transmission (6/26, 23%) and long-term care facilities (5/26, 19%). Schools emerged as a new type of breakdown in November, though present on only one-fifth (5/26, 19%) of dashboards.
Between July and November, most dashboards slightly improved the number and variety of chart types, simplification techniques, and interactive features they made available. This was mostly done by introducing maps or additional tables and icons, as well as user-directed modifications to the information displayed. New features that emerged in November included options to subscribe to email updates for alerts (eg, #HowsMyFlattening [
Text providing details on data quality was present on more than two-thirds (18/26, 69%) of dashboards in November, compared with half (13/26, 50%) in July. For example, Esri’s dashboard included lay-language explanations of values with statements such as “
Of the 26 dashboards assessed, none was found to fully present all seven of the defined actionability features in either July or November. Overall, 8% (2/26) of dashboards were assessed in July as having five or more actionability features fully present, doubling to 15% (4/26) in November. Roughly three-quarters (20/26, 77%) of dashboards had two or fewer features fully present in July, and 65% (17/26) had two or fewer fully present in November. Seven dashboards increased their number of fully present features. Although two dashboards scored lower in November, the decrease was largely attributable to modifications in the type of information reported on the main dashboard page, as indicators were moved to other dedicated pages.
The actionability feature most widely present on dashboards in both July and November was the clarity of data sources and methods, while the use of storytelling and visual cues was the feature most frequently absent (
Change in actionability across dashboards (n=26) over time in 2020. Not present: the feature is not found on the dashboard; somewhat present: some elements of the feature are present on the dashboard, but there is room for improvement; present: the specific feature is clearly demonstrated, and a good practice example of the feature is present. See
In this study, we explored changes made in the course of 2020 to public web-based COVID-19 dashboards in Canada and appraised their actionability for decision-making purposes. Although the dashboards we sampled varied in their specific geographic focus, all grew in relevance for supporting data-driven decision-making by their respective audiences as the severity of the COVID-19 pandemic intensified across the country. Broadly speaking, from the health care performance intelligence perspective we applied, we observed that subtle improvements were made to the dashboards between July and November 2020. Improvements were most pronounced with regard to dashboard technology solutions (better customizable time trends, and new charts and graphs) and data provision (new indicators, more transparency on metadata, and more geographic granularity). Modifications to further develop communicative elements were less pronounced or even absent during the period assessed. These results were mirrored in the scoring of actionability features.
COVID-19 dashboards worldwide are powered by a somewhat common range of software service providers (eg, ArcGIS, Tableau, and Power BI). We presume that some improvements observed across our sample can be credited to new technical features rolled out by such providers during 2020. For example, the use of adjustable time trends was a feature introduced on more than a third of the dashboards by November and was evidently an added element in the underlying software. However, while the industry may be credited with spearheading the technical development of dashboards, the current practice from a technological perspective of measuring actionability through
Improved geographic granularity and transparency of methods may be supported by initiatives like the COVID-19 Canada Open Data Working Group [
Our findings also reveal a responsiveness to the evolving nature of the pandemic, with multiple dashboards adding school cases or outbreaks as a data disaggregation option and turnaround times for virus testing as an indicator. Shortly after our second assessment, many dashboards also began reporting on vaccinations. Less advanced dashboards, from areas not seriously affected by the pandemic in the spring of 2020, made considerable progress in the second half of the year, as COVID-19 became more widespread. While such changes confirm that dashboards continued developing with time, the clarity of their intended aims and audiences nevertheless remained an underdeveloped attribute, despite wide recognition of the fundamental importance of data driven by a clear purpose and information need [
To our knowledge, this is the first study to comparatively explore and critically reflect on changes to COVID-19 dashboards over time from a health care performance intelligence perspective. The study was enriched by the expertise of the panel, whose members had prior experience in assessing COVID-19 dashboards internationally, as well as a shared reflexive lens to gauge both the technical and communication aspects of the dashboards. Additionally, given the sustained relevance of COVID-19 dashboards, our findings are pertinent to both short-term improvements in COVID-19 dashboards and their longer-term utility in addressing future public health crises.
We acknowledge several limitations. First, the stages of the pandemic and its severity varied considerably across our sample, possibly contributing to differences with respect to the data available and the prioritization of a dashboard’s development. Despite this, the general direction of change was found to be common, averaging a three-fold increase in COVID-19 cases across locations between our assessment time points (see
Actionable dashboards are needed to enable effective decision-making across audiences. Dashboards are tools of continuing importance during the COVID-19 pandemic, but sustaining their actionability requires responsiveness to the pandemic’s stages. Improvements made to COVID-19 dashboards in the Canadian context from July to November 2020 appear to be driven mainly by certain technological and data improvements. The effective use of communication features remained underdeveloped at both points in time. COVID-19 dashboard developers need to better leverage the expertise of public health and communication specialists, in order to ensure that data will truly become information that is readily accessible and relevant to a public audience. Strategic system improvements to prioritize data standards, for example, those with respect to subpopulation-based data, are needed to achieve more significant gains in actionability. As the pandemic continues to evolve, attention will need to shift toward converting dashboards from their initial status as temporary monitoring and communication tools into instruments that are integrated into routine health system performance monitoring. Accomplishing that will also require improved governance arrangements that clarify roles and responsibilities. In the short term, continued improvements are urgently needed with respect to all seven of the identified actionability features, in order to make COVID-19 dashboards more fit for their purpose and use.
Scoring tool on actionability features.
Overview of Canadian COVID-19 dashboards assessed.
Scoring distribution and extent of agreement prior to joint workshops.
PT: provincial/territorial
WHO: World Health Organization
The authors thank and recognize the full team involved in the work on the international sample of COVID-19 dashboards that preceded this study. We also thank the reviewers who commented on an earlier version of the paper and Michael Dallas for language editing. This work was carried out by the Marie Skłodowska-Curie Innovative Training Network Healthcare Performance Intelligence Professionals (HealthPros) that has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement number 765141.
EB and DI contributed equally as co-first authors. EB, DI, SA, NK, and DK designed the study. Data collection was conducted by EB, DI, SW, KJG, MP, CW, NL, and VB. EB and DI drafted the manuscript. All authors revised the article, gave final approval for the version to be published, and agreed to be accountable for all aspects of the work.
None declared.