Review
Abstract
Background: Data dashboards can be a powerful tool for ensuring that public health decision makers have access to timely, relevant, and credible data. As their appeal and reach become ubiquitous, it is important to consider how they may best be integrated with public health data systems and with the decision-making routines of users.
Objective: This scoping review describes and analyzes the current state of knowledge regarding the design, application, and actionability of US national public health data dashboards to identify critical theoretical and empirical gaps in the literature and clarify definitions and operationalization of actionability as a critical property of dashboards.
Methods: The review follows PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) guidelines. A search was conducted for refereed journal articles, conference proceedings, and reports that describe the design, implementation, or evaluation of US national public health dashboards published between 2000 and 2023, using a validated search query across relevant databases (CINAHL, PubMed, MEDLINE, and Web of Science) and gray literature sources. Of 2544 documents retrieved, 89 (3.5%) met all inclusion criteria. An iterative process of testing and improving intercoder reliability was implemented to extract data.
Results: The dashboards reviewed (N=89) target a broad range of public health topics but are primarily designed for epidemiological surveillance and monitoring (n=51, 57% of dashboards) and probing health disparities and social determinants of health (n=27, 30%). Thus, they are limited in their potential to guide users’ policy and practice decisions. Nearly all dashboards are created, hosted, and funded by institutional entities, such as government agencies and universities, that hold influence over public health agendas and priorities. Intended users are primarily public health professionals (n=34, 38%), policy makers (n=30, 34%), and researchers or practitioners (n=28, 32%), but it is unclear whether the dashboards are tailored to users’ data capacities or needs, although 30% of articles reference user-centered design. Usability indicators commonly referenced include website analytics (n=22, 25%), expert evaluation (n=19, 21%), and users’ impact stories (n=14, 16%), but only 30% (n=26) of all articles report usability assessment. Usefulness is frequently inferred from presumed relevance to decision makers (n=17, 19%), anecdotal stakeholder feedback (n=16, 18%), and user engagement metrics (n=14, 16%) rather than via rigorous testing. Only 47% (n=42) of dashboards were still accessible or active at the time of review.
Conclusions: The findings reveal fragmentation and a lack of scientific rigor in current knowledge regarding the design, implementation, and utility of public health dashboards. Coherent theoretical accounts and direct empirical tests that link usability, usefulness, and use of these tools to users’ decisions and actions are critically missing. A more complete explication and operationalization of actionability in this context has significant potential to fill this gap and advance future scholarship and practice.
doi:10.2196/65283
Introduction
Background
The disjointed public health response to the COVID-19 pandemic in the United States highlighted the critical importance of having robust public health data systems in place and the potential utility of data dashboards for ensuring timely and unrestricted access to critical public health data [ , ]. The ubiquitous and prominent use of dashboards to chronicle the progression of, and public health response to, the COVID-19 pandemic has increased the appeal of these tools to a broad and diverse range of decision makers, including public health leaders and professionals, health care providers, community leaders, policy makers, and advocates [ , ]. Data dashboards are frequently touted as a cost-effective means to share and access public health and other publicly available data because they transform complex data into intuitive information displays, afford multiple stakeholders instantaneous and near-universal access to data-based insights, and allow users to explore data on their own to answer questions that are important to them [ - ]. They are also increasingly recognized for their democratizing potential, both in making data available to a wider and more diverse range of audiences and in ensuring that diverse stakeholders, particularly those who are less privileged and most likely to be affected by how data are interpreted and used in decision-making, have the power and opportunity to shape what data are used in this context and how [ ].
Aims and Contributions
As public health data dashboards are poised to become more integral to public health decision-making at the local, state, and federal levels in the United States, it is imperative to proactively consider how they may best be designed, implemented, improved, and sustained to promote sound, equitable, and effective public health policies and practices [ , ]. Progress in this direction is currently impeded by the fragmented nature of research on this topic, specifically the lack of coherence regarding effective dashboard design principles and practices, as well as the mechanisms, factors, and supports that make dashboards usable and useful to diverse user groups and across health and decision-making contexts [ , , , ]. Previous reviews of the literature on the use of data dashboards in public health have generally focused on identifying and assessing the utility of key design features but were limited to specific public health applications, such as COVID-19 [ , ], food and nutrition systems [ ], infectious diseases [ ], and environmental hazards [ ], or were limited in focus to specific dashboard design features, such as data visualizations [ ] or usability [ ]. Thus, a systematic review that is broader and more comprehensive in the scope of health topics and applications considered, and that goes beyond design-related research questions to consider different goals of data dashboards (eg, alert, educate, and persuade), theories of action (ie, how dashboards are presumed or expected to work), and outcomes of use (including impact indicators), has significant potential to advance the scientific study of data dashboards as instruments for promoting sound health-related decisions, policies, and practices. Accordingly, the primary objective of this scoping review is to describe and critically assess the current state of scientific knowledge regarding the design, application, and actionability of US national public health data dashboards; note critical theoretical and empirical gaps; and identify potential avenues for improving knowledge integration. An additional unique contribution of this scoping review is its explicit focus on actionability as a critical feature of effective public health data dashboards.
There has been a growing interest in the question of what makes public health data dashboards actionable, that is, what ensures they provide an optimal match between purpose and use [ - ]. However, the concept of actionability in the context of public health data dashboards remains poorly defined and insufficiently developed to effectively guide their design and implementation. Ivanković et al [ ], for example, defined data dashboard actionability according to seven features: (1) knowing and clearly stating the intended consumers of the information, (2) selecting and presenting appropriate indicators, (3) clearly stating the sources of data and the methods used to generate indicators, (4) demonstrating variation over time and linking changes to public health interventions, (5) providing as high a spatial resolution as possible to enable consumers to evaluate local risk, (6) disaggregating data into population subgroups to further enable evaluation of risk, and (7) providing narrative information to enhance consumers' interpretation of the data. This functional conception treats actionability as a function of both usability and the degree of match between data and users' information needs, which is intuitive but may not be equally applicable across audiences and settings [ ]. Other scholars in this space offer a behavior-centered conception of actionability [ ]. In their view, to be actionable, dashboards must prompt or trigger users to act on data by being integrated, via behavioral design, into users' data use practices or routines, such as assessing performance on tasks or progress toward goals. Finally, there are those who advocate for a decision-centered conception of actionability, whereby data dashboards are considered actionable to the extent that they provide data, analyses, and forecasts (eg, predictive analytics) that allow decision makers to make an informed choice among alternatives [ , , ].
Accordingly, an additional important objective of the scoping review is to extract, reconcile, and integrate different conceptions and operationalizations of actionability across studies for the purpose of advancing a more complete explication and a standard approach to the measurement of actionability as a critical design element of public health data dashboards.
Methods
Review Methodology and Protocol
This scoping review was designed to generate both descriptive and thematic accounts of the purpose; intended audiences; range of health topics; design elements and characteristics; usability and usefulness measures; theories of action; and logistics of developing, implementing, and sustaining public health data dashboards based on information available from published US case studies. Given the considerable diversity in research questions and methodologies used across disciplines and fields to study public health data dashboards, a scoping review of the literature is most appropriate for producing a systematic evidence synthesis [ ]. This study followed the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) guidelines, the most up-to-date approach for conducting and reporting scoping reviews [ ]. In the subsequent sections, we briefly describe the methodological processes implemented. Further details are available in the published protocol [ ].
Selection Criteria, Sources, and Search Strategy
For the purposes of this scoping review, we defined public health data dashboard as a publicly accessible, web-based, interactive, and regularly updated information management and data visualization tool that displays and tracks population health indicators, metrics, and data points. This definition is inclusive of a broad range of population health–relevant data, such as epidemiological surveillance, but excludes the use of data dashboards in clinical and health care organizations as well as dashboards incorporated into patient portals.
displays all other inclusion and exclusion criteria used for searching and retrieving relevant publications. Given the rapid advancements in dashboard technology in recent years, adopting a broader historical perspective dating back to the beginning of the century can be useful for determining what, if anything, has changed over time regarding the design philosophies and theories of action guiding the development and implementation of these tools. To ensure adequate and inclusive representation of empirical studies, no methodological restrictions were imposed as selection criteria.
Inclusion criteria
- Publication type: Full text, peer-reviewed journal articles, conference proceedings, book chapters, or published reports
- Language: English
- Scope and focus: Empirical (qualitative, quantitative, or mixed methods) case studies of design, implementation, and evaluation of a US-located national public health dashboard
- Publication date: 2000 to 2023
Exclusion criteria
- Publication type: Peer-reviewed abstract-only or publications for which full text is not available; non–peer-reviewed publications
- Language: Non-English
- Scope and focus: Commentaries, background papers, or reviews of literature; case studies of dashboards located outside of the United States; case studies of state public health dashboards; or case studies of dashboards in clinical or health care settings
- Publication date: Before 2000 or after 2023
The search methodology (refer to the published protocol for full details [ ]) involved a series of steps to minimize potential errors in our search strategies that could negatively affect the quality and validity of this scoping review [ ]. First, in collaboration with a research librarian, we searched both the Medical Subject Headings (MeSH) database and keywords listed in recently (2019 onward) published journal papers on the topic of public health data dashboards to identify the most relevant keywords and terms for retrieving publications that met our inclusion criteria. Next, we followed an established procedure [ ] to experiment with different combinations of databases and search queries to optimize the recall (sensitivity) and precision (specificity) of our search strategy. Given the aims of this scoping review, we opted for a search strategy that maximizes coverage, that is, one that increases the likelihood of identifying all or as many relevant publications as possible. Accordingly, we searched the CINAHL, PubMed, MEDLINE, and Web of Science databases in June 2023 for published research reports using the least restrictive validated search query ([“dashboard” OR “data dashboard” OR “information visualization” OR “data visualization”] AND [“public health” OR “population health”]). These databases were selected because they were identified, via rigorous testing, as providing optimal coverage of research published across a broad range of disciplines and fields [ ]. We conducted supplementary searches of gray literature using the same search query in OpenGrey for additional documents that met all selection criteria.
Data Charting
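The validated query described above can be expressed programmatically. The following minimal sketch (the helper function and term lists are illustrative, not part of the published protocol; exact bracketing syntax varies by database) assembles the Boolean string from its two keyword groups:

```python
# The two keyword groups of the validated search query.
dashboard_terms = ["dashboard", "data dashboard", "information visualization", "data visualization"]
health_terms = ["public health", "population health"]

def build_query(group_a, group_b):
    """Combine two keyword groups into a ("A1" OR "A2" ...) AND ("B1" OR ...) query."""
    def quote(terms):
        return " OR ".join(f'"{t}"' for t in terms)
    return f"({quote(group_a)}) AND ({quote(group_b)})"

query = build_query(dashboard_terms, health_terms)
print(query)
```

Expressing the query this way makes it trivial to rerun the same search consistently across databases or to test tighter term combinations when tuning recall against precision.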
The list of themes and variables used for data abstraction is presented below. This list was created following an iterative process of reviewing the strategies and instruments used in previous similar reviews; consultations with an expert advisory group composed of public health data dashboard creators; and pretesting of the instrument with a randomly drawn sample of publications included in the review, using the same procedure described in the Selection Criteria, Sources, and Search Strategy section for validating the screening and selection procedure, including training on the task and tests of intercoder agreement (refer to the published protocol for full details [ ]). Data were extracted and recorded using a survey instrument designed to capture a range of closed-ended, multiple-response, and open-ended items to facilitate standardized coding by multiple coders. Quantitative data were cleaned, harmonized, and labeled before being analyzed using SPSS Statistics (version 29; IBM Corp) to generate descriptive statistics. Open-ended text entries were reviewed and analyzed collectively by the authors and organized into common themes to produce additional insights.
Study identifiers
- Metadata (title, authors, journal, year of publication, and keywords)
- Study type (eg, descriptive, exploratory, and explanatory)
- Research methodology
- Study focus (eg, development, implementation, and evaluation)
- Geographic location (country)
Data characteristics
- Data sources
- Health topics
- Type of data (eg, epidemiological, health services, and behavioral)
- Populations represented in the data
- Indicators or metrics selected for visualizations
- Data level of granularity (eg, national, state, county, and city)
Dashboard design characteristics
- Stated goals or purposes of the dashboard (eg, tracking or monitoring)
- Design philosophy cited (eg, user-friendly, functional, and co-design)
- Design process (eg, iterative and collaborative)
- Dashboard features (eg, customization and search functionalities)
- Data visualization tools (eg, maps, graphs, and tables)
Users and usability
- Intended audiences
- Public access (open, restricted or limited, and requires registration)
- Dissemination channels (eg, social media, news outlets, email, and listserv)
- Reported use- or usability-related barriers or challenges
Logistics or operation
- Ownership or hosting
- Source of funding
- Software tools (commercial and open source)
- Data updating and quality assurance protocols
- Technical support (eg, user manuals, training, and customer service option)
Performance and usefulness or impact evaluation
- Evaluation methodology
- Use or usability indicators captured (eg, website analytics and user ratings)
- Impact indicators or other evidence of impact
- Explanations given for observed effects or impact (or lack of)
Results
Overview
A total of 2529 documents (peer-reviewed journal papers, conference proceedings, and book chapters) were initially retrieved by implementing the search procedure. With the addition of gray literature sources (10/2544, 0.39%) and papers identified through snowballing of sources cited in other related literature reviews (5/2544, 0.2%), 2544 documents were considered in total. After the removal of duplicates (1386/2544, 54.48%), 44.34% (1128/2544) of documents were retained for manual screening, and 8.49% (216/2544) met the study's definition of a case study of a public health dashboard. Of these 216 documents, 127 (58.8%) were excluded because they were case studies of public health data dashboards in countries outside the United States and therefore beyond the scope of this scoping review; these items were retained for the purpose of conducting a future complementary scoping review comparing findings across international boundaries. Accordingly, 89 US-based case studies of public health data dashboards that met all selection criteria were included in the scoping review. The reasons for exclusion are detailed in the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flow diagram, and the PRISMA-ScR reporting checklist is also provided.
Each published case study of a US-based public health data dashboard was coded using the data charting instrument described above. All coders (n=5) first received training on the task and were then provided with a random sample of 10 documents to code. Agreement among coders was assessed using Krippendorff α [ ], and the omnibus result (α=.37) fell substantially below the acceptable standard (α=.70). Coders then received additional training and independently coded a fresh set of 10 randomly selected documents. Intercoder agreement was reassessed and reached an acceptable level (α=.78), with any remaining coding ambiguities resolved via full-team review and consensus.
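For readers who wish to reproduce an agreement check of this kind, the following standalone sketch computes Krippendorff α for nominal codes (the review does not specify the software used; the function name and toy data are illustrative):

```python
# A standalone implementation of Krippendorff's alpha for nominal codes.
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data.

    `units` is a list of per-document rating lists; each inner list holds the
    codes assigned by the coders to that document. Documents with fewer than
    2 ratings carry no pairable information and are skipped.
    """
    # Coincidence matrix: ordered within-unit value pairs, weighted 1/(m_u - 1).
    o = Counter()
    for ratings in units:
        m = len(ratings)
        if m < 2:
            continue
        for c, k in permutations(ratings, 2):
            o[(c, k)] += 1 / (m - 1)
    n_c = Counter()  # marginal totals per category
    for (c, _), w in o.items():
        n_c[c] += w
    n = sum(n_c.values())
    observed = sum(w for (c, k), w in o.items() if c != k) / n
    expected = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n * (n - 1))
    # Degenerate single-category case: all pairs agree.
    return 1.0 if expected == 0 else 1 - observed / expected

# Toy check: four documents, two coders, one disagreement.
print(round(krippendorff_alpha_nominal([[1, 1], [1, 1], [2, 2], [2, 1]]), 3))  # → 0.533
```

The coincidence-matrix formulation handles missing ratings and any number of coders, which is why α is preferred over simple percent agreement for a five-coder team like the one described here.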
Study Characteristics
A list and basic characteristics of the case studies included in the review are provided in [ , - ]. Articles reviewed were published in 60 different outlets between 2004 and 2023 and most commonly appeared in the American Journal of Public Health (7/89, 8%), the Journal of the American Medical Informatics Association (6/89, 7%), the Journal of Public Health Management and Practice (5/89, 6%), and JMIR Public Health and Surveillance (3/89, 3%). While the case studies included in this scoping review were published over a period of 19 years (2004-2023) and are quite diverse in terms of health topics and intended users of dashboards, a majority (61/89, 69%) were published after 2019, coinciding with the COVID-19 pandemic. Indeed, 40% (35/89) of the case studies included in the review directly address some aspect of COVID-19 and public health.
There was considerable variation in the types of studies included in the scoping review. Over half (49/89, 55%) provided a description of the dashboard developed, including sources of data, design features, and technical details. About a quarter (23/89, 26%) were more exploratory in nature, reporting the results of usability tests conducted with users and any subsequent refinement of the dashboard. A smaller number of case studies (13/89, 15%) were classified as explanatory, as they included qualitative or quantitative assessment of the degree to which use of the dashboard was associated with effects on users' knowledge, decisions, or actions. A handful of cumulative case studies (4/89, 5%) considered lessons learned from comparing the development or implementation of a dashboard across settings or user groups. Regarding methodology, 10% (9/89) of the case studies used quantitative methods, 37% (33/89) used qualitative methods, and 31% (28/89) used mixed methods; the remaining 20% (18/89) were descriptive accounts of a dashboard and the process of developing it.
Overall, case studies that systematically assess use, usefulness, and outcomes of using public health dashboards remain scarce even as the volume of published empirical research on the topic has sharply risen in recent years. This is also evidenced in the types of information frequently provided in the case studies reviewed. Information typically reported includes features or functionalities of the dashboard (80/89, 91%), sources of data used (78/89, 89%), and the logistics of developing and deploying the dashboard (62/89, 71%). Less frequently reported is information pertaining to assessing use or usability of the dashboard (26/89, 30%), results of usability tests (19/89, 22%), any form of impact evaluation (17/89, 19%), or dissemination procedures (14/89, 16%). This distribution may reflect authors’ decisions about what information to report due to space constraints and the absence of standards for reporting on dashboards, but it may also point to the paucity of efforts to assess the usefulness of these tools for public health decision makers.
Dashboard Hosting and Funding Source
As shown in the table below, the dashboards represented in the case studies included in the scoping review were most likely to be hosted on university websites, compared to federal government sites, sites maintained by nonprofit or philanthropic organizations, state government sites, and independent hosts. A handful of dashboards (3/89, 3%) were hosted by health care industry organizations. Website hosting information was unavailable for a third of the case studies reviewed, primarily because a web address for the dashboard was not provided.
The findings also demonstrate that most of the dashboards studied were funded by US government health agencies (eg, Centers for Disease Control and Prevention [CDC], National Institutes of Health, and Agency for Healthcare Research and Quality), followed by universities and foundations, with grants being the most common mechanism for funding the development and deployment of public health dashboards (39/89, 44%). Funding information was not provided for a third of the case studies reviewed (30/89, 34%). Among the case studies where funding information was provided, federally funded dashboards most often drew on federal data sources (19/36, 53%), followed by state agencies (13/36, 36%), research organizations (12/36, 33%), media sources (11/36, 31%), and local agencies (6/36, 17%). Taken together, these findings suggest that institutional actors, such as government agencies, universities, and philanthropic organizations, are the primary funders, developers, and hosts of public health data dashboards in the United States, presumably because they possess the necessary resources and expertise to create and maintain these tools. However, this concentration may be a source of selection bias regarding the topics, data, and indicators covered by these dashboards.

| | Value, n (%) |
|---|---|
| Dashboard hosting | |
| University websites | 23 (26) |
| Federal government websites | 9 (10) |
| Health management organizations | 9 (10) |
| Nonprofit and philanthropic organizations | 7 (8) |
| State government websites | 6 (7) |
| Independent and nonaffiliated websites | 6 (7) |
| Unknown | 29 (33) |
| Source of funding | |
| Federal health agencies | 36 (41) |
| Universities | 19 (22) |
| Foundations | 13 (15) |
| Unknown | 23 (53) |
Topic, Purpose, and Intended Users
A complete list of public health topics covered by the dashboards represented in this review is included in . For the purpose of this analysis, case studies of public health dashboards were grouped according to the type of data used and the purpose of presenting the data such that they map onto key public health functions. As shown in the table below, the primary function or purpose of the dashboards reviewed was surveillance and monitoring. Epidemiological surveillance was the most common purpose of the data presented in dashboards, followed by health outcomes surveillance (eg, births, deaths, life expectancy, and quality of life measures), tracking of the use of health services (eg, proportion of the population screened or immunized), and analysis of sources or causes of health disparities (eg, social determinants of health). Behavioral surveillance (eg, tracking self-reported attitudes and behaviors), news and social media content monitoring, health policy or legislation tracking, and tracking the availability of health care facilities or services in a given geographic area were less common by comparison. These differences may be attributed to the limits imposed by the types of population-level health data available to dashboard developers, which are predominantly of the epidemiological and health services type. Notably, about 22% (20/89) of the dashboards reviewed were equipped to provide predictions of future trends (often based on data extrapolation) or likely effects (positive or adverse) of policies, such as increasing access to health care insurance or services in a community, which may enhance their actionability.
Recognizing that public health data dashboards are often created to serve multiple audiences, the intended users identified by the authors of the case studies reviewed, as shown in the table below, were most commonly public health decision makers (eg, public health departments and officials), followed by policy makers (eg, agency, state, and city administrators), researchers (eg, analysts and academics), practitioners (eg, clinicians, health care administrators, public health professionals, and first responders), and the general public. By comparison, public health advocates were the least likely to be identified as potential intended users of dashboards. Intended users of the dashboard were not explicitly identified in 17% (15/89) of the case studies.
Because dashboard actionability is primarily a function of the match between audience needs and the purpose of presenting data (data affordances), a multiple-response cross-tabulation analysis was conducted to probe the degree to which the data affordances of dashboards are tailored to various users. As shown in the cross-tabulation table below, the results demonstrate no clear pattern of covariation of dashboards' data affordances by groups of intended users. Thus, case studies of dashboards designed for epidemiological surveillance were equally likely to identify researchers, policy makers, and public health decision makers or practitioners as intended users, and the same was true for dashboards designed for tracking and comparing health outcomes and those designed to highlight the effects of social determinants of health. By comparison, case studies of dashboards designed for tracking access to and use of health services were more likely to identify members of the general public as the intended audience than researchers, policy makers, or public health decision makers. This inconclusive pattern of association suggests that designing actionable dashboards tailored to specific audience groups is not a common practice. Conversely, it may reflect dashboard designers' belief that the dashboards they design are universally usable and useful for diverse audience groups and purposes.

| | Value, n (%) |
|---|---|
| Dashboard focus | |
| Epidemiological surveillance | 51 (57) |
| Health outcomes surveillance | 34 (38) |
| Use of health services | 29 (33) |
| Health disparities | 27 (30) |
| Behavioral surveillance | 13 (15) |
| News and social media surveillance | 11 (13) |
| Policy or legislative surveillance | 9 (10) |
| Services availability | 15 (17) |
| Intended audiences | |
| Public health decision makers | 34 (38) |
| Policy makers | 30 (34) |
| Researchers | 28 (32) |
| Practitioners | 28 (32) |
| General public | 27 (30) |
| Advocates | 12 (14) |
| None explicitly referenced | 15 (17) |
Note: As dashboards frequently incorporate different types of data that serve multiple functions and cater to multiple user groups, the total percentage across categories exceeds 100%.
| | Researchers, n (%) | Policy makers, n (%) | Decision makers, n (%) | Practitioners, n (%) | Advocates, n (%) | General public, n (%) |
|---|---|---|---|---|---|---|
| Epidemiological surveillance | 16 (57) | 18 (60) | 21 (62) | 14 (50) | 5 (42) | 21 (78) |
| Behavioral surveillance | 3 (11) | 8 (27) | 6 (18) | 6 (21) | 2 (17) | 3 (11) |
| Policy surveillance | 2 (7) | 4 (13) | 3 (9) | 4 (14) | 1 (8) | 2 (7) |
| News and social media surveillance | 3 (11) | 5 (17) | 5 (15) | 4 (14) | 2 (17) | 5 (19) |
| Access to services monitoring | 3 (11) | 7 (23) | 5 (15) | 4 (14) | 3 (25) | 9 (33) |
| Use of services monitoring | 5 (18) | 7 (23) | 10 (29) | 8 (29) | 4 (33) | 9 (33) |
| Health outcomes surveillance | 11 (39) | 13 (43) | 13 (38) | 10 (36) | 6 (50) | 12 (44) |
| Health disparities | 10 (36) | 10 (33) | 14 (41) | 11 (39) | 5 (42) | 9 (33) |
| Prediction | 7 (25) | 5 (17) | 5 (15) | 2 (7) | 1 (8) | 4 (15) |

Note: Percentages indicate the distribution within individual subsets (columns) rather than across all cases.
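Column-wise percentages of the kind reported in the cross-tabulation above can be reproduced with a simple multiple-response cross-tabulation. The sketch below uses a small hypothetical dataset (the field names and category labels are invented for illustration, not drawn from the review's coded data):

```python
# Hypothetical mini-dataset: each dashboard lists its data affordances ("focus")
# and intended audiences (multiple responses allowed per dashboard).
dashboards = [
    {"focus": {"epi", "outcomes"}, "audience": {"researchers", "policy makers"}},
    {"focus": {"epi"}, "audience": {"general public"}},
    {"focus": {"disparities"}, "audience": {"policy makers", "general public"}},
    {"focus": {"epi", "disparities"}, "audience": {"researchers"}},
]

def crosstab_pct(rows, focus, audience):
    """Percentage of dashboards naming `audience` whose focus includes `focus`
    (a column-wise percentage, so columns need not sum to 100%).
    Assumes at least one row names the given audience."""
    subset = [r for r in rows if audience in r["audience"]]
    hits = sum(1 for r in subset if focus in r["focus"])
    return round(100 * hits / len(subset))

print(crosstab_pct(dashboards, "epi", "researchers"))  # → 100
print(crosstab_pct(dashboards, "epi", "general public"))  # → 50
```

Because both the focus and audience fields are multiple-response sets, each dashboard can contribute to several cells at once, which is why the published columns exceed 100%.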
Data Source, Focus, and Representation
The table below describes the distribution of sources and types of data of the dashboards included in the review. About half of the dashboards (44/89, 49%) used data from federal agency sources (eg, CDC and Agency for Healthcare Research and Quality), and a third (29/89, 33%) used data collected by state agencies (eg, state departments of health). Data obtained from health care facilities (eg, administrative data such as emergency room records and hospitalizations) were used by a quarter of all dashboards (23/89, 26%), and media data and data collected by research organizations such as universities were each used by about a fifth of all dashboards (19/89, 21%). Aggregated patient or clinical data (14/89, 16%), municipal data (13/89, 15%), and insurance claims data (7/89, 8%) were less frequently used by the dashboards included in the review.
The type of data featured in dashboards was primarily epidemiological data (eg, incidence of disease, illness, or events such as drug overdoses), health services data (eg, data about services provided by certified health providers, such as hospitalization, ambulatory care, screens, medications, and immunizations), clinical data (eg, data related to patient diagnosis, exposures, and laboratory tests), and health outcomes data (eg, births, deaths, life expectancy, and quality of life indicators). Behavioral data (eg, self-reported measures of beliefs, attitudes, and behaviors), media data (eg, news coverage of health topics or social media posts), and environmental risk data were less frequently integrated into the dashboards reviewed, presumably because such data are not routinely collected or readily available to creators of public health data dashboards.
The same pattern of findings emerged regarding the range of public health issues addressed by the dashboards studied (refer to the complete list): common categories of issues included risk factors (eg, chemical exposure, infectious diseases, and tobacco use; 42/89, 47%); disease incidence (eg, obesity and diabetes; 22/89, 25%); health disparities (eg, access to or use of health and medical services; 17/89, 19%); and, less frequently, social determinants of health (5/89, 6%) and behavioral or public opinion insights (5/89, 6%). Most dashboards (81/89, 91%) focused on a single topic, with about 40% (35/89) focused exclusively on COVID-19.
Regarding representation, the dashboards studied afforded users access to varying levels of international (15/89, 17%), national (41/89, 46%), state (47/89, 53%), and hyperlocal (eg, city, town, or county; 62/89, 70%) public health data, with the greatest degree of overlap between state and local data (37/89, 42%). There were also notable variations in the populations represented in the data used by dashboards. Patient populations (48/89, 54%) and the general population (46/89, 52%) were most frequently represented, compared to provider populations (health care professionals and medical institutions and organizations; 7/89, 8%) and data that exclusively represent populations considered vulnerable (10/89, 11%). As may be expected, populations considered vulnerable were more likely to be represented in dashboards focused on health disparities and social determinants of health (6/10, 60%) than in dashboards focused on other aspects of public health (eg, risk factors and use of health services).
Value, n (%)a
Data source
Federal agency | 44 (49)
State agency | 29 (33)
Health care facilities | 23 (26)
Media | 19 (21)
Research organizations | 18 (20)
Aggregated patient data | 14 (16)
Municipal data | 13 (15)
Insurance claims data | 7 (8)
Data type
Epidemiological data | 48 (54)
Health services data | 48 (54)
Clinical data | 41 (46)
Health outcomes data | 38 (43)
Behavioral data | 19 (21)
Media data | 5 (6)
Environmental risk data | 4 (5)
aAs dashboards used data from multiple sources and of different types, the total percentage across categories exceeds 100%.
Dashboard Design Process and Design Principles
On the basis of the information provided by authors, we determined that the design of dashboards included in the review was most frequently driven by the intended purpose or goal of using the dashboard (functional design, represented in 29/89, 33% of case studies) or by the needs or preferences of users (user-centered design, represented in 28/89, 32% of case studies), and less frequently by the aim of facilitating or supporting a particular decision-making process (decision-centered design, represented in 13/89, 15% of case studies). No clear design philosophy was discussed by authors in 35% (31/89) of the analyzed case studies.
Variations regarding the collaborative nature, if any, of the design process ranged from creator-driven (no input from users, represented in 39/89, 44% of case studies), through creator-driven with user feedback (27/89, 30% of case studies), and to a partnership-based or co-design process (23/89, 26% of case studies). Reported collaborations on dashboard development were overwhelmingly scientific collaborations with external experts or teams of developers (45/89, 51%) and less likely to involve collaborations with funders (10/89, 11%), community representatives (10/89, 11%), or industry (6/89, 7%). Over one-third of all case studies (32/89, 36%) did not reference any collaboration.
Given that relatively few case studies involved user feedback or collaboration, data visualization choices were presumably made without input from users in many cases. Nevertheless, data visualization tools referenced include graphs and charts (69/89, 78%), maps (54/89, 61%), timelines (36/89, 40%), and tables (32/89, 36%), with information about visualization tools missing from 8% (7/89) of case studies analyzed. Interactive customization options referenced include selecting or sorting cases by ≥1 indicators (54/89, 61%), selecting or grouping cases by location (46/89, 52%), sorting or grouping by time (34/89, 38%), sorting or grouping by demographic characteristics (25/89, 28%), and a search function (10/89, 11%). Information regarding customization was missing or ambiguous in 23% (20/89) of case studies. In 30% (27/89) of case studies, authors indicated that integrating health data with social determinants of health data for the same group or locality (eg, rural health indicators by rural access to broadband internet) was possible.
Data visualizations implemented in the dashboards studied could most frequently be disaggregated spatially or geographically (52/89, 58%), followed by temporally (eg, by year or month; 32/89, 36%). Other common disaggregation options reported include demographics (eg, age, gender, race, and ethnicity; 24/89, 27%) and socioeconomic factors (eg, education and income; 21/89, 24%). No disaggregation options were referenced in 23% (20/89) of case studies analyzed, and disaggregation by contextual factors, such as environmental hazards (11/89, 12%), health services availability (7/89, 8%), and genomic and biological factors (3/89, 3%), was available in <15% of case studies analyzed. Interestingly, we found no reference to data storytelling, simulations, and other more interactive forms of audience engagement with data in the case studies reviewed.
Use, Usability, and Usefulness
To determine whether a dashboard was still active at the time of conducting our review, we used the URL provided by authors, either in the text of the publication or in any supporting materials. URLs of dashboards were not provided in 41% (36/89) of the case studies analyzed. We were able to confirm that 47% (42/89) of all dashboards were still active at the time of our review and that 12% (11/89) were inactive or could no longer be accessed due to broken links. Dashboard survival seems to be influenced more by relevance or data availability than by the time elapsed since the case study was published. Thus, although relatively recent, 45% (16/35) of the COVID-19 dashboards described in case studies were no longer available or accessible at the time of producing this scoping review, and many of those that remain accessible have not been recently updated, as COVID-19 case and death data reporting was discontinued by the CDC with the end of the public health emergency in May 2023. In addition, based on information provided in the case study or by inspecting the URLs provided, we were able to determine whether users had unrestricted or conditional access to dashboards included in the analysis. Open or unrestricted public access to the dashboard was observed in 46% (41/89) of all cases, whereas conditional access (eg, having to register as a user before being granted access) was observed for 14% (12/89) of all cases.
Only sparse information was provided in the case studies reviewed regarding how users were to learn about the availability and intended use of data dashboards, with such information not reported for 75% (67/89) of cases. Dissemination channels referenced when such information was provided include webinars, training, and outreach (13/89, 15%); social media posts (5/89, 6%); newsletters (4/89, 5%); news items, email distribution lists, and blogs (3/89, 3% each); and targeted advertising or website information (2/89, 2% each).
Usability indicators referenced include website analytics (22/89, 25%); experts’ evaluation (19/89, 21%); users’ impact stories (14/89, 16%); user ratings (12/89, 14%); citations, references, and mentions (8/89, 9%); and URL links (eg, external sites that link to or embed dashboards; 2/89, 2%). Usability information was not provided for 47% (42/89) of case studies included in the review. Indicators of usefulness (eg, impact on user knowledge, perceptions, decisions, or actions) mentioned in case studies include expectations regarding public health impact (17/89, 19%), stakeholder feedback or use (16/89, 18%), user engagement metrics (14/89, 16%), citations or references to dashboards in academic publications (12/89, 14%), and anecdotal evidence of association between policy makers’ use of dashboards and policy actions (6/89, 7%). Information about impact indicators was not provided for 51% (45/89) of the case studies reviewed.
Actionability Assessment
In addition to producing an updated, state-of-the-art review and analysis of public health data dashboards in the United States, a primary motivation for conducting this scoping review was to clarify the meaning and significance of actionability as a property of effective public health data dashboards. Our findings distinguish among 3 principal conceptions of dashboard actionability. A common conception, popularized by Ivanković et al [
], understands actionability as the degree of match between purpose and use and associates it with functional design such that an actionable dashboard displays information clearly and efficiently, is intuitive to use, and is easily customizable to allow data exploration. Our analysis revealed that 33% (29/89) of the case studies of dashboards reviewed used functional design. The table below assesses the applicability of the actionability criteria proposed by Ivanković et al [ ] to the case studies included in the review at the aggregate level, recognizing that this scheme was developed to assess actual dashboards (as opposed to research reports on dashboards).

Several valuable insights emerge from this exercise. First, dashboard actionability critically depends on the availability of the “right data”—not simply in terms of quality, relevance, and timeliness but also the degree of data granularity and adequate representation of both subpopulations and relevant indicators. The “right data” also has much to do with public health focus: most case studies of dashboards reviewed were designed for epidemiological or health services access or use surveillance; only a handful were intentionally designed to support other critical public health missions, such as health education and prevention, health policy advocacy, and improved access to health services. Thus, expanding the types and diversity of data incorporated into dashboards is necessary for enhancing the actionability of these tools. Second, actionability is also a function of match to purpose and use, which varies depending on the goal of data use (eg, surveillance vs analysis or prediction) and the range of questions that can be answered given the data layering and customization possibilities afforded by a dashboard.
This dimension of actionability is acutely relevant for exploring or analyzing data in context: <15% (5/89) of case studies of dashboards included in the review afforded users the opportunity to explore the relevance or significance of contextual factors such as social determinants of health. Third, the use of dashboards can result in unintended or undesirable effects [
]. This may be due to bias in the data used for creating a dashboard [ ], bias associated with the presentation of data [ ], or bias (whether intentional or unintentional) that affects the correct interpretation or proper use of insights drawn from data. Therefore, actionability requires acknowledgment of any actual and potential limitations or sources of bias that may influence dashboard use. This necessarily means going beyond mere transparency regarding data sources, methods, and funding to introducing, as a matter of standard practice, built-in guardrails against uninformed or improper use of dashboards in the form of alerts or cautions, disclaimers, and perhaps even recommendations or guidelines regarding acceptable use.

Actionability criterion | Scoping review findings |
Knowing and clearly stating the desired consumers of the information | Information about intended users was available for most of the case studies reviewed (74/89, 83%). The primary intended audiences identified were public health decision makers, policy makers, and researchers, with secondary audiences including practitioners, advocates, journalists, and the general public. |
Selection and presentation of appropriate indicators | Virtually all case studies reviewed used indicators that were topic relevant and aligned with the stated purpose of the dashboard. However, in only about half of all cases (50/89, 56%) were appropriate indicators determined after consulting the intended users of a dashboard. The choice of indicators appears to be constrained by data availability. |
Clearly stating the sources of data and methods used to generate indicators | Sources and types of data were clearly noted in most case studies reviewed. However, there was less transparency regarding methods (not reported in 31/89, 35% of case studies), software used (not reported in 25/89, 28% of cases), and collaborators, if any (not reported in 32/89, 36% of cases). There was no reference in the case studies reviewed to the inclusion of disclaimers regarding data limitations, although it is possible that disclaimers were included in some or most cases. |
Demonstrating variation over time and linking changes to public health interventions | About 40% (36/89) of case studies reviewed included visualization of indicators over time, and 38% (34/89) allowed for temporal customization of data. None were linked to the effect of a public health intervention, although about 22% (20/89) of cases involved dashboards capable of extrapolating predictions. Still, this criterion does not universally apply to all dashboards, given variations in purpose and type of data used. |
Providing as high a spatial resolution as possible to enable consumers to evaluate local risk | Most dashboards represented in the case studies reviewed (62/89, 70%) had a degree of data granularity extending to the local level. Still, this criterion does not universally apply to all dashboards, given variations in purpose and the scope and quality of local data available. |
Disaggregating data to population subgroups to further enable evaluation of risk | About 27% (24/89) of the case studies of dashboards reviewed allowed for data disaggregation by demographics, 24% (21/89) for disaggregation by socioeconomics, and 8% (7/89) for disaggregation based on health insurance status. However, this particular affordance of public health dashboards is likely more common. At the same time, only 11% (10/89) of the cases of dashboards reviewed used data specific to a particular subgroup, which may indicate inadequate representation of minoritized groups and other groups considered vulnerable that are underrepresented in general population data. |
Providing narrative information to enhance interpretation of the data by the consumer | The findings of the scoping review do not reveal a standard approach to the inclusion of narrative information to aid interpretation. Such information was rarely included in the case studies reviewed, and most (67/89, 75%) did not include any information pertaining to dissemination to users. |
A second and equally common conception of actionability emerging from the case studies of dashboards reviewed (28/89, 32%) is behavioral or user-centered design. This conception primarily understands dashboard actionability as a function of both usability and usefulness: dashboards can support evidence-informed decisions and actions only if they are usable (ie, sufficiently easy and intuitive for users to navigate, interact with, and customize data visualizations) and useful in terms of being responsive to users’ information needs and generating valuable insights for guiding users’ understanding, reflection, decisions, and ultimately actions. Of the 2, usefulness appears to be most relevant to operationalizing actionability because usability is closely associated with a user’s technical and data analytical literacy and therefore may be considered a necessary but insufficient determinant of usefulness. However, our findings suggest that use and usability evaluations—whether via website analytics (22/89, 25%), experts’ evaluation (19/89, 21%), or user ratings (12/89, 14%)—are more common than evaluations of usefulness. Moreover, we found no evidence of systematic or rigorous evaluations of usefulness across the case studies of dashboards included in the scoping review. When an effort is made to assess usefulness, it is typically based on anecdotal user feedback (16/89, 18%), user engagement metrics derived from website analytics (14/89, 16%), or distal indicators such as citations or references to dashboards in academic publications (12/89, 14%). In this regard, we note that virtually none of the case studies of dashboards included in the review articulated an explicit theory of action that causally links dashboard use and usability to usefulness and impact of use, including the underlying mechanism that explains how use relates to outcomes (eg, drawing attention, facilitating learning and comprehension, persuading, and guiding choice among alternative actions).
Although user-centered design is frequently referenced in these case studies as the framework guiding the development of usable and useful dashboards, the development of these tools appears to be based mostly on dashboard developers’ expectations regarding how users should interact with, experience, and be influenced by using a dashboard, rather than on robust and thoughtful engagement with potential users and their expectations and needs. This conclusion is supported by the fact that case studies referencing a co-design process were significantly fewer than those in which a dashboard was developed with no or minimal input from intended users (23/89, 26% vs 39/89, 44%, respectively) and that reported collaborations most frequently involved scientific collaborators (45/89, 51%) rather than users (15/89, 17%).
A third, less common conception of actionability that emerged from the scoping review (13/89, 15%) is focused on the degree of match between the insights that can be drawn from using a dashboard and the nature of the decision facing users. This conception of actionability is based on the recognition that the use of dashboards is often motivated by organizational goals and therefore ought to vary depending on whether strategic, tactical, or operational decisions are involved [
]. Thus, dashboards primarily designed for surveillance and monitoring (representing most of the case studies reviewed) can support operational decisions; dashboards that enable users to probe and analyze causes of health disparities or compare the efficacy of different intervention approaches can support tactical decisions; and dashboards that offer predictions (about 11/89, 12% of case studies of dashboards included in the review) or present data in context (eg, social determinants of health; about 27/89, 30% of case studies reviewed) can support strategic decisions regarding health policy and investments. This conception of actionability appears to be the least developed in the literature but may deserve greater attention from dashboard developers and researchers alike.

In summary, actionability assessment as applied to dashboards is more complex and multifaceted than portrayed in the literature on the topic. Among others, actionability is a function of user factors (capacity, needs, motivations, etc), characteristics of available data (quality, completeness, relevance, timeliness, granularity, etc), purpose (surveillance and monitoring, enlightenment, diagnosis, prediction and prognosis, prescription for action, etc), decisional goals (eg, strategic, tactical, or operational), desired impact (eg, on policy, practice, system change, and public education), and design elements (usability, functionality, interactivity, customization, adaptability, etc). It also requires consistent and informed use of dashboards and therefore is likely associated with the quality of dissemination efforts (ie, how users find out about the availability and value of using a dashboard); guidance regarding appropriate (and ethical) use; thoughtful integration with existing systems and users’ professional routines; and sustained sources of funding for technical support, maintenance, and continued improvement.
Given this complexity, it is difficult to envision a standard set of metrics or indicators for studying and assessing actionability across applications and users of dashboards. A more productive path forward is to move away from a conception of actionability as a trait or property of usable and useful dashboards in favor of a more dynamic conception that understands actionability as a function of the iterative process used to conceive, design, deploy, evaluate, improve, and sustain dashboards that users find usable and useful given their goals, knowledge needs, and capacity.
Discussion
Principal Findings
Data dashboards can be a useful tool for improving knowledge translation, efficient and timely dissemination of insights from research, and equitable access for diverse users to critical health-related information. They can also support evidence-informed decision-making by serving multiple functions (eg, drawing attention and awareness to emerging challenges and monitoring change on existing ones, promoting more nuanced understanding of problems and potential solutions, facilitating goal setting, prioritizing, and sound allocation of resources) and can be valuable for data-focused collaborations. As public health data dashboards are poised to become more ubiquitous, it is imperative to proactively consider how they may be best designed to leverage public health data systems and meet the information needs of diverse audiences to support sound decisions regarding equitable and sustainable public health policies and practices [
, ]. However, as is evident from the findings of this scoping review, the scientific literature available to inform such efforts is considerably fragmented and lacking a standard, coherent focus regarding the goals, design, use, usefulness, and impact of these tools, as well as regarding factors (ie, conditions, circumstances, and support mechanisms) that explain variations in their use and usefulness across users and applications [ , , , ].

The rapid growth in public health data dashboard development in recent years—driven in part by the COVID-19 pandemic—may indicate that dashboard ecosystems are rapidly expanding along with technologies to support them, requiring conscientious approaches to dashboard design and applications, including improving on the adaptive or repurposing potential of these tools when public health priorities shift (as was the case for the COVID-19 dashboards). Despite this growth, our findings show that systematic and rigorously evaluated insights from the available literature regarding the optimal design, implementation, and improvement of public health dashboards are sparse and inconsistent and, therefore, insufficient to advance the future development and successful application of these tools at scale as well as support rigorous evaluations of their efficacy and public health impact.
Most of the case studies (36/89, 41%) included in the review were funded by the US government, with grants being the most common funding mechanism used to support the development of public health dashboards. The dashboards considered in the literature were also more commonly hosted on university websites and designed via scientific collaborations. While our sample of case studies may be admittedly biased toward dashboards developed in academia, given that data were extracted from academic publications, legitimate questions nevertheless arise about the long-term sustainability of such dashboards beyond classic project-based approaches to grant-funded work [
]. Still, as key drivers of innovation [ ], including in public health [ ], universities are uniquely positioned with expertise and institutional capacity to lead dashboard development efforts. This tension point may represent a fruitful area for further investigation and discussion.

Our review and synthesis also point to limited application of data dashboards in public health, including in relation to health inequities. The findings show that public health dashboards are primarily used for epidemiological surveillance and monitoring of various health risks. From a health equity perspective, such use of dashboards necessarily invites public health focus on deficits or disparities across subpopulations and communities; however, dashboards can be an equally effective tool for mapping and tracking assets (eg, available community resources that can be tapped in public health emergencies). Similarly, dashboards can have an important role in supporting effective public health advocacy by regularly monitoring the health policy-making and public opinion arenas, but these types of applications are significantly less common based on the findings of the review. In terms of intended users, the dashboards in our study were more commonly geared toward public health decision makers and policy makers than other public health stakeholders, such as the news media, public health advocates, and the general public. While the data used in dashboards are predominantly collected and shared by federal and state public health agencies, with institutional capacity for data management, curation, and interpretation [
], the case studies reviewed suggest that local data are increasingly available for integration into public health dashboards, but it is not clear whether the quality and representativeness of local data are sufficient for supporting sound decisions [ ] or whether the design of dashboards used by federal and state policy makers is equally responsive to local decision makers’ knowledge needs and data use capacity. The finding that the practice of co-designing dashboards with users is rare, at least based on the cases reported in the literature on the topic, may raise concerns regarding the usability and usefulness of these tools to local public health decision makers.

Limitations and Future Work
The scoping review methodology used in this study has several potential limitations. First, although we took multiple steps to ensure the rigor of our literature search and screening strategy, it is still possible that some relevant studies that met the study’s inclusion criteria were overlooked, including studies published after our search was concluded in mid-2023. However, by opting for a procedure designed to maximize recall (coverage) at the expense of precision (specificity), we were able to mitigate any potential bias due to omission of relevant studies. Second, and related to scope, the studies included in this scoping review were limited to public health data dashboards in the United States, whereas our search strategy identified a nontrivial number of relevant studies involving public health dashboards developed in other countries. Regions and countries around the world vary in terms of available public health data infrastructure, health systems, and public health conditions and priorities. Such international samples of case studies, while not directly comparable, may produce additional valuable insights and therefore deserve similar attention. Accordingly, we plan to conduct a separate, complementary scoping review of these additional case studies, using the same procedure and methodology implemented in this study, and compare the findings to the ones reported here, noting any similarities and differences between the 2 samples. Third, case studies of public health data dashboards that are available from the academic literature on this topic may overrepresent a particular type or subpopulation of dashboards (eg, dashboards developed and evaluated by university researchers) and therefore underrepresent the actual diversity of dashboard applications in public health, which may potentially bias our findings and conclusions. At the same time, our findings and conclusions are largely congruent with those reported by previous similar literature syntheses [
, , - ]. In addition, the next phase of our project, which involves coding and analysis of a probability sample of US federal and state public health data dashboards, will permit us to assess the degree and type of bias, if any, in the literature based on the findings of this scoping review. Finally, because the studies included in this scoping review vary considerably in the type and depth of the information provided, our data extraction and analysis, which focuses on detecting and synthesizing patterns of findings, may not be sufficiently robust to derive practical recommendations regarding the optimal design of actionable public health data dashboards; however, we believe this research contributes to advancing additional theory and research on this topic.

Conclusions
Public health data dashboards have significant potential to support evidence-informed policy and practice decisions if they are actionable. The findings of the scoping review reveal a rather fragmented body of scholarship on this topic, which lacks a coherent and systematic focus on the various functions, design elements, causal mechanisms, conditions, and range of outcomes of dashboard use and their relationship with actionability across applications and diverse user groups. Notably absent from current scholarship are explicit theories of action that identify major factors (user-, design-, goal-, and context-related) that facilitate or impede informed use of these tools and explicate the mechanisms that link use with outcomes (eg, users’ knowledge, sensemaking, reflection, decisions, and actions) and ultimately impact practice or policy. Also notably missing are rigorously designed empirical studies that go beyond usability assessments to assess the usefulness of dashboards as a key dimension of actionability, as well as studies that tease out the relative advantages and disadvantages of different dashboard design philosophies and processes and produce practical recommendations. There is a significant opportunity for future research to advance both scholarship and practice regarding the design, deployment, and sustainability of actionable dashboards by addressing these existing gaps.
Acknowledgments
This scoping review was funded by the Robert Wood Johnson Foundation. The views expressed here do not necessarily reflect the views of the foundation, which had no role in the study design, decision to publish, or drafting of the manuscript.
Data Availability
The datasets generated or analyzed during this study are available from the corresponding author on reasonable request.
Authors' Contributions
GS and IY contributed equally to the study concept and design, literature search strategy, screening and analysis of the case studies included in the review, and drafting of the manuscript. MK contributed to pretesting and refinement of the literature search strategy, the initial development of the data extraction instrument, and the screening and analysis of the case studies. All authors read and approved the paper for submission.
Conflicts of Interest
None declared.
PRISMA-ScR checklist.
DOCX File, 108 KB

List and characteristics of case studies included in the scoping review.
DOCX File, 44 KB

References
- Gardner L, Ratcliff J, Dong E, Katz A. A need for open public data standards and sharing in light of COVID-19. Lancet Infect Dis. Apr 2021;21(4):e80. [CrossRef]
- Khodaveisi T, Dehdarirad H, Bouraghi H, Mohammadpour A, Sajadi F, Hosseiniravandi M. Characteristics and specifications of dashboards developed for the COVID-19 pandemic: a scoping review. Z Gesundh Wiss. Feb 02, 2023;32:553-574. [FREE Full text] [CrossRef] [Medline]
- Dasgupta N, Kapadia F. The future of the public health data dashboard. Am J Public Health. Jun 2022;112(6):886-888. [CrossRef] [Medline]
- Schulze A, Brand F, Geppert J, Böl GF. Digital dashboards visualizing public health data: a systematic review. Front Public Health. Dec 2023;11(12):999958. [FREE Full text] [CrossRef] [Medline]
- Few S. Information Dashboard Design: The Effective Visual Communication of Data. Berkeley, CA. O'Reilly Media; 2006.
- Han Q, Nesi P, Pantaleo G, Paoli I. Smart city dashboards: design, development, and evaluation. In: Proceedings of the 2020 IEEE International Conference on Human-Machine Systems. 2020. Presented at: ICHMS '20; September 7-9, 2020:1-4; Rome, Italy. URL: https://ieeexplore.ieee.org/document/9209493 [CrossRef]
- Sarikaya A, Correll M, Bartram L, Tory M, Fisher D. What do we talk about when we talk about dashboards? IEEE Trans Vis Comput Graph. Jan 2019;25(1):682-692. [CrossRef] [Medline]
- Wu E, Villani J, Davis A, Fareed N, Harris DR, Huerta TR, et al. Community dashboards to support data-informed decision-making in the HEALing communities study. Drug Alcohol Depend. Dec 01, 2020;217:108331. [FREE Full text] [CrossRef] [Medline]
- D’Agostino EM, Feger BJ, Pinzon MF, Bailey R, Kibbe WA. Democratizing research with data dashboards: data visualization and support to promote community partner engagement. Am J Public Health. Nov 2022;112(S9):S850-S853. [CrossRef]
- Dixon BE, Dearth S, Duszynski TJ, Grannis SJ. Dashboards are trendy, visible components of data management in public health: sustaining their use after the pandemic requires a broader view. Am J Public Health. Jun 2022;112(6):900-903. [CrossRef] [Medline]
- Thorpe LE, Gourevitch MN. Data dashboards for advancing health and equity: proving their promise? Am J Public Health. Jun 2022;112(6):889-892. [CrossRef] [Medline]
- Vahedi A, Moghaddasi H, Asadi F, Hosseini AS, Nazemi E. Applications, features and key indicators for the development of COVID-19 dashboards: a systematic review study. Inform Med Unlocked. 2022;30:100910. [FREE Full text] [CrossRef]
- Zhou BJ, Liang SW, Monahan KM, Singh GM, Simpson RB, Reedy J, et al. Food and nutrition systems dashboards: a systematic review. Adv Nutr. Jun 01, 2022;13(3):748-757. [FREE Full text] [CrossRef] [Medline]
- Carroll LN, Au AP, Detwiler LT, Fu TC, Painter IS, Abernethy NF. Visualization and analytics tools for infectious disease epidemiology: a systematic review. J Biomed Inform. Oct 2014;51:287-298. [FREE Full text] [CrossRef] [Medline]
- Stieb DM, Huang A, Hocking R, Crouse DL, Osornio-Vargas AR, Villeneuve PJ. Using maps to communicate environmental exposures and health risks: review and best-practice recommendations. Environ Res. Sep 2019;176(12):108518. [FREE Full text] [CrossRef] [Medline]
- Narayan K, Prasad Nayak MD. Need for interactive data visualization in public health practice: examples from India. Int J Prev Med. 2021;12(1):16. [CrossRef]
- Barbazza E, Ivankovic D. What makes COVID-19 dashboards actionable? Lessons learned from international and country-specific studies of COVID-19 dashboards and with dashboard developers in the WHO European Region. In: Proceedings of the 14th European Public Health Conference (Virtual): Public Health Futures in a Changing World; November 10-12, 2021. Eur J Public Health. 2021;31(Suppl 3):488. [FREE Full text] [CrossRef]
- Bos CV, Jansen T, Klazinga NS, Kringos DS. Development and actionability of the Dutch COVID-19 dashboard: descriptive assessment and expert appraisal study. JMIR Public Health Surveill. Oct 12, 2021;7(10):e31161. [FREE Full text] [CrossRef] [Medline]
- Mach KJ, Lemos MC, Meadow AM, Wyborn C, Klenk N, Arnott JC, et al. Actionable knowledge and the art of engagement. Curr Opin Environ Sustain. Feb 2020;42:30-37. [CrossRef]
- Sorapure M. User perceptions of actionability in data dashboards. J Bus Tech Commun. Mar 20, 2023;37(3):253-280. [FREE Full text] [CrossRef]
- Verhulsdonck G, Shah V. Making actionable metrics “actionable”: the role of affordances and behavioral design in data dashboards. J Bus Tech Commun. Oct 29, 2021;36(1):114-119. [CrossRef]
- Ivanković D, Barbazza E, Bos V, Brito Fernandes Ó, Jamieson Gilmore K, Jansen T, et al. Features constituting actionable COVID-19 dashboards: descriptive assessment and expert appraisal of 158 public web-based COVID-19 dashboards. J Med Internet Res. Feb 24, 2021;23(2):e25682. [FREE Full text] [CrossRef] [Medline]
- Matheus R, Janssen M, Maheshwari D. Data science empowering the public: data-driven dashboards for transparent and accountable decision-making in smart cities. Gov Inf Q. Jul 2020;37(3):101284. [FREE Full text] [CrossRef]
- Peters MD, Marnie C, Colquhoun H, Garritty CM, Hempel S, Horsley T, et al. Scoping reviews: reinforcing and advancing the methodology and application. Syst Rev. Oct 08, 2021;10(1):263. [FREE Full text] [CrossRef] [Medline]
- Tricco AC, Lillie E, Zarin W, O'Brien KK, Colquhoun H, Levac D, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. Oct 02, 2018;169(7):467-473. [CrossRef] [Medline]
- Yanovitzky I, Stahlman G, Quow J, Ackerman M, Perry Y, Kim M. National public health dashboards: protocol for a scoping review. JMIR Res Protoc. May 16, 2024;13:e52843. [CrossRef] [Medline]
- Salvador-Oliván JA, Marco-Cuenca G, Arquero-Avilés R. Errors in search strategies used in systematic reviews and their effects on information retrieval. J Med Libr Assoc. Apr 15, 2019;107(2):210-221. [FREE Full text] [CrossRef] [Medline]
- Stryker JE, Wray RJ, Hornik RC, Yanovitzky I. Validation of database search terms for content analysis: the case of cancer news coverage. Journal Mass Commun Q. Jun 01, 2006;83(2):413-430. [CrossRef]
- Gusenbauer M, Haddaway NR. Which academic search systems are suitable for systematic reviews or meta-analyses? Evaluating retrieval qualities of Google Scholar, PubMed, and 26 other resources. Res Synth Methods. Mar 28, 2020;11(2):181-217. [FREE Full text] [CrossRef] [Medline]
- Anderson J, Demeter N, Pasquires M, Wirtz S. Using the CA opioid overdose surveillance dashboard to track opioid overdose deaths. Online J Public Health Inform. 2019;11(1):e62570. [CrossRef]
- Backonja U, Park S, Kurre A, Yudelman H, Heindel S, Schultz M, et al. Supporting rural public health practice to address local-level social determinants of health across Northwest states: development of an interactive visualization dashboard. J Biomed Inform. May 2022;129:104051. [FREE Full text] [CrossRef] [Medline]
- Baxter L, Baynes J, Weaver A, Neale A, Wade T, Mehaffey M, et al. Development of the United States environmental protection agency's facilities status dashboard for the COVID-19 pandemic: approach and challenges. Int J Public Health. 2022;67:1604761. [FREE Full text] [CrossRef] [Medline]
- Bilal U, McCulley E, Li R, Rollins H, Schnake-Mahl A, Mullachery P, et al. Tracking COVID-19 inequities across jurisdictions represented in the big cities health coalition (BCHC): the COVID-19 health inequities in BCHC cities dashboard. Am J Public Health. Jun 2022;112(6):904-912. [CrossRef]
- Bonham-Werling J, DeLonay A, Stephenson K, Hendricks K, Bednarz L, Weiss J, et al. Using statewide electronic health record and influenza vaccination data to plan and prioritize COVID-19 vaccine outreach and communications in Wisconsin communities. Am J Public Health. Dec 2021;111(12):2111-2114. [CrossRef]
- Bors PA, Kemner A, Fulton J, Stachecki J, Brennan LK. HKHC Community Dashboard: design, development, and function of a Web-based performance monitoring system. J Public Health Manag Pract. 2015;21 Suppl 3:S36-S44. [CrossRef] [Medline]
- Brakefield WS, Ammar N, Shaban-Nejad A. An urban population health observatory for disease causal pathway analysis and decision support: underlying explainable artificial intelligence model. JMIR Form Res. Jul 20, 2022;6(7):e36055. [FREE Full text] [CrossRef] [Medline]
- Brinkley JF, Fisher S, Harris MP, Holmes G, Hooper JE, Jabs EW, FaceBase Consortium, et al. The FaceBase Consortium: a comprehensive resource for craniofacial researchers. Development. Jul 15, 2016;143(14):2677-2688. [FREE Full text] [CrossRef] [Medline]
- Chande A, Lee S, Harris M, Hilley T, Andris C, Weitz JS. Real-time, interactive website for US-county level COVID-19 event risk assessment. medRxiv. Preprint posted online August 29, 2020. [FREE Full text] [CrossRef] [Medline]
- Claborn K, Creech S, Conway FN, Clinton NM, Brinkley KT, Lippard E, et al. Development of a digital platform to improve community response to overdose and prevention among harm reduction organizations. Harm Reduct J. Jun 03, 2022;19(1):62. [FREE Full text] [CrossRef] [Medline]
- Cocoros NM, Kirby C, Zambarano B, Ochoa A, Eberhardt K, Rocchio Sb C, et al. RiskScape: a data visualization and aggregation platform for public health surveillance using routine electronic health record data. Am J Public Health. Feb 2021;111(2):269-276. [CrossRef] [Medline]
- Coelho D, Gupta N, Papenhausen E, Mueller K. Patterns of social vulnerability: an interactive dashboard to explore risks to public health on the US county level. In: Proceedings of the 2022 Workshop on Visual Analytics in Healthcare. 2022. Presented at: VAHC '22; November 5, 2022:269-276; Washington, DC. URL: https://ieeexplore.ieee.org/document/10108527
- Cramer EY, Huang Y, Wang Y, Ray EL, Cornell M, Bracher J, et al. US COVID-19 Forecast Hub Consortium. The United States COVID-19 forecast hub dataset. Sci Data. Aug 01, 2022;9(1):462. [FREE Full text] [CrossRef] [Medline]
- Curriero FC, Wychgram C, Rebman AW, Corrigan AE, Kvit A, Shields T, et al. The Lyme and Tickborne disease dashboard: a map-based resource to promote public health awareness and research collaboration. PLoS One. 2021;16(12):e0260122. [FREE Full text] [CrossRef] [Medline]
- D’Agostino EM, Feger BJ, Pinzon MF, Bailey R, Kibbe WA. Democratizing research with data dashboards: data visualization and support to promote community partner engagement. Am J Public Health. Nov 2022;112(S9):S850-S853. [CrossRef]
- Dean 2nd DA, Goldberger AL, Mueller R, Kim M, Rueschman M, Mobley D, et al. Scaling up scientific discovery in sleep medicine: the national sleep research resource. Sleep. May 01, 2016;39(5):1151-1164. [FREE Full text] [CrossRef] [Medline]
- Dixon BE, Grannis SJ, Tachinardi U, Williams JL, McAndrews C, Embί PJ. Daily visualization of statewide COVID-19 healthcare data. In: Proceedings of the 2020 Workshop on Visual Analytics in Healthcare. 2020. Presented at: VAHC '20; November 14-18, 2020:1-3; Baltimore, MD. URL: https://ieeexplore.ieee.org/document/9588507 [CrossRef]
- Dixon BE, Grannis SJ, McAndrews C, Broyles AA, Mikels-Carrasco W, Wiensch A, et al. Leveraging data visualization and a statewide health information exchange to support COVID-19 surveillance and response: application of public health informatics. J Am Med Inform Assoc. Jul 14, 2021;28(7):1363-1373. [FREE Full text] [CrossRef] [Medline]
- Dong E, Du H, Gardner L. An interactive web-based dashboard to track COVID-19 in real time. Lancet Infect Dis. May 2020;20(5):533-534. [FREE Full text] [CrossRef] [Medline]
- Dong E, Ratcliff J, Goyea TD, Katz A, Lau R, Ng TK, et al. The Johns Hopkins University Center for Systems Science and Engineering COVID-19 dashboard: data collection process, challenges faced, and lessons learned. Lancet Infect Dis. Dec 2022;22(12):e370-e376. [FREE Full text] [CrossRef] [Medline]
- Dupuis JR, Bremer FT, Jombart T, Sim SB, Geib SM. mvmapper: interactive spatial mapping of genetic structures. Mol Ecol Resour. Mar 2018;18(2):362-367. [FREE Full text] [CrossRef] [Medline]
- Florez H, Singh S. Online dashboard and data analysis approach for assessing COVID-19 case and death data. F1000Res. 2020;9:570. [FREE Full text] [CrossRef] [Medline]
- Foldy SL, Biedrzycki PA, Baker BK, Swain GR, Howe DS, Gieryn D, et al. The public health dashboard: a surveillance model for bioterrorism preparedness. J Public Health Manag Pract. 2004;10(3):234-240. [CrossRef] [Medline]
- Fosdick BK, Bayham J, Dilliott J, Ebel GD, Ehrhart N. Model-based evaluation of policy impacts and the continued COVID-19 risk at long term care facilities. Infect Dis Model. Sep 2022;7(3):463-472. [FREE Full text] [CrossRef] [Medline]
- Gardner L. The COVID-19 dashboard for real-time tracking of the pandemic: the Lasker Bloomberg public service award. JAMA. Oct 04, 2022;328(13):1295-1296. [CrossRef] [Medline]
- Gesteland PH, Livnat Y, Galli N, Samore MH, Gundlapalli AV. The EpiCanvas infectious disease weather map: an interactive visual exploration of temporal and spatial correlations. J Am Med Inform Assoc. 2012;19(6):954-959. [FREE Full text] [CrossRef] [Medline]
- Ghosh S, Datta A, Tan K, Choi H. SLIDE - a web-based tool for interactive visualization of large-scale - omics data. Bioinformatics. Jan 15, 2019;35(2):346-348. [FREE Full text] [CrossRef] [Medline]
- Gourevitch MN, Athens JK, Levine SE, Kleiman N, Thorpe LE. City-level measures of health, health determinants, and equity to foster population health improvement: the city health dashboard. Am J Public Health. Apr 2019;109(4):585-592. [CrossRef] [Medline]
- Graham SS, Majdik ZP, Barbour JB, Rousseau JF. A dashboard for exploring clinical trials sponsorship and potential virtual monopolies. JAMIA Open. Oct 2021;4(4):ooab089. [FREE Full text] [CrossRef] [Medline]
- Harris JK, Hinyard L, Beatty K, Hawkins JB, Nsoesie EO, Mansour R, et al. Evaluating the implementation of a Twitter-based foodborne illness reporting tool in the city of St. Louis Department of Health. Int J Environ Res Public Health. Apr 24, 2018;15(5):833. [CrossRef]
- Hashmi AZ, Christy J, Saxena S, Factora R. An age-friendly population health dashboard geolocating by clinical and social determinant needs. Health Serv Res. Feb 2023;58 Suppl 1(Suppl 1):44-50. [FREE Full text] [CrossRef] [Medline]
- Hedberg K, Bui LT, Livingston C, Shields LM, Van Otterloo J. Integrating public health and health care strategies to address the opioid epidemic: the Oregon Health Authority's opioid initiative. J Public Health Manag Pract. 2019;25(3):214-220. [CrossRef] [Medline]
- Hilton BN, Horan TA, Burkhard R, Schooley B. SafeRoadMaps: communication of location and density of traffic fatalities through spatial visualization and heat map analysis. Inf Vis. Dec 09, 2010;10(1):82-96. [CrossRef]
- Hswen Y, Yom-Tov E, Murti V, Narsing N, Prasad S, Rutherford GW, et al. Covidseeker: a geospatial temporal surveillance tool. Int J Environ Res Public Health. Jan 27, 2022;19(3):1410. [FREE Full text] [CrossRef] [Medline]
- Hutchinson-Colas JA, Balica A, Chervenak FA, Friedman D, Locke LS, Bachmann G, et al. New Jersey maternal mortality dashboard: an interactive social-determinants-of-health tool. J Perinat Med. Feb 23, 2023;51(2):188-196. [CrossRef] [Medline]
- Ising A, Waller A, Frerichs L. Evaluation of an emergency department visit data mental health dashboard. J Public Health Manag Pract. 2023;29(3):369-376. [CrossRef] [Medline]
- Ji X, Chun A, Geller J. Monitoring public health concerns using Twitter sentiment classifications. In: Proceedings of the 2013 IEEE International Conference on Healthcare Informatics. 2013. Presented at: ICHI '13; September 9-11, 2013:335-344; Philadelphia, PA. URL: https://ieeexplore.ieee.org/document/6680494 [CrossRef]
- Jo G, Habib D, Varadaraj V, Smith J, Epstein S, Zhu J, et al. COVID-19 vaccine website accessibility dashboard. Disabil Health J. Jul 2022;15(3):101325. [FREE Full text] [CrossRef] [Medline]
- Joshi A, Amadi C, Katz B, Kulkarni S, Nash D. A human-centered platform for HIV infection reduction in New York: development and usage analysis of the ending the epidemic (ETE) dashboard. JMIR Public Health Surveill. Dec 11, 2017;3(4):e95. [CrossRef] [Medline]
- Kallenbach L, Whipple S, Bonafede M. EPH60: real-world data dashboard integrating public health and EHR data for identifying COVID-19 vaccination gaps in the US. Value Health. Jul 2022;25(7):S445. [CrossRef]
- Kaul S, Coleman C, Gotz D. A rapidly deployed, interactive, online visualization system to support fatality management during the coronavirus disease 2019 (COVID-19) pandemic. J Am Med Inform Assoc. Dec 09, 2020;27(12):1943-1948. [FREE Full text] [CrossRef] [Medline]
- Kianersi S, Zhang Y, Rosenberg M, Macy JT. Prevalence of e-cigarette use (2016 to 2018) and cigarette smoking (2012 to 2019) among U.S. adults by state: an interactive data visualization dashboard. Drug Alcohol Depend. Jan 01, 2021;218:108361. [CrossRef] [Medline]
- Kostkova P, Mano V, Larson HJ, Schulz WS. Who is spreading rumours about vaccines?: influential user impact modelling in social networks. In: Proceedings of the 2017 International Conference on Digital Health. 2017. Presented at: DH '17; July 2-5, 2017:48-52; London, UK. URL: https://dl.acm.org/doi/10.1145/3079452.3079505 [CrossRef]
- Krause DD. Data lakes and data visualization: an innovative approach to address the challenges of access to health care in Mississippi. Online J Public Health Inform. 2015;7(3):e225. [FREE Full text] [CrossRef] [Medline]
- Laurent AA, Matheson A, Escudero K, Lazaga A. Linking health and housing data to create a sustainable cross-sector partnership. Am J Public Health. Jul 2020;110(S2):S222-S224. [CrossRef] [Medline]
- Le P, Casper M, Vaughan AS. A dynamic visualization tool of local trends in heart disease and stroke mortality in the United States. Prev Chronic Dis. Sep 08, 2022;19:E57. [FREE Full text] [CrossRef] [Medline]
- Lechner C, Rumpler M, Dorley MC, Li Y, Ingram A, Fryman H. Developing an online dashboard to visualize performance data-tennessee newborn screening experience. Int J Neonatal Screen. Sep 02, 2022;8(3):23. [FREE Full text] [CrossRef] [Medline]
- Lee MT, Lin FC, Chen ST, Hsu WT, Lin S, Chen TS, et al. Web-based dashboard for the interactive visualization and analysis of national risk-standardized mortality rates of sepsis in the US. J Med Syst. Jan 11, 2020;44(2):54. [CrossRef] [Medline]
- Liu S, Wall E, Patel SA, Park Y. COVID-19 health equity dashboard - addressing vulnerable populations. Open Science Framework. 2020. URL: https://osf.io/preprints/osf/2frha [accessed 2024-05-29]
- Marshall BD, Yedinak JL, Goyer J, Green TC, Koziol JA, Alexander-Scott N. Development of a statewide, publicly accessible drug overdose surveillance and information system. Am J Public Health. Nov 2017;107(11):1760-1763. [CrossRef] [Medline]
- Mast TC, Heyman D, Dasbach E, Roberts C, Goveia MG, Finelli L. Planning for monitoring the introduction and effectiveness of new vaccines using real-word data and geospatial visualization: an example using rotavirus vaccines with potential application to SARS-CoV-2. Vaccine X. Apr 2021;7:100084. [FREE Full text] [CrossRef] [Medline]
- Mayfield CA, Gigler ME, Snapper L, Jose J, Tynan J, Scott VC, et al. Using cloud-based, open-source technology to evaluate, improve, and rapidly disseminate community-based intervention data. J Am Med Inform Assoc. Nov 01, 2020;27(11):1741-1746. [FREE Full text] [CrossRef] [Medline]
- Mirhaji P, Richesson R, Turley J, Zhang J, Smith J. Public health situation awareness: toward a semantic approach. In: Proceedings of the 2004 Conference on Multisensor, Multisource Information Fusion: Architectures, Algorithms, and Applications. 2004. Presented at: SPIE '04; April 12, 2004:11; Orlando, FL. URL: https://www.spiedigitallibrary.org/conference-proceedings-of-spie/5434/0000/Public-health-situation-awareness-toward-a-semantic-approach/10.1117/12.541189.short [CrossRef]
- Naughton CC, Roman FA, Alvarado AG, Tariqi AQ, Deeming MA, Kadonsky KF, et al. Show us the data: global COVID-19 wastewater monitoring efforts, equity, and gaps. FEMS Microbes. 2023;4:xtad003. [FREE Full text] [CrossRef] [Medline]
- Ngai S, Sell J, Baig S, Iqbal M, Eddy M, Culp G, et al. Built by epidemiologists for epidemiologists: an internal COVID-19 dashboard for real-time situational awareness in New York City. JAMIA Open. Jul 2022;5(2):ooac029. [FREE Full text] [CrossRef] [Medline]
- Ninkov A, Sedig K. VINCENT: a visual analytics system for investigating the online vaccine debate. Online J Public Health Inform. 2019;11(2):e5. [FREE Full text] [CrossRef] [Medline]
- Pace C, Fencl A, Baehner L, Lukacs H, Cushing LJ, Morello-Frosch R. The drinking water tool: a community-driven data visualization tool for policy implementation. Int J Environ Res Public Health. Jan 27, 2022;19(3):80. [FREE Full text] [CrossRef] [Medline]
- Patel J, Dzomba B, Vo H, Von Nessen-Scanlin S, Siminoff L, Wu H. A health IT-empowered integrated platform for secure vaccine data management and intelligent visual analytics and reporting. In: Proceedings of the 15th International Joint Conference on Biomedical Engineering Systems and Technologies. 2022. Presented at: BIOSTEC '22; February 9-11, 2022:522-531; Virtual Event. URL: https://www.scitepress.org/Papers/2022/108437/108437.pdf [CrossRef]
- Patrick R, Greenberg A, Magnus M, Opoku J, Kharfen M, Kuo I. Development of an HIV testing dashboard to complement the HIV care continuum among MSM, PWID, and heterosexuals in Washington, DC, 2007-2015. J Acquir Immune Defic Syndr. Jul 01, 2017;75 Suppl 3(Suppl 3):S397-S407. [FREE Full text] [CrossRef] [Medline]
- Peddireddy AS, Xie D, Patil P, Wilson ML, Machi D, Venkatramanan S, et al. From 5Vs to 6Cs: operationalizing epidemic data management with COVID-19 surveillance. In: Proceedings of the 2020 IEEE International Conference on Big Data. 2020. Presented at: Big Data '20; December 10-13, 2020:1380-1387; Atlanta, GA. URL: https://ieeexplore.ieee.org/document/9378435 [CrossRef]
- Penaia CS, Morey BN, Thomas KB, Chang RC, Tran VD, Pierson N, et al. Disparities in native Hawaiian and pacific islander COVID-19 mortality: a community-driven data response. Am J Public Health. Jul 2021;111(S2):S49-S52. [FREE Full text] [CrossRef] [Medline]
- Petroni M, Howard S, Howell IB, Collins MB. NYenviroScreen: an open-source data driven method for identifying potential environmental justice communities in New York State. Environ Sci Policy. Oct 2021;124:348-358. [CrossRef]
- Reid NE, Johnson-Arbor K, Smolinske S, Litovitz T. 2020 webPOISONCONTROL data summary. Am J Emerg Med. Apr 2022;54:184-195. [FREE Full text] [CrossRef] [Medline]
- Runnels P, Coran JJ, Goldman ML, Pronovost P. Utilizing a dashboard to promote system-wide value in behavioral health. Popul Health Manag. Aug 2021;24(4):427-429. [CrossRef] [Medline]
- Ryan K, Pillai P, Remington PL, Malecki K, Lindberg S. Development of an obesity prevention dashboard for Wisconsin. WMJ. Nov 2016;115(5):224-227. [FREE Full text] [Medline]
- Shaheen AW, Ciesco E, Johnson K, Kuhnen G, Paolini C, Gartner G. Interactive, on-line visualization tools to measure and drive equity in COVID-19 vaccine administrations. J Am Med Inform Assoc. Oct 12, 2021;28(11):2451-2455. [FREE Full text] [CrossRef] [Medline]
- Shi A, Gaynor SM, Dey R, Zhang H, Quick C, Lin X. COVID-19 Spread mapper: a multi-resolution, unified framework and open-source tool. Bioinformatics. Apr 28, 2022;38(9):2661-2663. [FREE Full text] [CrossRef] [Medline]
- Shi Q, Herbert C, Ward DV, Simin K, McCormick BA, Ellison Iii RT, et al. COVID-19 variant surveillance and social determinants in central Massachusetts: development study. JMIR Form Res. Jun 13, 2022;6(6):e37858. [FREE Full text] [CrossRef] [Medline]
- Smith KC, Chawla DG, Dhillon BK, Ji Z, Vita R, van der Leest EC, Human Immunology Project Consortium (HIPC), et al. A curated collection of human vaccination response signatures. Sci Data. Nov 08, 2022;9(1):678. [FREE Full text] [CrossRef] [Medline]
- Sopan A, Noh AS, Karol S, Rosenfeld P, Lee G, Shneiderman B. Community Health Map: a geospatial and multivariate data visualization tool for public health datasets. Gov Inf Q. Apr 2012;29(2):223-234. [CrossRef]
- Stone AB, Jones MR, Rao N, Urman RD. A dashboard for monitoring opioid-related adverse drug events following surgery using a national administrative database. Am J Med Qual. 2019;34(1):45-52. [CrossRef] [Medline]
- Stone G, Lekht A, Burris N, Williams C. Data collection and communications in the public health response to a disaster: rapid population estimate surveys and the Daily Dashboard in post-Katrina New Orleans. J Public Health Manag Pract. 2007;13(5):453-460. [CrossRef] [Medline]
- Sullivan PS, Woodyatt C, Koski C, Pembleton E, McGuinness P, Taussig J, et al. A data visualization and dissemination resource to support HIV prevention and care at the local level: analysis and uses of the AIDSVu public data resource. J Med Internet Res. Oct 23, 2020;22(10):e23173. [FREE Full text] [CrossRef] [Medline]
- Sullivan PS, Woodyatt CR, Kouzouian O, Parrish KJ, Taussig J, Conlan C, et al. America's HIV epidemic analysis dashboard: protocol for a data resource to support ending the HIV epidemic in the United States. JMIR Public Health Surveill. Feb 10, 2022;8(2):e33522. [FREE Full text] [CrossRef] [Medline]
- Suri A, Askari M, Calder J, Branas C, Rundle A. A real-time COVID-19 surveillance dashboard to support epidemic response in Connecticut: lessons from an academic-health department partnership. J Am Med Inform Assoc. Apr 13, 2022;29(5):958-963. [FREE Full text] [CrossRef] [Medline]
- Thompson MP, Belval EJ, Dilliott J, Bayham J. Supporting wildfire response during a pandemic in the United States: the COVID-19 incident risk assessment tool. Front For Glob Change. May 21, 2021;4:23. [CrossRef]
- Tsuchida RE, Haggins AN, Perry M, Chen CM, Medlin RP, Meurer WJ, et al. Developing an electronic health record-derived health equity dashboard to improve learner access to data and metrics. AEM Educ Train. Sep 2021;5(Suppl 1):S116-S120. [FREE Full text] [CrossRef] [Medline]
- Valdiserri RO, Sullivan PS. Data visualization promotes sound public health practice: the AIDSvu example. AIDS Educ Prev. Feb 2018;30(1):26-34. [CrossRef] [Medline]
- Wahi MM, Dukach N. Visualizing infection surveillance data for policymaking using open source dashboarding. Appl Clin Inform. May 2019;10(3):534-542. [FREE Full text] [CrossRef] [Medline]
- Williams AJ, Lambert JC, Thayer K, Dorne JL. Sourcing data on chemical properties and hazard data from the US-EPA CompTox Chemicals Dashboard: a practical guide for human risk assessment. Environ Int. Sep 2021;154:106566. [FREE Full text] [CrossRef] [Medline]
- Wilson GM, Ball MJ, Szczesny P, Haymann S, Polyak M, Holmes T, et al. Health intelligence atlas: a core tool for public health intelligence. Appl Clin Inform. Aug 2021;12(4):944-953. [FREE Full text] [CrossRef] [Medline]
- Wissel BD, Van Camp PJ, Kouril M, Weis C, Glauser TA, White PS, et al. An interactive online dashboard for tracking COVID-19 in U.S. counties, cities, and states in real time. J Am Med Inform Assoc. Jul 01, 2020;27(7):1121-1125. [FREE Full text] [CrossRef] [Medline]
- Wong AK, Kim H, Charpignon ML, Carvalho L, Monares-Zepeda E, Madushani RW, et al. A method to explore variations of ventilator-associated event surveillance definitions in large critical care databases in the United States. Crit Care Explor. Nov 2022;4(11):e0790. [FREE Full text] [CrossRef] [Medline]
- Wong T, Brovman EY, Rao N, Tsai MH, Urman RD. A dashboard prototype for tracking the impact of diabetes on hospital readmissions using a national administrative database. J Clin Med Res. Jan 2020;12(1):18-25. [FREE Full text] [CrossRef] [Medline]
- Yang J, Tsou M, Jung C, Allen C, Spitzberg BH, Gawron JM, et al. Social media analytics and research testbed (SMART): exploring spatiotemporal patterns of human dynamics with geo-targeted social media messages. Big Data Soc. Jun 24, 2016;3(1):95002. [CrossRef]
- Yu Z, Pepe K, Rust G, Ramirez-Marquez JE, Zhang S, Bonnet B. Patient-provider geographic map: an interactive visualization tool of patients' selection of health care providers. In: Proceedings of the 2017 IEEE Workshop on Visual Analytics in Healthcare. 2017. Presented at: VAHC '17; October 1, 2017:1-8; Phoenix, AZ. URL: https://ieeexplore.ieee.org/document/8387494 [CrossRef]
- Zheng S, Edwards JR, Dudeck MA, Patel PR, Wattenmaker L, Mirza M, et al. Building an interactive geospatial visualization application for national health care-associated infection surveillance: development study. JMIR Public Health Surveill. Jul 30, 2021;7(7):e23528. [FREE Full text] [CrossRef] [Medline]
- Zhu Z, Meng K, Caraballo J, Jaradat I, Shi X, Zhang Z, et al. A dashboard for mitigating the COVID-19 misinfodemic. In: Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics on System Demonstrations. 2021. Presented at: EACL '21; April 19-23, 2021:99-105; Virtual Event. URL: https://aclanthology.org/2021.eacl-demos.12.pdf [CrossRef]
- Loukissas YA. Taking Big Data apart: local readings of composite media collections. Inf Commun Soc. Jul 31, 2016;20(5):651-664. [CrossRef]
- Viswanath K, McCloud RF, Lee EWJ, Bekalu MA. Measuring what matters: data absenteeism, science communication, and the perpetuation of inequities. Ann Am Acad Pol Soc Sci. May 05, 2022;700(1):208-219. [CrossRef]
- Hayes AF, Krippendorff K. Answering the call for a standard reliability measure for coding data. Commun Methods Meas. Apr 2007;1(1):77-89. [CrossRef]
- Scheirer MA, Dearing JW. An agenda for research on the sustainability of public health programs. Am J Public Health. Nov 2011;101(11):2059-2067. [CrossRef]
- Etzkowitz H, Leydesdorff L. The dynamics of innovation: from National Systems and “Mode 2” to a Triple Helix of university–industry–government relations. Res Policy. Feb 2000;29(2):109-123. [CrossRef]
- Bachmann P, Frutos-Bencze D. R and D and innovation efforts during the COVID-19 pandemic: the role of universities. J Innov Knowl. Oct 2022;7(4):100238. [CrossRef]
- Shankar K, Jeng W, Thomer A, Weber N, Yoon A. Data curation as collective action during COVID‐19. J Assoc Inf Sci Technol. Sep 02, 2020;72(3):280-284. [CrossRef]
Abbreviations
CDC: Centers for Disease Control and Prevention
MeSH: Medical Subject Headings
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
PRISMA-ScR: Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews
Edited by A Mavragani; submitted 11.08.24; peer-reviewed by E Shevni, MD, CB Wilson; comments to author 13.03.25; revised version received 20.03.25; accepted 20.03.25; published 21.05.25.
Copyright©Gretchen Stahlman, Itzhak Yanovitzky, Miriam Kim. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 21.05.2025.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.