This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.
Dashboards can support data-driven quality improvements in health care. They visualize data in ways intended to ease cognitive load and support data comprehension, but how they are best integrated into working practices needs further investigation.
This paper reports the findings of a realist evaluation of a web-based quality dashboard (QualDash) developed to support the use of national audit data in quality improvement.
QualDash was co-designed with data users and installed in 8 clinical services (3 pediatric intensive care units and 5 cardiology services) across 5 health care organizations (sites A-E) in England between July and December 2019. Champions were identified to support adoption. Data to evaluate QualDash were collected between July 2019 and August 2021 and consisted of 148.5 hours of observations including hospital wards and clinical governance meetings, log files that captured the extent of use of QualDash over 12 months, and a questionnaire designed to assess the dashboard’s perceived usefulness and ease of use. Guided by the principles of realist evaluation, data were analyzed to understand how, why, and in what circumstances QualDash supported the use of national audit data in quality improvement.
The observations revealed that variation across sites in the amount and type of resources available to support data use, alongside staff interactions with QualDash, shaped its use and impact. Sites resourced with skilled audit support staff and established reporting systems (sites A and C) continued to use existing processes to report data. Several constraints influenced the use of QualDash at these sites, including that some dashboard metrics were not configured in line with user expectations and that staff were not fully aware of how QualDash could be used to facilitate their work. In less well-resourced services, QualDash automated parts of the reporting process, streamlining the work of audit support staff (site B), and, in some cases, highlighted issues with data completeness that the service worked to address (site E). Questionnaire responses received from 23 participants indicated that QualDash was perceived as useful and easy to use despite its variable use in practice.
Web-based dashboards have the potential to support data-driven improvement, providing access to visualizations that can help users address key questions about care quality. Findings from this study point to ways in which dashboard design might be improved to optimize use and impact in different contexts; this includes using data meaningful to stakeholders in the co-design process and actively engaging staff knowledgeable about current data use and routines in the scrutiny of the dashboard metrics and functions. In addition, consideration should be given to the processes of data collection and upload that underpin the quality of the data visualized and consequently its potential to stimulate quality improvement.
RR2-10.1136/bmjopen-2019-033208
Health care organizations are complex systems with variations in patient care and outcomes observed nationally and internationally [
Dashboards offer the potential to overcome some of the constraints reported in the use of national audit data. They use visualization techniques intended to ease cognitive load and improve data comprehension [
QualDash is a customizable, web-based dashboard that was designed and evaluated using data from 2 national audits: The Myocardial Ischemia National Audit Project (MINAP) and the Pediatric Intensive Care Audit Network (PICANet). MINAP collects data spanning 130 data fields, contributed by all hospitals in England, Wales, and Northern Ireland that admit patients with acute coronary syndromes [
Standardized mortality ratio: Ratio between the observed number of deaths and the number of deaths that would be expected, given the severity of patients’ illness at admission
Unplanned extubations—accidental removal of breathing tubes
Emergency readmissions within 48 hours of discharge
In comparison, MINAP audits services for heart attack and includes metrics such as:
Call-to-balloon time: Time between ambulance call and primary percutaneous coronary intervention treatment: Target 90 minutes
Door-to-angiography time: Time from arrival at hospital to diagnostic procedure: Target 72 hours
Discharge on gold standard drugs: Proportion of patients who received all secondary prevention medication for which they were eligible
These audits were chosen for dashboard development to increase the generalizability of the findings beyond a single audit.
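Where timestamps are available, the time-based MINAP metrics above reduce to simple interval checks against their targets. The sketch below is illustrative only; the field names are hypothetical and do not reflect the actual MINAP data dictionary.

```python
from datetime import datetime, timedelta

# Hypothetical patient record; field names are illustrative, not MINAP's.
record = {
    "ambulance_call": datetime(2019, 8, 1, 9, 0),
    "balloon_inflation": datetime(2019, 8, 1, 10, 25),  # primary PCI treatment
    "hospital_arrival": datetime(2019, 8, 1, 9, 40),
    "angiography": datetime(2019, 8, 2, 8, 0),
}

def meets_target(start, end, target):
    """True if the interval between two recorded events is within target."""
    return (record[end] - record[start]) <= target

# Call-to-balloon: ambulance call to primary PCI, target 90 minutes.
call_to_balloon_ok = meets_target("ambulance_call", "balloon_inflation",
                                  timedelta(minutes=90))
# Door-to-angiography: hospital arrival to diagnostic procedure, target 72 hours.
door_to_angio_ok = meets_target("hospital_arrival", "angiography",
                                timedelta(hours=72))
```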
Dashboard development was conducted within 5 NHS acute health care organizations and included interviews with 54 staff members, a workshop with audit suppliers, and 2 co-design workshops with clinicians and managers from one organization [
QualDash development work.
Development work provided insight into how national audit data were used—
Four Pediatric Intensive Care Audit Network Qualcards—displays simulated data.
Each Qualcard was designed to address a sequence of user tasks related to a single metric. On first opening QualDash, the user would see the main visualization for each metric displayed by month over a year; essentially, these could be used to address key care quality questions quickly and easily, for example, referring to the Mortality Qualcard in
Expanded mortality Qualcard showing method of admission in the subview pie chart—displays simulated data.
QualDash included functions to export visualizations and raw data and incorporated customization features, for example, users could select which variables were displayed in expanded Qualcard subviews, and dashboard authors (local staff with sufficient information technology [IT] experience) were able to configure service-specific variants of a metric, that is, add or adapt existing Qualcards. Usability evaluation of the prototype dashboard (
The timeliness of data has also been reported to be important for data use in quality improvement. Managing the dashboard on a central server would have required requesting data from the national audit supplier and then waiting for those data to be sent to the research team before being uploaded to QualDash. Instead, to visualize data that were as timely as possible, QualDash was installed on site servers at each participating organization to enable staff to upload local data as needed. QualDash was accessible via Google Chrome on the service PCs. However, locating QualDash on local servers meant the dashboard displayed site data only; the metrics were relevant and used by services nationally but did not include national comparator data. Therefore, rather than comparing performance against peer organizations, sites compared service performance month-by-month and against national targets, for example, time to treatment targets.
QualDash was installed in 5 organizations including 3 large teaching hospital trusts that offer specialist services, including PICUs, and 2 smaller district general hospitals. Within these sites, QualDash was evaluated in services that participated in the 2 national audits of interest: PICANet and MINAP. District general hospitals do not provide the PICU service for which PICANet operates; therefore, the PICANet dashboard was available within PICUs in the 3 teaching hospital trusts and the MINAP dashboard was available in all 5 organizations.
QualDash—data flow and anticipated front-end use.
To promote confidence in the accuracy of the data displayed, site data were validated by the research team on upload, after which site staff were able to upload data at their convenience. To support the adoption of QualDash and informed by the adoption focus groups, the research team identified
QualDash was conceptualized as a sociotechnical intervention [
To assess the extent of use of QualDash across organizations and services, log files automatically recorded information about the use of QualDash at each site from the time of installation (June and July 2019) to July 2020. The information captured included the type of user (job title), data used (audit, year), time spent interacting with different Qualcards, and functionality used. Data from the log files were postprocessed in Microsoft Excel, removing entries generated by members of the research team undertaking on-site testing of the software. Using the timestamps for each login, the number of sessions per audit per month from installation to the end of July 2020 was determined for each site. Where a login occurred less than 20 minutes after the last timestamp and appeared to be the same user (based on the audit and year selected and the job title entered), this was treated as a continuation of the previous session.
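The session-counting rule described above can be sketched as follows. The 20-minute threshold and the fields used to identify a user (job title, audit, year) come from the text; the log format itself is illustrative.

```python
from datetime import datetime, timedelta

SESSION_GAP = timedelta(minutes=20)

def count_sessions(logins):
    """Count sessions from login events of (timestamp, job_title, audit, year).

    A login less than 20 minutes after the previous timestamp, with the
    same job title, audit, and year, is treated as a continuation of the
    previous session; otherwise it starts a new session.
    """
    sessions = 0
    last = None  # previous event as (timestamp, job_title, audit, year)
    for ts, job, audit, year in sorted(logins, key=lambda e: e[0]):
        same_user = last is not None and (job, audit, year) == last[1:]
        if last is None or not same_user or ts - last[0] >= SESSION_GAP:
            sessions += 1
        last = (ts, job, audit, year)
    return sessions

# Illustrative log entries (not real study data):
logins = [
    (datetime(2019, 8, 1, 9, 0), "audit clerk", "PICANet", 2019),
    (datetime(2019, 8, 1, 9, 10), "audit clerk", "PICANet", 2019),  # continuation
    (datetime(2019, 8, 1, 11, 0), "audit clerk", "PICANet", 2019),  # new session
    (datetime(2019, 8, 1, 11, 5), "cardiologist", "MINAP", 2019),   # different user
]
```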
To understand how, why, and where QualDash was used, observations of practice were conducted at study sites between August 2019 and February 2020, in spaces where QualDash had the potential to be used in the ways expressed in the CMOcs, including:
Hospital wards and units (eg, admission wards, catheterization laboratories, acute coronary care units, and PICUs)
Clinical governance and directorate meetings
Offices used by audit support and clinical staff for data management and administration activities
Informal interviews relating to the activities observed
An observation schedule was developed to help guide data collection, including prompts to capture (1) service processes and routines for monitoring care quality, (2) types of data and technologies used and by whom, and (3) how, why, and the extent to which QualDash was integrated within these routines. Local collaborators within each service facilitated observations by introducing researchers to ward matrons and senior nurses. These staff provided permission for the observations to take place and notified their colleagues. Observations were conducted by 2 researchers who recorded the observations in handwritten notes. The observations were nonparticipatory, but researchers interacted with participants via informal interviews where an explanation of the activity observed was provided by the participant. The 2 researchers initially conducted observations together to develop a shared understanding of how to record fieldnotes, after which each researcher aimed to conduct at least one observation per service.
Hours and type of observation performed in study sites.
Sites A-C are teaching hospital trusts (MINAPa and PICANetb); sites D and E are district general hospitals (MINAP).

| Observation type | A | B | C | Dc | E | Total hours |
| --- | --- | --- | --- | --- | --- | --- |
| QualDash installation and customization meetings | 9 | 8 | 16 | 7 | 6.5 | 46.5 |
| Ward and “back office” observations, including data collection and validation | 25 | 24.5 | 17 | 2 | 13.5 | 82 |
| Meeting observations and informal interviews | 7.5 | 3 | 4 | 3 | 2.5 | 20 |
| Total hours of observation | 41.5 | 35.5 | 37 | 12 | 22.5 | 148.5 |
aMINAP: Myocardial Ischemia National Audit Project.
bPICANet: Pediatric Intensive Care Audit Network.
cQualDash installed in December 2019 because of delays in site approval. This explains why the first use of QualDash at this site, discussed in the results, begins in January 2020, much later than at other sites.
Observation notes were transcribed by researchers as soon as possible after the observation and analyzed following (and adapting) the 5 stages of framework analysis [
At the end of the evaluation period, a questionnaire based on the technology acceptance model (TAM) was administered [
The questionnaire data were analyzed to assess the perceived usefulness of QualDash and its ease of use. Microsoft Excel was used to produce summary statistics for each TAM statement, calculating the mean and range for all participants and for those who reported having used QualDash, broken down by audit and role. In line with previous studies, ratings of 3 or higher were considered to indicate a positive response.
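The summary statistics described above amount to a mean and range per TAM statement, with ratings of 3 or higher read as positive. A minimal sketch, assuming hypothetical statement wordings and a numeric rating scale:

```python
from statistics import mean

# Illustrative ratings for two TAM statements; the wordings, values, and
# scale are hypothetical, not the study's actual questionnaire data.
ratings = {
    "QualDash is easy to use": [4, 5, 3, 4, 2],
    "QualDash is useful in my work": [3, 4, 2, 5, 3],
}

def summarize(ratings):
    """Mean and range per statement; a mean of 3 or higher is treated
    as a positive response, following the threshold described above."""
    summary = {}
    for statement, values in ratings.items():
        m = mean(values)
        summary[statement] = {
            "mean": round(m, 2),
            "range": (min(values), max(values)),
            "positive": m >= 3,
        }
    return summary
```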
Ethical approval for this study was provided by the University of Leeds School of Healthcare Research Ethics Committee (approval # HREC16–044). Approval was received from the Health Research Authority and each of the 5 sites.
Extent of QualDash use by site and clinical service. PICU: pediatric intensive care unit.
Uses of QualDash by clinical service and month.
Below, we use the observation data to understand the reasons behind these variations and to refine the CMOcs in
The services observed included PICUs, where clinical staff cared for children, often on ventilators to assist their breathing, and cardiology wards for adults in admission for, and treatment or recovery from, acute coronary syndrome. Quality monitoring processes varied across services but included ward activities directed at the care of individual patients and directorate and clinical governance meetings directed at service-level monitoring of quality and safety.
Ward observations were conducted at or around the nurses’ stations. As might be expected, staff routines focused on the needs of individual patients, for example, ward staff shared information about patient care via handovers (summaries at shift changes), ward rounds, and clinical team
The clinical governance (service) and directorate (specialty) meetings were observed monthly or quarterly. These meetings were attended by a range of staff, including nurses, physicians, junior doctors, and service managers. In comparison to the wards, observations revealed that national audit data were used in these meetings at certain sites, sometimes via QualDash.
Audit data collection, management, and use. MINAP: Myocardial Ischemia National Audit Project; National Clinical Audit; PICANet: Pediatric Intensive Care Audit Network; PICU: pediatric intensive care unit.
The CMOcs hypothesize that, depending on the context, QualDash would be used either to facilitate the use of, or introduce, national audit data into service monitoring processes, such as the clinical governance meetings depicted in
Sites A and C (PICU and Cardiology units) were resourced with dedicated audit support staff who produced performance reports when requested by clinicians and managers. Reports were produced using local databases and systems such as Microsoft Access or Microsoft Excel, where national audit data were stored before upload to audit suppliers. Observations revealed that, within these services, audit support staff continued to use existing systems post installation of QualDash. In site A Cardiology unit, for example, the nurse dedicated to audit participation did not have Google Chrome installed on their PC, despite repeated requests to their IT department, so they were unable to open QualDash to support reporting requests. Even in services where QualDash was accessible via Google Chrome, however, audit support staff continued to use existing systems, as this observation of an audit clerk updating monthly reports using their Microsoft Access database highlights:
We discuss a Qualcard that shows invasive ventilation and bed days. The Audit Clerk checks the figures in the Qualcard against their visualizations for the same months, and they do not match. They explain that this mismatch might be because QualDash shows bed days by admission date rather than bed days across the months of admission. In one month, the Qualcard showed that there were more invasive ventilation days than there were bed days (which is impossible). Furthermore, the data in QualDash have not been updated since the installation (in July); therefore, the data in the Access database are the only means of updating the reports discussed today.
The use of simulated data in the co-design workshops meant that staff were unable to scrutinize the dashboard with data familiar and meaningful to them at that time. It was, therefore, only post installation, when using QualDash to explore site-specific data, that it became apparent that some metrics were not configured as typically reported by the service. The issue described above was addressed by incorporating additional functionality within the dashboard, allowing Qualcards to be configured to present information in their respective months. The dashboard customization feature also supported metric reconfiguration. However, not all locally reported metrics could be configured in QualDash. For example, the number of accidental extubations (removal of a breathing tube) on PICUs per month could not be configured because of the complex way in which the data were captured as part of the audit. The measure was provided as the total number of incidents across the year in the subview pie chart but could not be used to facilitate monthly reporting. Similarly, in the site C Cardiology unit, a MINAP project assistant commented:
I would really love to be able to use it [QualDash], but it just does not show me, you know, everything I need, and I would like to be able to say to you, yeah, I use it all the time to add. But [...] it just does not do exactly what I need. But maybe if you spoke to someone like, the lead consultant [physician], and you showed him what was available, and you said, does this answer, you know, some of your queries? However, like the thing he asked me to do, to look at the patients that have failed [within specific time points].
The request to which the MINAP assistant refers was to see the data of patients treated between specific hours in one day. In the work undertaken to capture requirements for QualDash, tasks using such a timeframe were not identified; thus, this functionality was not included in QualDash.
In comparison to the dashboard itself, user knowledge of the audit data set and understanding of QualDash’s functionality also constrained use of the dashboard; an audit clerk in site C reported:
The Audit Clerk says that the service database contains more detail than PICANet, for example, when they produce reports for the business meeting, [...], they include information about what ward the patients have been referred from and where the patient was discharged. They also get “ad hoc” requests for data on a regular basis.
In fact, QualDash (and the PICANet data set from which it was derived) provided referral and discharge information in Qualcard subviews, but this participant was unaware that the dashboard could support this data need. Even so, an additional influence on staff choice to use QualDash in these well-resourced services was that the data displayed were not as timely as those stored in local systems. This was due to challenges in establishing routines for uploading data into QualDash, including having the necessary software installed on service PCs to complete the upload. Therefore, despite the manual work involved in report production, audit support staff in these services chose to use existing systems because they provided timely data and because staff had the expertise to configure metrics reported routinely and on request by clinical team members.
In comparison to sites A and C, the audit clerk in site B PICU consulted hard copy and electronic data sources to submit quarterly reports to NHS England (a body of the Department of Health responsible for commissioning NHS services). Data sources included the ward diary, admissions books, and the patient administration system, and they also consulted with clinicians to retrieve data. QualDash automatically calculated measures, including patient bed days, 48-hour readmission, and mortality rates. Consequently, this audit clerk chose to use QualDash as it facilitated and streamlined their reporting, as hypothesized in the CMOc. Extent of use of QualDash in this site is reflected in
An important observation was that, unlike other sites, this audit clerk developed a routine for uploading data into QualDash in a timely way. Therefore, where staff understood and experienced the benefits of using QualDash in their work routines, they supported the data upload process necessary to maintain dashboard use in practice.
Sites D and E were not resourced by staff dedicated to national audit data collection and management. These sites represented an opportunity for QualDash to be used to integrate national audit data within routine monitoring processes. QualDash was observed to be used within these sites, but not in the ways expressed in the CMOcs. For example, in site E, a cardiologist (and champion) was initially keen to use national audit data via QualDash. However, at the postinstallation site visit, the cardiologist noted:
The cardiologist says that they have not used QualDash since we first emailed the link to them in the installation email. When asked why, they explain that they have not yet integrated QualDash into their routines. They say that they are not accustomed to having access to MINAP data in this way. Therefore, they sometimes forget that it is there to look at and use QualDash. However, they say that there is no reason why they should not use it, and that one way in which they could try and integrate it into their practice is in their performance meetings where such data could be reviewed.
Despite being involved in the development and dissemination of QualDash as a champion, this physician and their service had managed without routine use of national audit data, and unfamiliarity with access to the data appeared to constrain their interactions with QualDash. The cardiologist added QualDash to the agenda for the next directorate meeting, which was observed as part of the data collection. During the meeting, the functions of QualDash were demonstrated to attendees, including physicians and nurses, and some tentative impacts were highlighted:
The cardiologist says that when QualDash was first installed, it enabled them to see where there was missing data and the audit team cleaned up the data in response. Therefore, QualDash has already helped the service to get their data a bit cleaner.
The visualizations in QualDash enabled the audit team to identify missing data and work to address the gaps. However, the meeting chair queried the accuracy of the data displayed:
The meeting chair queries the displayed data. They say that all the graphs steadily decline and that “that cannot be accurate.” They ask [the Cardiologist who acted as QualDash Champion] why the graphs are like that, who responds that the audit team must be behind with the data collection. The Chair says that it is “amazing” to have access to real-time data, and that they should be able to make more use of MINAP data using QualDash.
The meeting attendees responded positively to QualDash’s potential for supporting data use, but the data currently displayed could not be used optimally for quality improvement, because they were incomplete. This finding highlights the importance of the work underpinning front-end use of the dashboard, that is, data collection and upload, and the significance of the dedicated audit support staff in sites better resourced to use national audit data for quality improvement.
In site D, queries about the metric configurations discussed during a postinstallation meeting (QualDash was installed at this site in December 2019) raised concerns about the use and dissemination of the dashboard.
[Referring to a configuration of a complicated measure—patients with a certain diagnosis discharged on “gold standard” (up to five) drugs by month and for which the research team has received conflicting definitions] Cardiologists comment that they have come across this sort of thing before with new systems that are developed by people outside their unit: the data going in are correct, but it does not make sense, and that time is needed to check it to ensure that the data are interpreted correctly. They know what their own data should look like, though people outside the unit, including managers from their trust, do not always have that understanding and can misinterpret data that are not displayed correctly.
On the basis of previous experience with similar initiatives, the champions requested that all Qualcards undergo a
In total, 23 responses were received from the adapted TAM questionnaire, 18 from participants who had experience using QualDash.
Although the average rating for all statements was positive, ratings were higher among those who had used QualDash and higher for ease of use than for usefulness. The questionnaire was anonymous, so we were unable to link responses to sites, but we were able to analyze them according to role, which revealed that audit support staff and doctors found QualDash more useful than nursing staff did. On the basis of observations of ward practice, this finding possibly reflects nurses’ more frequent engagement with data that inform direct patient care, as opposed to service-level data. Interestingly, however, observations suggested that senior nurses may find QualDash useful, as a pediatrician (and champion) commented:
I ask the physician if they see a role for QualDash in the PICU from what we have discussed. The physician says that QualDash is an “amalgamate of retrospective data” but it may help prospective planning, for example, highlighting high-risk patients. They say that they think senior nurses, bands 6 and 7, may be interested in the dashboard, and that QualDash icons on the nurse station PCs might be helpful in supporting their use of the dashboard.
The pediatrician noted that QualDash displays aggregate, as opposed to patient-specific, data and that this may be useful for prospective planning on the wards. Senior nurses might therefore use the dashboard if desktop icons facilitated access at the nurses’ station. Despite constraints on the use of QualDash during the evaluation period, its potential as a useful tool for a range of health care staff was recognized, and ideas to support uptake, such as desktop icons, were suggested.
Rating of technology acceptance model statements. QD: QualDash; TAM: technology acceptance model.
CMOc 1 in
CMOc 2 in
In response to user feedback, a new version of QualDash (Version 2) was developed that addressed issues with Qualcard configurations where possible, including invasive ventilation in PICANet and discharge on gold standard drugs in MINAP, as discussed above. The intention was to install Version 2 across all sites, continue dissemination activities, and capture further impacts via observations. However, in March 2020, data collection and site visits were suspended because of COVID-19. The lockdown effectively ended face-to-face evaluation work, with data to refine the CMOcs focused on learning from QualDash Version 1. Interestingly, the onset of COVID-19 was accompanied by a rapid increase in the use of digital technologies to deliver care under lockdown restrictions and to understand the impact of the virus on health care systems [
This paper presents findings from a realist evaluation of a web-based, customizable, quality dashboard designed to support the use of national audit data in quality improvement. QualDash was co-designed with data users to address their needs, and champions were identified to support uptake and adoption. Even so, observations of practice revealed that QualDash had a variable impact across sites within the evaluation period. These findings are comparable with a recently updated review of dashboards that showed variation in impact, even within the same clinical area [
Using CMOcs, we hypothesized that staff would integrate QualDash within their routines because it could facilitate monitoring and interrogation of metrics considered markers of safe and effective care—a mechanism we termed
A study of dashboard design in the context of lymphedema services reported that, because of the complexity and accessibility of the required data, developing dashboards to support the use of aggregate data was more challenging than developing clinical, patient-level dashboards [
First, the use of data familiar and meaningful to users in the co-design process would be beneficial. The use of site-specific data would encourage participants to scrutinize visualizations more thoroughly to confirm that they are configured as required for care quality monitoring before installation. Other studies have highlighted similar messages about how the interpretability of information needs greater consideration in design processes alongside usability and ease of use [
Second, engaging staff familiar with and knowledgeable about data use within the service, such as audit support staff and physicians, in scrutiny of dashboard metrics and functions, using real data, would provide an opportunity to (1) develop dashboard functionality in ways that can better support existing routines of data use and (2) raise awareness of the range of functionality offered by the dashboard among these key stakeholders, providing them with motivation to integrate the technology into their working practices on installation.
The findings also emphasize the work that surrounds and sustains dashboard use. Alongside configuring metrics that reflect service needs, consideration needs to be given to the systems in place for collecting and uploading data in a timely and accurate way—attributes that also underpin confidence in the data and consequently the visualizations produced. These systems vary from service to service, and the design of digital dashboards may need to be more ambitious, for example, incorporating solutions to automate data upload to support data timeliness and to ease the workload of those supporting audit participation where necessary. This may be particularly pertinent as health care providers around the world seek to move to paperless systems.
A challenge when evaluating digital technologies is that development is
Web-based, customizable dashboards have the potential to support the use of national audit data in quality improvement, reducing variation in data use across services nationally, and, as COVID-19 has demonstrated, in response to specific events or crises. To optimize the dashboard impact in different settings, co-design would benefit from use of
Context + Mechanism = Outcome configurations.
Context + Mechanism = Outcome configuration
information technology
Myocardial Ischemia National Audit Project
National Health Service
Pediatric Intensive Care Audit Network
pediatric intensive care unit
technology acceptance model
The authors would like to thank the National Health Service staff who generously gave their time to speak with us and allowed us to observe their practice, without which this research would not be possible. We also thank the School of Health Care, University of Leeds, for their ongoing support in this research. This research was funded by the National Institute for Health Research Health Services and Delivery Research Program (project #16/04/06). The views and opinions expressed are those of the authors and do not necessarily reflect those of the Health Services and Delivery Research program, National Institute for Health Research, National Health Service, or the Department of Health.
CG is a member of the Myocardial Ischemia National Audit Project Academic and Steering Groups. RF is principal investigator for Pediatric Intensive Care Audit Network. The authors have no other competing interests to declare.