Background: Virtual care (VC) and remote patient monitoring programs were deployed widely during the COVID-19 pandemic. Deployments were heterogeneous and evolved as the pandemic progressed, complicating subsequent attempts to quantify their impact. The unique arrangement of the US Military Health System (MHS) enabled direct comparison between facilities that did and did not implement a standardized VC program. The VC program enrolled patients symptomatic for COVID-19 or at risk for severe disease. Patients’ vital signs were continuously monitored at home with a wearable device (Current Health). A central team monitored vital signs and conducted daily or twice-daily reviews (the nurse-to-patient ratio was 1:30).
Objective: Our goal was to describe the operational model of a VC program for COVID-19, evaluate its financial impact, and detail its clinical outcomes.
Methods: This was a retrospective difference-in-differences (DiD) evaluation comparing 8 military treatment facilities (MTFs) with and 39 MTFs without a VC program. Tricare Prime beneficiaries aged 21 years or older who were diagnosed with COVID-19 (Medicare Severity Diagnosis Related Group 177 or International Classification of Diseases–10 codes U07.1/U07.2) and eligible for care within the MHS between December 2020 and December 2021 were included. Primary outcomes were length of stay and associated cost savings; secondary outcomes were escalation to physical care from home, 30-day readmissions after VC discharge, adherence to the wearable, and alarms per patient-day.
Results: Of the 3988 patients admitted to the MHS with COVID-19, 1838 were admitted to an MTF with a VC program. Of these patients, 237 (13%) were enrolled in the VC program. The DiD analysis indicated that centers with the program had a 12% lower length of stay averaged across all COVID-19 patients, saving US $2047 per patient. The total cost of equipping, establishing, and staffing the VC program was estimated at US $3816 per day. Total net savings were estimated at US $2.3 million in the first year of the program across the MHS. The wearables were activated by 231 patients (97.5%) and were monitored through the Current Health platform for a total of 3474 (median 7.9, IQR 3.2-16.5) days. Wearable adherence was 85% (IQR 63%-94%). Patients triggered a median of 1.6 (IQR 0.7-5.2) vital sign alarms per patient per day; 203 patients (85.7%) were monitored at home and then directly discharged from VC, and 27 (11.4%) were escalated to a physical hospital bed as part of their initial admission. There were no increases in 30-day readmissions or emergency department visits.
Conclusions: Monitored patients were adherent to the wearable device and triggered a manageable number of alarms per day for the monitoring team's 1:30 nurse-to-patient ratio. Despite enrolling only 13% of COVID-19 patients at centers where it was available, the program offered substantial savings averaged across all patients in those centers without adversely affecting clinical outcomes.
The COVID-19 pandemic forced health care systems around the world to rapidly innovate and adapt to unprecedented operational and clinical strain. Many health care systems leveraged virtual care (VC) capabilities to monitor patients while reducing staff exposure and managing resource constraints [ - ]. The US Department of Health and Human Services waived penalties for violations of the Health Insurance Portability and Accountability Act incurred through the use of popular communication tools, and the Centers for Medicare and Medicaid Services waived a number of regulations, paving the way for inpatient care in the home [ , ].
Early initiatives were heterogeneous, driven by local trends and available technologies. They ranged from patients reporting single, manually recorded vital signs (such as pulse oximetry) to comprehensive packages of care delivered by dedicated clinical services and augmented by 24-hour continuous monitoring of multiple vital signs [ - ]. These deployments must now be evaluated and proven both clinically and financially effective for VC to continue, to justify the resources already invested in setting up these initiatives, and to better select suitable patients for care [ , , ]. Some centers have reported reductions in length of hospital stay, intensive care admissions, and readmissions with VC [ , ]. However, evaluation has been complicated by the variation in approaches to VC in different institutions, which evolved in parallel with new COVID-19 treatments, vaccinations, and viral variants, and by selection bias between monitored and unmonitored cohorts [ , , ]. A related concern has been the “digital divide,” the variation in access to digital technologies across age, educational, socioeconomic, and geographical strata, limiting the reach of VC services [ , ].
In December 2020, the US Military Health System (MHS) Virtual Medical Center (VMC) implemented a VC program in 8 military treatment facilities (MTFs) across the United States for COVID-19 patients. The program was delivered virtually by a 24-hour dedicated nursing service using continuous remote patient monitoring. The implementation of a standardized VC program for the same condition in a subset of similar treatment facilities across the country made for an ideal natural experiment.
The VC Program
The VC program was available to any Tricare Prime beneficiary eligible for care within the MHS aged 21 years or older. Inclusion criteria were broad and included any patient who presented to hospital symptomatic for COVID-19 and those who, despite not requiring admission, had a high risk of exposure and were at risk for severe disease due to a comorbid state. The program was subsequently expanded to other use cases, including congestive heart failure, gestational hypertension, and postoperative monitoring for bariatric surgery; however, this analysis only pertains to COVID-19.
Referred patients were screened for eligibility and consented to participation in the VC program; they were then issued the VC equipment and familiarized with it by specially trained nurses. Once home, patients were called to ensure kit setup was successful. Enrollment triage forms (Multimedia Appendix 1) stratified patients as low, medium, or high risk, dictating the frequency of patient engagement. Low-risk patients were monitored intermittently, medium-risk patients were reviewed daily, and high-risk patients were reviewed twice daily. All reviews were conducted virtually.
Patients were monitored using the Current Health (CH) VC platform (Current Health Inc), which included a US Food and Drug Administration (FDA) 510(k)-cleared wearable device worn on the upper arm, a blood pressure cuff and weighing scale, a tablet, and a network hub that operated via home Wi-Fi or roaming cellular signal, enabling access for patients without home internet. Continuous vital signs measured were pulse rate, respiratory rate, oxygen saturation, temperature, and motion. Blood pressure and weight were monitored intermittently. The tablet collected customizable patient-reported outcome measures, including symptom burden, and could be used to asynchronously request direct assistance from the on-call nursing team. All data were processed via cloud computing and displayed on a web dashboard for the clinical team. If any vital sign exceeded a predetermined threshold, an alert would trigger on the dashboard and send notifications to the appropriate staff. The team monitored patients across the 8 participating MTFs with a nurse-to-patient ratio of 1:30. Each MTF had designated on-call physicians available for on-demand support if care escalation was required.
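The threshold-based alerting described above can be illustrated with a minimal sketch. This is a hypothetical simplification in Python; the Current Health platform's actual thresholds, alarm pipeline, and notification routing are not described in the text, so the numeric ranges and function names below are illustrative assumptions only.

```python
# Hypothetical vital-sign alarm thresholds (low, high); the actual Current
# Health alarm configuration is not public and would be clinically tuned.
THRESHOLDS = {
    "pulse_rate":       (40, 120),    # beats/min
    "respiratory_rate": (8, 25),      # breaths/min
    "spo2":             (92, 100),    # %
    "temperature":      (35.0, 38.5), # degrees C
}

def check_vitals(vitals: dict) -> list:
    """Return the names of any vital signs outside their alarm thresholds,
    i.e., the alarms that would surface on the monitoring dashboard."""
    alarms = []
    for name, value in vitals.items():
        low, high = THRESHOLDS[name]
        if not (low <= value <= high):
            alarms.append(name)
    return alarms

# Example: desaturation plus tachypnea would raise two dashboard alarms
alarms = check_vitals({"pulse_rate": 95, "respiratory_rate": 28,
                       "spo2": 89, "temperature": 37.2})
```

In the deployed system, each such alarm would also trigger notifications to the appropriate on-call staff rather than simply returning a list.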
Patients were disenrolled from VC at their physicians’ discretion or if escalated back from VC to inpatient care due to clinical decompensation. CH coordinated kit return, sanitization, service, and repackaging and returned the kits to their original MTF. This ensured that each MTF maintained a consistent stock of equipment to enroll new patients. The infrastructure, personnel, and fiscal resources for the program were directly funded by the MHS. A total of 200 CH kits were available for distribution across the 8 participating MTFs, dynamically divided based on utilization and demand. There were always sufficient kits available at each location.
The difference-in-differences (DiD) model used ordinary least squares regression, regressing the outcome on an indicator for whether the hospital was included in the VC program, an indicator for whether the patient’s date of admission was after the earliest implementation of the VC program, and an interaction variable of the 2 indicators. The coefficient of this interaction can be interpreted as the effect of being admitted to a hospital that had an active VC program regardless of enrollment in the program. The model controlled for age, gender, marital status, COVID-19 pneumonia diagnosis during the index admission, and Elixhauser comorbidity score. Fixed effects were included for the hospital and the quarter year. The final estimate was the average within-hospital change in length of stay in hospitals that implemented a VC program compared to hospitals that did not. To understand which patients were most affected by the program, a predicted length of stay was estimated based on observable characteristics of the patients. This estimate was computed in two stages. In the first stage, the observed length of stay was regressed on patient age (linearly and quadratically), gender, marital status, Elixhauser comorbidity score, and an indicator for whether there was a pneumonia diagnosis. This produced a partial correlation between each of these covariates and length of stay. These partial correlations were then applied to each patient’s observed covariates to generate a predicted length of stay for each individual. The residual length of stay was then calculated as the difference between the predicted and actual lengths of stay. This residual can be interpreted as the portion of the stay that was not attributable to observed characteristics of the patient.
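The DiD specification and the two-stage residual calculation can be sketched as follows. This is an illustrative simplification in Python on synthetic data, not the study's actual analysis code; the variable names, covariate set, and data-generating values are assumptions, and the fixed effects and full covariate list are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
vc_hospital = rng.integers(0, 2, n)   # 1 = admitted to an MTF with a VC program
post = rng.integers(0, 2, n)          # 1 = admitted after program implementation
age = rng.normal(56, 15, n)

# Synthetic length of stay: VC hospitals in the post period stay ~0.5 days less
los = 4.8 + 0.02 * (age - 56) - 0.5 * vc_hospital * post + rng.normal(0, 1, n)

# DiD via OLS: outcome ~ vc_indicator + post_indicator + interaction + covariates.
# The interaction coefficient is the DiD estimate of the program's effect.
X = np.column_stack([np.ones(n), vc_hospital, post, vc_hospital * post, age])
beta, *_ = np.linalg.lstsq(X, los, rcond=None)
did_effect = beta[3]

# Stage 1 of the residual analysis: predict LOS from patient covariates only
# (age linearly and quadratically here; the study also used gender, marital
# status, comorbidity score, and pneumonia diagnosis).
Z = np.column_stack([np.ones(n), age, age**2])
gamma, *_ = np.linalg.lstsq(Z, los, rcond=None)
predicted_los = Z @ gamma

# Stage 2: residual = predicted minus actual, i.e., the portion of the stay
# not attributable to observed patient characteristics.
residual_los = predicted_los - los
```

On this synthetic data, `did_effect` recovers a value near the built-in −0.5-day effect, mirroring how the study's interaction coefficient is read as the program's average within-hospital effect.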
The sample was constructed using all Tricare beneficiaries admitted to an MTF with COVID-19 from December 7, 2020, to December 6, 2021. Only the patient’s first admission was included, with patients admitted either directly by their physician or through an emergency department. Patients with any Medicare Severity Diagnosis Related Groups other than 177 were excluded, along with any who were not discharged to their home. The final sample included 3988 index admissions: 1838 patients who were admitted into an MTF with a VC program and 2150 who were admitted to an MTF without a program. Of the 1838 patients admitted into an MTF with a VC program, 237 (13%) were enrolled in VC. The average cost of VC was calculated per day based on the capital expenditure and ongoing monitoring contract, costs of nursing labor, and program management support. While VC program initiation varied at the MTF level, the Defense Health Agency (DHA) paid for the VC centrally with a single, centralized monitoring hub. This makes sense given that VC is a high fixed-cost investment that can be managed from a single location, allowing for economies of scale. However, this means that costs can only be calculated at the system level and not at the MTF level.
Clinical Outcomes for the Cohort Enrolled in the VC Program
Outcome data for the 237 patients enrolled in the program were obtained from MHS Mart, known as “M2,” which is a queryable data repository for the DHA. Vital-sign and alarm data were obtained from CH. A manual chart review of patients’ electronic medical records (EMRs) was conducted for the subgroup of patients at Brooke Army Medical Center (BAMC) between December 7, 2020, and June 7, 2021. The review was limited to the first 6 months of the program due to the availability of clinicians to conduct the review. That review focused on comorbidities that increased risk for severe COVID-19, including smoking, diabetes, being immunocompromised, chronic kidney disease, and hypertension, as well as validated scores for severity and readmission: the Quick COVID-19 Severity Index (qCSI), Quick Sepsis Related Organ Failure Assessment (qSOFA), and the HOSPITAL score (an abbreviation that represents “hemoglobin at discharge, discharge from an oncology service, sodium level at discharge, procedure during the index admission, index type of admission, number of admissions during the last 12 months, and length of stay”) [- ]. The manual review was given precedence if there was a conflict between the M2 database and the EMR reviews. Data were analyzed in R (R Foundation for Statistical Computing). Results were assessed for normality (via visualization and the Shapiro–Wilk test) and presented as the mean (SD) or, for nonparametric data, the median (IQR). Wearable adherence was defined as hours of data transmitted divided by the time between the first and last transmission. Differences between groups were assessed using the Wilcoxon rank sum test for nonparametric data or the chi-square test (without Yates correction), with significance at P<.05. Multiple comparisons were corrected using the Holm–Bonferroni method.
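The adherence definition above (hours of data transmitted divided by the time between the first and last transmission) can be made concrete with a short sketch. This is an illustrative Python implementation, not the study's code; treating the monitoring window as inclusive of the final hour is an assumption, since the text does not specify edge-case handling.

```python
from datetime import datetime, timedelta

def wearable_adherence(transmission_hours: list) -> float:
    """Fraction of the monitoring window covered by data: distinct hours with
    a transmission divided by the span from first to last transmission
    (inclusive of the final hour -- an assumption)."""
    if not transmission_hours:
        return 0.0
    first, last = min(transmission_hours), max(transmission_hours)
    span_hours = (last - first) / timedelta(hours=1) + 1
    return len(set(transmission_hours)) / span_hours

# Example: the wearable transmitted in 20 of 24 hourly slots on one day
start = datetime(2021, 1, 1)
hours = [start + timedelta(hours=h) for h in range(24) if h not in (3, 4, 5, 6)]
adherence = wearable_adherence(hours)  # 20 / 24, i.e., ~83% adherence
```

A patient removing the device overnight would therefore lower adherence even if every monitored day produced some data, which is the distinction the Discussion draws against days-of-wear metrics.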
The study complied with the Declaration of Helsinki. This was a retrospective study of data collected for clinical rather than research purposes, so prior informed consent was not sought and no compensation was offered. An exemption for retrospective analysis was granted by the BAMC Institutional Review Board (reference number C.2021.103e). The data were deidentified for analysis.
During the study period, 3988 patients were admitted to the MHS with COVID-19. Table 1 compares patient covariates for those seen at MTFs with and without VC. Patients were similar in age, Elixhauser index, and rates of pneumonia listed on index admission. There was a trend toward more female patients at hospitals with VC (P=.08), but this result was not statistically significant. Patients at hospitals with VC were significantly more likely to be married. A strict balance on these covariates was not required for the DiD because the changes in the distribution did not coincide with the timing of the program; the marginal difference in gender disappeared when looking at the postimplementation period, while the marital difference remained constant [ , ].
Table 2 displays the results of the DiD analysis, showing the difference in adjusted means of the treatment and control hospitals before and after implementation of VC. VC was associated with a 0.56-day (12%, P=.03) reduction in length of stay for all COVID-19 patients at an MTF, with no increase in readmissions or emergency department visits. The average Tricare Medicare Severity Diagnosis Related Group (DRG) payment for DRG 177, excluding graduate medical education and other add-ons, was US $16,568. Tricare calculated the per diem payment (used on hospital transfers) as the total payment divided by the geometric mean length of stay for that DRG. Using this methodology, each day in the hospital cost US $3682. This implied savings of US $2047 for every COVID-19 patient admitted to a center with VC available, or US $3,762,386 in total. At the hospital level, the total cost of equipping, establishing, and staffing the VC program was estimated at US $3816 per day, or US $1,392,840 per year. Total net savings were therefore estimated at US $2.3 million in the first year of the program across the MHS.
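The cost arithmetic above can be reproduced from the rounded figures reported in the text. One caveat: the geometric mean length of stay of roughly 4.5 days is inferred here from the quoted DRG payment and per diem rather than stated directly, so it is an assumption.

```python
# Reproducing the paper's cost arithmetic from its rounded figures.
drg_payment = 16_568        # average Tricare payment for DRG 177 (US$)
geometric_mean_los = 4.5    # days; inferred from 16_568 / 3_682 -- assumption

per_diem = drg_payment / geometric_mean_los      # ~US $3,682 per hospital day
los_reduction_days = 0.556                       # DiD estimate (Table 2, Panel A)
savings_per_patient = round(per_diem * los_reduction_days)   # ~US $2,047

patients_at_vc_mtfs = 1_838
total_savings = savings_per_patient * patients_at_vc_mtfs    # US $3,762,386

program_cost_per_day = 3_816
annual_program_cost = program_cost_per_day * 365             # US $1,392,840

net_savings = total_savings - annual_program_cost            # ~US $2.37 million
```

The net figure of roughly US $2.37 million matches the "US $2.3 million" first-year savings quoted in the text after rounding.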
The DiD provided an average effect, but conceptually the program should have most affected those who would otherwise have stayed in the hospital for monitoring. Plots of the residual length of stay for patients in the VC and non-VC groups show that the program appears to have reduced length of stay most effectively for those who would otherwise have been in the hospital longest.
Table 1. Patient covariates at MTFs with (treatment) and without (control) a VC program.

| Characteristics | Treatment (n=1838) | Control (n=2150) | % Difference (treatment minus control) | P value |
| --- | --- | --- | --- | --- |
| Mean age (years) | 55.98 | 56.27 | –0.29 | .58 |
| Female, n (%) | 717 (39) | 774 (36) | 3 | .08 |
| Married, n (%) | 1489 (81) | 1613 (75) | 6 | <.001 |
| Pneumonia, n (%) | 1011 (55) | 1226 (57) | –2 | .21 |
| Mean Elixhauser score | 1.96 | 1.98 | –0.02 | .78 |
Table 2. DiD estimates of the effect of VC program availability on outcomes.

| Model | Length of stay (n=3984) | Thirty-day readmission (n=3984) | Emergency department visit within 90 days (n=3984) |
| --- | --- | --- | --- |
| Panel A: linear (postimplementation), coefficient (SE) | –0.556 (0.256) | 0.007 (0.019) | –0.011 (0.018) |
| Panel B: log transformation (postimplementation), coefficient (SE) | –0.135 (0.047) | N/A^a | N/A |
| Panel C: alternative specification (postimplementation), coefficient (SE) | –0.115 (0.051)^b | 0.094 (0.249)^c | 0.45 (0.501)^c |
| Full sample mean | 4.78 | 0.08 | 0.02 |

^a N/A: not applicable.
^b Alternative specification: Poisson.
^c Alternative specification: logit.
Clinical Outcomes for the Cohort Enrolled in the VC Program
For the 237 patients enrolled in the program, mean age was 53 (SD 15.3) years, 100 (42%) were female, 137 (58%) were male, 231 (97.5%) activated their wearable, median activation time was 60 (IQR 11-186) minutes, and they were monitored through the CH platform for a total of 3474 (median 7.9, IQR 3.2-16.5, range 1-106) days. Wearable adherence was 85% (IQR 63%-94%). Patients triggered a median of 1.6 (IQR 0.7-5.2) physiological alarms per patient per day; 203 (85.7%) were monitored at home and directly discharged from VC; 27 (11.4%) were escalated to a physical hospital bed while on monitoring; and 1 (0.4%) was readmitted to the hospital within 30 days of discharge from VC. There were no deaths in the cohort. There were significant differences between those requiring escalation to physical care and those remaining at home throughout their time in the VC program. Those who required escalation activated their CH kit in a similar timeframe but were less adherent to using the wearable (median 63%, IQR 32%-83% vs 87%, IQR 70%-95%, respectively; P<.001), yet generated significantly more physiological alarms per 24 hours of monitoring (median 7.0, IQR 1.9-17.3 alarms vs median 1.3, IQR 0.6-3.6 alarms; P<.001) despite the decreased wear time.
Charts from 39 patients (16% of the monitored cohort) from the first 6 months of the VC program at BAMC were hand-reviewed for COVID-19 risk factors. Their demographics and COVID-19 risk factors are presented in Table 3. The patients in the subset were monitored for a total of 684 (median 8.8, IQR 3-12, range 1-45) days. Thirty-eight (97.4%) activated their wearable in a median of 34 (IQR 1-125) minutes, and wearable adherence was 78% (IQR 60%-91%). Patients triggered a median of 1.7 (IQR 0.8-5.6) physiological alarms per day. Thirty-four patients (87%) remained at home during their monitoring period and 4 (10.3%) were escalated to physical care during their initial admission. Once discharged from hospital and VC, there were no readmissions in the subsequent 30 days (the subset’s HOSPITAL scores would have predicted a 30-day readmission rate of 5.8%, equivalent to 2 or 3 of the 39 patients reviewed).
Table 3. Demographics and COVID-19 risk factors of the chart-reviewed subset at BAMC (n=39).

| Characteristic | Value |
| --- | --- |
| Age (years), mean (SD, range) | 59 (14.6, 31-86) |
| Sex, n (%) | |
| Ethnicity, n (%) | |
| Other or unspecified | 3 (8) |
| Southeast Asian | 1 (3) |
| Triage stratification, n (%) | |
| High risk | 25 (64) |
| Medium risk | 14 (36) |
| COVID-19 risk factors | |
| Smoker, n (%) | 6 (15) |
| Diabetes, n (%) | 12 (31) |
| Immunocompromised, n (%) | 11 (28) |
| Chronic kidney disease, n (%) | 4 (10) |
| Hypertension, n (%) | 23 (59) |
| Risk factors, median (IQR, range) | 1 (0-2, 0-4) |
| Quick Sepsis Related Organ Failure Assessment score groups, n (%) | |
| HOSPITAL^a score, median (IQR, range) | 2 (1-2.5, 0-6) |
| Intensive care unit admission while in hospital, n (%) | |
| Discharged on oxygen, n (%) | |

^a HOSPITAL score: hemoglobin at discharge, discharge from an oncology service, sodium level at discharge, procedure during the index admission, index type of admission, number of admissions during the last 12 months, and length of stay.
The MHS VC program was established during a time of acute national need, with patients offered round-the-clock remote care as an alternative to being in the hospital. Despite only enrolling 6% of patients with COVID-19 admitted to the MHS and only 13% of patients at centers where the program was available, the program had a disproportionately large impact. Overall length of stay was reduced by 12%, averaged across all COVID-19 patients at centers with availability, with an associated cost saving of US $2047 per patient. Reassuringly, the 11% rate of escalation to physical care for patients enrolled in the program demonstrated that unwell patients were being identified and treated despite being at home, with no increase in emergency department attendance, 30-day readmissions, or deaths. It should be stressed that escalation to physical care was not a “readmission,” as the patient remained in their “initial admission” until discharged from the program. Indeed, escalation was desirable in those patients who warranted it, and any days spent at home rather than in the hospital increased inpatient capacity at the facilities while reducing exposure to COVID-19 for other patients and staff.
The median length of monitoring on VC was 7.9 days, and the overall adherence to wear was 85%. Adherence to wearables has typically been reported as the total number of days patients wore a device, rather than the consistency of wear during those days. In both postoperative and clinical trial contexts, median length of engagement has been reported at around 5 days [, ]. It was notable that patients less adherent to the wearable were more likely to require escalation to physical care. Lower adherence may have reduced health care provider confidence in the patient staying at home, precipitating admission, or led to deteriorations being caught later, after simple steps such as increasing fluid intake were no longer effective. Alternatively, those who were more unwell may have been less inclined to wear the wearable.
Other previously identified barriers to VC adoption have included lack of connectivity (the so-called digital divide that disproportionately affects the elderly, those with low income, and rural populations) and concerns around privacy and usability [, , ]. The CH platform included a roaming cellular function, so patients did not require their own internet connection. The data were transmitted from the patient’s home as raw waveforms and did not include any patient-identifiable information. The VMC team was able to address usability concerns when handing over the kit in hospital and following up by phone.
The alarms needed to be specific as well as sensitive to avoid disturbing patients, bringing them into the hospital unnecessarily, or increasing the risk of alarm fatigue among the nursing team. There is a significant relationship between alarm exposure and response time [ ]. It has previously been reported that actionable alarms already make up a low percentage (20%-36%) of the total number of alarms triggered in adult ward settings [ ]. Alarm actions were not tracked directly, but the alarm rate of 1.6 per patient-day was manageable in the context of a 1:30 nurse-to-patient staffing ratio and is lower than previously reported in-hospital and at-home alarm rates of 10.8 and 3.42 per patient-day, respectively [ , ].
The facilitators and barriers to rolling out this intervention were the subject of a separate qualitative study, presently under review. However, to contextualize the findings of this paper, we noted that in common with similar interventions, clinician acceptance took time to establish, and more patients would have likely been enrolled if acceptance had come sooner [, ]. The program also benefited in some ways from its military setting. Nurses, licensed and credentialed at a single MTF, could practice across the whole health system. That, along with the clear leadership structures and hospital physicians being employed by a single entity, facilitated rapid expansion across the country. The unique medical malpractice conditions of the MHS, where legal action cannot be taken against individual providers, may have made the clinicians more willing to take responsibility for patients’ remote care.
However, the program was also hampered by the inability to coordinate community services or go into patients’ homes. The lack of an integrated inpatient/outpatient EMR, along with inexperience in the use of virtual care relative value units and Current Procedural Terminology–associated reimbursement also slowed revenue generation. Finally, the CH platform was FDA 510(k)-cleared only for patients older than 21 years, which made the proportion of the active-duty population aged between 18 and 21 ineligible for enrollment.
The unique strength of this study was its comparison of similar health facilities spread across the United States all attempting to treat an identical clinical condition concurrently. The DiD analysis compared a treatment group (centers with VC) to a control group (centers without VC) before and after the intervention (ie, VC), then estimated the divergence in outcome (ie, length of stay). The identifying assumption was that the treatment group would have followed parallel trends with the control group in the absence of the intervention. In other words, changes in the dominant COVID-19 variant, vaccination rates, treatment methodologies, or other factors would have impacted centers with and without VC equally. While this was an inherently untestable assumption, plotting the difference in adjusted means of the treatment and control MTFs before and after implementation of the VC program demonstrated parallel pretrends (ie, both the treatment and the control groups followed the same trajectory prior to the initiation of the VC program). DiD methodology also requires quasi-random (ie, uncorrelated with the intervention) assignment to the treatment. MTFs were not randomized to set up VC programs, but the hospital that someone attended could be considered quasi-random (patients with COVID-19 were unlikely to travel to a hospital on the grounds that it had a VC program). The Elixhauser comorbidity index indicated that patients included in the analysis were similar at MTFs with and without VC. However, enrollment in VC could not be considered random, as patients were selected.
Consequently, the DiD analysis could only estimate the effect that the presence of VC had on patients on average. It demonstrated that patients at centers with VC stayed in the hospital for less time than their counterparts in centers without VC. This may have been due to the specific patients enrolled in the program but could also have been driven by hospitals and physicians having more ability to focus on those patients who remained in the hospital. The VC effect may also have been driven through creating additional capacity, as well as the care itself. To this point, the residual analysis suggests that the reduction in length of stay came from individuals who would otherwise have spent a long time in the hospital being able to go home with VC. However, given the limitations above, the separate contributions of patient selection, nursing care, logistics, and technology could not be easily parsed.
In conclusion, the unique structure of the MHS allowed comparison between MTFs that implemented a VC program for COVID-19 and those that did not. Despite the VC program enrolling only a small proportion of patients admitted with COVID-19, it offered substantial savings in centers where it was adopted. The program was effective in identifying suitable patients, escalating them appropriately to physical care, and discharging them once their illness was resolved. The program's military context may have aided its rapid rollout and adoption across multiple centers, and the single-payer nature of the DHA may have facilitated the economic justification of the initiative. However, the results are likely to be applicable to other large health systems that can support or engage a nurse monitoring service, particularly those systems that can reap the economic benefit of a cost-saving program.
We acknowledge Dr Gary Legault, the Virtual Medical Center clinical team, Ms Jessie Lever-Taylor, Ms Hope Smith, and Ms Abenaa Boyce for their hard work in setting up the program and making it a success.
The data sets generated during and/or analyzed during the current study are not publicly available due to information governance restrictions within the Military Health System. Please make any requests for access to MJM (firstname.lastname@example.org), who is chairman of the Brooke Army Medical Center Institutional Review Board.
Conflicts of Interest
MW, DY, NZ, JP, and AW are paid employees of Current Health and hold stock in Best Buy Inc. MJM is a US government employee and a paid speaker for Janssen.
Enrollment form and risk stratification for the Virtual Care program. PDF File (Adobe PDF File), 3035 KB
- Keesara S, Jonas A, Schulman K. Covid-19 and health care's digital revolution. N Engl J Med 2020 Jun 04;382(23):e82. [CrossRef] [Medline]
- Omboni S, Padwal RS, Alessa T, Benczúr B, Green BB, Hubbard I, et al. The worldwide impact of telemedicine during COVID-19: current evidence and recommendations for the future. Connect Health 2022 Jan 04;1:7-35 [FREE Full text] [CrossRef] [Medline]
- Koonin LM, Hoots B, Tsang CA, Leroy Z, Farris K, Jolly B, et al. Trends in the use of telehealth during the emergence of the COVID-19 pandemic - United States, January-March 2020. MMWR Morb Mortal Wkly Rep 2020 Oct 30;69(43):1595-1599 [FREE Full text] [CrossRef] [Medline]
- Golinelli D, Boetto E, Carullo G, Nuzzolese AG, Landini MP, Fantini MP. Adoption of digital technologies in health care during the COVID-19 pandemic: systematic review of early scientific literature. J Med Internet Res 2020 Nov 06;22(11):e22280 [FREE Full text] [CrossRef] [Medline]
- Notification of enforcement discretion for telehealth remote communications during the COVID-19 nationwide public health emergency. Federal Register. 2020. URL: https://www.federalregister.gov/d/2020-08416 [accessed 2023-01-16]
BAMC: Brooke Army Medical Center
CH: Current Health
DHA: Defense Health Agency
DiD: difference in differences
DRG: Diagnosis Related Group
EMR: electronic medical record
FDA: US Food and Drug Administration
HOSPITAL score: hemoglobin at discharge, discharge from an oncology service, sodium level at discharge, procedure during the index admission, index type of admission, number of admissions during the last 12 months, and length of stay
M2: Military Health System Mart
MHS: Military Health System
MTF: military treatment facility
qCSI: Quick COVID-19 Severity Index
qSOFA: Quick Sepsis Related Organ Failure Assessment
VC: virtual care
VMC: Virtual Medical Center
Edited by A Mavragani; submitted 07.11.22; peer-reviewed by S Vilendrer, J Sharp; comments to author 29.12.22; revised version received 03.01.23; accepted 11.01.23; published 25.01.23.

Copyright
©Robert J Walter, Stephen D Schwab, Matt Wilkes, Daniel Yourk, Nicole Zahradka, Juliana Pugmire, Adam Wolfberg, Amanda Merritt, Joshua Boster, Kevin Loudermilk, Sean J Hipp, Michael J Morris. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 25.01.2023.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.