In an ongoing effort of this Journal to develop and further the theories, models, and best practices around eHealth research, this paper argues for the need for a “science of attrition”, that is, a need to develop models for discontinuation of eHealth applications and the related phenomenon of participants dropping out of eHealth trials. What I call the “law of attrition” here is the observation that in any eHealth trial a substantial proportion of users drop out before completion or stop using the application. This feature of eHealth trials is a distinct characteristic compared to, for example, drug trials. The traditional clinical trial and evidence-based medicine paradigm stipulates that high dropout rates make trials less believable. Consequently, eHealth researchers tend to gloss over high dropout rates, or not to publish their study results at all, as they see their studies as failures. However, for many eHealth trials, in particular those conducted on the Internet and in particular with self-help applications, high dropout rates may be a natural and typical feature. Usage metrics and determinants of attrition should be highlighted, measured, analyzed, and discussed. This also includes analyzing and reporting the characteristics of the subpopulation for which the application eventually “works”, ie, those who stay in the trial and continue to use it. For the question of what works and what does not, such attrition measures are as important to report as pure efficacy measures from intention-to-treat (ITT) analyses. In cases of high dropout rates, efficacy measures underestimate the impact of an application on the population that continues to use it. Methods of analyzing attrition curves can be drawn from survival analysis, eg, Kaplan-Meier analysis and proportional hazards regression (the Cox model). 
Measures to be reported include the relative risk of dropping out or of stopping the use of an application, as well as a “usage half-life”, and models reporting demographic and other factors predicting usage discontinuation in a population. Differential dropout or usage rates between two interventions could be a standard metric for the “usability efficacy” of a system. A “run-in and withdrawal” trial design is suggested as a methodological innovation for Internet-based trials with a high number of initial dropouts/nonusers and a stable group of hardcore users.
In this issue of the Journal, several excellent papers deal with the methodology of conducting Internet-based trials. Peter Farvolden and colleagues present an Internet-based evaluation of a panic disorder self-help Web program, struggling with a huge proportion of users discontinuing usage: only 12 out of 1161 (about 1%) completed the entire 12-week program [
All these papers allude to a common problem: the law of attrition, as I call it, ie, the phenomenon of participants stopping usage and/or being lost to follow-up, which is one of the fundamental characteristics and methodological challenges in the evaluation of eHealth applications.
Nonusage attrition curves for two studies [
While in most drug trials the intervention is “prescribed”, in studies involving information and communication technology, usage of the intervention is mostly at the discretion of the participant, who can discontinue usage very easily. In any longitudinal study where the intervention is neither mandatory nor critical to the participants' well-being, trial participants will be lost. Lack of compliance is usually not a major problem in drug trials, as participants are more closely supervised and sometimes experience observable and immediate health benefits from taking a drug. Thus, in drug trials, almost everyone in the intervention group will actually be getting the intervention (and receiving the same dose). In contrast, one of the fundamental methodological problems in eHealth trials is that in the intervention group a (sometimes substantial) proportion of people will not be using the intervention, or will be using it only sparingly [
In this paper I argue that a “science of attrition” is needed. Nonusage data should be measured, reported, and analyzed, rather than glossed over as evidence of a failed trial.
When talking about attrition in longitudinal studies, we may actually refer to two different processes: the phenomenon of losing participants to follow-up (eg, participants do not return to fill in follow-up questionnaires), which I call dropout attrition, and the phenomenon of participants no longer using the application while remaining in the study, which I call nonusage attrition.
Thus, in any longitudinal eHealth study, we can draw two kinds of attrition curves: (1) proportion of users who are lost to follow-up over time, and (2) proportion of users who do not drop out (eg, who are still filling in questionnaires), but who are no longer using the application, plotted over time. My hypothesis is that the loss-to-follow-up attrition curve usually follows the nonusage attrition curve because a high proportion of loss to follow-up is a result of nonusage (“losing interest” is the underlying variable which explains both curves). In longitudinal studies with control groups, for example randomized trials, a third curve can be drawn to illustrate loss-to-follow-up rate in the comparison group. If the comparison group consists of providing another technological innovation, a fourth curve can be drawn to characterize nonusage of the control intervention (
An example of logarithmic “attrition curves” in a hypothetical eHealth trial. In the intervention group (INTV), a proportion of participants will be lost to follow-up (INTV dropout), as will be the case in the control group (CTRL dropout). In addition, even among those not lost to follow-up, there might be a proportion of nonusers.
The hypothetical attrition curves in
Attrition curves from
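Such attrition curves can be derived directly from trial records. As a minimal sketch (all data below are invented for illustration), the following computes a nonusage attrition curve from the week each participant last used the application, and a dropout attrition curve from the week each participant left the trial:

```python
# Hypothetical sketch: deriving the two attrition curves of a trial
# from per-participant records. All data below are invented.

def attrition_curve(event_weeks, n_weeks):
    """Proportion of participants who have NOT yet had the event
    (stopped usage, or dropped out) at the end of each week.
    event_weeks: week of the event per participant (None = never)."""
    n = len(event_weeks)
    curve = []
    for week in range(1, n_weeks + 1):
        still_in = sum(1 for w in event_weeks if w is None or w > week)
        curve.append(still_in / n)
    return curve

# Invented example: 10 participants; the week each one last used the
# application (nonusage) and the week each one left the trial (dropout).
last_usage_week = [1, 1, 2, 2, 3, 4, 6, None, None, None]
dropout_week    = [2, 3, 3, 4, 5, None, None, None, None, None]

nonusage_curve = attrition_curve(last_usage_week, 8)
dropout_curve  = attrition_curve(dropout_week, 8)

# Nonusage attrition typically runs ahead of (ie, below) dropout
# attrition: participants stop using the application before they
# stop filling in questionnaires.
print(nonusage_curve)
print(dropout_curve)
```

Plotting both lists over the week axis yields the two curves described above; the hypothesis that “losing interest” drives both processes predicts that the nonusage curve stays below the dropout curve.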
The “science of attrition” can be seen as an application of (and contribution to) the theoretical framework of diffusion research. An eHealth intervention trial usually brings an innovation to participants. Everett M. Rogers defines innovation as “an idea perceived as new by the individual” and diffusion is “the process by which an innovation spreads.” [
In trials of efficacy of eHealth interventions we are usually starting with an enrolled population of 100% “intent-to-use” participants, who have already gone through a recruitment, selection and informed consent process, ie, all of them have, in principle, already agreed to use and “adopted” the intervention. However, as shown above, in many trials a considerable number of users may discontinue the intervention or (worse) drop out of the trial altogether — a reversal of the adoption process.
In his 550-page book about how new ideas spread and are adopted, Rogers spends a mere 5 pages on discontinuance, the abandonment of an innovation after it has been adopted.
Rogers calls the innovation adoption stage where people may reverse their decision the confirmation stage.
Rogers distinguishes two types of discontinuance: replacement discontinuance, in which an idea is rejected in order to adopt a better idea that supersedes it, and disenchantment discontinuance, in which an idea is rejected as a result of dissatisfaction with its performance.
In the classical model of Rogers, the rate of adoption is positively related to several characteristics of the innovation as they are perceived by the members of the system in which the innovation is diffusing. These are
relative advantage, the degree to which the innovation is perceived to be superior to the idea that it replaces;
compatibility, the degree to which an innovation is perceived as being consistent with the existing values, past experiences, and needs of potential adopters;
complexity, the degree to which an innovation is perceived as difficult to understand and use;
trialability, the degree to which an innovation may be experimented with on a limited basis; and
observability, the degree to which the results of an innovation are visible to others.
These characteristics of the innovation also play a role in the decision to stop using an eHealth innovation and/or to drop out of an eHealth trial. For example, the innovation will be rejected if it is not perceived as creating any benefit (relative advantage) or if it has usability problems (complexity). However, there are further factors involved which are not related to the innovation itself but more to the environment and the trial setting. These factors, for example, expectation management before the trial or “push factors” such as reminders by the study team, influence the shape and slope (steepness) of the attrition curve. In
A more formal analysis of such curves, eg, with methods of survival curve analysis, may elicit metrics for different attrition rates and identify factors affecting the shape and slope of these curves. Some of these proposed (hypothetical) factors have been compiled in
There will also be additional participant factors, for example, demographics, which influence attrition rates. Users with less formal education, lower socioeconomic status, and less change agent contact are more likely to discontinue innovations [
Proposed (hypothetical) factors influencing nonusage attrition and dropout attrition in eHealth trials
Factor | Hypothesized influence on nonusage attrition | Hypothesized influence on dropout attrition |
Quantity and appropriateness of information given before the trial, expectation management | Inappropriate information leads to unrealistic expectations, which in turn lead to disenchantment discontinuance | Indirectly through nonusage (usage discontinuance leads to dropout) |
Ease of enrolment (eg, with a simple mouseclick as opposed to personal contact, physical examination, etc), recruiting the “right” users, degree of pre-enrolment screening | If the “wrong” participants are enrolled, ie, those who are less likely to use the application, less willing to invest time, and for whom the intervention does not “fit”, nonusage attrition will be higher | The easier it is to enroll, the more users will later drop out if they realize that filling in questionnaires, etc, creates more work than they thought. Also indirect via nonusage. |
Ease of drop out / stop using it | The easier it is to stop using the application, the higher the nonusage attrition rate will be (and indirectly through dropouts) | The easier it is to leave the trial, the higher the attrition rate will be (and indirectly through nonusage) |
Usability and interface issues | Usability issues obviously affect usage | Indirectly through nonusage (usage discontinuance leads to drop out) |
“Push” factors (reminders, research assistants chasing participants) | Participants may feel obliged to continue usage if reminded (though this may compromise external validity) | Participants may feel obliged to stay in the trial |
Personal contact (on enrolment, and continuous contact) via face-to-face or phone, as opposed to virtual contact | Mainly indirectly via dropout | The more “virtual” the contact with the research team is, the more likely participants will drop out |
Positive feedback, buy-in and encouragement from change agents and (for consumer health informatics applications) from health professionals / care providers | Participants may discontinue usage without buy-in from change agents. In particular, patients may stop using eHealth applications if discouraged (or not actively encouraged) by health professionals | Indirectly through nonusage (usage discontinuance leads to dropout) |
Tangible and intangible observable advantages in completing the trial or continuing to use it (external pressures such as financial disadvantages, clinical/medical/quality of life/pain) | Yes | Yes |
Intervention has been fully paid for (out-of-pocket expense) | If individuals have paid for an innovation upfront they are less likely to abandon it (as opposed to interventions paid on a fee-per-usage basis) | Indirectly through nonusage (usage discontinuance leads to drop out) |
Workload and time required | Yes | The time required, eg, to fill in the follow-up questionnaires, may create such a burden that participants drop out |
Competing interventions | For example similar interventions on the web or offline can lead to replacement discontinuance | Indirectly through nonusage (usage discontinuance leads to drop out) |
External events (9/11 etc) | These may lead to distractions and discontinuance, especially if the intervention is not essential | Indirectly through nonusage (usage discontinuance leads to drop out) |
Networking effects/peer pressure, peer-to-peer communication, and community building (open interactions between participants) | Communities may accelerate or slow the abandonment of an innovation. | Communities may accelerate or slow dropout attrition. |
Experience of the user (or being able to obtain help) | As most eHealth applications require an initial learning curve and organizational change, users have to overcome initial hurdles to make an application work. Experience/external help can contribute to overcoming these initial hurdles and help to see the “light at the end of the tunnel” | Indirectly through nonusage (usage discontinuance leads to dropout) |
When reporting the results of eHealth studies, a number of usage and dropout attrition metrics can (and should) be provided in addition to efficacy measures. Raw attrition proportions at different points in time should be reported and can be illustrated as attrition curves. The shape of the curve may indicate the underlying causes for attrition. A logarithmic curve, such as those in Figures 1 to 3, indicates a steady attrition with a constant proportion of users discontinuing use (or dropping out), similar to a probabilistic event. A sigmoid curve, such as the one illustrated in
A (hypothetical) sigmoid attrition curve
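The two curve shapes correspond to different hazard patterns over time. As a toy sketch (all hazard values invented), a constant per-week discontinuation probability produces the “logarithmic” decay, while a hazard that starts low (eg, while novelty or push factors hold users) and rises later produces a sigmoid-type curve:

```python
# Toy contrast of the two attrition curve shapes (numbers invented).

def remaining(hazards):
    """Proportion still using the application after each week,
    given the per-week probability of discontinuing (hazard)."""
    curve, alive = [], 1.0
    for h in hazards:
        alive *= (1.0 - h)
        curve.append(round(alive, 3))
    return curve

# Constant hazard -> steady ("logarithmic") decay, like Figures 1 to 3.
constant = remaining([0.15] * 8)

# Low-then-rising hazard -> sigmoid-type curve: a plateau at first,
# then a steep decline once interest wears off.
rising = remaining([0.02, 0.02, 0.05, 0.10, 0.25, 0.35, 0.35, 0.35])

print(constant)
print(rising)
```

Inspecting the two lists shows the diagnostic value of the shape: a sigmoid curve stays near 100% initially and then drops steeply, whereas a constant-hazard curve loses a fixed fraction of remaining users every week.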
In addition to providing attrition curves, some summary metrics can be calculated. In biology, physics and economics the term “half-life” is used to measure “the time required for half of something to undergo a process”
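A “usage half-life” could be computed either directly from an observed attrition curve or, under the simplifying assumption of a constant weekly discontinuation probability, in closed form. A sketch (the definition and all data are hypothetical):

```python
import math

# Sketch of a "usage half-life" metric: assumed here to be the time at
# which the nonusage attrition curve crosses 50% of initial users.

def half_life_from_curve(weeks, proportions):
    """First time point at which the proportion still using <= 0.5.
    Returns None if the curve never crosses 50%."""
    for t, p in zip(weeks, proportions):
        if p <= 0.5:
            return t
    return None

def half_life_constant_hazard(p):
    """Closed form under a constant weekly discontinuation probability p:
    the curve is (1 - p)**t, so the half-life is ln(2) / -ln(1 - p)."""
    return math.log(2) / -math.log(1.0 - p)

weeks = list(range(1, 13))
curve = [(1 - 0.10) ** t for t in weeks]   # invented: 10% stop each week

print(half_life_from_curve(weeks, curve))        # first week at/below 50%
print(round(half_life_constant_hazard(0.10), 1)) # ~6.6 weeks
```

Reporting such a half-life alongside raw attrition proportions would give readers a single comparable summary of how quickly usage decays.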
It is also interesting to formally compare different attrition curves, for example, a dropout attrition curve of intervention A against a dropout attrition curve of intervention B, evaluated in the same trial. For example, Christensen et al [
Further statistical comparisons across attrition curves can be made using Kaplan-Meier (survival curve) analysis and Cox proportional hazards regression models.
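Because the Kaplan-Meier estimator is conceptually simple, a minimal pure-Python version may help illustrate the comparison of dropout attrition between two trial arms (all data invented; a real analysis would use a statistics package such as R's survival or Python's lifelines):

```python
# Minimal Kaplan-Meier estimator for comparing the dropout attrition
# of two trial arms. Data are invented for illustration.

def kaplan_meier(times, events):
    """times: follow-up time (weeks) per participant.
    events: 1 = dropped out at that time, 0 = censored (still in trial).
    Returns [(event time, survival probability)] pairs."""
    s, out = 1.0, []
    for t in sorted(set(ti for ti, e in zip(times, events) if e == 1)):
        at_risk = sum(1 for ti in times if ti >= t)
        d = sum(1 for ti, e in zip(times, events) if ti == t and e == 1)
        s *= (1 - d / at_risk)
        out.append((t, round(s, 3)))
    return out

# Invented arms: 8 participants each; weeks of follow-up and whether
# the participant dropped out (1) or was still enrolled at week 6 (0).
arm_a = kaplan_meier([1, 2, 2, 4, 5, 6, 6, 6], [1, 1, 1, 1, 0, 0, 0, 0])
arm_b = kaplan_meier([3, 5, 6, 6, 6, 6, 6, 6], [1, 1, 0, 0, 0, 0, 0, 0])

print(arm_a)  # steeper dropout attrition
print(arm_b)  # flatter dropout attrition
```

The two resulting step functions are exactly the dropout attrition curves discussed above; a log-rank test or Cox model would then formalize the comparison.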
Dropout attrition is a threat to validity, because it may introduce a selection bias. For example, the intervention group may selectively lose more unmotivated people (who may have different outcomes due to the fact of being unmotivated) than the control group, and this differential dropout may lead to differences in outcomes measured among the remaining participants. An intention-to-treat (ITT) analysis, where all dropouts are assumed to have negative or neutral outcomes, is the only way to avoid this bias. However, a high attrition rate combined with an intent-to-treat analysis greatly diminishes the power to detect differences between groups (increasing beta, ie, the chance that true differences are not detected).
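A small arithmetic sketch (all numbers invented) shows how nonusage dilutes an ITT effect estimate when dropouts and nonusers are counted as having a poor outcome:

```python
# Invented illustration of ITT dilution under heavy nonusage attrition.

n = 1000            # randomized to the intervention arm
users = 200         # still actually using the application at follow-up
p_user = 0.50       # improvement rate among continuing users (assumed)
p_nonuser = 0.10    # improvement rate among nonusers/dropouts (assumed)
p_control = 0.10    # improvement rate in the control arm (assumed)

# ITT: every randomized participant counts, regardless of usage.
itt_rate = (users * p_user + (n - users) * p_nonuser) / n

print(round(itt_rate, 2))               # ITT outcome rate in the arm
print(round(itt_rate - p_control, 2))   # absolute effect under ITT
print(round(p_user - p_control, 2))     # effect among continuing users
```

Under these invented numbers, the ITT analysis reports a modest absolute effect even though the effect in the subpopulation that kept using the application is five times larger, which is why attrition metrics should accompany ITT efficacy measures.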
ITT analysis could be combined with a method which I would call a “run-in and withdrawal design.” Here, the first phase of the trial (corresponding to Phase I and Phase II in
Sadly, this design is only feasible if there is indeed a “hardcore” user group (ie, attrition virtually stops once the right users are found), if the outcomes are fully reversible, and if there are no learning or other carryover effects (as there are, for example, in educational interventions). However, the proposed design is feasible for evaluating eHealth interventions which have a transient effect only for the duration in which they are used, such as evaluating email versus telephone communication with physicians, or evaluating access to electronic clinical guidelines, and so on.
A proposed “run-in and withdrawal” design
The law of attrition may be a cause for publication bias, as authors with eHealth trials and high attrition rates may have difficulties in getting their work published. Journal editors and reviewers usually frown if they see substantial dropout rates. At the
The other reason that we see attrition rates so rarely analyzed in depth is that many investigators (in particular if they were involved in the development of an application) have an implicit pro-innovation bias, not expecting that an innovation will be rejected [