Published in Vol 22, No 8 (2020): August

Online Guide for Electronic Health Evaluation Approaches: Systematic Scoping Review and Concept Mapping Study


Original Paper

1Department of Public Health and Primary Care, Leiden University Medical Centre, Leiden, Netherlands

2National eHealth Living Lab, Leiden, Netherlands

3Department of Surgery, Amsterdam Gastroenterology and Metabolism, Amsterdam UMC, Amsterdam, Netherlands

4Wessex Institute, University of Southampton, Southampton, United Kingdom

5Department of Medical Informatics, Amsterdam UMC, Amsterdam, Netherlands

6Ksyos Health Management Research, Amstelveen, Netherlands

7Department of Clinical, Neuro and Developmental Psychology, Vrije Universiteit, Amsterdam, Netherlands

8Department of Psychology, Health and Technology, Centre for eHealth and Wellbeing Research, University of Twente, Enschede, Netherlands

9Centre of Medical Informatics, Usher Institute, The University of Edinburgh, Medical School, Edinburgh, United Kingdom

10 Please see acknowledgements section for list of collaborators

*these authors contributed equally

Corresponding Author:

Tobias N Bonten, MD, PhD

Department of Public Health and Primary Care

Leiden University Medical Centre

Department of Public Health & Primary Care, Room V6-22

PO Box 9600

Leiden, 2300 RC


Phone: 31 715268433


Related Article: This is a corrected version. See correction statement in:

Background: Despite the increase in use and high expectations of digital health solutions, scientific evidence about the effectiveness of electronic health (eHealth) and other aspects such as usability and accuracy is lagging behind. eHealth solutions are complex interventions, which require a wide array of evaluation approaches that are capable of answering the many different questions that arise during the consecutive study phases of eHealth development and implementation. However, evaluators seem to struggle in choosing suitable evaluation approaches in relation to a specific study phase.

Objective: The objective of this project was to provide a structured overview of the existing eHealth evaluation approaches, with the aim of assisting eHealth evaluators in selecting a suitable approach for evaluating their eHealth solution at a specific evaluation study phase.

Methods: Three consecutive steps were followed. Step 1 was a systematic scoping review, summarizing existing eHealth evaluation approaches. Step 2 was a concept mapping study asking eHealth researchers about approaches for evaluating eHealth. In step 3, the results of step 1 and 2 were used to develop an “eHealth evaluation cycle” and subsequently compose the online “eHealth methodology guide.”

Results: The scoping review yielded 57 articles describing 50 unique evaluation approaches. The concept mapping study questioned 43 eHealth researchers, resulting in 48 unique approaches. After removing duplicates, 75 unique evaluation approaches remained. Thereafter, an “eHealth evaluation cycle” was developed, consisting of six evaluation study phases: conceptual and planning; design; development and usability; pilot (feasibility); effectiveness (impact); and uptake (implementation); an additional “all phases” category covers approaches that apply throughout the cycle. Finally, the “eHealth methodology guide” was composed by assigning the 75 evaluation approaches to the specific study phases of the “eHealth evaluation cycle.”

Conclusions: Seventy-five unique evaluation approaches were found in the literature and suggested by eHealth researchers, which served as content for the online “eHealth methodology guide.” By assisting evaluators in selecting a suitable evaluation approach in relation to a specific study phase of the “eHealth evaluation cycle,” the guide aims to enhance the quality, safety, and successful long-term implementation of novel eHealth solutions.

J Med Internet Res 2020;22(8):e17774




Electronic health (eHealth) solutions play an increasingly important role in the sustainability of future health care systems. An increase in the use and adoption of eHealth has been observed in the last decade. For instance, 59% of the member states of the European Union had a national eHealth record system in 2016 [1]. Despite the increase in use and high expectations about the impact of eHealth solutions, scientific evidence about the effectiveness, along with other aspects such as usability and accuracy, is often lagging behind [2-6]. In addition, due to rising demands such as time and cost restrictions from policymakers and commercial interests, the quality of eHealth evaluation studies is under pressure [7-9]. Although most eHealth researchers are aware of these limitations and threats, they may find it difficult to determine the most suitable evaluation approach to evaluate their novel eHealth solution since a clear overview of the wide array of evaluation approaches is lacking. However, to safely and successfully implement novel eHealth solutions into existing health care pathways, and to facilitate long-term implementation, robust scientific evaluation is paramount [10].

Limitations of Classic Methodologies in eHealth Research

The double-blind, parallel-group randomized controlled trial (RCT) is considered the most rigorous method for studying the effects of health interventions. Randomization has the unique ability to distribute both known and unknown confounders equally between study arms [11]. Although many RCTs of eHealth solutions have been published, limitations of this method are frequently described in the literature [12]. For instance, information bias can occur because the visibility of an eHealth solution makes blinding difficult [13-16]. Moreover, conducting an RCT can be very time-consuming, whereas eHealth technology develops rapidly; consequently, by the time the trial results are known, the tested eHealth solution may already be outdated [17]. Further, “contamination,” in which the control group also uses a digital intervention despite being randomized to the no-intervention group, occurs easily in eHealth research. Another drawback of focusing too heavily on the classical research methodologies generally used to evaluate effectiveness is that the need for evaluation during the development and implementation phases of eHealth is often neglected. Additionally, validation of the quality and evaluation of the behavioral aspects of an eHealth solution may be lacking [18,19]. Although it is not wrong to use classical research methods such as the RCT to study eHealth solutions, given that eHealth solutions are considered “complex” interventions, more awareness of the wide array of eHealth evaluation approaches may be required.

Evaluation of eHealth as a Complex Intervention

As described by the Medical Research Council (MRC) Framework 2000, eHealth solutions typically have multiple interacting components, which present several additional problems for evaluators beyond the practical and methodological difficulties described above [20,21]. Because of these difficulties, eHealth solutions are considered complex interventions. To study such interventions, multiple evaluation approaches are needed that are capable of answering the many different questions that arise during the consecutive phases of intervention development and implementation, including the “development,” “feasibility and piloting,” “evaluation,” and “implementation” phases [21]. For instance, to assess the effectiveness of complex interventions, the MRC Framework authors suggest the following experimental designs: individually randomized trials, cluster randomized trials, stepped wedge designs, preference trials, randomized consent designs, and N-of-1 designs. Unfortunately, the authors did not offer suggestions for evaluation approaches to use in the other phases of the MRC Framework. Murray et al [20] proposed a staged approach to the evaluation of eHealth, modeled on the MRC Framework for Complex Interventions, with 10 core questions to help developers quantify the costs, scalability, sustainability, and risks of harm of an eHealth solution. Greenhalgh et al [22] developed the Nonadoption, Abandonment, and challenges to Scale-up, Spread, and Sustainability (NASSS) framework to identify, understand, and address the interacting challenges around achieving sustained adoption, local scale-up, distant spread, and long-term sustainability of eHealth programs. Both of these studies illustrated and justified the necessity of a variety of evaluation approaches for eHealth beyond the RCT; however, neither assists evaluators in choosing which approach to use in a selected evaluation study phase.
Another suggestion to improve the quality of eHealth research was proposed by Nykänen et al [23,24], who developed the guideline for Good Evaluation Practice in Health Informatics (GEP-HI), which precisely describes how to design and carry out a health informatics evaluation study in relation to the evaluation study phases. However, this guideline also does not indicate which specific evaluation approaches could be used in the related study phases. Besides the individual studies described above, several books concerning eHealth evaluation research have been published. One of the first books on the topic is the “Handbook of Evaluation Methods for Health Informatics,” published in 2006 [25]. The aim of this book was to suggest options for finding appropriate tools to support the user in accomplishing an evaluation study. The book contains more than 30 evaluation methods, which are related to the phases of the system life cycle, and the reliability, degree of difficulty, and resource requirements of each method are described. Moreover, the book “Evidence-Based Health Informatics,” published in 2016 [26], provides the reader with a better understanding of the need for robust evidence to improve the quality of health informatics. The book also provides a practical overview of methodological considerations for health information technology, such as choosing the best study design, stakeholder analysis, mixed methods, clinical simulation, and evaluation of implementation.

Although useful work has been performed by these previous authors, no single source is able to provide clear guidance in selecting appropriate evaluation approaches in relation to the specific evaluation phases of eHealth. Therefore, to enhance quality and safety, and to facilitate long-term implementation of eHealth solutions into daily practice, raising the awareness of eHealth evaluators about the wide array of eHealth evaluation approaches and thereby enhancing the completeness of evidence is sorely needed [27].

Aim and Objectives

The overall aim of the present study was to raise awareness among eHealth evaluators about the wide array of eHealth evaluation approaches and the existence of multiple evaluation study phases. In this way, the quality, safety, and successful long-term implementation of novel eHealth solutions may be enhanced.

To achieve this aim, we pursued the following objectives: (1) systematically map the current literature and expert knowledge on methods, study designs, frameworks, and philosophical approaches available to evaluate eHealth solutions; and (2) provide eHealth evaluators with an online “eHealth methodology guide” to assist them with selecting a suitable evaluation approach to evaluate their eHealth solution in a specific study phase.

Overall Design

The project consisted of three consecutive steps: (1) a systematic scoping review, (2) concept mapping study, and (3) development of the "eHealth methodology guide" with content based on the results from steps 1 and 2.

Step 1: Systematic Scoping Review

To describe the methods, study designs, frameworks, and other philosophical approaches (collectively referred to as “evaluation approach[es]”) currently used to evaluate eHealth solutions, a systematic scoping review was conducted. The online databases PubMed, Embase, and PsycINFO were systematically searched using the term “eHealth” in combination with “evaluation” OR “methodology.” The search included Medical Subject Headings or Emtree terms and free-text terms. A complete list of the search strings is shown in Multimedia Appendix 1. Broad inclusion criteria were applied: all types of peer-reviewed English-language articles describing any eHealth evaluation approach, published from January 1, 2006 until November 11, 2016, with a subsequent update covering November 12, 2016 until October 21, 2018, were included. We reasoned that articles published before January 1, 2006 would not need to be screened because the annual number of publications related to eHealth evaluation approaches was still low at that time, suggesting that the field was just taking its first scientific steps; in addition, if an early article did describe a useful evaluation approach, that approach would also have been described in later articles. Two reviewers (TB and AR) independently screened the titles and abstracts of the articles according to the inclusion criteria described above. The Cohen kappa coefficient was calculated to measure the initial interrater reliability. Disagreements between the reviewers were resolved by the decision of a third independent reviewer (MK). Full-text assessment of the selected articles was performed by both reviewers (TB and AR). Exclusion criteria after full-text assessment were: no eHealth evaluation approach described, article did not concern eHealth, unclear description of the methodology, full-text version not available, or the article was a conference abstract.
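As an illustration of the interrater reliability statistic used during screening, the sketch below computes the Cohen kappa coefficient for two screeners. The include/exclude decisions are hypothetical; only the standard kappa formula is assumed.

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters labeling the same items."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    # Observed agreement: proportion of items both raters labeled identically.
    p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected chance agreement, from each rater's marginal label frequencies.
    freq1, freq2 = Counter(rater1), Counter(rater2)
    p_expected = sum(freq1[label] * freq2[label] for label in freq1) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical screening decisions for 8 abstracts (not the study's data).
screener_a = ["include", "include", "exclude", "exclude",
              "exclude", "include", "exclude", "exclude"]
screener_b = ["include", "exclude", "exclude", "exclude",
              "exclude", "include", "include", "exclude"]
kappa = cohens_kappa(screener_a, screener_b)  # moderate agreement, ~0.47
```

A kappa around 0.4-0.6 is conventionally read as “moderate agreement,” matching how the review interprets its reported value.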
The reference lists of eligible articles were checked for relevant additional studies. These studies were also checked for eligibility and included as cross-referenced articles in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) diagram (Figure 1). In the qualitative synthesis, the eHealth evaluation approach was extracted from each eligible article, and duplicates and synonyms were merged to develop a single list of all the approaches.

Figure 1. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram of the article selection process.

Step 2: Concept Mapping Study

Overview of Phases

Although the systematic scoping review was performed rigorously, it was possible that not all current or possible approaches to evaluate eHealth solutions would have been described in the eligible studies. Therefore, to achieve a reasonably complete overview of eHealth evaluation approaches, it was considered essential to incorporate eHealth researchers’ knowledge of these approaches. A concept mapping study was selected as the most suitable method for structuring the evaluation approaches suggested by the researchers and for exploring their views on the different phases of the “eHealth evaluation cycle.” Concept mapping is a qualitative research methodology that was introduced by Trochim and Linton in 1986 [28]. It can be used by a group of individuals to first determine the scope of ideas on a certain topic and then to structure these ideas [29]; there is no interaction between the participants. A typical concept mapping study consists of 5 phases: (1) selection of the participants; (2) brainstorm, in which participants generate the evaluation approaches; (3) sorting and rating of the evaluation approaches; (4) concept mapping analysis; and (5) interpretation and utilization of the concept map. In the next subsections, these 5 phases are described in more detail. Concept Systems Global MAX online software (2017) was used for these tasks [30]. A Venn diagram was drawn to visualize the overlap between the results of the scoping review (step 1) and the evaluation approaches suggested by participants (step 2).

Selection of the Participants

To include a wide cross-section of eHealth researchers and reduce the influence of “group think,” any researcher in contact with the authors who had any level of expertise in eHealth or evaluation research was considered a suitable participant for this concept mapping study; this helped ensure that all major perspectives on the eHealth evaluation topic were represented. Snowball sampling (ie, asking participants to recruit other researchers) was also included in the recruitment strategy. The target participants received an email describing the objective of the study and instructions on how to participate. A register was kept of the number of participants who were approached and who refused. In general, there are no established “rules” for how many participants should be included in a concept mapping study [31]. However, we estimated that 25 or more participants would be sufficient to generate an adequate number of evaluation approaches and to obtain representative sorting and rating results.

Brainstorm Phase: Generation of the List of Evaluation Approaches

In this phase, participants were asked to enter all of the evaluation approaches they were aware of into an online form using Global MAX software. We intentionally did not impose a strict definition of “evaluation approaches,” to keep the brainstorm phase as broad as possible and to avoid missing any methods because of an overly restrictive definition. The participants were not familiar with the results of the systematic scoping review. Participants were also asked 8 general background questions about their age, gender, background, years of experience in research, type of health care institute they work at, whether their daily work comprises eHealth, self-rated expertise in eHealth in general (grade 1-10), and self-rated expertise in eHealth evaluation approaches (grade 1-10).

Sorting and Rating Phases

The coordinating researcher (AR) reviewed the evaluation approaches suggested by the participants, checking whether each suggested approach truly represented a specific evaluation approach rather than, for instance, a broad methodological category such as “qualitative research.” If the coordinating researcher was unfamiliar with a suggested approach, PubMed or Google Scholar was searched for supporting information. The cleaned results were combined with the results of the systematic scoping review, omitting duplicate approaches. The resulting set of approaches was then presented back to the participants, who were instructed to sort the approaches into self-created categories while keeping the following question in mind: “To which phase of the research cycle (eg, planning, testing, implementation) does this evaluation approach belong?” To gain insight into the researchers’ opinions on the use of the evaluation approaches in daily practice and their suitability for effectiveness testing, the participants were asked the following three rating questions about each approach: (1) Does your research group use this approach, or did it do so in the past? (yes or no); (2) In your opinion, how important is it that researchers with an interest in eHealth are familiar with this approach? (1, unimportant; 2, less important; 3, very important; 4, absolutely essential); (3) In your opinion, how important is the approach for proving the effectiveness of eHealth? (1, unimportant; 2, less important; 3, very important; 4, absolutely essential).

Results of the first rating question are reported as percentages of how many participants use or used the approach. For the second and third questions related to familiarity with the approach and importance for proving effectiveness, respectively, average rating scores ranging from 1 to 4 for each evaluation approach and the proportion of participants who selected categories 3 or 4 are reported.
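The reporting scheme above (percentage of “yes” answers, mean 1-4 rating, and proportion of participants choosing category 3 or 4) can be sketched as follows; the function name and sample answers are illustrative, not the study’s data.

```python
def summarize_ratings(use_votes, ratings):
    """Summarize the rating questions for one evaluation approach.

    use_votes: yes/no answers to "does your group use this approach?"
    ratings:   1-4 importance scores from one of the two rating questions
    """
    pct_use = 100 * sum(v == "yes" for v in use_votes) / len(use_votes)
    mean_rating = sum(ratings) / len(ratings)
    # Proportion of participants who selected category 3 or 4.
    pct_high = 100 * sum(r >= 3 for r in ratings) / len(ratings)
    return pct_use, mean_rating, pct_high

# Hypothetical answers from four participants.
use = ["yes", "yes", "no", "yes"]
scores = [4, 3, 2, 3]
pct_use, mean_rating, pct_high = summarize_ratings(use, scores)
# pct_use = 75.0, mean_rating = 3.0, pct_high = 75.0
```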

Concept Mapping Analysis

Global MAX software uses a 3-step analysis to compute the concept map [32]. First, the sorting data from each participant were compiled into a similarity matrix. The matrix illustrates how many times each approach was sorted into similar categories. Second, the software applied a multidimensional scaling algorithm to plot points that were frequently sorted close together on a point map. A stress value (0-1), indicating the goodness of fit of the configuration of the point map, was calculated; the lower the stress value, the better the fit. In the last step, a hierarchical cluster analysis using the Ward algorithm was applied to group approaches into clusters (see also pages 87-100 of Kane and Trochim [33] for a detailed description of the data analyses to compute concept maps).
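The first analysis step, compiling individual sorting data into a similarity matrix, is well defined and can be sketched as below. The item names and sortings are hypothetical, and the subsequent multidimensional scaling and Ward clustering performed by Global MAX are not reproduced here.

```python
def similarity_matrix(sortings, items):
    """Compile concept-mapping sort data into a co-occurrence matrix.

    sortings: one entry per participant, each a list of self-created
              categories, each category a list of item names.
    Cell [i][j] counts how many participants placed items i and j
    in the same category.
    """
    index = {item: i for i, item in enumerate(items)}
    n = len(items)
    matrix = [[0] * n for _ in range(n)]
    for categories in sortings:
        for category in categories:
            for a in category:
                for b in category:
                    if a != b:
                        matrix[index[a]][index[b]] += 1
    return matrix

items = ["RCT", "stepped wedge trial", "think-aloud", "heuristic evaluation"]
sortings = [  # two hypothetical participants
    [["RCT", "stepped wedge trial"], ["think-aloud", "heuristic evaluation"]],
    [["RCT", "stepped wedge trial", "think-aloud"], ["heuristic evaluation"]],
]
matrix = similarity_matrix(sortings, items)
# Both participants paired "RCT" with "stepped wedge trial", so that cell is 2.
```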

Two authors (TN and AR) reviewed the concept maps ranging from a 7-cluster to a 3-cluster option. The guidance of Kane and Trochim [33] was followed to select the best fitting number of clusters. Once the best fitting number of clusters was identified, each evaluation approach on the concept map was reviewed by the two authors to check if the approach truly belonged to the assigned cluster. If the approach seemed to belong in an adjacent cluster, it was reassigned to that particular cluster. If an approach could be assigned to multiple clusters, the best fitting cluster was selected.

The average rating scores for the rating questions on familiarity with the approach and importance for proving effectiveness were used to create a 4-quadrant Go-Zone graph. The Go-Zone graph visualizes the evaluation approaches with above-average rating scores on both questions, which appear in the upper right quadrant. Approaches in the upper right quadrant that were also placed in the effectiveness testing cluster of the concept map are asterisked in the “eHealth methodology guide,” meaning that participants generally used these approaches and recommended them for evaluating effectiveness.
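Selecting the upper right quadrant of a Go-Zone graph amounts to keeping the approaches rated above average on both questions, as in this sketch; the approach names and mean ratings are illustrative only.

```python
def go_zone_upper_right(familiarity, effectiveness):
    """Return approaches rated above average on BOTH rating questions.

    familiarity / effectiveness: dicts mapping an approach name
    to its mean 1-4 rating on that question.
    """
    mean_fam = sum(familiarity.values()) / len(familiarity)
    mean_eff = sum(effectiveness.values()) / len(effectiveness)
    return sorted(
        name for name in familiarity
        if familiarity[name] > mean_fam and effectiveness[name] > mean_eff
    )

# Hypothetical mean ratings for four approaches.
familiarity = {"feasibility study": 3.6, "pragmatic RCT": 3.2,
               "vignette study": 2.0, "log-file analysis": 2.6}
effectiveness = {"feasibility study": 2.9, "pragmatic RCT": 3.3,
                 "vignette study": 1.6, "log-file analysis": 2.2}
upper_right = go_zone_upper_right(familiarity, effectiveness)
# Only the approaches above both means land in the upper right quadrant.
```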

Interpretation and Utilization of the Concept Map

The initial concept map clusters carried names that participants suggested when sorting the evaluation approaches into self-created categories. Because these cluster names were later used to constitute the phases of the “eHealth evaluation cycle,” three authors (TN, AR, and JW) determined, after multiple discussion sessions, the most appropriate names for the final concept map clusters. A name was considered appropriate when it was suggested by multiple participants and was representative of the “eHealth evaluation cycle,” meaning that all of the evaluation approaches could be logically subdivided under it. After updating the names, the concept map clusters still contained the evaluation approaches allocated by the participants. This subdivision of eHealth evaluation approaches was used as the content for the “eHealth methodology guide.”

Step 3: eHealth Methodology Guide

The unique evaluation approaches identified in the systematic scoping review and unique evaluation approaches described by the participants in the concept mapping study were brought together by authors TB and AR, and used as the content to develop the “eHealth methodology guide.” To logically subdivide the eHealth evaluation approaches and to increase researchers’ awareness of the existence of multiple evaluation study phases, an “eHealth evaluation cycle” was developed. The cycle was based on the cluster names of the concept map and on the common denominators of the “all phases” evaluation approaches from the systematic scoping review. Each unique evaluation approach was assigned to a specific evaluation study phase. If an approach could belong to multiple study phases, it was assigned to all applicable phases.
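The assignment of approaches to study phases, with one approach allowed to appear in multiple phases, is essentially a many-to-many mapping; the sketch below shows one way to build the phase-to-approaches lookup such a guide offers. The phase names follow the paper, but the example assignments are hypothetical, not the guide’s actual content.

```python
from collections import defaultdict

PHASES = ["conceptual and planning", "design", "development and usability",
          "pilot (feasibility)", "effectiveness (impact)",
          "uptake (implementation)", "all phases"]

# Illustrative assignments: each approach maps to one or more study phases.
assignments = {
    "think-aloud method": ["development and usability"],
    "stepped wedge (cluster) randomized trial": ["effectiveness (impact)"],
    "mixed methods": ["pilot (feasibility)", "effectiveness (impact)"],
    "CeHRes Roadmap": ["all phases"],
}

# Invert to a phase -> approaches index, the lookup an online guide provides.
guide = defaultdict(list)
for approach, phases in assignments.items():
    for phase in phases:
        assert phase in PHASES  # guard against typos in phase names
        guide[phase].append(approach)
```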

Step 1: Systematic Scoping Review

The systematic search retrieved 5971 articles from the databases. After removing duplicates, 5021 articles were screened by title and abstract. A total of 148 articles were selected for full-text assessment, of which 104 were excluded for the following reasons: not containing any named eHealth evaluation approach, not being an eHealth article, unclear description of the approach, no full-text version available, conference abstract, or other reasons. Through cross-referencing, 13 additional articles were added to the final selection. In total, 57 articles were included in the qualitative synthesis. The Cohen kappa coefficient was 0.49, which corresponds to “moderate agreement” between the two reviewers. Figure 1 presents the PRISMA flow diagram describing the selection process. The 57 articles described 50 unique eHealth evaluation approaches (Table 1); 19 of the 50 approaches were described by more than 1 article.

Table 1. Articles included in the systematic scoping review according to the evaluation approach adopted.
Reference | Year | Country | Evaluation approach
Chiasson et al [34] | 2007 | United Kingdom | Action research
Campbell and Yue [35] | 2014 | United States | Adaptive design; propensity score
Law and Wason [36] | 2014 | United Kingdom | Adaptive design
Mohr et al [16] | 2015 | United States | Behavioral intervention technology (BIT) model in Trials of Intervention Principles; SMART^a
Van Gemert-Pijnen et al [37] | 2011 | Netherlands | CeHRes^b Roadmap
Alpay et al [38] | 2018 | Netherlands | CeHRes Roadmap; Fog model; Oinas-Kukkonen model
Shaw [39] | 2002 | United Kingdom | CHEATS^c: a generic ICT^d evaluation framework
Kushniruk and Patel [40] | 2004 | Canada | Cognitive task analysis; user-centered design
Jaspers [41] | 2009 | Netherlands | Cognitive walkthrough; heuristic evaluation; think-aloud method
Khajouei et al [42] | 2017 | Iran | Cognitive walkthrough; heuristic evaluation
Van Engen-Verheul et al [43] | 2015 | Netherlands | Concept mapping
Mohr et al [44] | 2013 | United States | CEEBIT^e framework
Nicholas et al [45] | 2016 | Australia | CEEBIT framework; single-case experiment (N=1)
Bongiovanni-Delaroziere and Le Goff Pronost [46] | 2017 | France | Economic evaluation; HAS^f methodological framework
Fatehi et al [47] | 2017 | Australia | Five-stage model for comprehensive research on telehealth
Baker et al [48] | 2014 | United States | Fractional-factorial (ANOVA^g) design; SMART
Collins et al [49] | 2007 | United States | Fractional-factorial (ANOVA) design; MOST^h; SMART
Chumbler et al [14] | 2008 | United States | Interrupted time-series analysis; matched cohort study design
Grigsby et al [50] | 2006 | United States | Interrupted time-series analysis; pretest-posttest design
Liu and Wyatt [51] | 2001 | United Kingdom | Interrupted time-series analysis
Kontopantelis et al [52] | 2015 | United Kingdom | Interrupted time-series analysis
Catwell and Sheikh [53] | 2009 | United Kingdom | Life cycle–based approach
Han [54] | 2011 | United States | Life cycle–based approach
Sieverink [55] | 2017 | Netherlands | Logfile analysis
Kramer-Jackman and Popkess-Vawter [56] | 2008 | United States | Method for technology-delivered health care measures
Wilson et al [57] | 2018 | Canada | mHealth^i agile and user-centered research and development lifecycle
Jacobs and Graham [58] | 2016 | United States | mHealth development and evaluation framework; MOST
Dempsey et al [59] | 2015 | United States | Microrandomized trial; single-case experiment (N=1)
Klasnja et al [60] | 2015 | United States | Microrandomized trial; single-case experiment (N=1)
Law et al [61] | 2016 | United Kingdom | Microrandomized trial
Walton et al [62] | 2018 | United States | Microrandomized trial
Caffery et al [63] | 2017 | Australia | Mixed methods
Lee and Smith [64] | 2012 | United States | Mixed methods
Kidholm et al [65] | 2017 | Denmark | MAST^j
Kidholm et al [66] | 2018 | Denmark | MAST
Kummervold et al [67] | 2012 | Norway | Noninferiority trial
May [68] | 2006 | United Kingdom | Normalization process theory and checklist
Borycki et al [69] | 2016 | Canada | Participatory design; user-centered design
Clemensen et al [70] | 2017 | Denmark | Participatory design
Glasgow [71] | 2007 | United States | Practical clinical trial; RE-AIM^k framework
Danaher and Seeley [72] | 2007 | United States | Pragmatic randomized controlled trial; SMART; stage model of behavioral therapies research
Sadegh et al [73] | 2018 | Iran | Proposed framework for evaluating mHealth services
Harker and Kleijnen [74] | 2012 | United Kingdom | Rapid review
Glasgow et al [75] | 2014 | United States | RE-AIM framework
Almirall et al [76] | 2014 | United States | SMART
Ammenwerth et al [77] | 2012 | Austria | Simulation study
Jensen et al [78] | 2015 | Denmark | Simulation study
Dallery et al [79] | 2013 | United States | Single-case experiment (N=1)
Cresswell and Sheikh [80] | 2014 | United Kingdom | Sociotechnical evaluation
Kaufman et al [81] | 2006 | United States | Stead et al [82] evaluation framework
Brown and Lilford [83] | 2006 | United Kingdom | Stepped wedge (cluster) randomized trial
Hussey and Hughes [84] | 2007 | United States | Stepped wedge (cluster) randomized trial
Spiegelman [85] | 2016 | United States | Stepped wedge (cluster) randomized trial
Langbecker et al [86] | 2017 | Australia | Survey methods
Rönnby et al [87] | 2018 | Sweden | Technology acceptance model
Bastien [88] | 2010 | France | User-based evaluation
Nguyen et al [89] | 2007 | Canada | Waitlist control group design

^a SMART: Sequential Multiple Assignment Randomized Trial.
^b CeHRes: Centre for eHealth Research and Disease management.
^c CHEATS: clinical, human and organizational, educational, administrative, technical, and social evaluation framework.
^d ICT: information and communication technology.
^e CEEBIT: continuous evaluation of evolving behavioral intervention technology.
^f HAS: Haute Autorité de Santé (French National Authority for Health).
^g ANOVA: analysis of variance.
^h MOST: multiphase optimization strategy.
^i mHealth: mobile health.
^j MAST: Model for Assessment of Telemedicine Applications.
^k RE-AIM: Reach, Effectiveness, Adoption, Implementation, and Maintenance.

Step 2: Concept Mapping Study

Characteristics of the Participants

In total, 52 researchers were approached to participate in the concept mapping study, 43 (83%) of whom participated in the “brainstorm” phase. Reasons for refusing to participate were lack of time or not feeling skilled enough to contribute. Of the 43 initial participants, 27 (63%) completed the “sorting” phase and 32 (74%) answered the three rating questions of the “rating” phase. The characteristics of the participants in each phase are shown in Table 2. Participant characteristics did not change substantially across the study phases, with a mean participant age ranging from 39.9 to 40.5 years, a mean of approximately 13 years of research experience, and more than 70% of participants working in a university medical center. The majority of participants gave themselves high grades for their knowledge about eHealth but lower grades for their expertise in eHealth evaluation approaches.

Table 2. Characteristics of study participants for each phase of the concept mapping study.
Characteristic | Brainstorm phase | Sorting phase | Rating phase
Participants (n) | 43^a | 27 | 32^b
Age (years), mean (SD) | 39.9 (12.1) | 39.0 (12.6) | 40.5 (13)
Female gender, n (%) | 21 (49) | 16 (53) | 16 (50)
Research experience (years), mean (SD) | 13.5 (10.8) | 12.6 (10.5) | 13.9 (11)
Working in university medical center, n (%) | 37 (73) | 26 (72) | 27 (71)
Use of eHealth^c in daily practice, n (%)
  During clinic work, not EHR^d | 4 (7) | 3 (9) | 3 (8)
  During research | 32 (59) | 21 (60) | 23 (59)
  During clinic work and research | 10 (19) | 7 (20) | 8 (21)
  No | 1 (2) | 0 (0) | 1 (3)
  Other | 7 (13) | 4 (11) | 4 (10)
Knowledge about eHealth, n (%)
  Grade 1-2 | 0 (0) | 0 (0) | 0 (0)
  Grade 3-4 | 1 (2) | 1 (4) | 1 (3)
  Grade 5-6 | 2 (5) | 1 (4) | 1 (3)
  Grade 7-8 | 29 (71) | 17 (63) | 21 (68)
  Grade 9-10 | 9 (22) | 8 (30) | 8 (26)
Expertise about eHealth research methods, n (%)
  Grade 1-2 | 0 (0) | 0 (0) | 0 (0)
  Grade 3-4 | 1 (2) | 1 (4) | 1 (3)
  Grade 5-6 | 15 (37) | 8 (30) | 11 (36)
  Grade 7-8 | 19 (46) | 15 (56) | 15 (48)
  Grade 9-10 | 6 (15) | 3 (11) | 4 (13)
Background, n (%)
  Biology | 2 (3) | 1 (2) | 1 (2)
  Data science | 2 (3) | 1 (2) | 1 (2)
  Economics | 1 (1) | 1 (2) | 1 (2)
  Medicine | 24 (35) | 14 (30) | 18 (34)
  (Health) Science | 9 (13) | 6 (13) | 7 (13)
  Industrial design | 1 (1) | 1 (2) | 1 (2)
  Informatics | 4 (6) | 3 (7) | 3 (6)
  Communication and culture | 4 (6) | 3 (7) | 3 (6)
  Psychology | 14 (21) | 11 (24) | 12 (23)
  Other | 7 (10) | 5 (11) | 6 (11)

a43 participants participated in the sorting phase, but 41 participants answered the characteristics questions.

bOne of the 32 participants did not finish the third rating question: “importance for proving effectiveness.”

ceHealth: electronic health.

dEHR: electronic health record.

Brainstorm Phase

Forty-three participants took part in the online brainstorm phase, generating a total of 192 evaluation approaches. After removing duplicate or undefined approaches, 48 unique approaches remained (Multimedia Appendix 2). Only 23 of these 48 approaches (48%) overlapped with those identified in the systematic scoping review (Figure 2).

Sorting and Rating Phases

Based on the update of the scoping literature review at the end of the project, 13 additional evaluation approaches were found that could not be incorporated into the sorting and rating phases. Therefore, only 62 of the 75 unique approaches were presented to the participants in these phases. Participants were asked to sort the 62 evaluation approaches into as many self-created categories as they wished. Twenty-seven individuals completed this sorting exercise, suggesting between 4 and 16 categories each, with a mean of 8 (SD 4) categories.

The rating questions on use of the approach, familiarity with the approach, and importance for proving effectiveness were answered by 32, 32, and 31 participants, respectively. An analysis of responses to these three questions is presented in Table 3, and the mean ratings for familiarity with the approach and importance for proving effectiveness are plotted on the Go-Zone graph shown in Figure 3. The evaluation approach used most frequently by the participants was the questionnaire, with 100% responding “yes.” The approach that the participants used the least often was the Evaluative Questionnaire for eHealth Tools, at 3%. The average rating score for familiarity with the approach ranged from 1.9 for the stage model of behavioral therapies to 3.6 for the feasibility study. In addition, 88% of the participants considered it important or essential that researchers become familiar with the feasibility study method. The average rating score for importance for proving effectiveness ranged from 1.6 for the vignette study to 3.3 for the pragmatic RCT. In addition, 90% of the participants considered the stepped wedge trial design important or essential for proving the effectiveness of eHealth solutions.
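The two summary statistics reported per rating question in Table 3 (the mean rating and the percentage of respondents choosing category 3 or 4) follow from a short calculation. The function and ratings below are hypothetical, a minimal sketch of the aggregation rather than the authors' actual analysis code:

```python
# Hypothetical sketch: summarizing 4-point Likert ratings
# (1 = unimportant ... 4 = absolutely essential) for one evaluation approach.

def summarize_ratings(ratings):
    """Return (mean rating, % rating 3 or 4, (n, N)) for a list of ratings."""
    n_high = sum(1 for r in ratings if r >= 3)   # responses in categories 3+4
    mean = sum(ratings) / len(ratings)
    pct_high = 100 * n_high / len(ratings)
    return round(mean, 1), round(pct_high), (n_high, len(ratings))

# 10 invented respondents rating "familiarity with the approach"
ratings = [4, 3, 3, 2, 4, 3, 2, 3, 4, 1]
mean, pct, (n, total) = summarize_ratings(ratings)
print(f"mean {mean}, {pct}% rated 3 or 4 ({n}/{total})")  # mean 2.9, 70% rated 3 or 4 (7/10)
```

This mirrors the “mean” and “% of 3 + 4 (n/N)” columns of Table 3, where the denominator N varies per question because not every participant answered every item.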

Figure 2. Venn diagram showing the origin of the 75 unique evaluation approaches.
Table 3. Results of step 2: concept mapping study.

Evaluation approach [a] | Use of approach [b], % “yes” | Familiarity [c], mean | Familiarity, % of 3+4 (n/N) | Proving effectiveness [d], mean | Proving effectiveness, % of 3+4 (n/N)
Pilot/feasibility | 58 (SD 32.7) | 2.9 (SD 0.5) | - | 2.3 (SD 0.3) | -
3. Feasibility study [e] | 94 | 3.6 | 88 (28/42) | 2.6 | 52 (16/31)
4. Questionnaire [e] | 100 | 3.4 | 84 (27/63) | 2.5 | 52 (16/31)
8. Single-case experiments or n-of-1 study (N=1) | 28 | 2.5 | 43 (13/60) | 2.0 | 27 (8/30)
12. Action research study | 41 | 2.6 | 50 (15/58) | 2.3 | 38 (11/29)
44. A/B testing | 25 | 2.5 | 45 (13/58) | 2.2 | 36 (10/28)
Development and usability | 37 (SD 29.1) | 2.5 (SD 0.4) | - | 2.1 (SD 0.3) | -
5. Focus group (interview) | 91 | 3.2 | 81 (26/62) | 2.3 | 32 (10/31)
6. Interview | 94 | 3.1 | 75 (24/62) | 2.3 | 35 (11/31)
23. Think-aloud method | 66 | 2.6 | 52 (15/59) | 1.7 | 14 (4/29)
25. Cognitive walkthrough | 31 | 2.4 | 37 (11/59) | 1.8 | 17 (5/30)
27. eHealth [f] Analysis and Steering Instrument | 12 | 2.4 | 55 (16/58) | 2.4 | 48 (14/29)
28. Model for Assessment of Telemedicine applications (MAST) | 22 | 2.5 | 48 (14/59) | 2.4 | 37 (11/30)
29. Rapid review | 31 | 2.0 | 23 (7/58) | 1.8 | 7 (2/29)
30. eHealth Needs Assessment Questionnaire (ENAQ) | 6 | 2.4 | 45 (13/58) | 2.0 | 24 (7/29)
31. Evaluative Questionnaire for eHealth Tools (EQET) | 3 | 2.4 | 52 (15/58) | 2.3 | 41 (12/29)
32. Heuristic evaluation | 19 | 2.2 | 31 (9/57) | 2.1 | 24 (7/29)
33. Critical incident technique | 9 | 2.0 | 24 (7/59) | 1.8 | 4 (1/28)
36. Systematic review [e] | 94 | 3.1 | 67 (20/62) | 2.9 | 69 (20/29)
39. User-centered design methods [e] | 53 | 3.2 | 73 (22/62) | 2.5 | 50 (14/28)
43. Vignette study | 41 | 2.2 | 31 (9/58) | 1.6 | 7 (2/28)
45. Living lab | 34 | 2.5 | 41 (12/58) | 2.3 | 54 (15/28)
50. Method for technology-delivered health care measures | 9 | 2.3 | 39 (11/58) | 2.1 | 25 (7/28)
54. Cognitive task analysis (CTA) | 16 | 2.1 | 23 (7/59) | 1.9 | 18 (5/28)
60. Simulation study | 41 | 2.5 | 50 (15/60) | 2.2 | 34 (10/29)
62. Sociotechnical evaluation | 22 | 2.3 | 37 (11/60) | 2.1 | 29 (8/28)
All phases | 11 (SD 4) | 2.3 (SD 0.2) | - | 2.2 (SD 0.2) | -
21. Multiphase Optimization Strategy (MOST) | 6 | 2.3 | 45 (13/58) | 2.3 | 39 (11/28)
26. Continuous evaluation of evolving behavioral intervention technologies (CEEBIT) framework | 6 | 2.4 | 48 (14/60) | 2.3 | 38 (11/29)
40. RE-AIM [g] framework [e] | 19 | 2.6 | 61 (17/59) | 2.4 | 52 (14/27)
46. Normalization process model | 9 | 2.0 | 25 (7/57) | 1.9 | 18 (5/28)
48. CeHRes [h] Roadmap | 16 | 2.4 | 43 (12/58) | 2.3 | 41 (11/27)
49. Stead et al [82] evaluation framework | 12 | 2.2 | 38 (11/58) | 2.1 | 22 (6/27)
51. CHEATS [i]: a generic information communication technology evaluation framework | 6 | 2.3 | 41 (12/58) | 2.1 | 26 (7/27)
52. Stage Model of Behavioral Therapies Research | 9 | 1.9 | 21 (6/58) | 2.0 | 22 (6/27)
53. Life cycle–based approach to evaluation | 12 | 2.3 | 45 (13/58) | 2.0 | 21 (6/28)
Effectiveness testing | 45 (SD 23) | 2.6 (SD 0.3) | - | 2.6 (SD 0.4) | -
1. Mixed methods [e] | 87 | 3.2 | 81 (26/63) | 2.9 | 65 (20/31)
2. Pragmatic randomized controlled trial [e] | 62 | 3.1 | 77 (24/63) | 3.3 | 83 (25/30)
7. Cohort study (retrospective and prospective) [e] | 81 | 2.7 | 58 (18/61) | 2.5 | 58 (18/31)
9. Randomized controlled trial [e] | 91 | 3.3 | 71 (22/63) | 3.3 | 74 (23/31)
10. Crossover study [e] | 44 | 2.7 | 57 (17/61) | 2.7 | 59 (17/29)
11. Case series | 50 | 2.1 | 20 (6/60) | 1.8 | 10 (3/29)
13. Pretest-posttest study design [e] | 62 | 2.6 | 45 (14/60) | 2.5 | 50 (15/30)
14. Interrupted time-series study | 44 | 2.5 | 43 (13/59) | 2.7 | 59 (17/29)
15. Nested randomized controlled trial | 31 | 2.3 | 37 (11/59) | 2.8 | 55 (16/29)
16. Stepped wedge trial design [e] | 56 | 2.8 | 70 (21/60) | 3.2 | 90 (26/29)
17. Cluster randomized controlled trial [e] | 50 | 2.8 | 60 (18/60) | 3.1 | 69 (20/29)
19. Trials of intervention principles (TIPs) [e] | 23 | 2.5 | 42 (13/61) | 2.5 | 43 (13/30)
20. Sequential Multiple Assignment Randomized Trial (SMART) | 9 | 2.4 | 45 (13/58) | 2.7 | 62 (18/29)
35. (Fractional-)factorial design | 22 | 2.3 | 45 (13/58) | 2.2 | 36 (10/28)
37. Controlled before-after study (CBA) [e] | 37 | 2.6 | 50 (15/60) | 2.4 | 52 (15/29)
38. Controlled clinical trial/nonrandomized controlled trial (CCT/NRCT) [e] | 47 | 2.9 | 70 (21/60) | 2.9 | 71 (20/28)
41. Preference clinical trial (PCT) | 19 | 2.1 | 24 (7/58) | 2.1 | 25 (7/28)
42. Microrandomized trial | 9 | 2.2 | 24 (7/59) | 2.4 | 50 (14/28)
55. Cross-sectional study | 72 | 2.5 | 40 (12/60) | 2.1 | 29 (8/28)
56. Matched cohort study | 37 | 2.2 | 30 (9/59) | 2.3 | 46 (13/28)
57. Noninferiority trial design [e] | 53 | 2.6 | 47 (14/60) | 2.6 | 48 (14/29)
58. Adaptive design [e] | 19 | 2.6 | 52 (15/58) | 2.5 | 50 (14/28)
59. Waitlist control group design | 34 | 2.1 | 28 (8/59) | 2.0 | 32 (9/28)
61. Propensity score methodology | 31 | 2.1 | 30 (9/59) | 2.0 | 21 (6/29)
Implementation | 54 (SD 28) | 2.8 (SD 0.5) | - | 2.6 (SD 0.5) | -
18. Cost-effectiveness analysis | 81 | 3.4 | 87 (27/63) | 3.2 | 70 (21/30)
22. Methods comparison study | 16 | 2.0 | 17 (5/59) | 2.0 | 21 (6/28)
24. Patient reported outcome measures (PROMs) [e] | 84 | 3.1 | 80 (24/60) | 2.9 | 73 (22/30)
34. Transaction logfile analysis | 25 | 2.4 | 45 (13/57) | 2.1 | 21 (6/28)
47. Big data analysis [e] | 62 | 3.0 | 73 (22/61) | 2.8 | 59 (17/29)

[a] Approach identification numbers correspond with the numbers used in Figure 3 and Figure 4; the unnumbered rows are cluster summary rows.

[b] Based on the rating question: “does your research group use this approach, or did it do so in the past?”; the percentage of “yes” responses is shown.

[c] Based on the rating question: “according to your opinion, how important is it that researchers with an interest in eHealth become familiar with this approach?”; average rating scores range from unimportant (1) to absolutely essential (4), and the percentages of categories 3 plus 4 are presented.

[d] The “proving effectiveness” columns correspond with the rating question: “according to your opinion, how important is the approach for proving the effectiveness of eHealth?” Average rating scores range from unimportant (1) to absolutely essential (4), and the percentages of categories 3 plus 4 are presented.

[e] This approach scored above average on both the “familiarity with the approach” and “proving effectiveness” rating questions and is plotted in the upper right quadrant of the Go-Zone graph (Figure 3).

[f] eHealth: electronic health.

[g] RE-AIM: Reach, Effectiveness, Adoption, Implementation, and Maintenance.

[h] CeHRes: Centre for eHealth Research and Disease management.

[i] CHEATS: clinical, human and organizational, educational, administrative, technical, and social; a generic information communication technology evaluation framework.

Figure 3. Go-Zone graph. The numbers refer to the evaluation approaches listed in Table 3.
Figure 4. Concept map showing evaluation approaches grouped into five labeled clusters. The numbers refer to the approaches listed in Table 3.
Concept Mapping Analysis

Based on sorting data from 27 participants, a point map with a stress value of 0.27 was created. Compared with stress values reported in previous concept mapping studies, this represents a good fit [90,91]. In the next step, the software automatically clustered the points into the clusters shown on the concept map in Figure 4. A 5-cluster concept map was judged to represent the best fit for aggregating similar evaluation approaches into one cluster. Table 3 lists these clusters with average rating scores for the three rating questions and the approaches belonging to each cluster. With an average score of 2.9, the pilot/feasibility cluster showed the highest score on the familiarity with approach scale, whereas the “all phases” cluster showed the lowest average score at 2.3. With respect to responses to the importance for proving effectiveness question, the implementation cluster presented the highest average score at 2.6 and the development and usability cluster presented the lowest average score at 2.1.
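The analysis pipeline described above (pairwise sorting data aggregated into a similarity matrix, multidimensional scaling to a two-dimensional point map, and hierarchical clustering of the resulting points) can be sketched in a few lines. All sort data, item names, and parameter choices below are invented for illustration; the authors used dedicated concept mapping software, not this code:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Invented sorts: 6 "approaches" (A-F) sorted into piles by 3 participants.
sorts = [
    [["A", "B", "C"], ["D", "E", "F"]],   # participant 1's piles
    [["A", "B"], ["C", "D"], ["E", "F"]], # participant 2
    [["A", "B", "C"], ["D"], ["E", "F"]], # participant 3
]
items = ["A", "B", "C", "D", "E", "F"]
idx = {name: i for i, name in enumerate(items)}

# Similarity: fraction of participants who placed each pair in the same pile.
sim = np.zeros((len(items), len(items)))
for piles in sorts:
    for pile in piles:
        for a in pile:
            for b in pile:
                sim[idx[a], idx[b]] += 1
sim /= len(sorts)
dist = 1.0 - sim  # dissimilarity used for scaling

# Classical (Torgerson) multidimensional scaling to a 2-D point map.
n = len(items)
J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
B = -0.5 * J @ (dist ** 2) @ J               # double-centered squared distances
eigval, eigvec = np.linalg.eigh(B)
order = np.argsort(eigval)[::-1][:2]         # two largest eigenvalues
points = eigvec[:, order] * np.sqrt(np.maximum(eigval[order], 0))

# Hierarchical clustering of the 2-D points into a chosen number of clusters.
clusters = fcluster(linkage(points, method="ward"), t=2, criterion="maxclust")
print(dict(zip(items, clusters)))
```

The stress value reported above measures how faithfully such a 2-D map preserves the original sorting dissimilarities; choosing the number of clusters (5 in this study, 2 in this toy sketch) remains a judgment call.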

Twenty of the 62 methods (32%) received above-average scores for both the questions related to familiarity with the approach and importance for proving effectiveness, and therefore appear in the upper right quadrant of the Go-Zone graph (Figure 3) and are indicated in Table 3. The majority of these approaches (12/20, 60%) fall into the effectiveness testing cluster.
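The Go-Zone quadrant assignment follows a simple rule: an approach lands in the upper right quadrant when its mean ratings exceed the grand averages on both scales. A hypothetical sketch with a few example scores (the per-approach means are taken from Table 3, but the averages here are computed over this toy subset only, not over all 62 approaches):

```python
# Toy Go-Zone computation: (familiarity mean, effectiveness mean) per approach.
scores = {
    "feasibility study": (3.6, 2.6),
    "pragmatic RCT": (3.1, 3.3),
    "vignette study": (2.2, 1.6),
    "rapid review": (2.0, 1.8),
}

# Grand averages over this subset define the quadrant boundaries.
avg_fam = sum(f for f, _ in scores.values()) / len(scores)
avg_eff = sum(e for _, e in scores.values()) / len(scores)

# Upper right quadrant: above average on both scales.
go_zone = sorted(name for name, (f, e) in scores.items()
                 if f > avg_fam and e > avg_eff)
print(go_zone)
```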

Interpretation and Utilization of the Concept Mapping Study

The results of the concept map study were discussed within the team and the following names for the clusters were selected: “Development and usability,” “Pilot/feasibility,” “Effectiveness testing,” “Implementation,” and “All phases.”

Step 3: eHealth Methodology Guide

Fifty evaluation approaches were identified in the systematic scoping review and 48 approaches were described by participants in the brainstorm phase of the concept mapping study. As visualized in the Venn diagram (Figure 2), 23 approaches were identified in both studies. Therefore, in total, 75 (50 + 48 – 23) unique evaluation approaches were identified. Of the 23 approaches identified in both the literature review and the concept mapping study, 14 (61%) were described by more than one article.
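The count of unique approaches follows the inclusion-exclusion principle for two overlapping sets (50 + 48 - 23 = 75). A toy illustration with made-up approach names:

```python
# Inclusion-exclusion over two overlapping sources of evaluation approaches.
review = {"RCT", "cohort study", "focus group", "A/B testing"}     # 4 items
brainstorm = {"RCT", "focus group", "living lab"}                  # 3 items

overlap = review & brainstorm            # found in both sources
unique_total = len(review) + len(brainstorm) - len(overlap)

# Counting each overlapping item once matches the size of the set union.
assert unique_total == len(review | brainstorm)
print(unique_total)  # 4 + 3 - 2 = 5
```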

Based on the cluster names from the concept map (Figure 4; development and usability, pilot/feasibility, effectiveness testing, implementation, and all phases) and the evaluation approaches found in the systematic scoping review, an empirically based “eHealth evaluation cycle” was developed (Figure 5). The concept map did not reveal a conceptual and planning phase; however, because the systematic scoping review identified evaluation approaches belonging to this phase, it was added to the “eHealth evaluation cycle.”

This evaluation cycle is iterative, with consecutive evaluation study phases and an “all phases” cluster in the middle, which includes “all phases” evaluation frameworks, such as the Model for Assessment of Telemedicine applications, that are capable of evaluating multiple study phases [65]. The “eHealth evaluation cycle” was used to construct the “eHealth methodology guide” by subdividing the guide into the evaluation study phase themes. Within the guide, each of the 75 unique evaluation approaches is briefly described and allocated to its respective evaluation study phase(s). Note that a single evaluation approach may belong to multiple evaluation phases.

The “eHealth methodology guide” can be found in Multimedia Appendix 3 and is available online [92]. Because the “eHealth methodology guide” is web-based, it is easy to maintain and, more importantly, easy to extend as new evaluation approaches are proposed.

Figure 5. The “eHealth evaluation cycle” derived from empirical results of the scoping literature review and concept map study.

Discussion

Principal Findings

By carrying out a systematic scoping review and concept mapping study with eHealth researchers, we identified and aggregated 75 unique evaluation approaches into an online “eHealth methodology guide.” This online guide supports researchers in the field of eHealth to identify the appropriate study phase of the research cycle and choose an evaluation approach that is suitable for each particular study phase.

As indicated by the participants in the concept mapping study, the most frequently used eHealth evaluation approaches were the questionnaire (100%) and the feasibility study (94%). The participants were most familiar with the feasibility study (88%) and cost-effectiveness analysis (87%). In addition, they found the pragmatic RCT (83%) and the stepped wedge trial design (90%) to be the most suitable approaches for proving effectiveness in eHealth research. Although a wide array of alternative evaluation approaches is already available, well-known traditional evaluation approaches, including all of those described above, seemed to be most relevant to the participants. This suggests that eHealth research is still an immature field with too strong a focus on traditional evaluation approaches. However, to facilitate long-term implementation and safe use of novel eHealth solutions, evaluations using lesser-known approaches, such as those described in the online “eHealth methodology guide,” are also required.

The Go-Zone graph (Figure 3) confirms the practicing researchers’ familiarity with, and the importance they attach to, the traditional evaluation approaches for proving effectiveness. The majority of the 20 approaches in the upper right quadrant of this graph are well-known study designs such as the cohort study, the (pragmatic) RCT, and the controlled before-after study. Alternative and novel study designs (eg, instrumental variable analysis, interrupted time-series analysis) did not reach the upper right quadrant, possibly due to unfamiliarity.

Comparison with Previous Work

Ekeland et al [93] performed a systematic review of reviews to summarize methodologies used in telemedicine research, analyze knowledge gaps, and suggest methodological recommendations for further research. They assessed and extracted data from 50 reviews and performed a qualitative summary and analysis of methodologies. They recommended that larger and more rigorous controlled studies are needed, including standardization of methodological aspects, to produce better evidence for the effectiveness of telemedicine. This is in line with our study, which provides easy access to, and an overview of, current approaches for eHealth evaluation throughout the research cycle. However, our work extends beyond effectiveness to cover the many other questions arising when developing and implementing eHealth tools. Aldossary et al [94] also performed a review to identify evaluations of deployed telemedicine services in hospitals, and to report methods used to evaluate service implementation. The authors included 164 papers describing 137 studies in the qualitative synthesis. They showed that 83 of the 137 studies used a descriptive evaluation methodology to report information about their activities, and 27 of the 137 studies evaluated clinical outcomes by the use of “traditional” study designs such as nonrandomized open intervention studies. Although the authors also reported methods to evaluate implementation, an overview of all evaluation study phases was lacking. In addition, no suggestions for alternative evaluation approaches were provided. Enam et al [27] developed an evaluation model consisting of multiple evaluation phases. The authors conducted a literature review to elucidate how the evidence of effectiveness and efficiency of eHealth can be generated through evaluation. 
They emphasized that robust evidence of effectiveness and efficiency can be generated when evaluation is conducted through all distinct phases of eHealth intervention development (design, pretesting, pilot study, pragmatic trial, evaluation, and postintervention). This partially aligns with our study aim and matches the “eHealth evaluation cycle” and online “eHealth methodology guide” developed as a result of our study. However, we added specific evaluation approaches to be used in each study phase and also incorporated other existing “all phases” research models.

Strengths and Limitations

One of the greatest strengths of this study was the combination of the scoping review and the concept mapping study. The scoping review focused on finding eHealth-specific evaluation approaches. In contrast, in the concept mapping study, the participants were asked to write down any approach they were aware of that could contribute to the evaluation of eHealth. This slight discrepancy was intentional because we particularly wanted to find evaluation approaches that are actually being used in daily research practice to evaluate eHealth solutions. Therefore, the results from the systematic scoping review and the concept mapping study complement and reinforce each other, thereby contributing to an “eHealth methodology guide” that is as complete as possible.

Another strength of this project was the level of knowledge and experience of the eHealth researchers who participated in the concept mapping study. They had approximately 13 years of eHealth research experience, and the majority graded themselves high for knowledge about eHealth. Interestingly, they gave themselves lower grades for their expertise in eHealth evaluation approaches. This means that we likely included an average group of eHealth researchers rather than only the top researchers in the field of eHealth methodology. In our view, we had a representative sample of average eHealth researchers, who are also the target end users of our online “eHealth methodology guide.” This supports the generalizability and implementability of our project. However, the fact that more than 70% of participants worked in university medical centers may slightly limit the generalizability of our work to nonacademic researchers. It would be wise to keep an eye out for positive deviants outside university medical centers and for users who are not senior academic “expert” eHealth researchers [95]. Wandering slightly off the beaten track may well be necessary to find the innovative evaluation approaches and dissemination opportunities needed for sustainable implementation.

A limitation of our study was the date restriction of the systematic scoping review. We performed a broad systematic search but limited it to English language articles published from January 1, 2006, onward to keep the number of articles manageable. This could explain why some approaches, especially those published before 2006, were not found.

Another weakness of our study was that the systematic search was updated after the concept mapping exercise was complete. Therefore, 13 of the 75 evaluation approaches were not reviewed by the participants in the sorting and rating phases of the concept mapping study. However, this will also occur with every new approach added to the online “eHealth methodology guide” in the future, as the aim is to update the guide frequently.

Future Perspectives

This first version of the “eHealth methodology guide” contains short descriptions of the 75 evaluation approaches and references describing the approaches in more detail. In the next version, we aim to include information on the level of complexity and other relevant resource requirements of each approach. Moreover, case example references will be added to the evaluation approaches to support users in selecting an appropriate approach. Further, in the coming years, we aim to subject the “eHealth methodology guide” to an expert evaluation to assess the quality and ranking of the evaluation approaches, since this was not part of the present study. Finally, we are discussing collaboration and integration with the European Federation for Medical Informatics EVAL (Assessment of Health Information Systems) working group.


Conclusions

In this project, 75 unique eHealth evaluation approaches were identified in a scoping review and concept mapping study and served as content for the online “eHealth methodology guide.” The online “eHealth methodology guide” could be a step forward in supporting developers and evaluators in selecting a suitable evaluation approach in relation to the specific study phase of the “eHealth evaluation cycle.” Overall, the guide aims to enhance quality and safety, and to facilitate long-term implementation of novel eHealth solutions.


Acknowledgments

We thank the following individual study participants of the eHealth Evaluation Research Group for their contributions to the concept mapping study: AM Hooghiemstra, ASHM van Dalen, DT Ubbink, E Tensen, HAW Meijer, H Ossebaard, IM Verdonck, JK Sont, J Breedvelt, JFM van den Heuvel, L Siemons, L Wesselman, MJM Breteler, MJ Schuuring, M Jansen, MMH Lahr, MM van der Vlist, NF Keizer, P Kubben, PM Bossuyt, PJM van den Boog, RB Kool, VT Visch, and WA Spoelman. We would like to acknowledge the Netherlands Organization for Health Research and Development (ZonMw) and the Netherlands Federation of University Medical Centres for their financial support through the “Citrienfund - program eHealth” (grant number 839201005). We also acknowledge Terralemon for development and support of the online “eHealth methodology guide.”

Authors' Contributions

TB, AR, MS, MK, and NC designed the study. TB, AR, and MK performed the systematic scoping review. AR set up the online concept mapping software, invited participants, and coordinated data collection. TB, AR, JW, MK, LW, HR, LGP, MS, and NC engaged, alongside the eHealth Evaluation Collaborators Group, in the exercises of the concept mapping study. TB, AR, and JW analyzed data and interpreted the study results. TB and AR wrote the first draft. AR created the tables and figures. TB, AR, JW, MK, HR, LGP, KC, AS, MS, and NC contributed to the redrafting of the manuscript. All authors approved the final version of the manuscript for submission.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Search strategy.

DOCX File , 13 KB

Multimedia Appendix 2

List of 48 unique electronic health (eHealth) evaluation approaches suggested by participants of the concept mapping study.

DOCX File , 13 KB

Multimedia Appendix 3

eHealth methodology guide.

DOCX File , 288 KB

  1. From innovation to implementation: eHealth in the WHO European Region. World Health Organization. 2016.   URL: http://www.__data/assets/pdf_file/0012/302331/From-Innovation-to-Implementation-eHealth-Report-EU.pdf?ua=1 [accessed 2020-01-01]
  2. de la Torre-Díez I, López-Coronado M, Vaca C, Aguado JS, de Castro C. Cost-utility and cost-effectiveness studies of telemedicine, electronic, and mobile health systems in the literature: a systematic review. Telemed J E Health 2015 Feb;21(2):81-85 [FREE Full text] [CrossRef] [Medline]
  3. Sanyal C, Stolee P, Juzwishin D, Husereau D. Economic evaluations of eHealth technologies: A systematic review. PLoS One 2018;13(6):e0198112 [FREE Full text] [CrossRef] [Medline]
  4. Flodgren G, Rachas A, Farmer AJ, Inzitari M, Shepperd S. Interactive telemedicine: effects on professional practice and health care outcomes. Cochrane Database Syst Rev 2015 Sep 07(9):CD002098 [FREE Full text] [CrossRef] [Medline]
  5. Marcolino MS, Oliveira JAQ, D'Agostino M, Ribeiro AL, Alkmim MBM, Novillo-Ortiz D. The Impact of mHealth Interventions: Systematic Review of Systematic Reviews. JMIR Mhealth Uhealth 2018 Jan 17;6(1):e23 [FREE Full text] [CrossRef] [Medline]
  6. Elbert NJ, van Os-Medendorp H, van Renselaar W, Ekeland AG, Hakkaart-van Roijen L, Raat H, et al. Effectiveness and cost-effectiveness of ehealth interventions in somatic diseases: a systematic review of systematic reviews and meta-analyses. J Med Internet Res 2014 Apr 16;16(4):e110 [FREE Full text] [CrossRef] [Medline]
  7. Olff M. Mobile mental health: a challenging research agenda. Eur J Psychotraumatol 2015;6:27882 [FREE Full text] [CrossRef] [Medline]
  8. Feehan LM, Geldman J, Sayre EC, Park C, Ezzat AM, Yoo JY, et al. Accuracy of Fitbit Devices: Systematic Review and Narrative Syntheses of Quantitative Data. JMIR Mhealth Uhealth 2018 Aug 09;6(8):e10527 [FREE Full text] [CrossRef] [Medline]
  9. Sheikh A, Cornford T, Barber N, Avery A, Takian A, Lichtner V, et al. Implementation and adoption of nationwide electronic health records in secondary care in England: final qualitative results from prospective national evaluation in "early adopter" hospitals. BMJ 2011 Oct 17;343:d6054 [FREE Full text] [CrossRef] [Medline]
  10. Scott RE, Mars M. Principles and framework for eHealth strategy development. J Med Internet Res 2013 Jul 30;15(7):e155 [FREE Full text] [CrossRef] [Medline]
  11. Vandenbroucke JP. Observational research, randomised trials, and two views of medical science. PLoS Med 2008 Mar 11;5(3):e67 [FREE Full text] [CrossRef] [Medline]
  12. Brender J. Evaluation of health information applications--challenges ahead of us. Methods Inf Med 2006;45(1):62-66. [Medline]
  13. Kaplan B. Evaluating informatics applications--some alternative approaches: theory, social interactionism, and call for methodological pluralism. Int J Med Inform 2001 Nov;64(1):39-56. [CrossRef] [Medline]
  14. Chumbler NR, Kobb R, Brennan DM, Rabinowitz T. Recommendations for research design of telehealth studies. Telemed J E Health 2008 Nov;14(9):986-989. [CrossRef] [Medline]
  15. de Lusignan S, Crawford L, Munro N. Creating and using real-world evidence to answer questions about clinical effectiveness. J Innov Health Inform 2015 Nov 04;22(3):368-373. [CrossRef] [Medline]
  16. Mohr DC, Schueller SM, Riley WT, Brown CH, Cuijpers P, Duan N, et al. Trials of Intervention Principles: Evaluation Methods for Evolving Behavioral Intervention Technologies. J Med Internet Res 2015 Jul 08;17(7):e166 [FREE Full text] [CrossRef] [Medline]
  17. Riley WT, Glasgow RE, Etheredge L, Abernethy AP. Rapid, responsive, relevant (R3) research: a call for a rapid learning health research enterprise. Clin Transl Med 2013 May 10;2(1):10. [CrossRef] [Medline]
  18. Wyatt JC. How can clinicians, specialty societies and others evaluate and improve the quality of apps for patient use? BMC Med 2018 Dec 03;16(1):225 [FREE Full text] [CrossRef] [Medline]
  19. Black AD, Car J, Pagliari C, Anandan C, Cresswell K, Bokun T, et al. The impact of eHealth on the quality and safety of health care: a systematic overview. PLoS Med 2011 Jan 18;8(1):e1000387 [FREE Full text] [CrossRef] [Medline]
  20. Murray E, Hekler EB, Andersson G, Collins LM, Doherty A, Hollis C, et al. Evaluating Digital Health Interventions: Key Questions and Approaches. Am J Prev Med 2016 Nov;51(5):843-851 [FREE Full text] [CrossRef] [Medline]
  21. Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M, Medical Research Council Guidance. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ 2008 Sep 29;337:a1655 [FREE Full text] [CrossRef] [Medline]
  22. Greenhalgh T, Wherton J, Papoutsi C, Lynch J, Hughes G, A'Court C, et al. Beyond Adoption: A New Framework for Theorizing and Evaluating Nonadoption, Abandonment, and Challenges to the Scale-Up, Spread, and Sustainability of Health and Care Technologies. J Med Internet Res 2017 Nov 01;19(11):e367 [FREE Full text] [CrossRef] [Medline]
  23. Nykänen P, Brender J, Talmon J, de Keizer N, Rigby M, Beuscart-Zephir M, et al. Guideline for good evaluation practice in health informatics (GEP-HI). Int J Med Inform 2011 Dec;80(12):815-827. [CrossRef] [Medline]
  24. Nykänen P, Kaipio J. Quality of Health IT Evaluations. Stud Health Technol Inform 2016;222:291-303. [Medline]
  25. Brender J. Handbook of Evaluation Methods for Health Informatics. Cambridge, MA: Academic Press/Elsevier; 2006.
  26. Ammenwerth E, Rigby M. Evidence-Based Health Informatics. In: Studies in Health Technology and Informatics. Amsterdam: IOS press; 2016.
  27. Enam A, Torres-Bonilla J, Eriksson H. Evidence-Based Evaluation of eHealth Interventions: Systematic Literature Review. J Med Internet Res 2018 Nov 23;20(11):e10971 [FREE Full text] [CrossRef] [Medline]
  28. Trochim WM, Linton R. Conceptualization for planning and evaluation. Eval Program Plann 1986;9(4):289-308. [CrossRef] [Medline]
  29. Trochim WM. An introduction to concept mapping for planning and evaluation. Eval Program Plan 1989 Jan;12(1):1-16. [CrossRef]
  30. Concept Systems Incorporated. Global MAXTM.   URL: [accessed 2019-03-06]
  31. Trochim WM, McLinden D. Introduction to a special issue on concept mapping. Eval Program Plann 2017 Feb;60:166-175. [CrossRef] [Medline]
  32. Group Concept Mapping Resource Guide. groupwisdom.   URL: [accessed 2019-01-16]
  33. Kane M, Trochim W. Concept Mapping for Planning and Evaluation. Thousand Oaks, CA: Sage Publications Inc; 2007.
  34. Chiasson M, Reddy M, Kaplan B, Davidson E. Expanding multi-disciplinary approaches to healthcare information technologies: what does information systems offer medical informatics? Int J Med Inform 2007 Jun;76(Suppl 1):S89-S97. [CrossRef] [Medline]
  35. Campbell G, Yue LQ. Statistical innovations in the medical device world sparked by the FDA. J Biopharm Stat 2016 Sep 15;26(1):3-16. [CrossRef] [Medline]
  36. Law LM, Wason JMS. Design of telehealth trials--introducing adaptive approaches. Int J Med Inform 2014 Dec;83(12):870-880 [FREE Full text] [CrossRef] [Medline]
  37. van Gemert-Pijnen JEWC, Nijland N, van Limburg M, Ossebaard HC, Kelders SM, Eysenbach G, et al. A holistic framework to improve the uptake and impact of eHealth technologies. J Med Internet Res 2011 Dec 05;13(4):e111 [FREE Full text] [CrossRef] [Medline]
  38. Alpay L, Doms R, Bijwaard H. Embedding persuasive design for self-health management systems in Dutch healthcare informatics education: Application of a theory-based method. Health Informatics J 2019 Dec;25(4):1631-1646. [CrossRef] [Medline]
  39. Shaw NT. ‘CHEATS’: a generic information communication technology (ICT) evaluation framework. Comput Biol Med 2002 May;32(3):209-220. [CrossRef]
  40. Kushniruk AW, Patel VL. Cognitive and usability engineering methods for the evaluation of clinical information systems. J Biomed Inform 2004 Feb;37(1):56-76 [FREE Full text] [CrossRef] [Medline]
  41. Jaspers MWM. A comparison of usability methods for testing interactive health technologies: methodological aspects and empirical evidence. Int J Med Inform 2009 May;78(5):340-353. [CrossRef] [Medline]
  42. Khajouei R, Zahiri Esfahani M, Jahani Y. Comparison of heuristic and cognitive walkthrough usability evaluation methods for evaluating health information systems. J Am Med Inform Assoc 2017 Apr 01;24(e1):e55-e60. [CrossRef] [Medline]
  43. van Engen-Verheul M, Peek N, Vromen T, Jaspers M, de Keizer N. How to use concept mapping to identify barriers and facilitators of an electronic quality improvement intervention. Stud Health Technol Inform 2015;210:110-114. [Medline]
  44. Mohr DC, Cheung K, Schueller SM, Hendricks Brown BC, Duan N. Continuous evaluation of evolving behavioral intervention technologies. Am J Prev Med 2013 Oct;45(4):517-523 [FREE Full text] [CrossRef] [Medline]
  45. Nicholas J, Boydell K, Christensen H. mHealth in psychiatry: time for methodological change. Evid Based Ment Health 2016 May;19(2):33-34. [CrossRef] [Medline]
  46. Bongiovanni-Delarozière I, Le Goff-Pronost M. Economic evaluation methods applied to telemedicine: From a literature review to a standardized framework. Eur Res Telemed 2017 Nov;6(3-4):117-135. [CrossRef]
  47. Fatehi F, Smith AC, Maeder A, Wade V, Gray LC. How to formulate research questions and design studies for telehealth assessment and evaluation. J Telemed Telecare 2017 Oct;23(9):759-763. [CrossRef] [Medline]
  48. Baker TB, Gustafson DH, Shah D. How can research keep up with eHealth? Ten strategies for increasing the timeliness and usefulness of eHealth research. J Med Internet Res 2014 Feb 19;16(2):e36 [FREE Full text] [CrossRef] [Medline]
  49. Collins LM, Murphy SA, Strecher V. The multiphase optimization strategy (MOST) and the sequential multiple assignment randomized trial (SMART): new methods for more potent eHealth interventions. Am J Prev Med 2007 May;32(Suppl 5):S112-S118 [FREE Full text] [CrossRef] [Medline]
  50. Grigsby J, Bennett RE. Alternatives to randomized controlled trials in telemedicine. J Telemed Telecare 2006;12(Suppl 2):S77-S84. [CrossRef] [Medline]
  51. Liu JLY, Wyatt JC. The case for randomized controlled trials to assess the impact of clinical information systems. J Am Med Inform Assoc 2011;18(2):173-180 [FREE Full text] [CrossRef] [Medline]
  52. Kontopantelis E, Doran T, Springate DA, Buchan I, Reeves D. Regression based quasi-experimental approach when randomisation is not an option: interrupted time series analysis. BMJ 2015 Jun 09;350:h2750 [FREE Full text] [CrossRef] [Medline]
  53. Catwell L, Sheikh A. Evaluating eHealth interventions: the need for continuous systemic evaluation. PLoS Med 2009 Aug;6(8):e1000126 [FREE Full text] [CrossRef] [Medline]
  54. Han JY. Transaction logfile analysis in health communication research: challenges and opportunities. Patient Educ Couns 2011 Mar;82(3):307-312. [CrossRef] [Medline]
  55. Sieverink F, Kelders S, Poel M, van Gemert-Pijnen L. Opening the Black Box of Electronic Health: Collecting, Analyzing, and Interpreting Log Data. JMIR Res Protoc 2017 Aug 07;6(8):e156 [FREE Full text] [CrossRef] [Medline]
  56. Kramer-Jackman KL, Popkess-Vawter S. Method for technology-delivered healthcare measures. Comput Inform Nurs 2011 Dec;29(12):730-740. [CrossRef] [Medline]
  57. Wilson K, Bell C, Wilson L, Witteman H. Agile research to complement agile development: a proposal for an mHealth research lifecycle. NPJ Digit Med 2018 Sep 13;1(1):46 [FREE Full text] [CrossRef] [Medline]
  58. Jacobs MA, Graham AL. Iterative development and evaluation methods of mHealth behavior change interventions. Curr Opin Psychol 2016 Jun;9:33-37. [CrossRef]
  59. Dempsey W, Liao P, Klasnja P, Nahum-Shani I, Murphy SA. Randomised trials for the Fitbit generation. Signif (Oxf) 2015 Dec 01;12(6):20-23 [FREE Full text] [CrossRef] [Medline]
  60. Klasnja P, Hekler EB, Shiffman S, Boruvka A, Almirall D, Tewari A, et al. Microrandomized trials: An experimental design for developing just-in-time adaptive interventions. Health Psychol 2015 Dec;34S:1220-1228 [FREE Full text] [CrossRef] [Medline]
  61. Law L, Edirisinghe N, Wason JM. Use of an embedded, micro-randomised trial to investigate non-compliance in telehealth interventions. Clin Trials 2016 Aug;13(4):417-424. [CrossRef] [Medline]
  62. Walton A, Nahum-Shani I, Crosby L, Klasnja P, Murphy S. Optimizing Digital Integrated Care via Micro-Randomized Trials. Clin Pharmacol Ther 2018 Jul 19;104(1):53-58 [FREE Full text] [CrossRef] [Medline]
  63. Caffery LJ, Martin-Khan M, Wade V. Mixed methods for telehealth research. J Telemed Telecare 2017 Oct;23(9):764-769. [CrossRef] [Medline]
  64. Lee S, Smith CAM. Criteria for quantitative and qualitative data integration: mixed-methods research methodology. Comput Inform Nurs 2012 May;30(5):251-256. [CrossRef] [Medline]
  65. Kidholm K, Clemensen J, Caffery LJ, Smith AC. The Model for Assessment of Telemedicine (MAST): A scoping review of empirical studies. J Telemed Telecare 2017 Oct;23(9):803-813. [CrossRef] [Medline]
  66. Kidholm K, Jensen LK, Kjølhede T, Nielsen E, Horup MB. Validity of the Model for Assessment of Telemedicine: A Delphi study. J Telemed Telecare 2018 Feb;24(2):118-125. [CrossRef] [Medline]
  67. Kummervold PE, Johnsen JK, Skrøvseth SO, Wynn R. Using noninferiority tests to evaluate telemedicine and e-health services: systematic review. J Med Internet Res 2012 Sep 28;14(5):e132 [FREE Full text] [CrossRef] [Medline]
  68. May C. A rational model for assessing and evaluating complex interventions in health care. BMC Health Serv Res 2006 Jul 07;6:86 [FREE Full text] [CrossRef] [Medline]
  69. Borycki E, Dexheimer JW, Hullin Lucay Cossio C, Gong Y, Jensen S, Kaipio J, et al. Methods for Addressing Technology-induced Errors: The Current State. Yearb Med Inform 2016 Nov 10;(1):30-40 [FREE Full text] [CrossRef] [Medline]
  70. Clemensen J, Rothmann MJ, Smith AC, Caffery LJ, Danbjorg DB. Participatory design methods in telemedicine research. J Telemed Telecare 2017 Oct;23(9):780-785. [CrossRef] [Medline]
  71. Glasgow RE. eHealth evaluation and dissemination research. Am J Prev Med 2007 May;32(Suppl 5):S119-S126. [CrossRef] [Medline]
  72. Danaher BG, Seeley JR. Methodological issues in research on web-based behavioral interventions. Ann Behav Med 2009 Aug;38(1):28-39 [FREE Full text] [CrossRef] [Medline]
  73. Sadegh SS, Khakshour Saadat P, Sepehri MM, Assadi V. A framework for m-health service development and success evaluation. Int J Med Inform 2018 Apr;112:123-130. [CrossRef] [Medline]
  74. Harker J, Kleijnen J. What is a rapid review? A methodological exploration of rapid reviews in Health Technology Assessments. Int J Evid Based Healthc 2012 Dec;10(4):397-410. [CrossRef] [Medline]
  75. Glasgow RE, Phillips SM, Sanchez MA. Implementation science approaches for integrating eHealth research into practice and policy. Int J Med Inform 2014 Jul;83(7):e1-e11. [CrossRef] [Medline]
  76. Almirall D, Nahum-Shani I, Sherwood NE, Murphy SA. Introduction to SMART designs for the development of adaptive interventions: with application to weight loss research. Transl Behav Med 2014 Sep;4(3):260-274 [FREE Full text] [CrossRef] [Medline]
  77. Ammenwerth E, Hackl WO, Binzer K, Christoffersen TEH, Jensen S, Lawton K, et al. Simulation studies for the evaluation of health information technologies: experiences and results. Health Inf Manag 2012 Jun;41(2):14-21. [CrossRef] [Medline]
  78. Jensen S, Kushniruk AW, Nøhr C. Clinical simulation: A method for development and evaluation of clinical information systems. J Biomed Inform 2015 Apr;54:65-76 [FREE Full text] [CrossRef] [Medline]
  79. Dallery J, Cassidy RN, Raiff BR. Single-case experimental designs to evaluate novel technology-based health interventions. J Med Internet Res 2013 Feb 08;15(2):e22 [FREE Full text] [CrossRef] [Medline]
  80. Cresswell KM, Sheikh A. Undertaking sociotechnical evaluations of health information technologies. Inform Prim Care 2014 Mar 18;21(2):78-83. [CrossRef] [Medline]
  81. Kaufman D, Roberts WD, Merrill J, Lai T, Bakken S. Applying an evaluation framework for health information system design, development, and implementation. Nurs Res 2006;55(Suppl 2):S37-S42. [CrossRef] [Medline]
  82. Stead M, Hastings G, Eadie D. The challenge of evaluating complex interventions: a framework for evaluating media advocacy. Health Educ Res 2002 Jun;17(3):351-364. [CrossRef] [Medline]
  83. Brown CA, Lilford RJ. The stepped wedge trial design: a systematic review. BMC Med Res Methodol 2006 Nov 08;6:54 [FREE Full text] [CrossRef] [Medline]
  84. Hussey MA, Hughes JP. Design and analysis of stepped wedge cluster randomized trials. Contemp Clin Trials 2007 Feb;28(2):182-191. [CrossRef] [Medline]
  85. Spiegelman D. Evaluating Public Health Interventions: 2. Stepping Up to Routine Public Health Evaluation With the Stepped Wedge Design. Am J Public Health 2016 Mar;106(3):453-457. [CrossRef] [Medline]
  86. Langbecker D, Caffery LJ, Gillespie N, Smith AC. Using survey methods in telehealth research: A practical guide. J Telemed Telecare 2017 Oct;23(9):770-779. [CrossRef] [Medline]
  87. Rönnby S, Lundberg O, Fagher K, Jacobsson J, Tillander B, Gauffin H, et al. mHealth Self-Report Monitoring in Competitive Middle- and Long-Distance Runners: Qualitative Study of Long-Term Use Intentions Using the Technology Acceptance Model. JMIR Mhealth Uhealth 2018 Aug 13;6(8):e10270 [FREE Full text] [CrossRef] [Medline]
  88. Bastien JMC. Usability testing: a review of some methodological and technical aspects of the method. Int J Med Inform 2010 Apr;79(4):e18-e23. [CrossRef] [Medline]
  89. Nguyen HQ, Cuenco D, Wolpin S, Benditt J, Carrieri-Kohlman V. Methodological considerations in evaluating eHealth interventions. Can J Nurs Res 2007 Mar;39(1):116-134. [Medline]
  90. Brennan LK, Brownson RC, Kelly C, Ivey MK, Leviton LC. Concept mapping: priority community strategies to create changes to support active living. Am J Prev Med 2012 Nov;43(5 Suppl 4):S337-S350 [FREE Full text] [CrossRef] [Medline]
  91. van Engen-Verheul MM, Peek N, Haafkens JA, Joukes E, Vromen T, Jaspers MWM, et al. What is needed to implement a web-based audit and feedback intervention with outreach visits to improve care quality: A concept mapping study among cardiac rehabilitation teams. Int J Med Inform 2017 Jan;97:76-85. [CrossRef] [Medline]
  92. Online eHealth methodology guide. NFU eHealth. URL: https://www..e-health-toolkit/onderzoek/e-health-evaluation-methodology/overview-of-methods [accessed 2020-01-01]
  93. Ekeland AG, Bowes A, Flottorp S. Methodologies for assessing telemedicine: a systematic review of reviews. Int J Med Inform 2012 Jan;81(1):1-11. [CrossRef] [Medline]
  94. AlDossary S, Martin-Khan MG, Bradford NK, Smith AC. A systematic review of the methodologies used to evaluate telemedicine service initiatives in hospital facilities. Int J Med Inform 2017 Jan;97:171-194. [CrossRef] [Medline]
  95. Baxter R, Taylor N, Kellar I, Lawton R. What methods are used to apply positive deviance within healthcare organisations? A systematic review. BMJ Qual Saf 2016 Mar 20;25(3):190-201 [FREE Full text] [CrossRef] [Medline]

eHealth: electronic health
GEP-HI: Good Evaluation Practice in Health Informatics
MRC: Medical Research Council
NASSS: Nonadoption, Abandonment, and challenges to Scale-up, Spread, and Sustainability
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
RCT: randomized controlled trial

Edited by G Eysenbach; submitted 12.01.20; peer-reviewed by E Ammenwerth, P Nykänen, F Mayoral; comments to author 17.03.20; revised version received 09.05.20; accepted 03.06.20; published 12.08.20


©Tobias N Bonten, Anneloek Rauwerdink, Jeremy C Wyatt, Marise J Kasteleyn, Leonard Witkamp, Heleen Riper, Lisette JEWC van Gemert-Pijnen, Kathrin Cresswell, Aziz Sheikh, Marlies P Schijven, Niels H Chavannes, EHealth Evaluation Research Group. Originally published in the Journal of Medical Internet Research, 12.08.2020.

This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication, as well as this copyright and license information must be included.