Published in Vol 28 (2026)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/86879.
Validation of the Updated Digital Health Literacy Instrument and Development of a Short Form: Online Survey Study of the General Population


1Centre of Expertise Health Innovation, Technology for Health Care Research Group, The Hague University of Applied Sciences, Johanna Westerdijkplein 75, Den Haag, The Netherlands

2Department of Health and Social Studies, NHL Stenden University of Applied Sciences, Leeuwarden, The Netherlands

3Department of Psychology, Health and Technology, Faculty of Behavioural, Management and Social Sciences, University of Twente, Enschede, The Netherlands

Corresponding Author:

Rosalie van der Vaart, PhD


Background: The digital health literacy instrument (DHLI) was developed in 2017 to measure individuals’ ability to access, understand, evaluate, and apply online health information. Since that time, digital health has shifted from desktop-based internet use to mobile devices, and there has been a rapidly expanding range of health apps. Additionally, heightened privacy and data security requirements have increased the complexity of user competencies needed to engage with digital health tools. These developments underscore the need to update the original DHLI.

Objective: This study aimed to create an updated version of the DHLI (DHLI 2.0) that reflects current digital health practices and to examine its reliability and validity by exploring associations with user characteristics. Additionally, we aimed to develop a short-form version to facilitate broader use in research and practice.

Methods: The instrument was iteratively updated and pilot-tested to retain the original theoretical framework while reflecting current digital health practices, devices, and emerging challenges such as mobile use and data security. Several items were reworded and a new 2-item subscale on digital safety was added. The full DHLI 2.0 comprises 24 items across 8 skill domains. A 16-item short form was developed by iteratively removing 1 or 2 items per subscale based on the “α if item deleted” criterion, while retaining the same subscale structure as the full form. Data to validate the new version of the instrument were collected in June 2024 through an online survey among members of a representative citizen panel in Friesland, a province in the Netherlands (N=2728). Sociodemographics, internet and health-related internet use, general health literacy (measured with the Single Item Literacy Screener), self-reported health, and health care use were assessed. Internal consistency was evaluated using Cronbach α, and construct validity was assessed via Spearman ρ correlations with related constructs.

Results: Internal consistency was high for both the full (α=0.94) and short-form (α=0.90) scales. Most subscales showed satisfactory to excellent reliability (α=0.71–0.93), while “Protecting privacy” and “Using security measures” demonstrated moderate reliability (α=0.65–0.66). The DHLI 2.0 total scores were approximately normally distributed (skewness –0.5; kurtosis 0.4). As expected, digital health literacy was negatively correlated with age (ρ=−0.39, P<.001) and positively correlated with education (ρ=0.22, P<.001), income (ρ=0.27, P<.001), time spent online (ρ=0.32, P<.001), and general health literacy (ρ=−0.42 with the SILS, P<.001; the correlation is negative because higher SILS scores indicate lower literacy).

Conclusions: The DHLI 2.0 provides an updated, reliable, and valid measure of digital health literacy covering 8 key domains, including data security. The 16-item short form offers a concise alternative suitable for research and possibly practical applications in health and eHealth contexts.

J Med Internet Res 2026;28:e86879

doi:10.2196/86879

Introduction
Over the past decades, the landscape of health care has transformed considerably due to the rise of the internet and digital health technology. Citizens and patients now have access to a broad spectrum of digital health resources, allowing them to search online for health-related information, communicate with care professionals through patient portals, connect with peers via social media platforms, and apply digital tools and applications to monitor or manage their health.

Although these digital health innovations hold great potential to improve the accessibility and quality of care and to increase patient involvement in care, not everyone benefits equally from them. For example, data from the Dutch National Monitor Digital Care show stark differences in the use of digital solutions among care users: almost all care users with a theoretical educational background report using a computer independently or with help, compared to about half of those with a practice-oriented educational background [1,2]. These disparities extend to the use of digital health tools, where these numbers are 68% and 22%, respectively, for using websites to seek health information. Age-related differences are also evident: while 75% of those aged 15 to 39 years reported having used health websites, this figure is only 36% among those aged ≥65 years [1,2]. Similar trends have frequently been observed in international research contexts [3-5]. Effectively engaging with digital health resources requires a set of complex skills that are not evenly distributed across the population. People may struggle to access, navigate, understand, evaluate, or apply online health information, or to use digital health tools efficiently.

These findings highlight the importance of digital health literacy, defined as the ability to seek, find, understand, appraise, and apply online health information and services [6]. As health information and care services are increasingly delivered through digital channels, gaining insight into individuals’ digital health literacy is essential. It allows researchers to assess the level of digital health literacy at a population level and provides insight into differences in digital health literacy skills among subgroups. In addition, it supports researchers and developers of health applications to assess whether innovations fit their target group. For health care professionals, knowledge about their patients’ digital health literacy could even be useful to assess whether support is needed, for whom, and in which form. To assess digital health literacy, the digital health literacy instrument (DHLI) was developed in 2017 [7]. The DHLI is grounded in theory and empirical observational research and encompasses 7 subscales. Each subscale captures a distinct component of digital health literacy, and together they represent the practical skills needed to access, understand, evaluate, and apply digital health information: operational skills (the skills to operate a device, such as typing, clicking, and swiping), navigation skills, information searching, evaluating reliability, determining relevance, creating self-generated content (being able to express oneself clearly in written language), and protecting privacy while sharing information with others.

Since its introduction, the DHLI has been widely adopted and validated across national and international contexts, including studies among diverse patient populations [8,9], adolescents [10,11], older adults [12], university students [13-15], and health care professionals [16], demonstrating its value as a comprehensive instrument for assessing digital health competencies. Yet, the digital health landscape has evolved considerably since the publication of the DHLI. First, there has been a shift from desktop computers and laptops to mobile devices, such as smartphones and tablets, which require different operational skills (eg, tapping and swiping rather than clicking or typing). Second, the field has seen a rapid increase in digital health tools (eg, apps for self-management, self-monitoring, and communication), making some of the original DHLI items, which primarily focus on internet use, less suitable. Third, growing concerns around privacy and data safety have led to more complex authentication procedures, such as 2-factor authentication and the need to manage secure passwords, which demand additional user competencies [17].

In response to these changes, we felt the need for a revision of the DHLI. Therefore, the first aim of this study was to validate an updated version of the DHLI, the DHLI 2.0, including its internal consistency and construct validity, by examining the associations with a range of user characteristics. Additionally, we aimed to examine the feasibility and validity of a short-form version of the instrument, in order to facilitate broader application in research and practice.

Methods
Instrument Revision

The revised version of the DHLI, the DHLI 2.0, includes 8 skill domains, each measured with 2 to 4 items, resulting in 24 items in total. The goal of the revision was to keep the original structure and theoretical framework of the DHLI intact while updating the instrument to current digital health practices and currently used devices. The revision was an iterative process involving several experts, in which we reworded the items to bring them in line with the vocabulary of the current digital landscape. The prefinal version of the instrument was then tested in a qualitative pilot study with 8 survey experts of Planbureau Fryslân, which informed the final adjustments. As main changes, we mentioned the use of tablets and smartphones in the introduction to the items and included words such as tapping and swiping (in addition to clicking). A new subscale of 2 items was added to address data security challenges, based on problems encountered in recent projects; its items cover the skills to use 2-step authentication and to use strong passwords.

Similar to the original version, all items are scored on a 4-point Likert scale, with response options ranging from “very easy” to “very difficult” or from “never” to “often.” For the current validation study, we added the option “not applicable/do not know,” since this was common practice in the surveys of Planbureau Fryslân. For analysis purposes, scores are reverse-coded so that higher values reflect higher levels of digital health literacy. A total digital health literacy score was computed by averaging responses across all items, provided that at least 16 items in the full scale and 8 items in the short form (SF) had been completed. Subscale scores were calculated as the mean of the items within each skill domain, provided that all items of the subscale were completed. Multimedia Appendices 1-3 provide (1) the items and scoring instructions for the revised DHLI, full form (FF DHLI 2.0); (2) the SF DHLI 2.0; and (3) an overview of the changes made in the DHLI 2.0 compared to the original DHLI.
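The scoring rules above (reverse-coding, a total score requiring at least 16 of 24 completed items, and subscale scores requiring all subscale items) can be sketched in code. The following is an illustrative Python sketch, not the study's scoring syntax; the column names (`item_1` to `item_24`) and the assumption that “not applicable/do not know” responses are stored as missing values are ours.

```python
import pandas as pd

# Hypothetical raw data: 24 DHLI 2.0 items coded 1-4, where 1 is the most
# favorable response ("very easy"/"never") before reverse-coding.
# "Not applicable/do not know" is assumed to be recorded as missing (NaN).
ITEMS = [f"item_{i}" for i in range(1, 25)]

def reverse_code(df: pd.DataFrame) -> pd.DataFrame:
    """Reverse-code 4-point items so higher values reflect higher literacy."""
    out = df.copy()
    out[ITEMS] = 5 - out[ITEMS]
    return out

def total_score(df: pd.DataFrame, items, min_completed: int) -> pd.Series:
    """Mean across items, computed only when enough items were completed
    (>=16 of 24 for the full form, >=8 of 16 for the short form)."""
    answered = df[items].notna().sum(axis=1)
    return df[items].mean(axis=1).where(answered >= min_completed)

def subscale_score(df: pd.DataFrame, items) -> pd.Series:
    """Subscale mean, requiring ALL items of the subscale to be completed."""
    complete = df[items].notna().all(axis=1)
    return df[items].mean(axis=1).where(complete)

# Example with one respondent answering 2 ("easy") on every item:
raw = pd.DataFrame([{f"item_{i}": 2 for i in range(1, 25)}])
scored = reverse_code(raw)
print(total_score(scored, ITEMS, min_completed=16).iloc[0])  # → 3.0
```

A respondent with fewer than 16 completed items would receive a missing total score rather than a mean over the answered items, which keeps totals comparable across respondents.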

Study Design and Participants

For this study, the citizen panel of Planbureau Fryslân was used. Planbureau Fryslân is an independent regional institute for research and monitoring, aiming to inform and enhance local government policy in the province of Friesland, the Netherlands. Its panel consists of approximately 8000 residents of Friesland (a Northern province of the Netherlands). To ensure representativeness, Planbureau Fryslân regularly draws a random sample of all residents aged 18 years and older in the province, in close coordination with the municipalities. Potential panel members are personally invited by letter to join the panel; self-enrollment is not possible. Panel members are periodically asked to participate in surveys on topics relevant to regional policy, including health. The data collection for Panel Fryslân follows a standardized procedure, as described in its research accountability document [18]. For this study, in June 2024, all active panel members were invited to complete a survey on digital health care use and digital health literacy. The questionnaire included the DHLI 2.0 and 5 other digital health-related questions (see Measures). Panel members could respond to the survey during a 3-week period and received 2 reminders within these 3 weeks. Earlier, in January 2024, the panel had filled out a survey on general health, including questions on self-reported general health, health care use, and chronic health conditions (see Measures). These variables were merged with the data from June 2024 for the current analyses. In total, 2728 respondents filled out both the January and the June 2024 surveys; the data of these respondents were used in the analyses.

Procedure

Data collection by Planbureau Fryslân was carried out via an online Dutch-language survey (Quest Software, Inc), which was distributed via email to panel members. Panel members were given 3 weeks to respond to the survey and received 2 reminder emails during that period to enhance the response rate.

Measures

In addition to the DHLI 2.0, the survey items addressed the following characteristics of the participants: (1) sociodemographics (gender, age, educational attainment, and income); (2) internet use (average number of hours spent online per day); (3) health-related internet use (use of 11 different online health activities in the past 12 months; see Table 1 for a complete overview), from which a sum score was calculated (range 0–11), with each type of online health activity used adding 1 point; (4) general health literacy (assessed using the Single Item Literacy Screener [SILS]) [19]; (5) self-reported general health on a scale from worst possible health (1) to best possible health (10); (6) health care use (number of visits to a general practitioner, a medical specialist, and a mental health professional in the past 12 months); and (7) presence of chronic conditions.

Table 1. Internet use and health-related internet use (N=2728).
Variable | Value
Hours spent online per day, n (%)
  <1 | 371 (13.6)
  1–3 | 1346 (49.3)
  3–5 | 551 (20.2)
  5–8 | 237 (8.7)
  >8 | 148 (5.4)
  Not applicable/don’t know | 35 (1.3)
  Missing | 40 (1.5)
Self-rated internet skills, n (%)
  Very good | 624 (22.9)
  Good | 1186 (43.5)
  Average | 536 (19.6)
  Less than average | 272 (10)
  Poor | 50 (1.8)
  Not applicable/don’t know | 20 (0.7)
  Missing | 40 (1.5)
SILS^a, n (%)
  Never | 1692 (62)
  Rarely | 739 (27.1)
  Sometimes | 218 (8)
  Often | 29 (1.1)
  Always | 3 (0.1)
  Not applicable/don’t know | 7 (0.3)
  Missing | 40 (1.5)
Number of respondents who have used the internet for health-related reasons, n (%)
  Online information | 2353 (86.2)
  Accessing a patient portal^b | 1729 (63.4)
  Online prescriptions | 1220 (44.7)
  Accessing a personal health portal^c | 1175 (43.1)
  Wearable | 1027 (37.6)
  Online appointment scheduling | 855 (31.3)
  Information on a health app | 824 (30.2)
  Video consultation | 428 (15.7)
  Information on social media | 423 (15.5)
  Healthy lifestyle app | 367 (13.5)
  Communicating via a patient forum | 170 (6.2)
Health-related internet use, mean (SD) | 3.93 (2.24)

^a SILS: Single Item Literacy Screener.

^b Data from a care provider shared via a secured portal.

^c Health data from care providers combined on a personal secured portal.

Ethical Considerations

All participants provided informed consent upon joining the panel and agreed to the use and sharing of their anonymized data for research purposes. The panel adheres to the code of conduct established by the Dutch Association for Statistics and Research (Vereniging voor Statistiek en Onderzoek, VSO) [20]. This study was reviewed by the Ethical Board of NHL Stenden University of Applied Sciences, which determined that the study is in line with this code of conduct (reference number Tuitert 202405). Prior to being shared with the authors of this paper, the data were fully anonymized. The authors had no direct contact with the participants, and no financial compensation was provided for participation in the panel.

Data Analyses

Data were analyzed using IBM SPSS version 29.0 for Windows (IBM Corp). Descriptive statistics were computed for all relevant variables, including frequencies, percentages, means, and SDs. Missing values and the option “don’t know/don’t want to tell” are in most cases reported in the descriptive statistics but were treated as missing values in further analyses.

Cronbach α served as a measure of internal consistency, reflecting the (weighted) average correlation of items within the scale [21]. In general, a Cronbach α of 0.7 to 0.8 is regarded as satisfactory for scales used as research tools [22]. The distributional characteristics of the DHLI and its subscales were examined to assess normality and to detect floor and ceiling effects. Skewness and kurtosis values were used to evaluate score distributions, with values between +1 and −1 interpreted as indicating no or slight nonnormality [23]. Floor or ceiling effects were considered present when more than 15% of participants achieved the lowest or highest possible score on a subscale [24]. Evidence for construct validity was determined by studying Spearman ρ correlations between total scores on the DHLI 2.0 and sociodemographics, (health-related) internet use, self-reported general health, health care use, and the SILS [19]. We expected these correlations to be in line with those of the original DHLI [7]. Additionally, we explored the reliability and validity of an SF DHLI 2.0: using the “α if item deleted” criterion, 1 or 2 items were removed from each subscale. These short-form subscales were also included in the validity analyses.
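To make the reliability procedure concrete, the sketch below shows how Cronbach α and the “α if item deleted” values used to shorten the subscales can be computed. It is a minimal illustration on simulated data, not the SPSS analysis used in the study; the simulated 3-item subscale and all variable names are hypothetical.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach alpha: (k/(k-1)) * (1 - sum of item variances / variance of total)."""
    items = items.dropna()                      # complete cases only
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def alpha_if_item_deleted(items: pd.DataFrame) -> pd.Series:
    """Alpha of the scale after dropping each item in turn; items whose
    removal leaves alpha highest are candidates for removal in a short form."""
    return pd.Series(
        {col: cronbach_alpha(items.drop(columns=col)) for col in items.columns}
    )

# Simulated 3-item subscale: each item = shared latent trait + noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=500)
data = pd.DataFrame(
    {f"q{i}": latent + rng.normal(scale=0.8, size=500) for i in range(1, 4)}
)
print(round(cronbach_alpha(data), 2))
print(alpha_if_item_deleted(data).round(2))
```

With items of roughly equal quality, every “α if item deleted” value falls below the full-scale α, illustrating why shortening a subscale generally costs some reliability, as seen in the SF results.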

Results
Participants

Table 2 shows the characteristics of the sample. Men (1582/2728, 58%) and respondents with a theory-based educational background, that is, tertiary education (1445/2728, 53%), were overrepresented. The mean age of the sample was 60.1 (SD 16.8) years, with a range of 19 to 95 years.

Table 3 presents the sample’s health-related characteristics. A considerable number of participants (1034/2728, 37.9%) reported having at least 1 chronic condition, and about half (1286/2728, 47.1%) had had contact with a medical specialist in the past 12 months. Still, participants scored their own health as rather good on average (7.5 on a scale from 1 to 10).

Table 2. Sociodemographics (N=2728).
Characteristic | Value
Gender, n (%)
  Male | 1582 (58)
  Female | 1135 (41.6)
  Other | 11 (0.4)
Age (y), mean (SD) | 60.1 (16.8)
Educational attainment^a, n (%)
  Low | 496 (18.2)
  Middle (practice-based/vocational) | 718 (26.3)
  High (theory-based) | 1445 (53)
  Missing | 69 (2.5)
Family income yearly (gross)^b, n (%)
  Less than €15,000 | 63 (2.3)
  €15,000–€31,000 | 308 (11.3)
  €31,000–€38,500 | 282 (10.3)
  €38,500–€46,000 | 363 (13.3)
  €46,000–€77,000 | 643 (23.6)
  €77,000–€92,000 | 235 (8.6)
  €92,000 or more | 256 (9.4)
  Not applicable/don’t know | 514 (18.8)
  Missing | 64 (2.3)
Personal income monthly (net), n (%)
  €1000 or less – €2000 | 685 (25.2)
  €2000–€3000 | 826 (30.3)
  €3000–€4000 | 547 (20.1)
  €4000–€5000 | 161 (5.9)
  €5000 or higher | 136 (4)
  Not applicable/don’t know | 353 (12.9)
  Missing | 20 (0.7)

^a Low: less than primary, primary, and lower secondary education (European Qualifications Framework level 1 [EQF1] and part of EQF2); middle: upper secondary and postsecondary nontertiary education (part of EQF2, EQF3, EQF4, EQF4+); high: tertiary education (EQF5, EQF6, EQF7, EQF8) [25].

^b €1=US $1.15.

Table 3. Health and health care use.
Variable | Value
Self-reported general health^a (N=2625), mean (SD) | 7.5 (1.3)
Health care use (N=2728), n yes (%)
  General practitioner | 1876 (68.8)
  Medical specialist | 1286 (47.1)
  Mental health care | 198 (7.3)
Chronic condition (N=2728), n yes (%) | 1034 (37.9)

^a On a scale from 1 to 10.

Table 1 provides an overview of participants’ general and health-related internet use. Nearly half of the respondents reported spending approximately 1 to 3 hours online per day. Most participants (1810/2728, 66.4%) rated their own internet skills as (very) good. A large majority had used the internet to search for health-related information (2352/2728, 86.2%), and the use of online portals and digital prescriptions was also relatively high. In contrast, only a small proportion of participants reported using the internet to have a video consultation with their health care provider (428/2728, 15.7%) or to communicate online with fellow patients through a forum (170/2728, 6.2%).

Reliability and Distributional Properties of the Revised DHLI

Table 4 shows the scores and internal consistency of the DHLI 2.0 items and subscales. Both the FF (24 items) and the SF (16 items) showed strong reliability, with Cronbach α coefficients of 0.94 and 0.90, respectively. Only the protecting privacy full subscale and the using security measures subscale had moderate Cronbach α values (0.65 and 0.66, respectively).

Table 4. Descriptive statistics of digital health literacy instrument (DHLI) 2.0 scales and subscales.
Scale score | N^a | Value | Cronbach α
Total scale score, mean (SD)
  FF DHLI 2.0^b (24 items) | 2543 | 3.17 (0.44) | 0.94^c
  SF DHLI 2.0^d (16 items) | 2728 | 3.11 (0.48) | 0.90^e
Operational skills, mean^f (SD)
  FF 4 items | 2728 | 3.56 (0.51) | 0.86
  SF 2 items | 2728 | 3.46 (0.64) | 0.83
Navigation skills, mean (SD)
  FF 3 items | 2436 | 3.24 (0.54) | 0.77
  SF 2 items | 2436 | 3.34 (0.58) | 0.71
Information searching, mean (SD)
  FF 3 items | 2641 | 3.15 (0.57) | 0.89
  SF 2 items | 2641 | 3.08 (0.61) | 0.86
Evaluating reliability, mean (SD)
  FF 3 items | 2520 | 2.80 (0.66) | 0.84
  SF 2 items | 2520 | 2.66 (0.73) | 0.83
Determining relevance, mean (SD)
  FF 3 items | 2453 | 2.93 (0.61) | 0.89
  SF 2 items | 2453 | 2.95 (0.63) | 0.86
Generating content, mean (SD)
  FF 3 items | 1908 | 3.07 (0.64) | 0.93
  SF 2 items | 1908 | 3.11 (0.65) | 0.91
Protecting privacy, mean (SD)
  FF 3 items | 1387 | 3.52 (0.50) | 0.65
  SF 2 items | 1387 | 3.68 (0.51) | 0.75
Using security measures, mean (SD)
  FF 2 items | 2626 | 3.06 (0.69) | 0.66

^a Number of respondents per (sub)scale, after excluding missing and “not applicable/do not know” responses.

^b FF: full form.

^c n=1055; number of respondents who completed at least 16 items, required to compute a total score for the FF.

^d SF: short form.

^e n=1030; number of respondents who completed at least 8 items, required to compute a total scale score for the SF.

^f Possible range between 1 and 4.

Respondents had total mean scores of 3.17 (SD 0.44) and 3.11 (SD 0.48) for the FF and the SF, respectively. Both the FF (skewness −0.5; kurtosis 0.4) and the SF (skewness −0.5; kurtosis 0.5) showed low skewness and kurtosis values, indicating that their distributions are approximately normal.

The highest subscale scores were reported for operational skills (FF mean 3.6, SD 0.5; SF mean 3.5, SD 0.6), navigation skills (FF mean 3.2, SD 0.5; SF mean 3.3, SD 0.6), and protecting privacy (FF mean 3.52, SD 0.50; SF mean 3.68, SD 0.51). The protecting privacy subscale was skewed in both the full (−1.5) and short (−2.1) versions, with a ceiling effect (32.1% [445/1387] scored 4 on the full subscale and 61.9% [858/1387] on the short version) and elevated kurtosis (3.8 in the FF and 5.7 in the SF). In the short form only, operational skills and navigation skills exhibited a skewness of −1 with a ceiling effect: 49.3% (1345/2728) scored 4 on operational skills and 26.9% (655/2436) on navigation skills. Participants indicated having more difficulties with evaluating reliability (FF mean 2.8, SD 0.7; SF mean 2.7, SD 0.7) and with determining the relevance of information (FF mean 2.9, SD 0.6; SF mean 3.0, SD 0.6). Scores on the new subscale, using security measures, were relatively high for the item on 2-step authentication (mean 3.28, SD 0.78) and lower for the item on creating and remembering strong passwords (mean 2.83, SD 0.82).
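The distributional checks applied here (skewness and kurtosis within ±1 read as roughly normal; a ceiling effect when more than 15% of respondents obtain the maximum score) can be expressed compactly. The following is a minimal pandas sketch on hypothetical subscale scores, not the study's analysis code.

```python
import pandas as pd

def distribution_checks(scores: pd.Series, max_score: float = 4.0) -> dict:
    """Skewness/kurtosis (|value| <= 1 read as roughly normal) and the
    share of respondents at the scale maximum (>15% = ceiling effect)."""
    s = scores.dropna()
    return {
        "skewness": s.skew(),                      # bias-corrected skewness
        "kurtosis": s.kurt(),                      # excess kurtosis
        "ceiling_pct": 100 * (s == max_score).mean(),
    }

# Hypothetical subscale scores clustered near the maximum of 4:
scores = pd.Series([4, 4, 4, 3.5, 3, 2.5, 4, 4, 3, 4])
checks = distribution_checks(scores)
print(checks)
```

For a left-skewed, top-heavy distribution like this one, skewness is negative and the ceiling percentage exceeds the 15% threshold, flagging a ceiling effect as reported for the protecting privacy subscale.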

Construct Validity of the Revised DHLI

Table 5 presents the Spearman ρ correlations between the DHLI 2.0 scale scores, the total scores of both the full and short versions of the DHLI 2.0, and the other assessed variables. Looking at sociodemographics, age was moderately negatively correlated with scores on the DHLI 2.0, indicating that older individuals tend to have lower digital health literacy levels. This association was strongest for operational skills and navigation skills. Educational attainment showed a weak positive correlation with the DHLI 2.0, strongest for generating content, suggesting that educational attainment is somewhat related to digital health literacy skills. Regarding income, family gross income showed a weak positive correlation with the DHLI 2.0, suggesting that a higher income is slightly related to higher digital health literacy skills. Concerning internet use, daily time spent online showed a moderate positive correlation with the DHLI 2.0, suggesting that individuals who spend more time online tend to have higher digital health literacy. Again, the strongest correlations were observed for operational skills. Health-related internet use correlated only weakly with the DHLI 2.0. Finally, the DHLI 2.0 showed a moderate negative correlation with the SILS, on which higher scores indicate lower health literacy, indicating conceptual overlap between the constructs measured by the 2 instruments.

Table 5. Spearman ρ correlations between the revised digital health literacy instrument, sociodemographics, (health-related) internet use, health-related variables, and the Single Item Literacy Screener (SILS). Cells show ρ (P value).

(Sub)scale | Age | Education | Family income | Personal income | Hours spent online | Health-related internet use | General health | Use general practitioner | Use medical specialist | Use mental health care | SILS
Full form (FF) and short form (SF)
  FF 24 items | −0.39 (<.001) | 0.22 (<.001) | 0.27 (<.001) | 0.09 (<.001) | 0.32 (<.001) | 0.07 (<.001) | 0.15 (<.001) | 0.09 (<.001) | 0.06 (.002) | −0.02 (.27) | −0.42 (<.001)
  SF 16 items | −0.39 (<.001) | 0.22 (<.001) | 0.26 (<.001) | 0.10 (<.001) | 0.32 (<.001) | 0.12 (<.001) | 0.14 (<.001) | 0.08 (<.001) | 0.05 (.006) | −0.02 (.23) | −0.40 (<.001)
Operational skills
  FF 4 items | −0.46 (<.001) | 0.26 (<.001) | 0.28 (<.001) | 0.09 (<.001) | 0.36 (<.001) | 0.14 (<.001) | 0.12 (<.001) | 0.06 (.004) | 0.07 (<.001) | −0.04 (.02) | −0.35 (<.001)
  SF 2 items | −0.47 (<.001) | 0.24 (<.001) | 0.28 (<.001) | 0.09 (<.001) | 0.37 (<.001) | 0.16 (<.001) | 0.11 (<.001) | 0.06 (.001) | 0.06 (.004) | −0.05 (.006) | −0.34 (<.001)
Navigation skills
  FF 3 items | −0.35 (<.001) | 0.15 (<.001) | 0.19 (<.001) | 0.05 (<.001) | 0.26 (<.001) | 0.007 (.72) | 0.10 (<.001) | 0.10 (<.001) | 0.07 (<.001) | −0.04 (.03) | −0.34 (<.001)
  SF 2 items | −0.35 (<.001) | 0.15 (<.001) | 0.20 (<.001) | 0.06 (.008) | 0.26 (<.001) | 0.02 (.40) | 0.10 (<.001) | 0.09 (<.001) | 0.07 (<.001) | −0.03 (.09) | −0.35 (<.001)
Information searching
  FF 3 items | −0.26 (<.001) | 0.14 (<.001) | 0.21 (<.001) | 0.09 (<.001) | 0.23 (<.001) | 0.08 (<.001) | 0.13 (<.001) | 0.08 (<.001) | 0.04 (.03) | −0.01 (.78) | −0.31 (<.001)
  SF 2 items | −0.24 (<.001) | 0.12 (<.001) | 0.19 (<.001) | 0.10 (<.001) | 0.22 (<.001) | 0.07 (<.001) | 0.13 (<.001) | 0.07 (.001) | 0.04 (.03) | −0.00 (.87) | −0.28 (<.001)
Evaluating reliability
  FF 3 items | −0.31 (<.001) | 0.15 (<.001) | 0.17 (<.001) | 0.06 (.005) | 0.25 (<.001) | 0.06 (.003) | 0.10 (<.001) | 0.06 (.001) | 0.05 (.008) | −0.03 (.19) | −0.33 (<.001)
  SF 2 items | −0.26 (<.001) | 0.13 (<.001) | 0.16 (<.001) | 0.07 (.002) | 0.21 (<.001) | 0.05 (.02) | 0.10 (<.001) | 0.05 (.006) | 0.05 (.02) | −0.02 (.42) | −0.30 (<.001)
Determining relevance
  FF 3 items | −0.26 (<.001) | 0.15 (<.001) | 0.17 (<.001) | 0.06 (.005) | 0.22 (<.001) | 0.08 (<.001) | 0.12 (<.001) | 0.06 (.002) | 0.01 (.56) | −0.01 (.72) | −0.28 (<.001)
  SF 2 items | −0.27 (<.001) | 0.14 (<.001) | 0.17 (<.001) | 0.06 (.006) | 0.22 (<.001) | 0.07 (.001) | 0.11 (<.001) | 0.06 (.006) | 0.01 (.53) | −0.00 (.86) | −0.27 (<.001)
Generating content
  FF 3 items | −0.13 (<.001) | 0.28 (<.001) | 0.20 (<.001) | 0.09 (<.001) | 0.12 (<.001) | 0.05 (.02) | 0.13 (<.001) | 0.01 (.81) | −0.01 (.64) | 0.01 (.59) | −0.34 (<.001)
  SF 2 items | −0.15 (<.001) | 0.28 (<.001) | 0.20 (<.001) | 0.09 (.001) | 0.13 (<.001) | 0.07 (.008) | 0.12 (<.001) | −0.01 (.81) | −0.02 (.30) | −0.01 (.74) | −0.34 (<.001)
Protecting privacy
  FF 3 items | −0.22 (<.001) | 0.05 (.09) | 0.05 (.09) | −0.006 (.83) | 0.13 (<.001) | −0.11 (<.001) | 0.05 (.07) | 0.08 (.005) | 0.09 (<.001) | −0.02 (.41) | −0.28 (<.001)
  SF 2 items | −0.12 (<.001) | 0.04 (.11) | 0.06 (.03) | −0.003 (.91) | 0.06 (.03) | −0.09 (.001) | 0.03 (.30) | 0.03 (.20) | 0.09 (.002) | −0.02 (.52) | −0.20 (<.001)
Using security measures
  FF and SF 2 items | −0.26 (<.001) | 0.19 (<.001) | 0.21 (<.001) | 0.12 (<.001) | 0.24 (<.001) | 0.12 (<.001) | 0.11 (<.001) | 0.06 (.003) | 0.04 (.07) | −0.01 (.55) | −0.33 (<.001)
Discussion
This paper describes the validation of a revised version of the DHLI, the DHLI 2.0. The original instrument was published in 2017, and 8 years later, it has been updated to better reflect the rapidly evolving digital landscape. Several items were reworded to capture the widespread use of smartphones and tablets. In addition, a new 2-item subscale was added to address digital safety, specifically the challenges of choosing and using secure passwords and applying 2-step authentication when accessing personalized systems. In our view, the DHLI 2.0 offers a more accurate representation of the competencies required for digital health literacy today.

The DHLI 2.0 demonstrated strong reliability, with Cronbach α values comparable to, and in some cases slightly higher than, those of the original instrument. Construct validity was also adequate, with correlations with sociodemographics, internet use, and general health literacy as expected and consistent with those observed for the original DHLI [7] and in recent systematic reviews focusing on these relationships [26,27]. A recent review by Wang et al [28] identified digital health literacy instruments developed over the past 2 decades, each addressing different aspects of digital health literacy, including the DHLI. Compared to the other included instruments, the DHLI and DHLI 2.0 offer a concise yet theoretically well-grounded measure [7]. It captures a broad spectrum of skills, ranging from basic operational abilities to higher-order information-processing skills, while also addressing the increasingly important domain of privacy and data security. Moreover, to minimize the overestimation bias common in self-reported skills, items are phrased to assess how easy or difficult respondents find certain tasks or how frequently they encounter difficulties. This validation study also included an SF DHLI 2.0, consisting of 16 items instead of 24. The SF demonstrated good reliability and internal consistency; it facilitates easier administration and allows for integration into larger sets of survey instruments.

In revising the DHLI, we deliberately chose to retain only self-report items and exclude the performance-based tasks that were a unique feature of the original version [7]. Although innovative, those tasks turned out to be too complex to administer and provided only limited added value, mostly in terms of face validity. Furthermore, their applicability was highly dependent on the device, browser, and brand used by respondents (eg, desktop vs mobile), making it nearly impossible to develop a universally valid instrument for use in anonymous research settings. Similar challenges are reported by Crocker et al [29] in their systematic scoping review of performance-based eHealth literacy measures, who also highlight the time-consuming nature of these assessments for both participants and researchers, as well as their equipment requirements. Nevertheless, self-report measures have well-known limitations, and future research could explore the integration of performance-based items into the revised DHLI, also for screening in practice settings.

Some limitations of this study need to be acknowledged. First, validation was conducted through an online questionnaire via a citizen panel. Although our sample was large and broadly representative, the online format likely excluded individuals with the lowest digital skills and contributed to the overrepresentation of higher-educated respondents. Recruitment via the citizen panel of Planbureau Fryslân might also have contributed to the overrepresentation of older adults. This may limit the generalizability of the findings, particularly to less educated or younger and middle-aged populations. Validation among these currently underrepresented groups is needed in future research to ensure the instrument's applicability and reliability across the broader population. At present, a study in a patient sample is ongoing, which will provide further insight into the validity of the DHLI 2.0 in a more vulnerable population. Second, the reliability of the scale “Using security measures” did not reach the conventional threshold of α=0.7, which may be due to its coverage of 2 distinct skills (using strong passwords and using 2-step authentication). These items should therefore be considered separately, rather than combined in a sum score, and further research is warranted to determine whether this domain would benefit from additional items. Third, the correlation between DHLI 2.0 scores and health-related internet use was relatively low. This may partly reflect the relatively high mean age of our sample; although many older adults do use the internet regularly for health purposes, they may not possess the full range of skills needed to do so effectively, efficiently, and safely, as also reported in previous studies [30,31].
Furthermore, motivational factors could influence this relationship, for example, when individuals do not perceive the internet as a good or reliable source of health information and therefore use it less frequently. Finally, health status itself may play a role, as relatively healthy individuals may have limited need for health-related digital tools (eg, patient portals or online communities), despite having adequate or even high levels of digital health literacy [32]. Fourth, although the total scores of the subscales demonstrated approximately normal distributions, some subscales exhibited pronounced ceiling effects. The subscale “Protecting privacy” showed the most substantial ceiling effect, potentially influenced by the phrasing of its introductory item: “When you post a message on a public forum or social media…” This conditional phrasing reduced response rates, as only respondents who engage in such online activity provided an answer. Consequently, the responses may reflect a self-selected group with high privacy awareness within the cohort. The ceiling effects in operational and navigation skills might also be affected by the online nature of the questionnaire.
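The reliability issue noted for the 2-item “Using security measures” subscale can be made concrete. As a brief illustration (not part of the original analyses, and assuming standardized items), for a 2-item scale Cronbach α reduces to the Spearman-Brown formula applied to the inter-item correlation $r$:

```latex
\alpha = \frac{2r}{1 + r},
\qquad
\alpha \geq 0.7 \iff r \geq \frac{0.7}{2 - 0.7} \approx 0.54
```

That is, the conventional threshold is reached only when the 2 items correlate at roughly $r \geq 0.54$, which is a demanding requirement when the items are intended to tap 2 distinct skills.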

The online world and available technologies are changing rapidly. Since the development of the DHLI 2.0, the integration of artificial intelligence into health applications and internet browsers has accelerated significantly. This further stresses the importance of being able to judge the reliability of information, including being aware of its source and its applicability to oneself [33,34]. In this validation study, evaluating the reliability of online information emerged as a skill that many people find challenging. These findings highlight the importance of supporting individuals in developing critical evaluation skills, particularly as digital health environments become more complex and information sources less transparent. This also underscores the need for further research on how to capture the skills required in an increasingly artificial intelligence–driven digital health environment. Future research should also examine the applicability and utility of the DHLI 2.0 among groups at higher risk of limited digital health literacy. Another interesting avenue for future work is to explore the potential of this instrument in practice, both within health care and within more community- or welfare-oriented organizations. Although not yet validated for that purpose, the DHLI 2.0 (especially the SF) could support decision-making in health consultations, for example, in determining the suitability of specific eHealth tools for patients.

In conclusion, the DHLI 2.0 provides an up-to-date, reliable, valid, and concise measure of digital health literacy, grounded in a strong theoretical framework and covering 8 key domains of digital health literacy. The 16-item SF is a valuable alternative for measuring the broad spectrum of digital health literacy skills.

Acknowledgments

We thank Jesse David Marinus and Jolijn Hutjes from Planbureau Fryslân for granting access to the panel, for conducting the qualitative pilot study to make final adjustments to the instrument, for their valuable input on the data collection design, and for programming the questionnaire in their survey software. We thank Bart Koemans for his work on the preliminary validation of the DHLI 2.0 in his bachelor thesis. We thank all the respondents for their participation in the study. Artificial intelligence tools were used during the preparation of this manuscript solely to refine, correct, and edit the language for clarity, not for any other purposes.

Funding

The authors appreciate the financial support of the Faith Research Consortium.

Data Availability

The data used in this research are available upon request, provided that the request aligns with the established conditions, which can be found on the Planbureau Fryslân website [35]. Researchers interested in using these data should contact the Planbureau Fryslân Office at info@planbureaufryslan.nl.

Authors' Contributions

RvdV and CD were responsible for the conceptualization of DHLI 2.0, and RvdV, CD, and IT conceptualized the validation study. IT initiated the collaboration with Panel Fryslân. All authors contributed to the methodology. IT conducted the formal data analyses, which were verified by RvdV and CD. RvdV and IT drafted the original manuscript, and CD and JvV critically reviewed and edited the manuscript. All authors read and approved the final version.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Digital health literacy instrument 2.0 full form.

PDF File, 101 KB

Multimedia Appendix 2

Digital health literacy instrument 2.0 short form.

PDF File, 113 KB

Multimedia Appendix 3

Overview of alterations in DHLI 2.0.

PDF File, 135 KB

  1. van Tuyl L, Keuper J, Standaar L, Alblas E, Keij B. Toegankelijkheid van digitale zorg: signalen uit de e‑healthmonitor 2023 [Article in Dutch]. TSG Tijdschr Gezondheidswet. 2025;103(S1):3-7. [CrossRef]
  2. Keij B, Versluis A, Alblas EE, Keuper JJ, van Tuyl LHD, van der Vaart R. 2023 eHealth monitor: state of digital care [Report in Dutch]. Rijksinstituut voor Volksgezondheid en Milieu (RIVM); 2024. URL: https://www.rivm.nl/bibliotheek/rapporten/2024-0008.pdf [Accessed 2026-03-18]
  3. Mahajan S, Lu Y, Spatz ES, Nasir K, Krumholz HM. Trends and predictors of use of digital health technology in the United States. Am J Med. Jan 2021;134(1):129-134. [CrossRef] [Medline]
  4. De Santis KK, Jahnel T, Sina E, Wienert J, Zeeb H. Digitization and health in Germany: cross-sectional nationwide survey. JMIR Public Health Surveill. Nov 22, 2021;7(11):e32951. [CrossRef] [Medline]
  5. Yao R, Zhang W, Evans R, Cao G, Rui T, Shen L. Inequities in health care services caused by the adoption of digital health technologies: scoping review. J Med Internet Res. Mar 21, 2022;24(3):e34144. [CrossRef] [Medline]
  6. Norman CD, Skinner HA. eHealth literacy: essential skills for consumer health in a networked world. J Med Internet Res. Jun 16, 2006;8(2):e9. [CrossRef] [Medline]
  7. van der Vaart R, Drossaert C. Development of the digital health literacy instrument: measuring a broad spectrum of health 1.0 and health 2.0 skills. J Med Internet Res. Jan 24, 2017;19(1):e27. [CrossRef] [Medline]
  8. Duimel SLL, Linn AJ, Smets EMA, Smit ES, van Weert JCM. Profiling cancer patients based on their motives for seeking informational and emotional support online. Health Commun. Dec 2023;38(14):3223-3237. [CrossRef] [Medline]
  9. Neter E, Brainin E. Association between health literacy, eHealth literacy, and health outcomes among patients with long-term conditions. Eur Psychol. Jan 2019;24(1):68-81. [CrossRef]
  10. Barbosa MCF, Baldiotti ALP, Braga NS, et al. Cross-cultural adaptation of the digital health literacy instrument (DHLI) for use on Brazilian adolescents. Braz Dent J. 2023;34(5):104-114. [CrossRef] [Medline]
  11. Park E, Kwon M. Testing the Digital Health Literacy Instrument for adolescents: cognitive interviews. J Med Internet Res. Mar 15, 2021;23(3):e17856. [CrossRef] [Medline]
  12. Xie L, Hu H, Lin J, Mo PKH. Psychometric validation of the Chinese digital health literacy instrument among Chinese older adults who have internet use experience. Int J Older People Nurs. Jan 2024;19(1):e12568. [CrossRef] [Medline]
  13. Bouclaous C, Kamand AA, Daher R, Alrazim A, Kaedbey HD. Digital health literacy and online information-seeking behavior of Lebanese university students in the time of the COVID-19 pandemic and infodemic. Nord J Digit Lit. Apr 11, 2023;18(1):60-77. [CrossRef]
  14. Chen SC, Hong Nguyen NT, Lin CY, et al. Digital health literacy and well-being among university students: mediating roles of fear of COVID-19, information satisfaction, and internet information search. Digit Health. 2023;9:20552076231165970. [CrossRef] [Medline]
  15. Dadaczynski K, Okan O, Messer M, et al. Digital health literacy and web-based information-seeking behaviors of university students in Germany during the COVID-19 pandemic: cross-sectional survey study. J Med Internet Res. Jan 15, 2021;23(1):e24097. [CrossRef] [Medline]
  16. Shudayfat T, Bani Hani S, Al Qadire M. Assessing digital health literacy level among nurses in Jordanian hospitals. Electron J Gen Med. 2023;20(5):em525. [CrossRef]
  17. Alhammad N, Alajlani M, Abd-Alrazaq A, Epiphaniou G, Arvanitis T. Patients’ perspectives on the data confidentiality, privacy, and security of mHealth apps: systematic review. J Med Internet Res. May 31, 2024;26:e50715. [CrossRef] [Medline]
  18. Panel Fryslân. Research methodology: sampling and recruitment of a representative internet panel [Report in Dutch]. Planbureau Fryslân; 2023. URL: https://planbureau.frl/wp-content/uploads/2024/01/Verantwoording-herwerving-panel-2023.pdf [Accessed 2026-02-09]
  19. Morris NS, MacLean CD, Chew LD, Littenberg B. The single item literacy screener: evaluation of a brief instrument to identify limited reading ability. BMC Fam Pract. Mar 24, 2006;7:21. [CrossRef] [Medline]
  20. Vereniging voor Statistiek en Onderzoek. Handbook GDPR & statistical research. Vsonet. URL: https://vsonet.nl [Accessed 2026-03-31]
  21. Cronbach LJ. Coefficient alpha and the internal structure of tests. Psychometrika. Sep 1951;16(3):297-334. [CrossRef]
  22. Streiner DL, Norman GR, Cairney J. Health measurement scales: a practical guide to their development and use. Oxford University Press; 2014. URL: https://academic.oup.com/book/56061 [Accessed 2026-03-18] ISBN: 9780192869487
  23. George D, Mallery P. SPSS for Windows Step by Step: A Simple Guide and Reference, 17.0 Update. Allyn & Bacon; 2010. URL: https://books.google.com.my/books?id=KS1DPgAACAAJ [Accessed 2026-03-28] ISBN: 0205755615
  24. Terwee CB, Bot SDM, de Boer MR, et al. Quality criteria were proposed for measurement properties of health status questionnaires. J Clin Epidemiol. Jan 2007;60(1):34-42. [CrossRef] [Medline]
  25. Description of the eight EQF levels. Europass. 2026. URL: https://europass.europa.eu/en/description-eight-eqf-levels [Accessed 2026-02-10]
  26. Jiang X, Wang L, Leng Y, et al. The level of electronic health literacy among older adults: a systematic review and meta-analysis. Arch Public Health. Nov 7, 2024;82(1):204. [CrossRef] [Medline]
  27. Yuen E, Winter N, Savira F, et al. Digital health literacy and its association with sociodemographic characteristics, health resource use, and health outcomes: rapid review. Interact J Med Res. Jul 26, 2024;13:e46888. [CrossRef] [Medline]
  28. Wang C, Chang L, Chen X, Kong J, Qi H. eHealth literacy assessment instruments: scoping review. J Med Internet Res. Aug 20, 2025;27:e66965. [CrossRef] [Medline]
  29. Crocker B, Feng O, Duncan LR. Performance-based measurement of eHealth literacy: systematic scoping review. J Med Internet Res. Jun 2, 2023;25:e44602. [CrossRef] [Medline]
  30. Waterworth S, Honey M. On-line health seeking activity of older adults: an integrative review of the literature. Geriatr Nurs. 2018;39(3):310-317. [CrossRef] [Medline]
  31. Bachofner Y, Seifert A, Sepahniya S, Fabian C. Online health information-seeking among older adults and predictors of use, motivations, and barriers in the context of healthy aging: cross-sectional study. Online J Public Health Inform. Jan 6, 2026;18:e77557. [CrossRef] [Medline]
  32. Qiu CS, Lunova T, Greenfield G, et al. Determinants of digital health literacy: international cross-sectional study. J Med Internet Res. Jun 30, 2025;27:e66631. [CrossRef] [Medline]
  33. Coşkun AB, Elmaoğlu E, Buran C, Yüzer Alsaç S. Integration of ChatGPT and E-Health literacy: opportunities, challenges, and a look towards the future. J Health Rep Technol. 2024;10(1):e139748. [CrossRef]
  34. Nutbeam D, Okan O, et al. 5.B. Round Table: improving digital health literacy in the era of generative artificial intelligence. Eur J Public Health. Nov 1, 2024;34(Supplement_3). [CrossRef]
  35. Planbureau Fryslân. URL: https://planbureau.frl/ [Accessed 2026-03-24]


DHLI: digital health literacy instrument
DHLI 2.0: revised version of the DHLI
FF: full form
SF: short form
SILS: Single Item Literacy Screener


Edited by Amy Schwartz; submitted 31.Oct.2025; peer-reviewed by Huiying Qi, Jong Long Guo; final revised version received 12.Feb.2026; accepted 23.Feb.2026; published 01.Apr.2026.

Copyright

© Rosalie van der Vaart, Inge Tuitert, Job van 't Veer, Constance Drossaert. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 1.Apr.2026.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.