Background: eHealth literacy describes the ability to locate, comprehend, evaluate, and apply web-based health information to a health problem. In studies of eHealth literacy, researchers have primarily assessed participants’ perceived eHealth literacy using a short self-report instrument, which ample research has shown to have little to no association with actual performance of eHealth-related skills. Performance-based measures of eHealth literacy may be more effective at assessing actual eHealth skills, yet such measures remain scarce in the literature.
Objective: The primary purpose of this study was to identify tools that currently exist to measure eHealth literacy based on objective performance. A secondary purpose of this study was to characterize the prevalence of performance-based measurement of eHealth literacy in the literature compared with subjective measurement.
Methods: We conducted a systematic scoping review of the literature, aligning with the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) checklist, in 3 stages: conducting the search, screening articles, and extracting data into a summary table. The summary table includes terminology for eHealth literacy, description of participants, instrument design, health topics used, and a brief note on the evidence of validity for each performance-based measurement tool. A total of 1444 unique articles retrieved from 6 relevant databases (MEDLINE; PsycINFO; CINAHL; Library and Information Science Abstracts [LISA]; Library, Information Science & Technology Abstracts [LISTA]; and Education Resources Information Center [ERIC]) were considered for inclusion, of which 313 (21.68%) included a measure of eHealth literacy.
Results: Among the 313 articles that included a measure of eHealth literacy, we identified 33 (10.5%) that reported on 29 unique performance-based eHealth literacy measurement tools. The tools took a range of forms: having participants answer health-related questions using the internet, engage in simulated internet tasks, or evaluate website quality, and quizzing participants on their knowledge of health and the web-based health information–seeking process. In addition, among the 313 articles, we identified 280 (89.5%) that measured eHealth literacy using only a self-rating tool.
Conclusions: This study is the first research synthesis looking specifically at performance-based measures of eHealth literacy and may direct researchers toward existing performance-based measurement tools to be applied in future projects. We discuss some of the key benefits and drawbacks of different approaches to performance-based measurement of eHealth literacy. Researchers with an interest in gauging participants’ actual eHealth literacy (as opposed to perceived eHealth literacy) should make efforts to incorporate tools such as those identified in this systematic scoping review.
Individuals form beliefs, make decisions, and enact behavior based on what they perceive to be high-quality and relevant information. Ideas and beliefs about healthy and unhealthy behavior are shaped from various sources, but, above all, the internet plays an increasingly prominent role in people’s information diet regarding health-related topics. Research indicates that the internet is the first source people turn to for health information [ ] and in some cases is deemed more credible than physician diagnoses [ ]. The internet provides several affordances for accessing health information that cannot be matched by other resources: it is available 24 hours a day, queries are answered immediately, queries can be made anonymously, multiple points of view can be accessed and considered by the information seeker, and all of this is offered in the comfort of one’s own home [ ]. Improved public access to health-related information concerning treatment and prevention has the potential to make a dramatic positive impact on public health outcomes. However, the realization of this positive impact on health-related behavior largely depends on how the internet is used by the information seeker.
Although the affordances of the internet as a health information resource are plentiful, locating high-quality health information on the internet can be challenging. The volume of health information on the web is massive and perpetually increasing; for example, a Google search for “sore throat treatment” returns nearly 2 billion results. Within this plethora of information, users’ search results are often clouded with misleading, inaccurate, or unsubstantiated information; for example, in a sample of 227 web pages related to “immunity boosting” and “coronavirus,” Rachul et al [ ] found that 85.5% of the sources portrayed “immune boosting” as beneficial for preventing COVID-19 infection despite the absence of scientific evidence. Information seekers are tasked with sifting through search engine results to locate something relevant, trustworthy, and in an accessible format to inform their decisions. Norman and Skinner [ ] describe this ability as eHealth literacy, which they define as “the ability to seek, find, understand, and appraise health information from electronic sources and apply the knowledge gained to addressing or solving a health problem” [ ].
The construct of eHealth literacy was initially derived from 6 underlying foundational literacies: traditional literacy, information literacy, media literacy, health literacy, computer literacy, and scientific literacy. Norgaard et al [ ] extended this model further to identify 7 domains that ultimately contribute to an individual’s eHealth literacy: ability to process information (locating, interpreting, and applying health information), engagement in own health (interest in, and basic knowledge of, personal health and health care systems), ability to actively engage with digital services (competency with technology and navigating the internet), feel safe and in control (confidence and knowledge of securing personal health information), motivated to engage with digital services (accepting attitude toward web-based health resources and services), access to digital services that work (having the hardware and software to access web-based health resources and services), and digital services that suit individual needs (web-based health resources exist that are accessible and understandable to the individual user). Taken together, these models of eHealth literacy depict the diverse combination of critical skills needed to effectively acquire health information on the web.
Concurrent with their publication coining eHealth literacy, Norman and Skinner published the eHealth Literacy Scale (eHEALS), an 8-item questionnaire wherein each item is rated using a 5-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree), with higher scores indicating higher perceived eHealth literacy. In a systematic review of eHealth literacy measurement tools, Karnoe and Kayser [ ] noted that this was the only tool used in multiple studies at the time of publication, and it remains the most widely used instrument across the literature to assess eHealth literacy [ ]. As a short self-report assessment tool, the eHEALS is attractive for health care providers looking to optimize clinical efficiency and convenient for researchers to minimize participant burden. However, the self-report nature of this measure may limit its usefulness in providing an accurate depiction of how well people critically engage with web-based health information in today’s internet landscape.
Although self-report measurements have functioned as invaluable research tools in several settings, the efficacy of using self-report to evaluate one’s own skills, abilities, or proficiency in many contexts has been criticized for decades [, ]. In their seminal review and meta-analysis of self-evaluations of ability across 55 studies, Mabe and West [ ] found only a very modest (mean r=0.29) relationship between self-reported ability and performance-based measurement. Several psychological mechanisms may contribute to the considerable gap between estimated and actual ability, many of which have been supported by research across a variety of cognitive skills [ ]; for example, individuals have a tendency to be confident in skills where they feel fluent [ ], meaning people may mistake the ability to fluidly perform a skill with the ability to perform that skill well. In the context of web-based health information seeking, people may mistake the ability to seamlessly locate some health information on the internet with the distinct ability to retrieve accurate, trustworthy, and relevant web-based health information. Many avid followers of the antivaccine movement have spent several hours surfing the internet for health-related information [ , ]. Their relatively high experience of web-based health information seeking may lead them to feel fluent in the skill, yet they are evidently not equipped to judge the quality of web-based information by the standards of evidence-based medicine.
Relating to web-based information seeking, ample literature indicates that people have a tendency to overestimate their ability with computers [, ] as well as their ability to locate and understand information on the web [ , ]. Frequent overestimation of ability in this context may result from overlap between the cognitive skills necessary for the task itself and the metacognitive skills necessary for accurate self-assessment, meaning that those without the competence to consistently identify trustworthy information on the web are unlikely to have the capacity to recognize their lack of competence. Dunning [ ] has put it more plainly: “If they had the skill to know they were making consistent mistakes, they would also have the skill to avoid those blunders in the first place.” Fittingly, in the context of web-based health information seeking, Stellefson et al [ ] found that the least proficient searchers commonly exhibited high confidence in their searching abilities, indicating that self-efficacy is not a consistent predictor of eHealth literacy skills. Similarly, Maitz et al [ ] found that adolescents who reported higher eHEALS scores did not perform better at selecting higher-quality websites to access health information. Illustrating this point further, Brown and Dickson [ ] found that a group of occupational therapy graduate students reported lower average eHEALS scores than the high school participants in the original paper by Norman and Skinner [ ]. Taken together, this literature indicates that intellectual hubris may play a bigger role in self-rated eHealth literacy than actual knowledge or ability; therefore, it should come as little surprise that the bulk of studies have noted little to no association between eHEALS scores and demonstrated eHealth skills [ - ].
Measures of eHealth literacy that involve more objective performance-based measurement of eHealth skills and abilities may be more promising at assessing how people actually behave on the web; however, such measures appear to have seen comparatively little use among researchers to date [ ].
Although research syntheses have been undertaken related to eHealth literacy measurement instruments and their properties [, ], no synthesis to date has looked specifically at performance-based measures of eHealth literacy. This work is needed to assist researchers in identifying performance-based measurement tools in the literature that may be applied in future projects.
The purposes of this study were 2-fold. The first purpose was to identify tools that currently exist to measure eHealth literacy based on objective performance. Within this purpose, we were interested in the design of these tools, their intended population, and whether their authors included some evidence of validity. The second purpose of this study was to characterize the prevalence of performance-based eHealth literacy measurement tools in the literature broadly compared with subjective eHealth literacy measurement tools.
We conducted a systematic scoping review of the literature. A scoping review was deemed appropriate for the purpose of this study, given its broad focus on how research involving eHealth literacy measurement is conducted. This scoping review was conducted in 3 main stages, aligning with the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) checklist [ ].
In the first stage, we devised an electronic search protocol that was completed by 2 researchers. We used the following search string, modified from the one used in a systematic review conducted by Karnoe and Kayser:
(“eHealth literacy” OR “electronic health literacy” OR “e-health literacy” OR “digital health literacy” OR (“health literacy” AND “digital literacy”) OR (“health literacy” AND “computer literacy”)) AND (scale OR measure OR survey OR questionnaire OR test OR assessment)
We applied this string to search 6 relevant databases (MEDLINE; PsycINFO; CINAHL; Library and Information Science Abstracts [LISA]; Library, Information Science & Technology Abstracts [LISTA]; and Education Resources Information Center [ERIC]) selected in consultation with a university librarian. All searches were conducted in June 2021. Across these 6 databases, 1735 search results were obtained, which we reduced to 1444 (83.23%) unique articles after removing duplicates.
In the second stage, we conducted 3 screening phases to narrow down the search results. In each screening phase, articles were sought that met the following inclusion criteria: (1) written in English; (2) published in peer-reviewed journals; and (3) created, revised, or used an eHealth literacy assessment tool (in accordance with the definition of eHealth literacy proposed by Norman and Skinner , quoted earlier in this paper). The first screening phase involved screening all articles using their titles, which was carried out by the first author. After this phase, of the 1444 articles, 788 (54.57%) were retained for possible inclusion. The second screening phase involved screening all remaining articles using their abstracts, which was carried out by the first and second authors. If either author opted to include the article, it was kept for the third screening phase. After this phase, of the 788 articles, 374 (47.5%) were retained for possible inclusion. The third screening phase involved reading the full text of all remaining articles, which was carried out by the first and second authors. During this phase, each author also classified each included article as containing only a subjective eHealth literacy measure (based on self-rated assessment) or as containing a performance-based eHealth literacy measure (based on practical skill or knowledge assessment).
After the third screening phase, of the 374 articles, 311 (83.2%) were retained for inclusion in this scoping review; 31 (10%) of these 311 articles contained a performance-based measure of eHealth literacy. In cases of disagreement between the first and second authors during the third screening phase, the third author was consulted, and discussion ensued until consensus was reached. Throughout the entire screening process, all authors only excluded articles that very evidently did not meet the inclusion criteria. At this point, the first author read the bibliographies of the articles containing a performance-based measure of eHealth literacy (n=31) and highlighted titles that could meet the inclusion criteria of this scoping review. After discarding studies that had already been considered in the screening process, new studies were selected (n=6) and read in full by the first and second authors. Of these 6 new studies, 2 (33%) were included in the scoping review, and both contained a performance-based measure of eHealth literacy. Thus, the total number of articles included in this scoping review is 313, of which 33 (10.5%) contain a performance-based measure of eHealth literacy. The scoping review process we undertook is summarized in.
In the third stage, we extracted data from each of the articles containing a performance-based measure and summarized them into a table. The first and second authors each reread the articles containing a performance-based measure of eHealth literacy (n=33) and input information from each into a comprehensive table devised by all 3 authors. This table was later simplified into appropriate headings, selected in the interest of providing a concise and relevant summary of the findings in this paper. In judging evidence of validity, we considered 3 relevant types of validity: ecological validity, criterion validity, and construct validity. Ecological validity describes the extent to which a measure represents or predicts behavior in real-world settings. In the context of measuring performance-based eHealth literacy, we considered a measure to be ecologically valid if it involved participants accessing real or authentically simulated websites to gather or evaluate health information. Criterion validity describes the extent to which the scores of a measure predict scores of another established measure of interest [ ]. We stated that a measure had criterion validity if the authors found statistically significant correlations between scores of their objective eHealth literacy measure and scores from another relevant construct or validated measure (eg, higher scores on this measure correlate with higher scores of another validated measure of computer-related skills). Construct validity describes the extent to which the scores of a measure reflect the actual phenomenon that the measure intends to assess [ ].
This concept has some overlap with criterion validity; however, for our purposes, we stated that a measure had construct validity if meaningful differences were reported based on relevant lifestyle or demographic factors or reported among groups that can be reasonably assumed to have different levels of eHealth literacy (eg, those with more health-related education score consistently higher). We also noted whether the performance-based measurement tool described in the article was included in its full form, partial form (examples provided but some parts omitted), or not at all.
Performance-Based eHealth Literacy Measurement Tools
Our scoping review identified 33 peer-reviewed studies using a tool to measure eHealth literacy that incorporated a performance-based aspect. Within these studies, just 2 measures were used more than once. In total, 3 (9%) of the 33 articles used the eHealth literacy assessment toolkit (eHLA) created by Karnoe et al, and another 3 (9%) used the Research Readiness Self-Assessment created by Ivanitskaya et al [ ]. We thus identified a total of 29 unique performance-based measures of eHealth literacy. It is notable that many of these studies use differing terminology to describe the construct of eHealth literacy, as outlined in ; however, in our judgment, all studies included in this review measure constructs aligning with the definition of eHealth literacy originally proposed by Norman and Skinner [ ].
We categorized the 29 unique performance-based measurement tools identified in this scoping review into 5 broad categories according to the structure of their main measurement technique. It should be noted that several of the measurement tools have components that fall into ≥2 of these categories; for instance, the Research Readiness Self-Assessment-health (RRSA-h) [, ] measures declarative knowledge about the internet and has participants respond to questions that simulate using web-based health information. We have described the instrument design of each performance-based measurement tool in such that readers can gain a fuller understanding of each tool compared with the brief descriptions within the body of this paper.
|Authors, year||Terminology for measured construct||Participants and context||Instrument design||Health topics||Evidence of validity||Instrument availability|
|Answering health-related questions using the internet|
|Agree et al, 2015||Web-based health literacy||323 participants aged 35 to 90 years||Six health-related questions were answered by performing web-based searches on a computer, limited to 15 minutes per task; answers were coded by 2 researchers for response accuracy (0 or 1) and specificity (0, 1, or 2) to give a score ranging from 0 to 18||Diet and nutrition guidelines, skin cancer, alternative medicine, vaccines, assistive health technology, and over-the-counter genetic testing||Construct validity was demonstrated in that having a college degree and daily internet use were positively associated with more successful health information searches, and the oldest age group had lower success scores compared with younger participants; criterion validity was demonstrated in that higher health literacy was positively associated with success on some search tasks||Partially|
|Blakemore et al, 2020||eHealth literacy||A massive open web-based course run 8 times||Participants responded to 1 health-related question and were asked to list the resources they used to inform that answer; answers were coded according to the extent that quality resources were used to answer this question||Epigenetics and cancer||Ecological validity was demonstrated via having participants access web-based resources to inform answers to a health-related question||Yes|
|Chang et al, 2021||Searching performance||11 older adult participants||Participants were asked to search for specific health information using a web browser on a computer; search completion time and problem correctness were measured by researchers during live observation||Vaccination for older adults, stroke, and angina||Ecological validity was demonstrated via participants accessing real-world web-based health information to answer health-related questions||No|
|Freund et al, 2017||eHealth literacy||79 older adult participants||Participants were asked to answer 6 questions (3 each for 2 health scenarios) while being given the option of using links to web-based medical databases with relevant information||Hypertension, high blood pressure, osteoporosis, breast cancer, and prostate cancer||Construct validity was demonstrated in that test scores for the intervention group improved; ecological validity was demonstrated in that participants responded to questions by referencing a web-based resource||Yes|
|Kordovski et al, 2020||eHealth search skills||56 undergraduate students enrolled in psychology courses||Participants were instructed to find the answer to 5 short questions and 1 vignette-based question using an internet browser of their choice; participants’ accuracy, time to complete each task, and total number of search queries were recorded||Headaches, migraines, and Lyme disease||Criterion validity was demonstrated in that long-answer accuracy was associated with better performance on a learning and memory composite test; construct validity was demonstrated in that lower performance on short questions was associated with lower maternal education and lower socioeconomic status||Yes|
|Loda et al, 2020||Information-seeking behavior||140 medical students||Students were randomly assigned to use a specific search engine (Google, Medisuch, or free choice) and had 10 minutes to fill in a worksheet outlining a diagnostic recommendation; to pass, students needed to give at least 1 of 3 recommendations matching those of a clinical expert||Histamine intolerance||None||No|
|Quinn et al, 2017||eHealth literacy||54 adults||Participants were presented 6 health questions, and they used a browser to search for information to answer the questions; each answer was scored as correct or incorrect, with a final sum score out of 6||Various topics||Criterion validity was demonstrated because the scores correlated with health literacy||Yes|
|Sharit et al, 2008||Internet search task performance||40 older adults||Participants were assigned 6 search problems involving health-related information, for which they had to provide an answer using information they found on the internet; participants had 15 minutes to solve each problem, and the problems were progressively more complex; the problem solutions were scored as incorrect, partially correct, or correct by the researcher to create a task performance score; scores were weighted by difficulty, and participants’ completion times for each problem were also measured and factored into the score such that faster times indicated better performance||Various||Criterion validity was demonstrated in that higher performance correlated with higher knowledge of the internet, as well as with measures of reasoning, working memory, and perceptual speed||Yes|
|Sharit et al, 2015||Search accuracy||60 adults||Participants were given a health scenario, followed by a series of questions related to it, and they could use the internet to find answers; to assess accuracy, researchers assigned a score for each question; questions were weighted based on their difficulty (differed in complexity and number of subtasks)||Multiple sclerosis||Criterion validity was demonstrated as search accuracy significantly correlated with reasoning, verbal ability, visuospatial ability, processing speed, and executive function||Yes|
|van Deursen and van Dijk, 2011||Internet skills performance||88 adults||Participants completed 9 health-related assignments using a computer with high-speed internet; assignment was deemed successfully completed if a correct answer was provided and deemed unsuccessful if no correct answer was provided in the given time frame||Various||Ecological validity was demonstrated via participants using unrestricted web-based searching to answer health-related questions||Yes|
|van Deursen, 2012||Internet skills performance||88 adults||Participants completed 9 health-related assignments using a computer with high-speed internet; assignment was deemed successfully completed if a correct answer was provided and deemed unsuccessful if no correct answer was provided in the given time frame||Various||Ecological validity was demonstrated via participants using unrestricted web-based searching to answer health-related questions; construct validity was demonstrated as education was predictive for making incorrect decisions based on the information found||Yes|
|Simulated internet tasks|
|Camiling, 2019||Actual eHealth literacy (distinct from perceived eHealth literacy)||40 grade-10 students from public and private schools||Participants completed 10 simulation tasks; 2 researchers used a rubric to rate eHealth literacy based on task performance||Not specified||Ecological validity was demonstrated via use of simulated internet research tasks resembling a realistic environment||No|
|Chan and Kaufman, 2011||eHealth literacy||20 adult participants aged between 18 and 65 years||Participants completed eHealth tasks while verbalizing their thoughts (think-aloud protocol); researchers observed their performance, rated accuracy, and denoted barriers based on video capture, audio recording, and notes taken during observation||Comparing hospital ratings||Ecological validity was demonstrated via participants actively completing health-related internet tasks in a realistic environment||Partially|
|Maitz et al, 2020||Health literacy (the authors note in their study that their understanding of health literacy includes “internet-based information literacy and reading literacy”)||14 secondary school students aged 12 to 14 years||Participants were asked to provide health-related advice in response to a short narrative text; they were asked to take screenshots of all searches and web pages opened; the web pages were later classified by researchers as good, fair, poor, or bad||Rhinoplasty and skin cancer||None||Yes|
|Neter and Brainin, 2017||eHealth literacy||88 older adults||Participants completed 15 computerized simulation tasks assessing digital and health literacy skills; tasks were rated as completed or not completed by the researcher upon reviewing the recorded performance; time needed to perform the task was also recorded; 2 researchers provided overall observational judgment on participants’ performance, ranging from 1 (poor) to 5 (good); a third researcher evaluated whether disagreements were present||Various topics||Construct validity was demonstrated because lower performers had significantly fewer years of experience using the internet||Yes|
|van der Vaart et al, 2011||eHealth literacy||88 adults||Participants completed 9 health-related assignments using a computer with high-speed internet; assignment was deemed successfully completed if a correct answer was provided and deemed unsuccessful if no correct answer was provided in the given time frame||Various||Ecological validity was demonstrated via participants using unrestricted web-based searching to answer health-related questions||Yes|
|van der Vaart et al, 2013||eHealth literacy||31 adult patients||In study 1, participants could use the internet freely to complete 6 health-related assignments; in study 2, participants used specific websites to complete 5 health-related assignments; researchers coded whether the assignment was completed and whether help was needed; in addition, the time needed to perform each assignment was recorded; the performance was ultimately scored as good, reasonable, or poor according to the skills participants used to execute the assignment||Various||Construct validity was demonstrated through correlations of higher performance with higher education||Yes|
|Witry et al, 2018||eHealth task performance||100 adult patients with COPDa||Participants completed a series of timed eHealth simulation exercises using a laptop computer and 2 different tablet devices; the time taken to complete each task was measured and used to indicate task performance, with faster times indicating better performance||COPD||Construct validity was demonstrated because those who reported using video chat took less time than nonusers to complete most of the tasks||No|
|Website evaluation tasks|
|Kalichman et al, 2006||Health information evaluation skills||448 adults who used the internet <3 times in the month before screening||Participants rated 2 preselected web pages—1 from a medical association and 1 with scientifically unsupported claims—on 5 dimensions of website quality; a larger difference in scores indicated higher health information evaluation skills||HIV and AIDS treatment||Construct validity was demonstrated in that those receiving internet skills training had better discrimination||Yes|
|Mitsuhashi, 2018||eHealth literacy evaluation skills||300 adult participants||Participants were shown a search engine results page with 5 websites and asked which should be viewed first; the list included 2 commercial websites, 2 personal health care websites, and 1 government website; participants choosing the government website were assigned 1 point, others were assigned 0 points||Not specified||Construct validity was demonstrated in that evaluation skills improved significantly in an e-learning intervention group compared with the control group||No|
|Schulz et al, 2021||Health literacy||362 adults||Participants rated 2 health information websites (one was of high quality, whereas the other was of low quality) using 3 seven-step semantic differential scales; in addition, participants were asked to choose beneficial depression treatments from a list of relevant and nonrelevant treatments||Depression treatment||Criterion validity was demonstrated in that those with high health literacy and accurate recognition of the low-quality website demonstrated good judgment for depression treatment||Partially|
|Trettin et al, 2008||Website evaluation||142 high school students||Two measures: first, a brief 2-item pretest of knowledge about how to evaluate a website, and second, participants ranked 2 different websites (assigned to them from a list of 12 websites) using 6 credibility factors, with scores ranging from 1 (very bad) to 5 (very good)||Not specified||Ecological validity was demonstrated in that participants ranked authentic health-related websites according to their credibility||Yes|
|Xie, 2011||eHealth literacy||124 older adults||Participants were asked to evaluate the quality of 20 health websites: 10 selected from the Medical Library Association’s recommended sites and 10 from a commercial search engine; each correct assessment received 1 point, whereas incorrect or uncertain assessments received 0 points||Not specified||Construct validity was demonstrated because scores improved after an educational intervention||No|
|Knowledge of theweb-basedhealth information–seeking process|
|Hanik and Stellefson , 2011||eHealth literacy||77 undergraduate health education majors||Researchers used the RRSA-hb, which is a questionnaire that tests participants’ declarative knowledge of concepts, skills, and thinking strategies related to using the internet to find health information||Various||Ecological validity was demonstrated in the study by Ivanitskaya et al ||No|
|Hanna et al , 2017||eHealth literacy||165 adult dental patients||Participants were asked to circle the web-based health information quality seals they recognized and report the purpose of 1 circled figure||Third molar knowledge||Criterion validity was demonstrated in that the eHEALSc scores correlated with the dental procedural web-based information–seeking measure; construct validity was demonstrated in that web-based dental procedural information seeking was significantly associated with educational attainment and dental decisional control preference||Yes|
|Ivanitskaya et al , 2006||Health information competency||400 college-aged students||Researchers used the RRSA-h, which is a web-based quiz that assesses declarative and procedural knowledge related to web-based health information seeking||Various||Ecological validity was demonstrated in that some questions had participants access real health-related websites to assess their credibility||No|
|Ivanitskaya et al , 2010||eHealth literacy skills||1914 undergraduate and graduate students enrolled in health-related courses||Researchers used the RRSA-h, which is a web-based quiz that assesses declarative and procedural knowledge related to web-based health information seeking; a proxy measure of critical judgment skills related to pharmacies was also included||Pharmaceuticals and various others||Construct validity was demonstrated in that evaluation skills positively correlated with the number of earned college credits and being enrolled in a health-related major||Partially|
|St. Jean et al , 2017||Digital health literacy||19 adolescents||Participants were given 13 questions related to searching for health information; researchers analyzed responses using thematic analysis; no evident scoring system used||Type 1 diabetes||None||Yes|
|van der Vaart and Drossaert , 2017||Digital health literacy, eHealth literacy||200 adults||Participants completed a 28-item questionnaire: 21 are self-report items, whereas 7 are performance-based items for each of which there is a correct answer||Various||None for performance-based items||Yes|
|Holt et al , 2019||eHealth literacy||246 adult patients with diabetes, other endocrine conditions, and gastrointestinal diseases||Researchers used eHLAd performance tests (tools 1 and 4); tool 1 is a performance-based health literacy test based on an information leaflet, and tool 4 is a performance test for knowledge of health and health care||Various||Construct validity was demonstrated in that educational level was positively correlated with tool 4||Partially|
|Holt et al , 2020||eHealth literacy||366 nursing students||Researchers used eHLA performance tests (tools 1 and 4); tool 1 is a performance-based health literacy test based on an information leaflet, and tool 4 is a performance test for knowledge of health and health care||Various||Construct validity was demonstrated in that graduate-level students scored higher than entry-level students, and performance on tools 1 and 4 was correlated with having at least 1 parent with experience in the social or health care system||Partially|
|Karnoe et al , 2018||eHealth literacy||475 adults used as a validation sample||Researchers used the eHLA, which consists of 7 tools, 2 of which are objective measures: tool 1 is a performance-based health literacy test based on an information leaflet, and tool 4 is a performance test for knowledge of health and health care||Various||None||Partially|
|Liu et al , 2020||Digital health literacy||1588 adult participants||Participants were provided 5 randomly selected items from a large web-based health information bank and asked whether the information was right or wrong; 2 items were designed to be relatively easy to judge accurately, 2 were moderately easy, and 1 was difficult; participants scored 1 for each accurate judgment and 0 for being incorrect or unsure||Various||Ecological validity was demonstrated in that the web-based health information bank was generated from real web-based sources; construct validity was demonstrated because participants at high risk for misjudging health information had lower education level, poorer health, and used the internet less||Partially|
aCOPD: chronic obstructive pulmonary disease.
bRRSA-h: Research Readiness Self-Assessment-health.
ceHEALS: eHealth Literacy Scale.
deHLA: eHealth literacy assessment toolkit.
Answering Health-Related Questions Using the Internet
One of the most common types of performance-based measurement identified in this scoping review involved participants answering health-related questions using the internet; these were questions for which responses could be scored according to correctness, completeness, and specificity. In the majority of the studies using this assessment method, participants were allowed open access to the internet to inform their answers. An exception was the study by Freund et al, in which participants were provided expert-checked links to helpful resources that they could choose to use for answering health-related questions. In addition to the correctness of participants’ responses, some researchers factored completion time into participant scores such that faster completion time indicated greater proficiency. Chang et al similarly considered faster completion times indicative of better web-based search performance, provided the responses were correct. The health questions posed in these measurement tools ranged from encompassing several diverse topics to being focused on 1 topic in depth; for example, both Agree et al and Quinn et al asked participants about 6 diverse health topics, whereas Kordovski et al and Loda et al asked participants to perform 1 in-depth medical diagnosis relating to symptoms of Lyme disease and histamine intolerance, respectively.
An advantage of eHealth literacy measurements where participants use the internet to answer health-related questions is that participants can be assessed without the need for researcher observation and video recording, except in cases where completion time is additionally factored into the scores. With these measurement tools, participants’ responses to questions were graded by ≥1 researchers according to a predetermined rubric. The majority of measurement tools in this category coded responses as simply correct (1 point) or incorrect (0 points); however, Agree et al also examined specificity (on a scale ranging from 0 to 2 points), and Kordovski et al gave partial credit (1 out of 2 points) for coming close to the correct response by diagnosing something similar to Lyme disease. In contrast to considering response correctness, Blakemore et al gauged participants’ eHealth literacy skills based on whether they included a “detailed list of resources,” had “written about using resources,” or had no reference to web-based health resources in their response to a health-related question.
Simulated Internet Tasks
Another common type of performance-based eHealth literacy assessment identified in this scoping review involved web-based information–seeking simulations, where participants’ web-based behavior was observed (either in real time or via video recording) and assessed for proficiency by researchers. Proficiency was determined by assessing whether participants were able to complete a set of tasks; many researchers went further by also assessing the degree of efficiency and correctness that the participants exemplified throughout each simulated task. For example, in the study by van der Vaart et al, participants’ task performance was simply coded as successful or unsuccessful, depending on whether they accomplished the end result of the task (eg, adding a specified website as a bookmark or downloading a specific image), whereas in the study conducted by Camiling, a rubric was developed and used to rate participants’ proficiency while researchers observed them completing 10 health-related tasks on an internet-connected computer. Similarly, Neter and Brainin viewed recordings of participants completing 15 simulation tasks on an internet-connected computer and rated each task on whether it was completed and additionally on the quality of task performance on a scale ranging from 1 (poor) to 5 (good). Video and audio recordings were commonly used to record participants’ actions and thoughts in simulation-based measurement tools, with the notable exception of Maitz et al, who had participants record screenshots of the websites they visited.
eHealth literacy assessments based on simulated internet tasks tended to be the most time-consuming for participants: the 15 simulation tasks in the study by Neter and Brainin took approximately 90 minutes for each participant to complete, and the 5 or 6 simulation tasks in the study by van der Vaart et al took an approximate median time of 30 minutes for each participant to complete. A notable exception is the study by Witry et al, in which the assigned eHealth simulation tasks were far simpler and could be completed in 1 minute; however, it should be noted that completion time was the only metric used to gauge participants’ eHealth literacy in this study. It also stands to reason that evaluating the proficiency of participants’ recorded web-based behavior is quite time-consuming for researchers compared with assessment methods that are scored based on correct or incorrect responses to questions. In the studies conducted by Chan and Kaufman as well as van der Vaart et al, researchers factored in the correctness of responses in addition to researcher-observed performance to gauge participants’ eHealth literacy, combining task simulation with the method of answering health-related questions using the internet discussed previously.
Website Evaluation Tasks
A third prominent type of performance-based eHealth literacy assessment identified in this scoping review involved participants evaluating the quality of health-related websites. In the studies conducted by Kalichman et al, Schulz et al, and Trettin et al, participants rated 2 different websites on 5, 7, and 6 dimensions related to their credibility, respectively. In each of these studies, the authors presented participants with a website of high quality and another of low quality, and participants’ eHealth literacy was determined based on whether they rated the websites accordingly (with a larger difference between high- and low-quality website ratings indicating greater proficiency).
Deviating from the structure of these studies but still related to evaluating website quality, Mitsuhashi had participants select which website they should view first from a search results page with 5 options (2 commercial websites, 2 personal health care websites, and 1 government website); participants who selected the government website were deemed to have proficient evaluation skills. Xie collected 20 websites to present to participants—10 from a commercial search engine and 10 from a medical association’s recommended websites—and had participants assess each website as high or low quality. Correct assessments earned participants 1 point, and incorrect or uncertain assessments earned 0 points, for a maximum score of 20.
Having participants rate the quality of real health-related websites carries significant ecological validity in terms of assessing their knowledge of how to recognize signifiers of credible health information on the web; however, in contrast to the simulated internet task measurement tools, these tools do not require participants to form queries or extract information to apply to health-related problems. Although evaluating the credibility of a web-based source is undoubtedly a pivotal component of eHealth literacy, it does not encompass all components of eHealth literacy as defined by Norman and Skinner.
Knowledge of the Web-Based Health Information–Seeking Process
Distinct from measurement tools where participants demonstrate eHealth literacy skills through task completion or by correctly answering health-related questions using the internet, other tools tested participants on their knowledge of procedural internet skills and information-seeking strategies to gauge their eHealth literacy. The most widely used tool in this category is the RRSA-h. The RRSA-h uses multiple-choice and true-or-false questions to assess participants’ declarative knowledge of web-based health information seeking, as well as some participative problem-based exercises to assess elements of their procedural knowledge. Ivanitskaya et al as well as Hanik and Stellefson used this measure to assess eHealth literacy skills in undergraduate and graduate students. The RRSA-h takes approximately 30 minutes on average to complete.
In contrast to this relatively time-consuming measure, Hanna et al used just 1 item to gauge participants’ ability to recognize high-quality web-based information by asking them to circle web-based health information quality seals that they recognized from a set of 19 images. In addition, participants were asked to explain the purpose of one of the images they circled. Researchers analyzed whether participants were able to identify ≥1 real quality seals, as well as whether they could identify that the image was used to signify credibility. van der Vaart and Drossaert used 7 performance-based multiple-choice quiz items to gauge 7 dimensions of participants’ eHealth literacy skills, which were added to supplement a subjective measure of eHealth literacy. Finally, in this category, St. Jean et al devised a written assignment where adolescents advised a fictional peer about using the internet to find information about type 1 diabetes through 13 open-ended questions. The researchers did not use an evident scoring system to rate the eHealth literacy skills of this population; rather, they applied thematic analysis to characterize the eHealth literacy skills of the participant group as a whole.
Quizzing participants on their knowledge related to eHealth literacy does not require recording or observing participants’ web-based activity, nor does it necessarily require providing participants with internet access, which makes this method more accessible for some research settings in terms of resources and complexity than the other tools mentioned previously. As participants are not actively carrying out web-based health information–seeking tasks with these measurement tools, the tools may not carry the same degree of ecological validity as those mentioned previously. However, basing eHealth literacy assessment on participants’ knowledge of proficient web-based information–seeking skills still avoids many of the pitfalls related to subjective self-report measures of eHealth literacy.
We identified 2 measurement tools that examined participants’ baseline knowledge of health and their ability to apply that knowledge as an indicator of eHealth literacy. The most prominent tool is the eHLA, which is composed of 7 distinct measurement tools. Tools 1 and 4 within this set are performance-based measures that assess functional health literacy and participant knowledge of health and disease, respectively. The performance-based aspects of this tool (as well as its subjective tools) were used by Holt et al to measure eHealth literacy in medical outpatients and nursing students.
Liu et al also created a performance-based eHealth literacy measurement tool that assesses participants’ health knowledge. In their study, the authors generated a 310-item bank of examples of web-based health information, which included items labeled “easy,” “moderate,” and “difficult.” Participants were randomly assigned 5 items (2 easy, 2 moderately easy, and 1 difficult) and asked to rate the information as correct, incorrect, or unsure. Participants were given 1 point for accurately identifying information as correct or incorrect and 0 points otherwise, for a score out of 5 representing their eHealth literacy.
In these 2 eHealth literacy measurement tools, there are no components that directly assess computer literacy in a performance-based manner, nor do they contain performance-based components related to actively seeking health information (eg, forming a query or selecting a source). We have included them in this scoping review because the authors themselves define these as measures of eHealth literacy (and it should be noted that the eHLA includes subjective measurements that touch upon computer-related components of eHealth literacy). However, judging their performance-based components solely by the definition of eHealth literacy proposed by Norman and Skinner, these measures may be more accurately characterized as partial measures of the construct.
Prevalence of Performance-Based eHealth Literacy Measurement Tools
Of the 313 studies included in this scoping review, 33 (10.5%) used a performance-based measurement tool, and 280 (89.5%) used measures of eHealth literacy that only incorporated subjective or self-rating aspects. Furthermore, it is notable that 210 (67.1%) of the 313 studies reported using eHEALS in either its original form or translated into another language. Our findings indicate that current research on eHealth literacy has a strong tendency to rely on self-rated perceptions of eHealth literacy rather than assessing actual ability.
The primary purpose of this scoping review was to identify and describe tools that measure eHealth literacy based on objective performance (as opposed to subjective self-rating). We identified 29 such measurement tools, of which only 2 had been used in >1 peer-reviewed study as of the date of our search. These measurement tools were the RRSA-h, which is a web-based quiz measuring declarative and procedural knowledge related to web-based health information seeking, and the eHLA, which assesses eHealth literacy through 7 distinct tools, of which 2 are performance based. It is noteworthy that the 2 performance-based tools within the eHLA do not touch upon computer- or internet-specific skills or knowledge; as such, not all aspects of eHealth literacy are measured in a performance-based fashion. The same critique may be applied to the measurement tool created by Liu et al, who similarly do not directly address computer literacy or media literacy in their performance-based assessment tool.
The second purpose of this scoping review was to characterize the prevalence of performance-based eHealth literacy measurement tools in the literature in contrast to subjective measurement tools. Our findings indicate that the vast majority of current research on eHealth literacy relies on self-rated perceptions of eHealth literacy rather than assessing actual ability. This is concerning, considering the limited utility of short self-report measures for predicting performed eHealth literacy, as indicated by a substantial body of literature. If researchers are interested in gauging participants’ true ability to locate, evaluate, and use web-based health information, more efforts should be made to incorporate performance-based measurement tools of eHealth literacy, such as those identified in this scoping review.
Another notable finding of this scoping review is that of the 29 unique eHealth literacy measurement tools with performance-based components identified, only 2 had been used in >1 peer-reviewed study at the time of this research. This comes in stark contrast to the prevalence of eHEALS, which was used by 210 (67.1%) of the 313 studies in various contexts and languages. This could be due in part to several of the studies (8/33, 24%) not including full versions of their performance-based instruments, making it challenging for other researchers to replicate these measures in other projects. Another implicit challenge to creating a performance-based eHealth literacy measure that may be adopted for widespread use is the changing state of scientific consensus on health-related topics, meaning that correct answers to health-related questions may need to be updated over time; for example, dietary guidelines have changed substantially in the past few decades because they have been updated based on our growing scientific knowledge. In addition, health-related topics can differ substantially in relevancy or saliency among different populations, meaning a performance-based eHealth literacy measure effective for a particular population may not be as useful for another; for example, Loda et al designed a performance-based measure for use by medical students in which they needed to produce a histamine intolerance diagnosis on par with that of a clinical expert using the internet for research, a task that is likely beyond the abilities of typical users without comparable existing medical knowledge.
Judging these performance-based measurement tools by the definition of eHealth literacy proposed by Norman and Skinner, the tools with the greatest perceived ecological validity include those having participants answer health-related questions using the internet and those having participants engage in simulated web-based health-related tasks. These measurement tools evaluate participants’ web-based health information–seeking abilities in settings similar to the ones they encounter when seeking information on their own computers. However, the time-consuming nature (for both participants and researchers) and the equipment needs of these measures present substantial barriers to their use. Efforts to streamline some of these measurement tools by including only strictly necessary components could assist with broader use. One example of a concise measure is offered by Witry et al, who created a set of simple eHealth simulation tasks that could be completed in <1 minute; however, it should be noted that this measure assesses one’s ability to navigate an eHealth platform more so than to actively seek health information on the web. While considering the challenges to using performance-based eHealth literacy measures, researchers should also weigh the major advantages of these tools in producing a more accurate depiction of performed eHealth literacy compared with short self-report measures.
One glaring absence across most performance-based measures of eHealth literacy is any mention of social media. In discussing the modern utility of eHEALS, Norman noted that the shifting nature of the internet, with everyday users routinely contributing information on the web as opposed to only accessing it, necessitates modifications to existing measurement tools to maintain validity. Consumers are increasingly turning to social media platforms with health-related queries, where they stand to gain social and emotional support from peers while gaining crowdsourced wisdom related to their health problem. The quality and trustworthiness of information on these platforms have been flagged as a major concern; however, public health organizations have also used these engaging platforms to distribute high-quality and up-to-date information. As an acknowledgment that users are able to gain useful and accessible health information using social media, more eHealth literacy measurement tools should incorporate these platforms into their assessment. One example of an assessment tool with a social web-based component is provided in the study by van der Vaart et al, where participants were asked to demonstrate interacting with a health care rating website and peer support forum, for which their proficiency was assessed based on independent task completion and overall task performance. Given the prominence of social media in today’s web-based informational landscape, future performance-based measures of eHealth literacy should consider assessing participants’ ability to identify credible health information on social media platforms.
This study includes a few notable limitations. This scoping review considered only peer-reviewed journal articles published in English, which may have caused bias in our findings. Furthermore, it is possible that we missed studies that evaluated skills under the umbrella of eHealth literacy but did not describe them using any of the terminology in our search protocol. We did locate studies using terms such as “health information evaluation skills” and “information-seeking behavior” through our search protocol, indicating some ability to detect studies deviating from our search terms. Finally, it should be noted that the evidence of validity provided in the summary table offers only a limited, surface-level judgment of 3 types of validity (criterion, construct, and ecological validity) based on the evidence presented in each article. It could very well be the case that the authors of the included studies have additional evidence of validity that we did not recognize as falling within these types of validity or that did not make it into their published articles.
This is the first literature review that specifically identifies objective performance-based measurement tools of eHealth literacy in contrast to subjective self-report measurement tools. We identified 29 unique measurement tools of eHealth literacy with performance-based components used in various populations and covering various health-related topics. To better establish the utility and validity of performance-based measurement tools of eHealth literacy, scholars seeking to incorporate performance-based measurement of eHealth literacy into their research should use and build on some of these existing measurement techniques rather than producing their own. In contexts where research participants can be provided a computer with internet access, measures from the Health-related questions using the internet, Simulated internet tasks, or Website evaluation tasks categories of this scoping review may offer ecologically valid options for assessing eHealth literacy. In contexts where providing such equipment is not feasible, measures from the Knowledge of the web-based health information–seeking process category may offer a simpler performance-based eHealth literacy assessment that still reduces strict reliance on participants’ ability to gauge their own skill level.
In addition, to the best of our knowledge, this is the first literature review that quantifies the approximate prevalence of performance-based measurement versus subjective measurement of eHealth literacy in the literature broadly. In peer-reviewed scholarship, eHealth literacy assessment is predominantly conducted using subjective self-report measurement techniques. Performance-based measures of eHealth literacy are likely to provide a far better picture of how proficiently people find, evaluate, and use web-based health information. As such, researchers should consider using more objective measures of eHealth literacy such as those identified in this scoping review.
Conflicts of Interest
- Fishbein M, Ajzen I. Predicting and Changing Behavior: The Reasoned Action Approach. Milton Park, Oxfordshire: Taylor & Francis; 2011.
- Gualtieri L. The doctor as the second opinion and the internet as the first. In: Proceedings of the CHI '09 Extended Abstracts on Human Factors in Computing Systems. 2009 Presented at: CHI '09: CHI Conference on Human Factors in Computing Systems; Apr 4 - 9, 2009; Boston MA USA [CrossRef]
- Sood N, Jimenez DE, Pham TB, Cordrey K, Awadalla N, Milanaik R. Paging Dr. Google: the effect of online health information on trust in pediatricians' diagnoses. Clin Pediatr (Phila) 2019 Jul 29;58(8):889-896 [CrossRef] [Medline]
- Lee ST, Lin J. The influence of offline and online intrinsic motivations on online health information seeking. Health Commun 2020 Aug;35(9):1129-1136 [CrossRef] [Medline]
- Kitchens B, Harle CA, Li S. Quality of health-related online search results. Decis Support Syst 2014 Jan;57(1):454-462 [CrossRef]
- Rachul C, Marcon AR, Collins B, Caulfield T. COVID-19 and 'immune boosting' on the internet: a content analysis of Google search results. BMJ Open 2020 Oct 26;10(10):e040989 [https://bmjopen.bmj.com/lookup/pmidlookup?view=long&pmid=33109677] [CrossRef] [Medline]
- Norman CD, Skinner HA. eHealth literacy: essential skills for consumer health in a networked world. J Med Internet Res 2006 Jun 16;8(2):e9 [https://www.jmir.org/2006/2/e9/] [CrossRef] [Medline]
- Norgaard O, Furstrand D, Klokker L, Karnoe A, Batterham R, Kayser L, et al. The e-health literacy framework: a conceptual framework for characterizing e-health users and their interaction with e-health systems. Knowl Manag E-Learning 2015;7(4):522-540 [CrossRef]
- Norman CD, Skinner HA. eHEALS: the eHealth literacy scale. J Med Internet Res 2006 Nov 14;8(4):e27 [https://www.jmir.org/2006/4/e27/] [CrossRef] [Medline]
- Karnoe A, Kayser L. How is eHealth literacy measured and what do the measurements tell us? A systematic review. Knowl Manag E-Learning 2015;7(4):576-600 [CrossRef]
- Griebel L, Enwald H, Gilstad H, Pohl A, Moreland J, Sedlmayr M. eHealth literacy research-Quo vadis? Inform Health Soc Care 2018 Dec;43(4):427-442 [CrossRef] [Medline]
- Mabe PA, West SG. Validity of self-evaluation of ability: a review and meta-analysis. J Appl Psychol 1982 Jun;67(3):280-296 [CrossRef]
- Dunning D, Heath C, Suls JM. Flawed self-assessment: implications for health, education, and the workplace. Psychol Sci Public Interest 2004 Dec 01;5(3):69-106 [CrossRef] [Medline]
- Dunning D. Self-Insight: Roadblocks and Detours on the Path to Knowing Thyself. Milton Park, Oxfordshire: Taylor & Francis; 2005.
- Hussain A, Ali S, Ahmed M, Hussain S. The anti-vaccination movement: a regression in modern medicine. Cureus 2018 Jul 03;10(7):e2919 [https://europepmc.org/abstract/MED/30186724] [CrossRef] [Medline]
- Kata A. Anti-vaccine activists, Web 2.0, and the postmodern paradigm--An overview of tactics and tropes used online by the anti-vaccination movement. Vaccine 2012 May 28;30(25):3778-3789 [CrossRef] [Medline]
- Merritt K, Smith KD, Di Renzo, Jr JC. An investigation of self-reported computer literacy: is it reliable? Issues Inf Syst 2005;6(1):289-295 [CrossRef]
- Palczyńska M, Rynko M. ICT skills measurement in social surveys: can we trust self-reports? Qual Quant 2021 Aug 29;55(3):917-943 [CrossRef]
- Mahmood K. Do people overestimate their information literacy skills? A systematic review of empirical evidence on the Dunning-Kruger effect. Commun Inf Lit 2016;10(2):198-213 [CrossRef]
- Eysenbach G, Köhler C. How do consumers search for and appraise health information on the world wide web? Qualitative study using focus groups, usability tests, and in-depth interviews. BMJ 2002 Mar 09;324(7337):573-577 [https://europepmc.org/abstract/MED/11884321] [CrossRef] [Medline]
- Stellefson M, Hanik B, Chaney JD, Tennant B. Analysis of eHealth search perspectives among female college students in the health professions using Q methodology. J Med Internet Res 2012 Apr 27;14(2):e60 [https://www.jmir.org/2012/2/e60/] [CrossRef] [Medline]
- Maitz E, Maitz K, Sendlhofer G, Wolfsberger C, Mautner S, Kamolz L, et al. Internet-based health information-seeking behavior of students aged 12 to 14 years: mixed methods study. J Med Internet Res 2020 May 26;22(5):e16281 [https://www.jmir.org/2020/5/e16281/] [CrossRef] [Medline]
- Brown CA, Dickson R. Healthcare students' e-literacy skills. J Allied Health 2010;39(3):179-184 [Medline]
- Neter E, Brainin E. Perceived and performed eHealth literacy: survey and simulated performance test. JMIR Hum Factors 2017 Jan 17;4(1):e2 [https://humanfactors.jmir.org/2017/1/e2/] [CrossRef] [Medline]
- Quinn S, Bond R, Nugent C. Quantifying health literacy and eHealth literacy using existing instruments and browser-based software for tracking online health information seeking behavior. Comput Human Behav 2017 Apr;69:256-267 [CrossRef]
- van der Vaart R, van Deursen AJ, Drossaert CH, Taal E, van Dijk JA, van de Laar MA. Does the eHealth Literacy Scale (eHEALS) measure what it intends to measure? Validation of a Dutch version of the eHEALS in two adult populations. J Med Internet Res 2011 Nov 09;13(4):e86 [https://www.jmir.org/2011/4/e86/] [CrossRef] [Medline]
- Lee J, Lee E, Chae D. eHealth literacy instruments: systematic review of measurement properties. J Med Internet Res 2021 Nov 15;23(11):e30644 [https://www.jmir.org/2021/11/e30644/] [CrossRef] [Medline]
- Munn Z, Peters MD, Stern C, Tufanaru C, McArthur A, Aromataris E. Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Med Res Methodol 2018 Nov 19;18(1):143 [https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/s12874-018-0611-x] [CrossRef] [Medline]
- Tricco AC, Lillie E, Zarin W, O'Brien KK, Colquhoun H, Levac D, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med 2018 Oct 02;169(7):467-473 [https://www.acpjournals.org/doi/abs/10.7326/M18-0850] [CrossRef] [Medline]
- Encyclopedia of Research Design. Thousand Oaks, California, United States: SAGE Publications; 2010.
- Karnoe A, Furstrand D, Christensen KB, Norgaard O, Kayser L. Assessing competencies needed to engage with digital health services: development of the eHealth literacy assessment toolkit. J Med Internet Res 2018 May 10;20(5):e178 [https://www.jmir.org/2018/5/e178/] [CrossRef] [Medline]
- Ivanitskaya L, Laus R, Casey AM. Research readiness self-assessment. J Library Administration 2004 Jan 20;41(1-2):167-183 [CrossRef]
- Agree EM, King AC, Castro CM, Wiley A, Borzekowski DL. "It's got to be on this page": age and cognitive style in a study of online health information seeking. J Med Internet Res 2015 Mar 24;17(3):e79 [https://www.jmir.org/2015/3/e79/] [CrossRef] [Medline]
- Blakemore LM, Meek SE, Marks LK. Equipping learners to evaluate online health care resources: longitudinal study of learning design strategies in a health care massive open online course. J Med Internet Res 2020 Feb 26;22(2):e15177 [https://www.jmir.org/2020/2/e15177/] [CrossRef] [Medline]
- Chang SJ, Yang E, Lee K, Ryu H. Internet health information education for older adults: a pilot study. Geriatr Nurs 2021 Mar;42(2):533-539 [CrossRef] [Medline]
- Freund O, Reychav I, McHaney R, Goland E, Azuri J. The ability of older adults to use customized online medical databases to improve their health-related knowledge. Int J Med Inform 2017 Jun;102:1-11 [CrossRef] [Medline]
- Kordovski VM, Babicz MA, Ulrich N, Woods SP. Neurocognitive correlates of internet search skills for eHealth fact and symptom information in a young adult sample. Percept Mot Skills 2020 Oct 01;127(5):960-979 [CrossRef] [Medline]
- Loda T, Erschens R, Junne F, Stengel A, Zipfel S, Herrmann-Werner A. Correction: undergraduate medical students' search for health information online: explanatory cross-sectional study. JMIR Med Inform 2020 Aug 11;8(8):e23253 [https://medinform.jmir.org/2020/8/e23253/] [CrossRef] [Medline]
- Sharit J, Hernández MA, Czaja SJ, Pirolli P. Investigating the roles of knowledge and cognitive abilities in older adult information seeking on the web. ACM Trans Comput Hum Interact 2008 May;15(1):3-25 [https://europepmc.org/abstract/MED/20011130] [CrossRef] [Medline]
- Sharit J, Taha J, Berkowsky RW, Profita H, Czaja SJ. Online information search performance and search strategies in a health problem-solving scenario. J Cogn Eng Decis Mak 2015 May 21;9(3):211-228 [https://europepmc.org/abstract/MED/29056885] [CrossRef] [Medline]
- van Deursen AJ, van Dijk JA. Internet skills performance tests: are people ready for eHealth? J Med Internet Res 2011 Apr 29;13(2):e35 [https://www.jmir.org/2011/2/e35/] [CrossRef] [Medline]
- van Deursen A. Internet skill-related problems in accessing online health information. Int J Med Inform 2012 Jan;81(1):61-72 [CrossRef] [Medline]
- Camiling MK. eHealth literacy of high school students in the Philippines. IAFOR J Educ 2019 Dec 01;7(2):69-87 [CrossRef]
- Chan CV, Kaufman DR. A framework for characterizing eHealth literacy demands and barriers. J Med Internet Res 2011 Nov 17;13(4):e94 [https://www.jmir.org/2011/4/e94/] [CrossRef] [Medline]
- van der Vaart R, Drossaert CH, de Heus M, Taal E, van de Laar M. Measuring actual eHealth literacy among patients with rheumatic diseases: a qualitative analysis of problems encountered using Health 1.0 and Health 2.0 applications. J Med Internet Res 2013 Feb 11;15(2):e27 [https://www.jmir.org/2013/2/e27/] [CrossRef] [Medline]
- Witry M, Comellas A, Simmering J, Polgreen P. The association between technology use and health status in a chronic obstructive pulmonary disease cohort: multi-method study. J Med Internet Res 2018 Apr 02;20(4):e125 [https://www.jmir.org/2018/4/e125/] [CrossRef] [Medline]
- Kalichman SC, Cherry C, Cain D, Pope H, Kalichman M, Eaton L, et al. Internet-based health information consumer skills intervention for people living with HIV/AIDS. J Consult Clin Psychol 2006 Jun;74(3):545-554 [CrossRef]
- Mitsuhashi T. Effects of two-week e-learning on eHealth literacy: a randomized controlled trial of Japanese Internet users. PeerJ 2018;6:e5251 [https://europepmc.org/abstract/MED/30013857] [CrossRef] [Medline]
- Schulz PJ, Pessina A, Hartung U, Petrocchi S. Effects of objective and subjective health literacy on patients' accurate judgment of health information and decision-making ability: survey study. J Med Internet Res 2021 Jan 21;23(1):e20457 [https://www.jmir.org/2021/1/e20457/] [CrossRef] [Medline]
- Trettin LD, May JC, McKeehan NC. Teaching teens to "Get Net Smart for Good Health": comparing interventions for an Internet training program. J Med Libr Assoc 2008 Oct;96(4):370-374 [https://europepmc.org/abstract/MED/18974815] [CrossRef] [Medline]
- Xie B. Experimenting on the impact of learning methods and information presentation channels on older adults' e-health literacy. J Am Soc Inf Sci 2011 Jun 06;62(9):1797-1807 [CrossRef]
- Hanik B, Stellefson M. E-health literacy competencies among undergraduate health education students: a preliminary study. Int Electron J Health Educ 2011;14:46-58
- Ivanitskaya L, O'Boyle I, Casey AM. Health information literacy and competencies of information age students: results from the interactive online Research Readiness Self-Assessment (RRSA). J Med Internet Res 2006 Apr 21;8(2):e6 [https://www.jmir.org/2006/2/e6/] [CrossRef] [Medline]
- Hanna K, Sambrook P, Armfield J, Brennan D. Internet use, online information seeking and knowledge among third molar patients attending public dental services. Aust Dent J 2017 Sep;62(3):323-330 [https://onlinelibrary.wiley.com/doi/10.1111/adj.12509] [CrossRef] [Medline]
- Ivanitskaya L, Brookins-Fisher J, O'Boyle I, Vibbert D, Erofeev D, Fulton L. Dirt cheap and without prescription: how susceptible are young US consumers to purchasing drugs from rogue internet pharmacies? J Med Internet Res 2010 Apr 26;12(2):e11 [https://www.jmir.org/2010/2/e11/] [CrossRef] [Medline]
- St. Jean B, Greene Taylor N, Kodama C, Subramaniam M. Assessing the digital health literacy skills of tween participants in a school-library-based after-school program. J Consum Health Internet 2017 Mar 10;21(1):40-61 [CrossRef]
- van der Vaart R, Drossaert C. Development of the digital health literacy instrument: measuring a broad spectrum of health 1.0 and health 2.0 skills. J Med Internet Res 2017 Jan 24;19(1):e27 [https://www.jmir.org/2017/1/e27/] [CrossRef] [Medline]
- Holt KA, Karnoe A, Overgaard D, Nielsen SE, Kayser L, Røder ME, et al. Differences in the level of electronic health literacy between users and nonusers of digital health services: an exploratory survey of a group of medical outpatients. Interact J Med Res 2019 Apr 05;8(2):e8423 [https://www.i-jmr.org/2019/2/e8423/] [CrossRef] [Medline]
- Holt KA, Overgaard D, Engel LV, Kayser L. Health literacy, digital literacy and eHealth literacy in Danish nursing students at entry and graduate level: a cross sectional study. BMC Nurs 2020 Apr 10;19(1):22 [https://bmcnurs.biomedcentral.com/articles/10.1186/s12912-020-00418-w] [CrossRef] [Medline]
- Liu P, Yeh L, Wang J, Lee S. Relationship between levels of digital health literacy based on the Taiwan digital health literacy assessment and accurate assessment of online health information: cross-sectional questionnaire study. J Med Internet Res 2020 Dec 21;22(12):e19767 [https://www.jmir.org/2020/12/e19767/] [CrossRef] [Medline]
- DeSalvo KB, Olson R, Casavale KO. Dietary guidelines for Americans. JAMA 2016 Feb 02;315(5):457-458 [CrossRef] [Medline]
- Herforth A, Arimond M, Álvarez-Sánchez C, Coates J, Christianson K, Muehlhoff E. A global review of food-based dietary guidelines. Adv Nutr 2019 Jul 01;10(4):590-605 [https://linkinghub.elsevier.com/retrieve/pii/S2161-8313(22)00403-3] [CrossRef] [Medline]
- Norman C. eHealth literacy 2.0: problems and opportunities with an evolving concept. J Med Internet Res 2011 Dec 23;13(4):e125 [https://www.jmir.org/2011/4/e125/] [CrossRef] [Medline]
- Zhao Y, Zhang J. Consumer health information seeking in social media: a literature review. Health Info Libr J 2017 Dec;34(4):268-283 [https://onlinelibrary.wiley.com/doi/10.1111/hir.12192] [CrossRef] [Medline]
- De Martino I, D'Apolito R, McLawhorn AS, Fehring KA, Sculco PK, Gasparini G. Social media for patients: benefits and drawbacks. Curr Rev Musculoskelet Med 2017 Mar 21;10(1):141-145 [https://europepmc.org/abstract/MED/28110391] [CrossRef] [Medline]
- Merchant RM, South EC, Lurie N. Public health messaging in an era of social media. JAMA 2021 Jan 19;325(3):223-224 [CrossRef] [Medline]
- Moorhead SA, Hazlett DE, Harrison L, Carroll JK, Irwin A, Hoving C. A new dimension of health care: systematic review of the uses, benefits, and limitations of social media for health communication. J Med Internet Res 2013 Apr 23;15(4):e85 [https://www.jmir.org/2013/4/e85/] [CrossRef] [Medline]
eHEALS: eHealth Literacy Scale
eHLA: eHealth literacy assessment toolkit
ERIC: Education Resources Information Center
LISA: Library and Information Science Abstracts
LISTA: Library, Information Science & Technology Abstracts
PRISMA-ScR: Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews
RRSA-h: Research Readiness Self-Assessment-health
Edited by A Mavragani; submitted 27.11.22; peer-reviewed by J Dratva, T Mitsuhashi; comments to author 13.01.23; revised version received 23.01.23; accepted 01.03.23; published 02.06.23.
Copyright
©Bradley Crocker, Olivia Feng, Lindsay R Duncan. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 02.06.2023.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.