Published in Vol 21, No 8 (2019): August

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/13423.
Electronic Health Literacy Among Magnetic Resonance Imaging and Computed Tomography Medical Imaging Outpatients: Cluster Analysis


Original Paper

1School of Medicine and Public Health, Faculty of Health and Medicine, University of Newcastle, Callaghan, Australia

2Priority Research Centre for Health Behaviour, University of Newcastle, Callaghan, Australia

3Hunter Medical Research Institute, New Lambton Heights, Australia

4Hunter Cancer Research Alliance, Newcastle, Australia

Corresponding Author:

Lisa Lynne Hyde, BPsych (Hons)

School of Medicine and Public Health

Faculty of Health and Medicine

University of Newcastle

University Drive

Callaghan, 2308

Australia

Phone: 61 2 4913 8799

Fax: 61 2 4042 0040

Email: Lisa.L.Hyde@uon.edu.au


Background: Variations in an individual’s electronic health (eHealth) literacy may influence the degree to which health consumers can benefit from eHealth. The eHealth Literacy Scale (eHEALS) is a common measure of eHealth literacy. However, the lack of guidelines for the standardized interpretation of eHEALS scores limits its research and clinical utility. Cut points are often arbitrarily applied at the eHEALS item or global level, which assumes a dichotomy of high and low eHealth literacy. This approach disregards scale constructs and results in inaccurate and inconsistent conclusions. Cluster analysis is an exploratory technique, which can be used to overcome these issues, by identifying classes of patients reporting similar eHealth literacy without imposing data cut points.

Objective: The aim of this cross-sectional study was to identify classes of patients reporting similar eHealth literacy and assess characteristics associated with class membership.

Methods: Medical imaging outpatients were recruited consecutively in the waiting room of one major public hospital in New South Wales, Australia. Participants completed a self-report questionnaire assessing their sociodemographic characteristics and eHealth literacy, using the eHEALS. Latent class analysis was used to explore eHealth literacy clusters identified by a distance-based cluster analysis, and to identify characteristics associated with class membership.

Results: Of the 268 eligible and consenting participants, 256 (95.5%) completed the eHEALS. Consistent with distance-based findings, 4 latent classes were identified, which were labeled as low (21.1%, 54/256), moderate (26.2%, 67/256), high (32.8%, 84/256), and very high (19.9%, 51/256) eHealth literacy. Compared with the low class, participants who preferred to receive a lot of health information reported significantly higher odds of moderate eHealth literacy (odds ratio 16.67, 95% CI 1.67-100.00; P=.02), and those who used the internet at least daily reported significantly higher odds of high eHealth literacy (odds ratio 4.76, 95% CI 1.59-14.29; P=.007).

Conclusions: The identification of multiple classes of eHealth literacy, using both distance-based and latent class analyses, highlights the limitations of using the eHEALS global score as a dichotomous measurement tool. The findings suggest that eHealth literacy support needs vary in this population. The identification of low and moderate eHealth literacy classes indicate that the design of eHealth resources should be tailored to patients’ varying levels of eHealth literacy. eHealth literacy improvement interventions are needed, and these should be targeted based on individuals’ internet use frequency and health information amount preferences.

J Med Internet Res 2019;21(8):e13423

doi:10.2196/13423




Introduction

Electronic Health Literacy Is Important for the Use and Receipt of Benefits From Electronic Health Programs

Web-based interventions have been reported to be consistently more effective than non-Web-based modalities in changing patient health behaviors and health-related knowledge [1]. Information and communication technology is also recognized as a promising enabler of safe, integrated, and high-quality health care, yet more scientifically rigorous research is needed [2,3]. Accordingly, internet-enabled health care is a strategic priority globally [4-7]. Electronic health (eHealth) literacy is one important factor influencing the use and receipt of benefits from Web-based health resources [8-10]. eHealth literacy refers to an individual’s ability to seek, find, understand, and appraise health information from electronic sources, and apply the knowledge gained to addressing or solving a health problem [11]. The concept is derived from 6 literacy types (ie, health, computer, media, science, and information literacy, as well as traditional literacy and numeracy), which play an important role in facilitating engagement with Web-based health resources [11]. Inadequate eHealth literacy has been self-reported as a barrier to use of the internet for health information seeking purposes among the chronically ill [12]. Furthermore, descriptive research indicates that eHealth literacy is associated with positive cognitive (eg, understanding of health status) [8], instrumental (eg, self-management, physical exercise, and dieting) [8-10], and interpersonal (eg, physician interaction) [8] outcomes from Web-based health information searches. Individuals with lower eHealth literacy have been suggested to be older [8,13,14], less educated [8,14,15], have lower access to, or use of, the internet [15-17], and have poorer health [8].

Interpretations of Electronic Health Literacy Data Are Inconsistent

Approaches used to assess eHealth literacy have included objective performance testing [18,19] and self-reported measurement [20-23]. The most commonly used self-reported measure is the 8-item eHealth Literacy Scale (eHEALS) [20]. Compared with other self-report measures of eHealth literacy, strengths of the eHEALS include its psychometric rigor, brevity, ease of administration, and availability in a number of languages [17,19,20,24-26]. One of the key issues limiting the utility of the eHEALS is the lack of information about how to interpret its scores. Although there is a convention that higher scores represent a higher level of eHealth literacy [20], there is an absence of guidance for the standardized interpretation of these scores. This guidance is needed to inform decision-making and follow-up actions [27]. eHEALS mean and median scores [8,13,14,28], as well as item response frequencies [14,29,30], are typically reported. Cut points have been arbitrarily applied at the item level [15], which disregards scale constructs. Furthermore, the common use of a single cut point on the global scale [8,16,28] implies a dichotomy of high versus low eHealth literacy and does not account for respondents’ self-perceived competency across the multiple eHEALS factors (ie, awareness, skills, and evaluation) [24,31]. These factors have only recently been identified [24,31], demonstrating that our understanding of the eHEALS and its psychometric properties is continuing to evolve more than a decade after the scale was published.

A Robust Approach to Analyzing Electronic Health Literacy Data Is Required

Shortcomings in the interpretation of eHEALS scores highlight the need for a robust approach to analyzing and interpreting eHealth literacy data. In line with the principles of scale development [27,32], measures should be refined as new data about a scale’s properties accumulate. This includes retesting a scale when it is used in new populations and as new analytical techniques become available [27,32]. Cluster analysis is a sophisticated analytical approach that has not previously been applied to eHealth literacy research. This powerful technique identifies natural groupings or structures within data and can therefore classify individuals who score similarly on an outcome measure, such as the eHEALS [33]. It has several strengths. First, it is a data-driven exploratory technique and is therefore not dependent on scoring thresholds arbitrarily imposed by the author(s). Second, observing and characterizing natural structures or groupings gives researchers a better understanding of the eHealth literacy subgroups in the sample population. If classes (or clusters) exist, ignoring their presence by analyzing the data as a single group could average out any effects of interest [34]. Third, this approach allows the multiple eHEALS domains (ie, awareness, skills, and evaluate) to be considered simultaneously across subgroups. For example, it can reveal whether one subgroup self-rates its awareness as highest, whereas another self-rates its skills as highest. Finally, regression analyses can examine patient characteristics associated with assignment to each eHealth literacy class.

By understanding the number and characteristics of groupings, researchers can determine whether a one-size-fits-all approach to eHealth literacy improvement is appropriate, or whether more tailored interventions are required. If tailoring is needed, understanding how different classes scored across the eHEALS factors allows researchers and clinicians to design interventions that specifically address the needs of each subgroup. Furthermore, understanding patient characteristics associated with class membership allows the identification of individuals who should be targeted for interventions, or who will require more intensive support throughout periods of eHealth delivery. A cluster analysis of eHEALS data is therefore an important next step to better understand the multicomponent nature of eHealth literacy and how the eHEALS factors coexist in subgroups of patients.

This study aimed to determine (1) whether there are identifiable eHealth literacy classes among magnetic resonance imaging (MRI) and computed tomography (CT) medical imaging outpatients; and (2) sociodemographic and internet use characteristics associated with each eHealth literacy class.


Methods

Design and Setting

This cross-sectional study was completed with MRI and CT medical imaging outpatients attending the imaging department of a large, tertiary hospital, located within New South Wales, Australia. The results of this study have been reported in accordance with the Strengthening the Reporting of Observational Studies in Epidemiology checklist [35] and the Checklist for Reporting Results of Internet E-Surveys [36].

Participants

Eligible participants were: (1) attending for an outpatient MRI or CT scan; (2) 18 years or older; and (3) reported having access to the internet for personal use. Participants were excluded if they were: (1) non-English speaking; (2) deemed by reception staff to be cognitively or physically unable to consent or complete the survey; or (3) identified as having completed the survey previously. MRI and CT medical imaging outpatients were the focus of this research because they have high unmet information preferences, which could potentially be met by eHealth capabilities [37].

Procedure

Medical imaging department receptionists identified potentially eligible participants when they presented for their outpatient appointment. Potentially eligible participants were informed about the research and invited to speak with a trained research assistant. Interested patients were provided with a written information sheet and introduced to the research assistant, who gave an overview of the study and obtained the patient’s verbal consent to participate. During this overview, interested patients were told that the Web-based questionnaire would take approximately 10-15 mins to complete, participation was voluntary, and responses would remain confidential. The age, gender, and scan type of noninterested and nonconsenting patients were recorded. Consenting patients were provided with a tablet computer and asked to complete a Web-based questionnaire before their scan. Participants’ study identification number, assigned by the receptionist and entered by the research assistant, provided access to the questionnaire. Each participant could move freely through each screen using next and back buttons. The questionnaire was pilot tested with MRI and CT medical imaging outpatients 2 weeks before study commencement, which confirmed the acceptability and feasibility of electronic survey administration in this study setting. A paper-and-pen version of the questionnaire was available to participants who requested it. If the patient was called for their procedure before finishing the questionnaire, only those questions that had been completed were used for data analysis. Electronic responses were deidentified, collected using the QuON platform [38], and stored securely on an access-restricted part of the University of Newcastle server. Ethics approval was obtained from the Human Research Ethics Committees of the Hunter New England Local Health District (16/10/19/5.11) and University of Newcastle (H-2016-0386).

Measure

eHealth literacy was assessed using the 8-item eHEALS. All 8 eHEALS items were administered on 1 screen within the Web-based questionnaire, and the presentation of these items was not randomized. Respondents indicated their level of agreement with each statement on a 5-point Likert scale from 1 (strongly disagree) to 5 (strongly agree). Responses were summed to give a final score ranging from 8 to 40, with higher scores indicating higher eHealth literacy. The tool has demonstrated test-retest reliability [17], internal consistency [17,19,28], and measurement invariance across English-speaking countries [24]. Previous studies, largely employing exploratory factor analysis, have suggested that the scale measures a single factor [8,17,19,20]. Emerging research using confirmatory factor analysis and based on the theoretical underpinnings of eHealth literacy suggests that the scale measures 3 factors: awareness, skills, and evaluate [24,31]. This 3-factor eHEALS structure has been identified in the medical imaging study setting (standardized root mean residual=0.038; confirmatory fit index=0.944; and root mean square error of approximation=0.156) [31]. As such, the self-rated awareness, skills, and evaluate competencies of patients within each subgroup were explored in this study.
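The scoring just described can be illustrated with a short sketch. The split into factors of 2, 3, and 3 items follows from the possible factor totals reported in Table 1, but the specific item positions assigned to each factor below are illustrative assumptions, not the published item mapping.

```python
# Illustrative eHEALS scoring sketch (1 = strongly disagree ... 5 = strongly agree).
# The factor sizes (2, 3, 3) match the possible totals of 10, 15, and 15;
# the item indices assigned to each factor are assumptions for illustration.
FACTORS = {
    "awareness": [0, 1],     # 2 items, possible total 10
    "skills":    [2, 3, 4],  # 3 items, possible total 15
    "evaluate":  [5, 6, 7],  # 3 items, possible total 15
}

def score_eheals(responses):
    """Return the global score (range 8-40) and per-factor sums for one respondent."""
    if len(responses) != 8 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected 8 Likert responses coded 1-5")
    factor_scores = {name: sum(responses[i] for i in items)
                     for name, items in FACTORS.items()}
    return sum(responses), factor_scores
```

For example, a respondent answering agree (4) to every item would score 32 overall, with factor scores of 8, 12, and 12.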

Study Factors

On the basis of previous research indicating an association with eHealth literacy, standard self-report items assessed participant gender, age, marital status, education, internet use frequency, and overall health status [8,13-17]. Remoteness of residence, health information amount preference (no information; some information; and a lot of information), and internet use for scan preparation (yes; no; and don’t know) were hypothesized to influence eHealth literacy and were, therefore, included as covariates. Participant postcodes were mapped to the Accessibility/Remoteness Index of Australia Plus to categorize participant remoteness as metropolitan (major cities of Australia) or nonmetropolitan (inner regional, outer regional, remote, or very remote Australia) [39].
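The dichotomization of remoteness described above can be sketched as follows. The category labels are those listed in the text; mapping a postcode to its ARIA+ category requires the ARIA+ dataset itself and is outside this sketch.

```python
# Sketch: collapsing the 5 ARIA+ remoteness categories into the binary
# metropolitan/nonmetropolitan covariate used in the analysis.
NONMETRO = {
    "inner regional australia",
    "outer regional australia",
    "remote australia",
    "very remote australia",
}

def collapse_remoteness(aria_category):
    """Return 'metropolitan' or 'nonmetropolitan' for an ARIA+ category label."""
    c = aria_category.strip().lower()
    if c == "major cities of australia":
        return "metropolitan"
    if c in NONMETRO:
        return "nonmetropolitan"
    raise ValueError(f"unknown ARIA+ category: {aria_category!r}")
```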

Data Analysis

Participant characteristics were summarized as frequencies and percentages or means and standard deviations. Consent bias was assessed for gender, scan type, and age group using Chi-square tests. Given the high completion rate (98.1%, 256/261 for individuals starting eHEALS items), only complete eHEALS data were included in the analyses. Items relating to each eHEALS factor were summed to generate separate awareness, skill, and evaluate factor scores.

Identification of Electronic Health Literacy Classes

Cluster analysis was completed using a 2-phased approach. Distance-based unsupervised clustering was undertaken as an initial exploratory knowledge discovery technique, to identify natural clusters of patients according to their responses (refer Multimedia Appendix 1 for methods and results). Secondary clustering of patients, using latent class analysis (LCA) as a statistical modeling approach, was to be completed as a follow-up if distance-based cluster structures were observed. LCA was subsequently performed to verify the 4-cluster structure identified. LCA is less sensitive to choice of parameters (eg, distance metric), allows for uncertainty in class membership, and has greater power and lower type 1 error rates when compared with other clustering techniques [34], and was, therefore, selected as the primary analysis technique. Latent class membership probabilities were calculated to determine the proportion of the sample that belonged to each of the classes. Item response probabilities were calculated to determine the probability of endorsing each response option, conditional on class membership. The Bayesian Information Criterion (BIC) and G2-statistic were computed to aid in determining the optimal number of classes (with plateauing indicating no improvements to model fit) [40], as were overall class interpretability and model parsimony. Model entropy was computed, with values closer to 1 representing clear class delineation [41]. The maximum posterior probability of class membership was also calculated for each participant, based on the optimal number of classes, with values greater than .5 indicating adequate probability for class assignment [42].
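The entropy and posterior-probability criteria described above can be sketched as follows. This is a minimal illustration of the two diagnostics, not the R implementation used in the study.

```python
import math

def relative_entropy(posteriors):
    """Entropy-based class-separation index for an LCA solution.
    posteriors: per-participant lists of class membership probabilities.
    Values near 1 indicate clear class delineation."""
    n, k = len(posteriors), len(posteriors[0])
    if k == 1:
        return 1.0
    e = -sum(p * math.log(p) for row in posteriors for p in row if p > 0)
    return 1.0 - e / (n * math.log(k))

def assign_classes(posteriors, threshold=0.5):
    """Assign each participant to the class with the maximum posterior
    probability, flagging whether the assignment meets the adequacy threshold."""
    out = []
    for row in posteriors:
        best = max(range(len(row)), key=row.__getitem__)
        out.append((best, row[best] >= threshold))
    return out
```

With perfectly separated classes the index equals 1; with completely uncertain memberships it falls to 0, mirroring the interpretation used for the entropy values in Table 2.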

Characteristics Associated With Class Membership

An LCA regression analysis was performed to identify participant sociodemographic and internet use characteristics associated with class membership. Given the exploratory nature of data analysis, all covariates were initially cross-tabulated with class membership (assigned according to maximum posterior probability) to identify model sparseness, and then analyzed using univariate LCA regression: gender; age (<65 years vs 65+ years); geographic location of residence (major city vs regional or rural); marital status (married or living with partner vs not married); education (high school or less vs more than high school); overall health (fair or worse; good or better than good); information amount preference (a lot of information vs not a lot of information); internet use for scan preparation; and internet use frequency (daily vs less than daily). Likelihood ratio tests (based on the univariate results) were performed to determine whether each predictor significantly improved the fit of the model. Covariates with a statistically significant likelihood ratio test (P<.05) were included in the final multivariable LCA regression. Distance-based and latent class analyses were performed in R 3.4 [43]. Descriptive statistics were computed in STATA v13.
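A likelihood ratio test of the kind described above compares model log-likelihoods with and without a covariate. The sketch below is a generic illustration with hardcoded .05 chi-square critical values, not the R implementation used in the study.

```python
def likelihood_ratio_stat(loglik_with, loglik_without):
    """LR test statistic: 2 * (logL with covariate - logL without covariate).
    Compared against a chi-square critical value with df = added parameters."""
    return 2.0 * (loglik_with - loglik_without)

# Standard chi-square .05 critical values for small degrees of freedom
CHI2_CRIT_05 = {1: 3.841, 2: 5.991, 3: 7.815}

def improves_fit(loglik_with, loglik_without, df):
    """True if adding the covariate significantly improves model fit at P<.05."""
    return likelihood_ratio_stat(loglik_with, loglik_without) > CHI2_CRIT_05[df]
```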

Sample Size

Sample sizes of at least 200 have been suggested as adequate for LCA, dependent on subsequent model fit and number of classes [40,44]. As such, a sample of at least 200 was deemed appropriate for this study.


Results

Sample

A total of 405 potentially eligible patients were invited to discuss the study with a research assistant during the 7-week recruitment period, of which 354 (87.4%) were interested in participating. Of 268 eligible participants, 261 (97.4%) started the eHEALS, 256 (95.5%) completed all eHEALS items, and 222 (82.8%) completed all eHEALS and study factor items. There were no significant differences between patients who were and were not interested in participating in the study based on gender, scan type, or age group. Table 1 provides a summary of the sociodemographic, scan, and internet characteristics of the study sample.

Table 1. Participant sociodemographic, scan, and internet characteristics (N=256). Number of observations for each characteristic may not total 256 because of missing data.

Age (years), mean (SD): 53 (15.0)

eHealth Literacy Scale (eHEALS) domain score, mean (SD)
  Awareness (possible total=10): 6.9 (2.0)
  Skills (possible total=15): 10.9 (2.9)
  Evaluate (possible total=15): 10.0 (3.1)

Gender, n (%)
  Male: 112 (43.8)
  Female: 144 (56.3)

Marital status, n (%)
  Married or living with partner: 146 (64.6)
  Not married or living with partner: 80 (35.4)

Education completed, n (%)
  High school or less: 128 (56.6)
  More than high school: 98 (43.4)

Geographic location, n (%)
  Metropolitan: 200 (78.1)
  Nonmetropolitan: 56 (21.9)

Overall health, n (%)
  Poor: 17 (7.7)
  Fair: 75 (34.1)
  Good: 94 (42.7)
  Very good: 34 (15.5)

Scan type, n (%)
  Computed tomography: 101 (39.4)
  Magnetic resonance imaging: 152 (59.4)
  Don’t know: 3 (1.2)

Used internet for scan, n (%)
  Yes: 27 (10.5)
  No: 228 (89.1)
  Don’t know: 1 (0.4)

Frequency of internet use, n (%)
  Less than once a month: 11 (4.3)
  Once a month: 5 (1.9)
  A few times a month: 14 (5.5)
  A few times a week: 33 (12.9)
  About once a day: 47 (18.4)
  Several times a day: 146 (57.0)

Information amount preference, n (%)
  No information: 2 (0.8)
  Some information: 58 (25.9)
  A lot of information: 165 (73.3)

Identification of Electronic Health Literacy Classes

The BIC and G2-statistic continued to decrease as the number of classes (K) increased, but the improvement was progressively smaller after 3 classes (see Table 2). On the basis of the interpretability of the latent classes, the reduction in class size beyond K=4, and parsimony, the 4-class model was selected as the optimal class structure. The lowest maximum posterior probability under this 4-class model was .516. As such, all participants exceeded the threshold of .5 for maximum posterior probability and were assigned to a class. Hence, LCA findings on the number of classes were consistent with those of distance-based clustering (see Multimedia Appendix 1).
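The plateau can be read directly off the Table 2 BIC values by computing the successive reductions:

```python
# Successive reductions in BIC across the candidate class structures
# (values from Table 2), illustrating the plateau after 3 classes.
BIC = {1: 5893.74, 2: 5148.66, 3: 4651.68, 4: 4556.81, 5: 4545.21}

drops = {k: round(BIC[k - 1] - BIC[k], 2) for k in range(2, 6)}
print(drops)  # {2: 745.08, 3: 496.98, 4: 94.87, 5: 11.6}
```

The gain from adding a fifth class (11.6) is an order of magnitude smaller than the gain from the fourth (94.87), consistent with stopping at 4 classes once interpretability and parsimony are also weighed.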

Multimedia Appendix 2 shows the unconditional item response probabilities of each eHEALS response option based on class assignment. Classes were named according to likely level of eHealth literacy, with respect to that of other classes identified in the analysis:

  • Class 1—low eHealth literacy (21.1% of respondents, 54/256): when compared with other classes, class 1 had the highest probability of responding disagree and strongly disagree across all eHEALS items. The probability of this group responding either disagree or strongly disagree was highest for awareness items (0.88 and 0.89), followed by evaluate items (0.79, 0.81, and 0.88) and skills items (0.66, 0.75, and 0.84).
  • Class 2—moderate eHealth literacy (26.2% of respondents, 67/256): when compared with other classes, class 2 had the highest probability of responding undecided across all eHEALS items, and the second highest probability of responding agree across awareness and skills items. This group was most likely to respond undecided to awareness items (0.56 and 0.59), either agree (0.54 and 0.58) or undecided (0.48) to skills items, and undecided to evaluate items (0.55, 0.61, and 0.63).
  • Class 3—high eHealth literacy (32.8% of respondents, 84/256): when compared with other classes, class 3 had the highest probability of responding agree across all eHEALS items. The probability of this class responding agree was greatest for skills items (0.97, 0.97, and 1.00), followed by awareness (0.80 and 0.91), and evaluate items (0.68, 0.71, and 0.81).
  • Class 4—very high eHealth literacy (19.9% of respondents, 51/256): when compared with other classes, class 4 had the highest probability of responding strongly agree across all eHEALS items. The probability of this class responding strongly agree was greatest for skills items (0.71, 0.79, and 0.90), followed by evaluate (0.57, 0.74, and 0.86), and awareness items (0.53 and 0.61).

Characteristics Associated With Class Membership

Internet use for scan preparation was not included in regression analyses because of sparseness (ie, only 10.5%, 27/256 of participants responded yes to internet use for scan preparation). Following univariate analyses, likelihood ratio difference tests indicated that age, education, marital status, overall health status, information amount preference, and internet use frequency all significantly improved the fit of the model (P<.05; see Multimedia Appendix 3) and were included in the multivariable regression analysis (see Table 3).

Class 1 (low eHealth literacy) was selected as a reference class for multivariable regression. This was because these participants likely need additional support to engage with eHealth, making identification of the characteristics of participants in this subgroup a priority. As shown in Table 3, participants who indicated that they preferred not to receive a lot of information about their health had 0.06 times the odds of belonging to class 2 (moderate eHealth literacy), compared with class 1 (low eHealth literacy), and this difference was statistically significant. Furthermore, participants who reported using the internet less than daily had 0.21 times the odds of belonging to class 3 (high eHealth literacy), compared with class 1 (low eHealth literacy), and this difference was statistically significant. There were no other significant differences in sociodemographic or internet use attributes between participants in class 1 (low eHealth literacy) and classes 2, 3, and 4 (moderate, high, and very high eHealth literacy, respectively).

Table 2. Goodness of fit indices for 1 to 5 class structures.

Class structure | BIC (a) | G2-statistic | Entropy
1 class structure | 5893.74 | 3402.83 | 1.00
2 class structure | 5148.66 | 2474.76 | 0.97
3 class structure | 4651.68 | 1794.79 | 0.98
4 class structure | 4556.81 | 1516.93 | 0.92
5 class structure | 4545.21 | 1322.34 | 0.90

(a) BIC: Bayesian Information Criterion.

Table 3. Adjusted odds ratios associated with membership of classes 2, 3, and 4, compared with class 1. Cells show odds ratio (95% CI), P value.

Variable | Class 1 vs class 2 (low vs moderate) | Class 1 vs class 3 (low vs high) | Class 1 vs class 4 (low vs very high)

Age
  <65 years | Ref (a) | Ref | Ref
  65 years or older | 0.37 (0.06-2.11), P=.26 | 0.32 (0.10-1.03), P=.06 | 0.37 (0.07-2.00), P=.25

Education
  High school or less | Ref | Ref | Ref
  More than high school | 1.09 (0.15-7.65), P=.93 | 2.21 (0.52-9.47), P=.29 | 3.89 (0.67-22.76), P=.14

Marital status
  Married or living with partner | Ref | Ref | Ref
  Not married or living with partner | 1.63 (0.26-10.23), P=.60 | 0.96 (0.27-3.41), P=.96 | 0.91 (0.14-6.01), P=.92

Information amount preference
  A lot of information | Ref | Ref | Ref
  Not a lot of information | 0.06 (0.01-0.60), P=.02 (b) | 0.61 (0.18-2.04), P=.43 | 0.23 (0.04-1.29), P=.10

Overall health
  Fair or worse | Ref | Ref | Ref
  Good or better than good | 1.10 (0.24-5.02), P=.91 | 1.16 (0.35-3.87), P=.81 | 1.48 (0.33-6.68), P=.61

Internet use frequency
  Daily | Ref | Ref | Ref
  Less than once a day | 0.62 (0.14-2.67), P=.52 | 0.21 (0.07-0.63), P=.007 (b) | 0.17 (0.02-1.76), P=.14

(a) Ref: reference category.
(b) Statistically significant.


Discussion

Principal Findings

This study was the first to identify classes of patients based on eHealth literacy, and to assess characteristics associated with class membership. The identification of multiple classes, using both distance-based and latent class analyses, highlights issues with using the eHEALS global score as a dichotomous measurement tool. In particular, these findings suggest that it may be important to account for multiple eHealth literacy subgroups when developing standardized guidance for the interpretation of eHEALS scores. Furthermore, the identification of multiple classes suggests that the design and delivery of eHealth resources may need to be tailored based on eHealth literacy. Patient characteristics, such as internet use frequency and health-related information amount preferences, may provide an indication of eHealth literacy, and related support needs.

Multiple Electronic Health Literacy Subgroups Were Identified

In total, 4 eHealth literacy classes were identified, and the probabilities of belonging to each of the 4 classes were similar (ie, range 19.9%-32.8%). The finding that eHealth literacy varied substantially in this population suggests that MRI and CT medical imaging outpatients may have differing support needs relating to the use of eHealth technology. Subgroups of patients were characterized by having either very high, high, moderate, or low eHealth literacy. Within the very high eHealth literacy subgroup, awareness was the lowest scoring competency. This may be because consumers who are familiar with eHealth also recognize the vast amount of Web-based information available and the common difficulty of locating valid and reliable information sources [12]. Across all classes, participants reported being most competent in their skills in using eHealth resources. Such skills may be rated highly because they align with the computer and media literacy types that comprise eHealth literacy [11]. These literacy types are increasingly used in the digital era, with 87% of Australians identified as internet users in 2016-2017 [45].

In total, 2 out of 4 classes, comprising 52.7% of respondents, had the highest probability of responding either agree or strongly agree to eHEALS items, reflecting high and very high eHealth literacy. However, there was room for improvement in the awareness, skills, and evaluation competencies of the remaining 2 classes, comprising 47.3% of respondents and reflecting low and moderate eHealth literacy. An approximately even split in eHealth literacy capabilities is also apparent in other studies completed with cardiovascular disease patients [16] and chronic disease patients [46], which used arbitrary cut points to dichotomize high versus low eHealth literacy. It is possible that the application of dichotomous cut points prevented the identification of more diverse eHealth literacy subgroups in those studies. Further research using cluster analyses should be conducted to determine whether multiple eHealth literacy subgroups exist across other health consumer populations. This information may inform the development of more targeted eHealth literacy improvement interventions.

Internet Use Frequency and Health Information Amount Preferences Predicted Class Membership

Those who used the internet less than daily had approximately 5 times the odds of belonging to the low eHealth literacy class compared with the high eHealth literacy class. Although mixed findings exist [19], an association between internet use and eHealth literacy has been reported in studies with chronically ill patients and the general public [15-17]. Our findings may suggest that frequent internet users also use the internet for health purposes, which may result in greater self-reported eHealth literacy. Alternatively, frequent internet users may perceive that their ability to engage with and evaluate general internet resources transfers to health-related content.

Those with a preference not to receive a lot of information about their health had over 16 times the odds of belonging to the low eHealth literacy class, compared with the moderate eHealth literacy class. To the authors’ knowledge, this study is the first to explore the association between preferred amount of information and eHealth literacy. It is possible that the inclusion of an undecided response option elicited imposter syndrome among those in the moderate class [47], whereby participants underestimated their competency and opted for a neutral response to avoid being perceived as overconfident. Therefore, those in the moderate class may be more eHealth literate than the findings suggest, which could contribute to a significant difference between the low and moderate classes. It may also be that those who prefer to receive a lot of information about their health are Web-based health information seekers and hence require eHealth literacy. An evidence review completed by the Australian Commission on Safety and Quality in Health Care found that patients typically use the internet as a supplement to advice from a health professional [48]. It is therefore likely that those with greater preferences for health-related information require, and develop, the awareness, skills, and evaluation abilities needed to use this Web-based supplementary information. The potentially moderating effect of Web-based health information seeking on the association between information amount preference and eHealth literacy should be explored in future work, including an examination of the types of eHealth resources being accessed and used.
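A note on reading these figures against Table 3: the approximately 5-fold and over 16-fold odds quoted in this section are the reciprocals of the Table 3 estimates, because reversing the comparison direction inverts the odds ratio and swaps and inverts its confidence bounds. A minimal arithmetic sketch:

```python
# Reversing the comparison direction of an odds ratio:
# OR(A vs B) is the reciprocal of OR(B vs A); the CI bounds swap and invert.
def invert_or(or_, ci_low, ci_high):
    return round(1 / or_, 2), round(1 / ci_high, 2), round(1 / ci_low, 2)

# Table 3 reports 0.06 (0.01-0.60) and 0.21 (0.07-0.63);
# inverting gives the odds ratios quoted in the abstract.
print(invert_or(0.06, 0.01, 0.60))  # (16.67, 1.67, 100.0)
print(invert_or(0.21, 0.07, 0.63))  # (4.76, 1.59, 14.29)
```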

The technology acceptance model provides a theoretical justification for the characteristics associated with subgroup assignment [49]. Under this model, technology acceptance is influenced by the perceived ease of use and the perceived usefulness of the internet [49]. Accordingly, those who use the internet more frequently may be more likely to perceive Web-based health resources as easy to use. Similarly, those who prefer to receive a lot of health-related information may be more likely to deem eHealth useful. Such perceived acceptability may result in greater self-rated eHealth literacy. Continued research is needed to investigate this association and to determine whether other factors not explored in this study that promote the perceived ease of use and usefulness of eHealth (eg, speed and availability of the internet, and self-management of chronic conditions, respectively) are associated with eHealth literacy. Contrary to expectations and inconsistent with previous studies [8,13-15], no other examined sociodemographic characteristic significantly influenced class membership. Inconsistencies with the existing literature may indicate that the predictors of eHealth literacy differ across populations, settings, or the cut points applied.

Practice Implications

The identification of low and moderate eHealth literacy classes suggests that eHealth literacy improvement interventions may be warranted within this population. However, there is minimal high-quality research investigating the effectiveness of such interventions, highlighting a need for continued research in this area [50]. Given their association with low class membership, those who use the internet less than daily and those who prefer not to receive a lot of health information should be the focus of such interventions. In the interim, researchers and clinicians should tailor the design and delivery of eHealth resources to patients’ eHealth literacy to maximize engagement and potential benefit. As skills were the highest rated competency across all classes in this study population, future eHealth interventions in the imaging setting should focus on promoting awareness and reducing the need to evaluate eHealth resources. A written provider recommendation directing consumers toward credible eHealth resources may be one scalable strategy for doing so [31,51]. Where skills are low, alternative strategies may be needed, such as clear instructions on how to navigate Web-based content, reduced click-through requirements to retrieve Web-based materials, and the use of persuasive system design elements to enhance usability and maintain engagement [52].

Limitations and Future Research

To aid in the interpretation of findings, labels (ie, very high, high, moderate, and low) were arbitrarily assigned to eHealth literacy classes. It is therefore unclear whether, for example, those classified as having very high eHealth literacy were indeed very high. As this study applied a novel approach to data analysis and interpretation, the generalizability of findings across medical imaging settings and to other patient groups is unknown. The class structure and the predictors of class membership should therefore be replicated in other populations. Furthermore, it is possible that the setting influenced responses, as participants may have assumed that eHEALS questions related to scan-specific information on the internet rather than general eHealth resources.

The eHEALS was selected because of its established psychometric properties, emerging research proposing a 3-factor structure, and wide application [17,19,20,24,28,31]. However, it has been criticized for not measuring health 2.0 (ie, user-generated content and interactivity) and, therefore, for lacking relevance to modern technology [21,24,53]. Some studies have adapted the scale to address this limitation, but this body of research is small and, as a result, the impact on the scale’s psychometric properties remains unclear [21,24]. The generation of new Web-based content is, however, not highly relevant within the context of preparatory information provision for medical imaging procedures, and this limitation is therefore not expected to influence our study.

Conclusions

This study used sophisticated analytical techniques to add to evidence about the nature of eHEALS scores within a clinical population. Cluster analyses were used to identify 4 classes of patients with differing eHealth literacy in this sample of MRI and CT medical imaging outpatients. The proportion of participants assigned to each latent class was similar, suggesting that eHealth literacy varies within this study setting. Across all classes, skills were perceived as the highest rated competency, followed by either awareness or evaluation. The frequency of participants’ personal internet use and their health-related information preferences predicted class membership. Tools such as the eHEALS may need to be administered to identify class assignment and to inform eHealth literacy improvement interventions, as well as the design and delivery of eHealth resources. Findings from this study should also contribute to the development of guidance for eHEALS scoring interpretation, which is a necessary next step to improve scale utility [27]. Study findings should be replicated in other populations and settings to increase the generalizability of results.
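The class-enumeration step summarized above (fitting models with an increasing number of classes and comparing fit criteria such as the BIC, where lower values indicate better fit) can be sketched as follows. This is a minimal illustration on synthetic data, using scikit-learn’s GaussianMixture as a stand-in for the latent class software used in the study; the data and variable names are invented for the example.

```python
# Sketch of class enumeration by information criterion: fit candidate models
# with k = 1..4 classes and keep the one with the lowest BIC.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two well-separated synthetic "response profiles" (100 observations each).
data = np.concatenate([rng.normal(0, 0.5, (100, 1)),
                       rng.normal(10, 0.5, (100, 1))])

bics = []
for k in range(1, 5):
    model = GaussianMixture(n_components=k, random_state=0).fit(data)
    bics.append(model.bic(data))

best_k = int(np.argmin(bics)) + 1  # BIC-preferred number of classes
print(best_k)
```

In the study, this comparison was supplemented with entropy and log likelihood difference tests (Multimedia Appendix 3) before settling on the 4-class solution.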

Acknowledgments

The authors thank the patients for their involvement in this study, as well as the administrative and clinical staff at Hunter New England Imaging for assistance with recruitment and data collection. This paper is supported by a Priority Research Centre for Health Behaviour Small Grant, and Hunter Cancer Research Alliance Implementation Science Flagship 2018 Student Award. LH is supported by an Australian Government Research Training Program Scholarship. AB is supported by a National Health and Medical Research Council Early Career Fellowship (APP1073317) and Cancer Institute New South Wales Early Career Fellowship (13/ECF/1-37). LM is supported by a postdoctoral fellowship (PF-16-011) from the Australian National Breast Cancer Foundation.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Distance-based cluster analysis.

PDF File (Adobe PDF File), 330KB

Multimedia Appendix 2

Unconditional item response probabilities for a 4-class model of electronic health literacy.

PDF File (Adobe PDF File), 159KB

Multimedia Appendix 3

Log likelihood difference tests.

PDF File (Adobe PDF File), 12KB

  1. Wantland DJ, Portillo CJ, Holzemer WL, Slaughter R, McGhee EM. The effectiveness of web-based vs non-web-based interventions: a meta-analysis of behavioral change outcomes. J Med Internet Res 2004 Nov 10;6(4):e40 [FREE Full text] [CrossRef] [Medline]
  2. Car J, Black A, Anandan C, Cresswell K, Pagliari C, McKinstry B, et al. University of Birmingham. 2008. The Impact of eHealth on the Quality & Safety of Healthcare: A Systemic Overview & Synthesis of the Literature. Report for the NHS Connecting for Health Evaluation Programme   URL: https:/​/www.​birmingham.ac.uk/​Documents/​college-mds/​haps/​projects/​cfhep/​projects/​001/​NHSCFHEP001FinalReport.​pdf [accessed 2019-08-08]
  3. Black AD, Car J, Pagliari C, Anandan C, Cresswell K, Bokun T, et al. The impact of ehealth on the quality and safety of health care: a systematic overview. PLoS Med 2011 Jan 18;8(1):e1000387 [FREE Full text] [CrossRef] [Medline]
  4. Queensland Health. 2015. eHealth Investment Strategy   URL: https:/​/www.​health.qld.gov.au/​system-governance/​strategic-direction/​plans/​ehealth-investment-strategy [accessed 2018-12-13] [WebCite Cache]
  5. Health and Social Care Board. 2016. eHealth and Care Strategy for Northern Ireland: Improving Health and Wealth Through the Use of Information and Communication Technology   URL: http:/​/www.​hscboard.hscni.net/​download/​Consultations/​2014-15%20eHealth_and_Care_Strategy_Consultation/​Consultation_Document-e-Health_and_Care-PDF_2mb.​pdf [accessed 2019-08-08]
  6. NHS Scotland. 2015. eHealth Strategy 2014-2017   URL: https://www.gov.scot/publications/ehealth-strategy-2014-2017/ [accessed 2019-08-08]
  7. United States Department of Health and Human Services. 2015. Federal Health IT Strategic Plan: 2015-2020   URL: https://www.healthit.gov/sites/default/files/9-5-federalhealthitstratplanfinal_0.pdf [accessed 2019-08-08]
  8. Neter E, Brainin E. eHealth literacy: extending the digital divide to the realm of health information. J Med Internet Res 2012 Jan 27;14(1):e19 [FREE Full text] [CrossRef] [Medline]
  9. Kim SH, Son YJ. Relationships between ehealth literacy and health behaviors in Korean adults. Comput Inform Nurs 2017 Feb;35(2):84-90. [CrossRef] [Medline]
  10. Mitsutake S, Shibata A, Ishii K, Oka K. Associations of ehealth literacy with health behavior among adult internet users. J Med Internet Res 2016 Jul 18;18(7):e192 [FREE Full text] [CrossRef] [Medline]
  11. Norman CD, Skinner HA. eHealth literacy: essential skills for consumer health in a networked world. J Med Internet Res 2006 Jun 16;8(2):e9 [FREE Full text] [CrossRef] [Medline]
  12. Lee K, Hoti K, Hughes JD, Emmerton L. Dr Google and the consumer: a qualitative study exploring the navigational needs and online health information-seeking behaviors of consumers with chronic health conditions. J Med Internet Res 2014 Dec 2;16(12):e262 [FREE Full text] [CrossRef] [Medline]
  13. Choi NG, Dinitto DM. The digital divide among low-income homebound older adults: internet use patterns, ehealth literacy, and attitudes toward computer/internet use. J Med Internet Res 2013 May 2;15(5):e93 [FREE Full text] [CrossRef] [Medline]
  14. Tennant B, Stellefson M, Dodd V, Chaney B, Chaney D, Paige S, et al. eHealth literacy and web 2.0 health information seeking behaviors among baby boomers and older adults. J Med Internet Res 2015 Mar 17;17(3):e70 [FREE Full text] [CrossRef] [Medline]
  15. Milne RA, Puts MT, Papadakos J, Le LW, Milne VC, Hope AJ, et al. Predictors of high ehealth literacy in primary lung cancer survivors. J Cancer Educ 2015 Dec;30(4):685-692. [CrossRef] [Medline]
  16. Richtering SS, Hyun K, Neubeck L, Coorey G, Chalmers J, Usherwood T, et al. eHealth literacy: predictors in a population with moderate-to-high cardiovascular risk. JMIR Hum Factors 2017 Jan 27;4(1):e4. [CrossRef] [Medline]
  17. Chung SY, Nahm ES. Testing reliability and validity of the ehealth literacy scale (eHEALS) for older adults recruited online. Comput Inform Nurs 2015 Apr;33(4):150-156 [FREE Full text] [CrossRef] [Medline]
  18. Neter E, Brainin E. Perceived and performed ehealth literacy: survey and simulated performance test. JMIR Hum Factors 2017 Jan 17;4(1):e2 [FREE Full text] [CrossRef] [Medline]
  19. van der Vaart R, van Deursen A, Drossaert C, Taal E, van Dijk J, van de Laar M. Does the ehealth literacy scale (eHEALS) measure what it intends to measure? Validation of a Dutch version of the eHEALS in two adult populations. J Med Internet Res 2011 Nov 9;13(4):e86 [FREE Full text] [CrossRef] [Medline]
  20. Norman CD, Skinner HA. eHEALS: the ehealth literacy scale. J Med Internet Res 2006 Nov 14;8(4):e27 [FREE Full text] [CrossRef] [Medline]
  21. van der Vaart R, Drossaert C. Development of the digital health literacy instrument: measuring a broad spectrum of Health 1.0 and Health 2.0 skills. J Med Internet Res 2017 Jan 24;19(1):e27 [FREE Full text] [CrossRef] [Medline]
  22. Hsu W, Chiang C, Yang S. The effect of individual factors on health behaviors among college students: the mediating effects of ehealth literacy. J Med Internet Res 2014 Dec 12;16(12):e287 [FREE Full text] [CrossRef] [Medline]
  23. Koopman RJ, Petroski GF, Canfield SM, Stuppy JA, Mehr DR. Development of the PRE-HIT instrument: patient readiness to engage in health information technology. BMC Fam Pract 2014 Jan 28;15:18 [FREE Full text] [CrossRef] [Medline]
  24. Sudbury-Riley L, FitzPatrick M, Schulz PJ. Exploring the measurement properties of the ehealth literacy scale (eHEALS) among baby boomers: a multinational test of measurement invariance. J Med Internet Res 2017 Feb 27;19(2):e53 [FREE Full text] [CrossRef] [Medline]
  25. Diviani N, Dima AL, Schulz PJ. A psychometric analysis of the Italian version of the ehealth literacy scale using item response and classical test theory methods. J Med Internet Res 2017 Apr 11;19(4):e114 [FREE Full text] [CrossRef] [Medline]
  26. Chung S, Park BK, Nahm ES. The Korean ehealth literacy scale (K-eHEALS): reliability and validity testing in younger adults recruited online. J Med Internet Res 2018 Apr 20;20(4):e138 [FREE Full text] [CrossRef] [Medline]
  27. HealthMeasures. 2013. PROMIS® Instrument Maturity Model   URL: http://www.healthmeasures.net/images/PROMIS/PROMISStandards_Vers_2_0_MaturityModelOnly_508.pdf [accessed 2019-06-07] [WebCite Cache]
  28. Mitsutake S, Shibata A, Ishii K, Oka K. Association of ehealth literacy with colorectal cancer knowledge and screening practice among internet users in Japan. J Med Internet Res 2012 Nov 13;14(6):e153 [FREE Full text] [CrossRef] [Medline]
  29. Knapp C, Madden V, Wang H, Sloyer P, Shenkman E. Internet use and ehealth literacy of low-income parents whose children have special health care needs. J Med Internet Res 2011 Sep 29;13(3):e75 [FREE Full text] [CrossRef] [Medline]
  30. Manafò E, Wong S. Assessing the ehealth literacy skills of older adults: a preliminary study. J Consum Health Internet 2012 Oct;16(4):369-381. [CrossRef]
  31. Hyde LL, Boyes AW, Evans TJ, Mackenzie LJ, Sanson-Fisher R. Three-factor structure of the ehealth literacy scale among magnetic resonance imaging and computed tomography outpatients: a confirmatory factor analysis. JMIR Hum Factors 2018 Feb 19;5(1):e6 [FREE Full text] [CrossRef] [Medline]
  32. McDowell I. The theoretical and technical foundations of health measurement. In: Measuring Health: A Guide to Rating Scales and Questionnaires. Third Edition. New York: Oxford University Press; 2006:10-46.
  33. Hastie T, Tibshirani R, Friedman J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Second Edition. New York: Springer-Verlag; 2009.
  34. Lanza ST, Rhoades BL. Latent class analysis: an alternative perspective on subgroup analysis in prevention and treatment. Prev Sci 2013 Apr;14(2):157-168 [FREE Full text] [CrossRef] [Medline]
  35. von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP, STROBE Initiative. The strengthening the reporting of observational studies in epidemiology (STROBE) statement: guidelines for reporting observational studies. PLoS Med 2007 Oct 16;4(10):e296 [FREE Full text] [CrossRef] [Medline]
  36. Eysenbach G. Improving the quality of web surveys: the checklist for reporting results of internet e-surveys (CHERRIES). J Med Internet Res 2004 Sep 29;6(3):e34 [FREE Full text] [CrossRef] [Medline]
  37. Hyde L, Mackenzie L, Boyes AW, Evans TJ, Symonds M, Sanson-Fisher R. Prevalence and correlates of patient-centred preparatory information provision to computed tomography and magnetic resonance imaging outpatients: a cross-sectional study. Patient Educ Couns 2018 Oct;101(10):1814-1822. [CrossRef] [Medline]
  38. Paul D, Wallis M, Henskens F, Nolan K. QuON: A Generic Platform for the Collation and Sharing of Web Survey Data. In: Proceedings of the 9th International Conference on Web Information Systems and Technologies. 2013 Presented at: WEBIST'13; May 8-10, 2013; Aachen, Germany p. 111-116   URL: https://ogma.newcastle.edu.au/vital/access/manager/Repository/uon:15474
  39. The University of Adelaide. 2005. Accessibility/Remoteness Index of Australia (ARIA)   URL: https://www.adelaide.edu.au/hugo-centre/services/aria [accessed 2019-08-08]
  40. Nylund KL, Asparouhov T, Muthén BO. Deciding on the number of classes in latent class analysis and growth mixture modeling: a Monte Carlo simulation study. Struct Equ Modeling 2007 Dec 5;14(4):535-569. [CrossRef]
  41. Celeux G, Soromenho G. An entropy criterion for assessing the number of clusters in a mixture model. J Classif 1996 Sep;13(2):195-212. [CrossRef]
  42. Clark S, Muthén B. Muthén & Muthén, Mplus. 2009. Relating Latent Class Analysis Results to Variables not Included in the Analysis   URL: https://www.statmodel.com/download/relatinglca.pdf [accessed 2019-06-07] [WebCite Cache]
  43. R Project. 2018. The R Project for Statistical Computing   URL: https://www.r-project.org/ [accessed 2019-08-09]
  44. Tein JY, Coxe S, Cham H. Statistical power to detect the correct number of classes in latent profile analysis. Struct Equ Modeling 2013 Oct 1;20(4):640-657 [FREE Full text] [CrossRef] [Medline]
  45. Australian Bureau of Statistics, Australian Government. 2018. 8146.0 - Household Use of Information Technology, Australia, 2016-17   URL: http://www.abs.gov.au/ausstats/abs@.nsf/mf/8146.0 [accessed 2019-06-24] [WebCite Cache]
  46. Paige SR, Krieger JL, Stellefson M, Alber JM. eHealth literacy in chronic disease patients: an item response theory analysis of the ehealth literacy scale (eHEALS). Patient Educ Couns 2017 Feb;100(2):320-326 [FREE Full text] [CrossRef] [Medline]
  47. Clance PR, Imes SA. The imposter phenomenon in high achieving women: dynamics and therapeutic intervention. Psychotherapy 1978;15(3):241-247. [CrossRef]
  48. Ramsay I, Peters M, Corsini N, Eckert M. Australian Commission on Safety and Quality in Health Care. 2017. Consumer Health Information Needs and Preferences: A Rapid Evidence Review   URL: https://www.safetyandquality.gov.au/sites/default/files/migrated/Consumer-health-information-needs-and-preferences-A-rapid-evidence-review.pdf [accessed 2019-08-08]
  49. Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q 1989 Sep;13(3):319-340. [CrossRef]
  50. Car J, Lang B, Colledge A, Ung C, Majeed A. Interventions for enhancing consumers' online health literacy. Cochrane Database Syst Rev 2011 Jun 15(6):CD007092 [FREE Full text] [CrossRef] [Medline]
  51. Usher W. Gold Coast general practitioners' recommendations of health websites to their patients. Med J Aust 2007;187(2):82-83. [CrossRef]
  52. Kelders SM, Kok RN, Ossebaard HC, van Gemert-Pijnen JE. Persuasive system design does matter: a systematic review of adherence to web-based interventions. J Med Internet Res 2012 Nov 14;14(6):e152 [FREE Full text] [CrossRef] [Medline]
  53. van de Belt TH, Engelen LJ, Berben SA, Schoonhoven L. Definition of Health 2.0 and Medicine 2.0: a systematic review. J Med Internet Res 2010 Jun 11;12(2):e18 [FREE Full text] [CrossRef] [Medline]


BIC: Bayesian Information Criterion
CT: computed tomography
eHEALS: eHealth Literacy Scale
eHealth: electronic health
LCA: latent class analysis
MRI: magnetic resonance imaging


Edited by G Eysenbach; submitted 16.01.19; peer-reviewed by PJ Schulz, S Antani; comments to author 01.05.19; revised version received 20.06.19; accepted 19.07.19; published 28.08.19

Copyright

©Lisa Lynne Hyde, Allison W Boyes, Lisa J Mackenzie, Lucy Leigh, Christopher Oldmeadow, Carlos Riveros, Rob Sanson-Fisher. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 28.08.2019.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.