Journal Description

The Journal of Medical Internet Research (JMIR), now in its 20th year, is the pioneer open access eHealth journal and the flagship journal of JMIR Publications. It is the leading digital health journal globally in terms of quality/visibility (Impact Factor 2017: 4.671, ranked #1 out of 22 journals) and size (number of papers published). The journal focuses on emerging technologies, medical devices, apps, engineering, and informatics applications for patient education, prevention, population health, and clinical care. As the leading high-impact journal in its disciplines (health informatics and health services research), it is selective, but it is now complemented by almost 30 specialty JMIR sister journals, which have a broader scope. Peer-review reports are portable across JMIR journals and papers can be transferred, so authors save time by not having to resubmit a paper to different journals. 

As an open access journal, we are read by clinicians, allied health professionals, informal caregivers, and patients alike, and have (as do all JMIR journals) a focus on readable and applied science reporting the design and evaluation of health innovations and emerging technologies. We publish original research, viewpoints, and reviews (both literature reviews and medical device/technology/app reviews).

We are also a leader in participatory and open science approaches, and offer the option to publish new submissions immediately as preprints, which receive DOIs for immediate citation (eg, in grant proposals), and for open peer-review purposes. We also invite patients to participate (eg, as peer-reviewers) and have patient representatives on editorial boards.

Be a widely cited leader in the digital health revolution and submit your paper today!

 

Recent Articles:

  • Source: Flickr; Copyright: Mate Marschalko; URL: https://www.flickr.com/photos/mares87/5995194777/in/album-72157627325839292/; License: Creative Commons Attribution (CC-BY).

    Hookah-Related Posts to Twitter From 2017 to 2018: Thematic Analysis

    Abstract:

    Background: Hookah (or tobacco waterpipe) use has recently become prevalent in the United States. The contexts and experiences associated with hookah use are unclear, yet such information is abundant via publicly available hookah users’ social media postings. Objective: In this study, we utilized Twitter data to characterize Twitter users’ recent experiences with hookah. Methods: Twitter posts containing the term “hookah” were obtained from April 1, 2017, to March 29, 2018. Text classifiers were used to identify clusters of topics that tended to co-occur in posts (n=176,706). Results: The most prevalent topic cluster was Person Tagging (use of @username to tag another Twitter account in a post) at 21.58% (38,137/176,706), followed by Promotional or Social Events (eg, mentions of ladies’ nights, parties, etc) at 20.20% (35,701/176,706) and Appeal or Abuse Liability (eg, craving, enjoying hookah) at 18.12% (32,013/176,706). Additional topics included Hookah Use Behavior (eg, mentions of taking a “hit” of hookah) at 11.67% (20,603/176,706), Polysubstance Use (eg, hookah use along with other substances) at 10.95% (19,353/176,706), Buying or Selling (eg, buy, order, purchase, sell) at 9.37% (16,552/176,706), and Flavors (eg, mint, cinnamon, watermelon) at 1.66% (2927/176,706). The topic Dislike of Hookah (eg, hate, quit, dislike) was rare at 0.59% (1043/176,706). Conclusions: Social events, appeal or abuse liability, flavors, and polysubstance use were the common contexts and experiences associated with Twitter discussions about hookah in 2017-2018. Considered in concert with traditional data sources about hookah, these results suggest that social events, appeal or abuse liability, flavors, and polysubstance use warrant consideration as targets in future surveillance, policy making, and interventions addressing hookah.

  • Lisbon, Portugal. Source: Flickr; Copyright: islandjoe; URL: https://www.flickr.com/photos/islandjoe/5580156262/in/album-72157626281805233/; License: Creative Commons Attribution (CC-BY).

    New Integrated Model Approach to Understand the Factors That Drive Electronic Health Record Portal Adoption: Cross-Sectional National Survey

    Abstract:

    Background: The future of health care delivery is becoming more patient-focused, and electronic health record (EHR) portals are gaining more attention from governments worldwide, which consider this technology a valuable asset for the future sustainability of national health care systems. Overall, this makes the adoption of EHR portals an important field to study. Objective: The aim of this study is to understand the factors that drive individuals to adopt EHR portals. Methods: We applied a new adoption model that combines 3 different theories, namely, the extended unified theory of acceptance and use of technology, the health belief model, and the diffusion of innovation; all 3 theories provided relevant contributions to the understanding of EHR portals. To test the research model, we used the partial least squares causal modeling approach. We executed a national survey based on randomly generated mobile phone numbers and collected 139 questionnaires. Results: Performance expectancy (beta=.203; t=2.699), compatibility (beta=.530; t=6.189), and habit (beta=.251; t=2.660) have a statistically significant impact on behavior intention (R2=76.0%). Habit (beta=.378; t=3.821), self-perception (beta=.233; t=2.971), and behavior intention (beta=.263; t=2.379) have a statistically significant impact on use behavior (R2=61.8%). In addition, behavior intention (beta=.747; t=10.737) has a statistically significant impact on intention to recommend (R2=69.0%); results demonstrability (beta=.403; t=2.888) and compatibility (beta=.337; t=2.243) have a statistically significant impact on effort expectancy (R2=48.3%); and compatibility (beta=.594; t=6.141) has a statistically significant impact on performance expectancy (R2=42.7%). Conclusions: Our research model yields good results, with substantial R2 values for the most important dependent variables that explain the adoption of EHR portals: behavior intention and use behavior.

  • Source: Image created by the Authors; Copyright: The Authors; URL: http://www.jmir.org/2018/11/e12052/; License: Creative Commons Attribution (CC-BY).

    Evaluation of a Web-Based Intervention for Multiple Health Behavior Changes in Patients With Coronary Heart Disease in Home-Based Rehabilitation: Pilot...

    Abstract:

    Background: Web-based and theory-based interventions for multiple health behaviors appear to be a promising approach with respect to the adoption and maintenance of a healthy lifestyle in cardiac patients who have been discharged from the hospital. Until now, no randomized controlled trials have tested this assumption among Chinese rehabilitation patients with coronary heart disease using a Web-based intervention. Objective: The study aim was to evaluate the effect of an 8-week Web-based intervention in terms of physical activity (PA), fruit and vegetable consumption (FVC), lifestyle changes, social-cognitive outcomes, and health outcomes compared with a waiting control group in Chinese cardiac patients. The intervention content was based on the health action process approach. Self-reported data were evaluated, including PA, FVC, healthy lifestyle (the synthesis of PA and FVC), internal resources (combination of intention, self-efficacy, and planning), and an external resource (social support) of PA and FVC behaviors, as well as perceived health outcomes (body mass index, quality of life, and depression). Methods: In a randomized controlled trial, 136 outpatients with coronary heart disease from the cardiac rehabilitation center of a hospital in China were recruited. After randomization and exclusion of unsuitable participants, 114 patients were assigned to 1 of the 2 groups: (1) the intervention group: first 4 weeks on PA and subsequent 4 weeks on FVC and (2) the waiting control group. A total of 2 Web-based assessments were conducted, 1 at the beginning of the intervention (T1, N=114) and 1 at the end of the 8-week intervention (T2, N=83). The enrollment and follow-up took place from December 2015 to May 2016. Results: The Web-based intervention outperformed the control condition for PA, FVC, internal resources of PA and FVC, and an external resource of FVC, with an eta-squared effect size ranging from 0.06 to 0.43. Furthermore, the intervention effect was seen in the improvement of quality of life (F1,79=16.36, P<.001, η2=.17). When predicting a healthy lifestyle at follow-up, baseline lifestyle (odds ratio, OR 145.60, 95% CI 11.24-1886; P<.001) and the intervention (OR 21.32, 95% CI 2.40-189.20; P=.006) were found to be significant predictors. Internal resources for FVC mediated the effect of the intervention on the adoption of a healthy lifestyle (R2adj=.29; P=.001), indicating that if the intervention increased the internal resource of behavior, the adoption of a healthy lifestyle was more likely. Conclusions: Patients’ psychological resources such as motivation, self-efficacy, planning, and social support as well as lifestyle can be improved by a Web-based intervention that focuses on both PA and FVC. Such an intervention enriches extended rehabilitation approaches for cardiac patients to be active and remain healthy in daily life after hospital discharge. Trial Registration: ClinicalTrials.gov NCT01909349; https://clinicaltrials.gov/ct2/show/NCT01909349 (Archived by WebCite at http://www.webcitation.org/6pHV1A0G1)

  • Star system for measuring engagement. Source: iStock by Getty Images; Copyright: Camille Short; URL: https://www.istockphoto.com/au/photo/rating-five-stars-on-blackboard-gm935832236-256033894; License: Licensed by the authors.

    Measuring Engagement in eHealth and mHealth Behavior Change Interventions: Viewpoint of Methodologies

    Abstract:

    Engagement in electronic health (eHealth) and mobile health (mHealth) behavior change interventions is thought to be important for intervention effectiveness, though what constitutes engagement and how it enhances efficacy has been somewhat unclear in the literature. Recently published detailed definitions and conceptual models of engagement have helped to build consensus around a definition of engagement and improve our understanding of how engagement may influence effectiveness. This work has helped to establish a clearer research agenda. However, to test the hypotheses generated by the conceptual models, we need to know how to measure engagement in a valid and reliable way. The aim of this viewpoint is to provide an overview of engagement measurement options that can be employed in eHealth and mHealth behavior change intervention evaluations, discuss methodological considerations, and provide direction for future research. To identify measures, we used snowball sampling, starting from systematic reviews of engagement research as well as measures utilized in studies known to the authors. A wide range of methods to measure engagement were identified, including qualitative measures, self-report questionnaires, ecological momentary assessments, system usage data, sensor data, social media data, and psychophysiological measures. Each measurement method is appraised and examples are provided to illustrate possible use in eHealth and mHealth behavior change research. Recommendations for future research are provided, based on the limitations of current methods and the heavy reliance on system usage data as the sole assessment of engagement. The validation and adoption of a wider range of engagement measurements and their thoughtful application to the study of engagement are encouraged.

  • Manage My Pain app (montage). Source: The Authors / pngpix; Copyright: The Authors; URL: http://www.jmir.org/2018/11/e12001/; License: Creative Commons Attribution + ShareAlike (CC-BY-SA).

    Defining and Predicting Pain Volatility in Users of the Manage My Pain App: Analysis Using Data Mining and Machine Learning Methods

    Abstract:

    Background: Measuring and predicting pain volatility (fluctuation or variability in pain scores over time) can help improve pain management. Perceptions of pain and its consequent disabling effects are often heightened under the conditions of greater uncertainty and unpredictability associated with pain volatility. Objective: This study aimed to use data mining and machine learning methods to (1) define a new measure of pain volatility and (2) predict future pain volatility levels from users of the pain management app, Manage My Pain, based on demographic, clinical, and app use features. Methods: Pain volatility was defined as the mean of absolute changes between 2 consecutive self-reported pain severity scores within the observation periods. The k-means clustering algorithm was applied to users’ pain volatility scores at the first and sixth month of app use to establish a threshold discriminating low from high volatility classes. (A minimal illustrative code sketch of this volatility measure and clustering step appears after this article list.) Subsequently, we extracted 130 demographic, clinical, and app usage features from the first month of app use to predict these 2 volatility classes at the sixth month of app use. Prediction models were developed using 4 methods: (1) logistic regression with ridge estimators; (2) logistic regression with Least Absolute Shrinkage and Selection Operator; (3) Random Forests; and (4) Support Vector Machines. Overall prediction accuracy and accuracy for both classes were calculated to compare the performance of the prediction models. Training and testing were conducted using 5-fold cross-validation. A class imbalance issue was addressed using random subsampling of the training dataset. Users with at least 5 pain records in both the predictor and outcome periods (N=782 users) were included in the analysis. Results: The k-means clustering algorithm was applied to pain volatility scores to establish a threshold of 1.6 to differentiate between low and high volatility classes. After validating the threshold using random subsamples, 2 classes were created: low volatility (n=611) and high volatility (n=171). In this class-imbalanced dataset, all 4 prediction models achieved 78.1% (611/782) to 79.0% (618/782) overall accuracy. However, all models had a prediction accuracy of less than 18.7% (32/171) for the high volatility class. After addressing the class imbalance issue using random subsampling, results improved across all models for the high volatility class to greater than 59.6% (102/171). The prediction model based on Random Forests performed the best, consistently achieving approximately 70% accuracy for both classes across 3 random subsamples. Conclusions: We propose a novel method for measuring pain volatility. Cluster analysis was applied to divide users into subsets of low and high volatility classes. These classes were then predicted at the sixth month of app use with an acceptable degree of accuracy using machine learning methods based on the features extracted from demographic, clinical, and app use information from the first month.

  • Hardware and software used for digital fingerprinting and study data collection. Source: The Authors; Copyright: The Authors; URL: http://www.jmir.org/2018/11/e11541/; License: Licensed by JMIR.

    Feasibility, Acceptability, and Adoption of Digital Fingerprinting During Contact Investigation for Tuberculosis in Kampala, Uganda: A Parallel-Convergent...

    Abstract:

    Background: In resource-constrained settings, challenges with unique patient identification may limit continuity of care, monitoring and evaluation, and data integrity. Biometrics offers an appealing but understudied potential solution. Objective: The objective of this mixed-methods study was to understand the feasibility, acceptability, and adoption of digital fingerprinting for patient identification in a study of household tuberculosis contact investigation in Kampala, Uganda. Methods: Digital fingerprinting was performed using multispectral fingerprint scanners. We tested associations between demographic, clinical, and temporal characteristics and failure to capture a digital fingerprint. We used generalized estimating equations and a robust covariance estimator to account for clustering. In addition, we evaluated the clustering of outcomes by household and community health workers (CHWs) by calculating intraclass correlation coefficients (ICCs). To understand the determinants of intended and actual use of fingerprinting technology, we conducted 15 in-depth interviews with CHWs and applied a widely used conceptual framework, the Technology Acceptance Model 2 (TAM2). Results: Digital fingerprints were captured for 75.5% (694/919) of participants, with extensive clustering by household (ICC=.99) arising from software (108/179, 60.3%) and hardware (65/179, 36.3%) failures. Clinical and demographic characteristics were not markedly associated with fingerprint capture. CHWs successfully fingerprinted all contacts in 70.1% (213/304) of households, with modest clustering of outcomes by CHWs (ICC=.18). The proportion of households in which all members were successfully fingerprinted declined over time (ρ=.30, P<.001). In interviews, CHWs reported that fingerprinting failures lowered their perceptions of the quality of the technology, threatened their social image as competent health workers, and made the technology more difficult to use. Conclusions: We found that digital fingerprinting was feasible and acceptable for individual identification, but problems implementing the hardware and software led to a high failure rate. Although CHWs found fingerprinting to be acceptable in principle, their intention to use the technology was tempered by perceptions that it was inconsistent and of questionable value. TAM2 provided a valuable framework for understanding the motivations behind CHWs’ intentions to use the technology. We emphasize the need for routine process evaluation of biometrics and other digital technologies in resource-constrained settings to assess implementation effectiveness and guide improvement of delivery.

  • Source: Sun Yat-sen University Zhongshan Ophthalmic Center; Copyright: Sun Yat-sen University Zhongshan Ophthalmic Center; URL: http://www.jmir.org/2018/11/e11144/; License: Licensed by JMIR.

    An Interpretable and Expandable Deep Learning Diagnostic System for Multiple Ocular Diseases: Qualitative Study

    Abstract:

    Background: Although artificial intelligence performs promisingly in medicine, few automatic disease diagnosis platforms can clearly explain why a specific medical decision is made. Objective: We aimed to devise and develop an interpretable and expandable diagnosis framework for automatically diagnosing multiple ocular diseases and providing treatment recommendations for the particular illness of a specific patient. Methods: As the diagnosis of ocular diseases highly depends on observing medical images, we chose ophthalmic images as research material. All medical images were labeled as 1 of 4 disease types or normal (5 classes in total); each image was decomposed into different parts according to anatomical knowledge and then annotated. This process yields the positions and primary information on different anatomical parts and foci observed in medical images, thereby bridging the gap between medical image and diagnostic process. Next, we applied images and the information produced during the annotation process to implement an interpretable and expandable automatic diagnostic framework with deep learning. Results: This diagnosis framework comprises 4 stages. The first stage identifies the type of disease (identification accuracy, 93%). The second stage localizes the anatomical parts and foci of the eye (localization accuracy: images under natural light without fluorescein sodium eye drops, 82%; images under cobalt blue light or natural light with fluorescein sodium eye drops, 90%). The third stage classifies the specific condition of each anatomical part or focus using the result from the second stage (average accuracy for multiple classification problems, 79%-98%). The last stage provides treatment advice according to medical experience and artificial intelligence, which currently applies only to pterygium (accuracy, >95%). Based on this, we developed a telemedical system that can show detailed reasons for a particular diagnosis to doctors and patients to help doctors with medical decision making. This system can carefully analyze medical images and provide treatment advice according to the analysis results and the consultation between a doctor and a patient. Conclusions: The interpretable and expandable medical artificial intelligence platform was successfully built; this system can identify the disease, distinguish different anatomical parts and foci, discern the diagnostic information relevant to the diagnosis of diseases, and provide treatment suggestions. During this process, the whole diagnostic flow becomes clear and understandable to both doctors and their patients. Moreover, other diseases can be seamlessly integrated into this system without any influence on existing modules or diseases. Furthermore, this framework can assist in the clinical training of junior doctors. Because high-level medical resources are scarce, not everyone can receive high-quality professional diagnosis and treatment. This framework can not only be applied in hospitals with insufficient medical resources to reduce the pressure on experienced doctors but also be deployed in remote areas to help doctors diagnose common ocular diseases.

  • A  literature search conducted on PubMed using search terms related to internet-based therapy for trauma (montage). Source: PubMed / Placeit; Copyright: JMIR Publications; URL: http://www.jmir.org/2018/11/e280/; License: Creative Commons Attribution (CC-BY).

    Internet-Delivered Early Interventions for Individuals Exposed to Traumatic Events: Systematic Review

    Abstract:

    Background: Over 75% of individuals are exposed to a traumatic event, and a substantial minority goes on to experience mental health problems that can be chronic and pernicious in their lifetime. Early interventions show promise for preventing psychopathology following trauma; however, a face-to-face intervention can be costly, and there are many barriers to accessing this format of care. Objective: The aim of this study was to systematically review studies of internet-delivered early interventions for trauma-exposed individuals. Methods: A literature search was conducted in PsycINFO and PubMed for papers published between 1991 and 2017. Papers were included if the following criteria were met: (1) an internet-based intervention was described and applied to individuals exposed to a traumatic event; (2) the authors stated that the intervention was intended to be applied early following trauma exposure or as a preventive intervention; and (3) data on mental health symptoms at pre- and postintervention were described (regardless of whether these were primary outcomes). Methodological quality of included studies was assessed using the Downs and Black checklist. Results: The interventions in the 7 studies identified were categorized as selected (ie, delivered to an entire sample after trauma regardless of psychopathology symptoms) or indicated (ie, delivered to those endorsing some level of posttraumatic distress). Selected interventions did not produce significant symptom improvement compared with treatment-as-usual or no-intervention control groups. However, indicated interventions yielded significant improvements over other active control conditions on mental health outcomes. Conclusions: Consistent with the notion that many experience natural recovery following trauma, results imply that indicated early internet-delivered interventions hold the most promise in future prevention efforts. More studies that use rigorous methods and clearly defined outcomes are needed to evaluate the efficacy of early internet-delivered interventions. Moreover, basic research on risk and resilience factors following trauma exposure is necessary to inform indicated internet-delivered interventions.

  • Source: Pixabay; Copyright: picjumbo_com; URL: https://pixabay.com/en/student-notebook-female-study-type-865073/; License: Public Domain (CC0).

    Automatic Classification of Online Doctor Reviews: Evaluation of Text Classifier Algorithms

    Abstract:

    Background: An increasing number of doctor reviews are being generated by patients on the internet. These reviews address a diverse set of topics (features), including wait time, office staff, doctor’s skills, and bedside manner. Most previous work on automatic analysis of Web-based customer reviews assumes that (1) product features are described unambiguously by a small number of keywords, for example, battery for phones and (2) the opinion for each feature has a positive or negative sentiment. However, in the domain of doctor reviews, this setting is too restrictive: a feature such as visit duration for doctor reviews may be expressed in many ways and does not necessarily have a positive or negative sentiment. Objective: This study aimed to adapt existing and propose novel text classification methods for the domain of doctor reviews. These methods are evaluated on their accuracy in classifying a diverse set of doctor review features. Methods: We first manually examined a large number of reviews to extract a set of features that are frequently mentioned in the reviews. Then we proposed a new algorithm that goes beyond bag-of-words or deep learning classification techniques by leveraging natural language processing (NLP) tools. Specifically, our algorithm automatically extracts dependency tree patterns and uses them to classify review sentences. Results: We evaluated several state-of-the-art text classification algorithms as well as our dependency tree–based classifier algorithm on a real-world doctor review dataset. We showed that methods using deep learning or NLP techniques tend to outperform traditional bag-of-words methods. In our experiments, the 2 best methods used NLP techniques; on average, our proposed classifier performed 2.19% better than an existing NLP-based method, but many of its predictions of specific opinions were incorrect. Conclusions: We conclude that it is feasible to classify doctor reviews. Automatically classifying these reviews would allow patients to easily search for doctors based on their personal preference criteria.

  • Source: Freepik; Copyright: Prakasit Khuansuwan; URL: https://www.freepik.com/free-photo/young-business-woman-stressed-from-work-sitting-staircase-take-a-look-on-her-laptop_1603389.htm; License: Licensed by JMIR.

    The Generalizability of Randomized Controlled Trials of Self-Guided Internet-Based Cognitive Behavioral Therapy for Depressive Symptoms: Systematic Review...

    Abstract:

    Background: Self-guided internet-based cognitive behavioral therapies (iCBTs) for depressive symptoms may substantially increase accessibility to mental health treatment. Despite this, questions remain as to the generalizability of the research on self-guided iCBT. Objective: We sought to describe the clinical entry criteria used in studies of self-guided iCBT, explore the criteria’s effects on study outcomes, and compare the frequency of use of these criteria with their use in studies of face-to-face psychotherapy and antidepressant medications. We hypothesized that self-guided iCBT studies would use more stringent criteria that would bias the sample toward those with a less complex clinical profile, thus inflating treatment outcomes. Methods: We updated a recently published meta-analysis by conducting a systematic literature search in PubMed, MEDLINE, PsycINFO, and EMBASE. We conducted a meta-regression analysis to test the effect of the different commonly used psychiatric entry criteria on the treatment-control differences. We also compared the frequency with which exclusion criteria were used in the self-guided iCBT studies versus studies of face-to-face psychotherapy and antidepressants from a recently published review. Results: Our search yielded 5 additional studies, which we added to the 16 studies identified by Karyotaki and colleagues in 2017. Few self-guided iCBT studies excluded patients with severe depressive symptoms (6/21, 29%), but self-guided iCBT studies were more likely than antidepressant (14/170, 8.2%) studies to use this criterion. However, self-guided iCBT studies did not use this criterion more frequently than face-to-face psychotherapy studies (6/16, 38%). Beyond this, we found no evidence that self-guided iCBTs used more stringent entry criteria. Strong evidence suggested that they were actually less likely to use most entry criteria, especially exclusions on the basis of substance use or personality pathology. None of the entry criteria used had an effect on outcomes. Conclusions: A conservative interpretation of our findings is that the patient population sampled in the literature on self-guided iCBT is relatively comparable with that of studies of antidepressants or face-to-face psychotherapy. Alternatively, studies of unguided cognitive behavioral therapy may sample from a more heterogeneous and representative patient population. Until evidence emerges to suggest otherwise, the patient population sampled in self-guided iCBT studies cannot be considered as less complex than the patient population from face-to-face psychotherapy or antidepressant studies.

  • US News and World Report Best Hospital Rankings and hospital Twitter feed (montage). Source: JMIR Publications/Placeit; Copyright: JMIR Publications; URL: http://www.jmir.org/2018/11/e289/; License: Creative Commons Attribution (CC-BY).

    Correlations Between Hospitals’ Social Media Presence and Reputation Score and Ranking: Cross-Sectional Analysis

    Abstract:

    Background: The US News and World Report reputation score correlates strongly with overall rank in adult and pediatric hospital rankings. Social media affects how information is disseminated to physicians and is used by hospitals as a marketing tool to recruit patients. It is unclear whether the reputation score for adult and children’s hospitals relates to social media presence. Objective: The objective of our study was to analyze the association between a hospital’s social media metrics and the US News 2017-2018 Best Hospital Rankings for adult and children’s hospitals. Methods: We conducted a cross-sectional analysis of the reputation score, total score, and social media metrics (Twitter, Facebook, and Instagram) of hospitals that received at least one subspecialty ranking in the 2017-2018 US News publicly available annual rankings. Regression analysis was employed to analyze the partial correlation coefficients between social media metrics and a hospital’s total points (ie, rank) and reputation score for both adult and children’s hospitals while controlling for bed size and time on Twitter. Results: We observed significant correlations for children’s hospitals’ reputation score and total points with the number of Twitter followers (total points: r=.465, P<.001; reputation: r=.524, P<.001) and Facebook followers (total points: r=.392, P=.002; reputation: r=.518, P<.001). Significant correlations for the adult hospitals’ reputation score were found with the number of Twitter followers (r=.848, P<.001), number of tweets (r=.535, P<.001), Klout score (r=.242, P=.02), and Facebook followers (r=.743, P<.001). In addition, significant correlations for adult hospitals’ total points were found with Twitter followers (r=.548, P<.001), number of tweets (r=.358, P<.001), Klout score (r=.203, P=.05), Facebook followers (r=.500, P<.001), and Instagram followers (r=.692, P<.001). Conclusions: A statistically significant correlation exists between multiple social media metrics and both a hospital’s reputation score and total points (ie, overall rank). This association may indicate that a hospital’s reputation is influenced by its social media presence or that the reputation or rank of a hospital drives social media followers.

  • Sample Facebook ad from the UCare study (montage). Source: The Authors / Placeit; Copyright: JMIR Publications; URL: http://www.jmir.org/2018/11/e29; License: Creative Commons Attribution (CC-BY).

    Using Facebook for Large-Scale Online Randomized Clinical Trial Recruitment: Effective Advertising Strategies

    Abstract:

    Targeted Facebook advertising can be an effective strategy for recruiting participants for a large-scale online study. Facebook advertising is useful for reaching people in a wide geographic area who match a specific demographic profile. It can also target people who would be unlikely to search for the information and would thus not be accessible via Google AdWords. It is especially useful when it is desirable not to raise awareness of the study in a demographic group that would be ineligible for the study. This paper describes the use of Facebook advertising to recruit and enroll 1145 women over a 15-month period for a randomized clinical trial to teach support skills to female partners of male smokeless tobacco users. This tutorial shares our study team’s experiences, lessons learned, and recommendations to help researchers design Facebook advertising campaigns. Topics covered include designing the study infrastructure to optimize recruitment and enrollment tracking, creating a Facebook presence via a fan page, designing ads that attract potential participants while meeting Facebook’s strict requirements, and planning and managing an advertising campaign that accommodates the rapid rate of diminishing returns for each ad.
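
Below is a minimal illustrative sketch, in Python, of the pain volatility measure and clustering step described in the Manage My Pain analysis above: the mean absolute change between consecutive self-reported pain scores, followed by a 2-cluster k-means split into low- and high-volatility classes. It is not the authors' code; the sample data, variable names, and the use of scikit-learn's KMeans are assumptions made only for illustration.

    # Illustrative sketch only (not the study's code); data and names are hypothetical.
    import numpy as np
    from sklearn.cluster import KMeans

    def pain_volatility(scores):
        """Mean of absolute changes between consecutive pain severity scores."""
        scores = np.asarray(scores, dtype=float)
        if len(scores) < 2:
            return np.nan  # too few records to define volatility
        return np.mean(np.abs(np.diff(scores)))

    # Hypothetical users: each row holds one user's self-reported pain scores (0-10).
    users = [
        [3, 3, 4, 3, 3, 4],   # stable ratings -> low volatility
        [2, 7, 1, 8, 3, 9],   # large swings   -> high volatility
        [5, 5, 6, 5, 6, 5],
        [1, 6, 2, 7, 1, 6],
    ]
    volatility = np.array([pain_volatility(u) for u in users]).reshape(-1, 1)

    # 2-cluster k-means; the midpoint between the cluster centers acts as the
    # threshold separating low- from high-volatility users (the abstract reports 1.6).
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(volatility)
    threshold = km.cluster_centers_.mean()
    labels = (volatility.ravel() > threshold).astype(int)  # 1 = high volatility
    print(f"threshold={threshold:.2f}, labels={labels.tolist()}")

In the study itself, the resulting low- and high-volatility classes at month 6 were then predicted from first-month features with classifiers such as random forests; the sketch stops at the volatility definition and thresholding step.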

Latest Submissions Open for Peer-Review:

  • Mood prediction of patients with mood disorder by machine learning using passive digital phenotypes based on circadian rhythm: a prospective observational cohort study

    Date Submitted: May 16, 2018

    Open Peer Review Period: Nov 17, 2018 - Jan 12, 2019

    Background: All organisms on Earth have their own circadian rhythm, and humans are no exception. Circadian rhythms are associated with various human states; mood disorders in particular are known to be very closely related to circadian rhythm disturbance. As digital technologies develop, attempts have also been made to derive clinical implications for mood disorders by acquiring vast amounts of digital log data and applying computational analysis techniques. Objective: The present study evaluated mood state/episode, activity, sleep, light exposure, and heart rate over a period of about two years by acquiring various digital log data through wearable devices and smartphone applications as well as conventional clinical assessments. We investigated a mood prediction algorithm developed with machine learning using passive digital phenotypes based on circadian rhythms. (A minimal illustrative sketch of this kind of feature-and-classifier pipeline appears after this submission list.) Methods: We performed a prospective observational cohort study of sixty patients with mood disorders (major depressive disorder, bipolar disorder type 1 and 2; MDD, BD I, and BD II, respectively) for two years. A smartphone application for self-recording daily mood scores and detecting light exposure (using the built-in sensor) was provided. Digital log data on activity, sleep, and heart rate were collected from activity trackers worn daily. Passive digital phenotypes were processed into 130 features based on circadian rhythms, and a mood prediction algorithm was developed using random forest. Results: The mood state prediction accuracies in all patients, MDD, BD I, and BD II were 76%, 78%, 76%, and 79%, with area under the curve (AUC) values of 0.83, 0.84, 0.84, and 0.81, respectively. The accuracies in all patients for no episode (NE), depressive episode (DE), manic episode (ME), and hypomanic episode (HME) were 91.3%, 91.2%, 99.3%, and 98.2%, with AUCs of 0.972, 0.965, 1, and 0.999, respectively. The prediction accuracy in patients with BD II was distinctively high and balanced, at 92.1%, 93.1%, and 96.8%, with AUCs of 0.975, 0.98, and 0.997 for NE, DE, and HME, respectively. Conclusions: Based on the theoretical basis of chronobiology, this study proposes a model for future research by developing a mood prediction algorithm that uses machine learning to process and reclassify digital log data. Beyond its academic value, this study is expected to be of practical help in improving the prognosis of patients with mood disorders, as the rapid expansion of digital technology makes actual clinical application possible. Clinical Trial: ClinicalTrials.gov: NCT03088657

  • A Realist Evaluation of the development of a set of guidelines for technological interventions made for children and young people with ADHD to self-manage their condition

    Date Submitted: Nov 16, 2018

    Open Peer Review Period: Nov 17, 2018 - Jan 12, 2019

    Background: Attention Deficit Hyperactivity Disorder (ADHD) is a complex neurodevelopmental disorder characterised by inattention, hyperactivity and impulsivity. ADHD can affect the individual, their family and the community. ADHD is managed using pharmacological and non-pharmacological treatments, which principally involve others helping children and young people (CAYP) manage their ADHD rather than CAYP learning self-management strategies themselves. Over recent years, technology has been harnessed to create interventions that facilitate the self-management of ADHD in CAYP. Despite a clear potential to improve the effectiveness and personalisation of interventions, there are currently no guidelines based on existing evidence or theories to underpin the development of technologies that aim to help CAYP self-manage their ADHD. Objective: To create evidence-based guidelines with key stakeholders that will provide recommendations for the future development of technological interventions which aim to facilitate the self-management of ADHD. Methods: A realist evaluation approach was adopted in five phases. Phase one involved identifying propositions (or hypotheses) outlining what could work for such an intervention. Phase two involved the identification of middle-range theories of behaviour change to underpin the propositions. Phase three involved the identification and development of Context Mechanism Outcome Configurations (CMOCs), which essentially state which elements of the intervention could be affected by which contexts and what the outcomes of these could be. Phase four involved the validation and refinement of the propositions via interviews with key stakeholders (CAYP with ADHD, their parents and specialist clinicians). Phase five involved the development of the guidelines based on the identified middle-range theories and interview data. Results: Six specialist clinicians, eight parents and seven CAYP were recruited to this study. Seven key themes were identified: (1) positive rewarding feedback; (2) downloadable gaming resources; (3) personalisable and adaptable components; (4) a psychoeducation component; (5) integration of self-management strategies; (6) goal setting; and (7) context (environmental and personal). The identified mechanisms interacted with the variable contexts in which a complex technological intervention of this nature could be delivered. Conclusions: Complex intervention development for complex populations such as CAYP with ADHD should adopt various methodologies and methods, such as realist evaluation and user-centred design, that involve developing the intervention with key stakeholders to increase the likelihood that the intervention will succeed. The guidelines we describe can be used for the future development of technologies that aim to facilitate the self-management of ADHD for CAYP.

  • Best Practices for Data Visualization: Creating and Evaluating a Report for an Evidence-Based Fall Prevention Program

    Date Submitted: Nov 12, 2018

    Open Peer Review Period: Nov 17, 2018 - Nov 22, 2018

    Background: Data visualization experts have identified core principles to follow when creating visual displays of data that facilitate comprehension. Such principles can be applied to creating effective reports for clinicians that display compliance with quality improvement protocols. A basic tenet of implementation science is continuous monitoring and feedback. Applying best practices for data visualization to reports for clinicians can catalyze implementation and sustainment of new protocols. Objective: To apply best practices for data visualization to create reports that clinicians find clear and useful. Methods: Using an evidence-based fall prevention program, Fall TIPS (Tailoring Interventions for Patient Safety), we created a report showing program compliance. First, we conducted a systematic literature review to identify best practices for data visualization. We applied these findings to a monthly report displaying compliance with the Fall TIPS protocol. We refined the Fall TIPS Monthly Report (FTMR) based on feedback collected via a questionnaire we developed, which was based on the requirements for effective data display suggested by the expert Stephen Few. We then evaluated usability of the FTMR using the 15-item Health Information Technology Usability Evaluation Scale (Health-ITUES). Items were rated on a 5-point Likert scale from strongly disagree (1) to strongly agree (5). Results: The results of the systematic literature review emphasized that the ideal data display maximizes the information communicated while minimizing the cognitive effort involved in data interpretation. Factors to consider include selecting the correct type of display (e.g. line vs bar graph) and keeping reports simple. The pre-modification (n=79) and post-modification (n=72) qualitative and quantitative evaluations of the final FTMR revealed improved perceptions of the visual display of the reports and their usability. Themes that emerged from the staff interviews emphasized the value of simplified reports, meaningful data, and usability to clinicians. The mean (SD) rating on the Health-ITUES scale was 3.86 (0.19) in the pre-modification period and increased to 4.29 (0.11) in the post-modification period (Mann-Whitney U test, z=-12.25, P<0.001). Conclusions: Best practices identified through a systematic review can be applied to create effective reports for clinician use. The lessons learned from evaluating FTMR perceptions and measuring usability can be applied to creating effective reports for clinician use in the context of other implementation science projects.

  • PACO - Physical Activity Concept Ontology

    Date Submitted: Nov 14, 2018

    Open Peer Review Period: Nov 16, 2018 - Jan 11, 2019

    Background: Physical activity data provide important information on disease onset, progression, and treatment outcomes. Although analyzing physical activity data in conjunction with other clinical and microbiological data will lead to new insights crucial for improving human health, such analysis has been hampered partly by large variations in the way the data are collected and presented. Objective: The goal of this study was to develop a Physical Activity Concept Ontology (PACO) to support structuring and standardizing heterogeneous descriptions of physical activities. Methods: We prepared a corpus of 1140 unique questions collected from various physical activity questionnaires and scales, as well as existing standardized terminologies and ontologies. We extracted concepts relevant to physical activity from the corpus using MUTT (Multipurpose Text processing Tool). The target concepts were formalized into an ontology using Protégé (version 4). Evaluation of PACO was performed along two aspects: structural consistency and structural cohesiveness. Evaluations were conducted using the Ontology Debugger plugin of Protégé and the OntOlogy Pitfall Scanner (OOPS!). A use case application of PACO was demonstrated by structuring and standardizing 36 exercise habit statements and then automatically classifying them to a defined class of either sufficiently active or insufficiently active using FaCT++, an ontology reasoner available in Protégé. Results: PACO was constructed using the 268 unique concepts extracted from the questionnaires and assessment scales. PACO contains 225 classes including 9 defined classes, 8 object properties, 1 data property, and 23 instances (excluding the 36 exercise statements). The maximum depth of classes is 4 and the maximum number of siblings is 38. The evaluations with the ontology auditing tools confirmed that PACO is structurally consistent and cohesive. We showed in a small sample of 36 exercise habit statements that we could map text segments to relevant PACO concepts (e.g., exercise type class, intensity, and total minutes exercised per week) and infer from these concepts output determinations of sufficiently active or insufficiently active, using the FaCT++ reasoner. Conclusions: As a first step toward standardizing and structuring heterogeneous descriptions of physical activities for integrative data analyses, PACO was built with concepts collected from physical activity questionnaires and scales. PACO was evaluated to be structurally consistent and cohesive, and it was also demonstrated to be potentially useful in standardizing heterogeneous physical activity descriptions and classifying them into clinically meaningful categories that reflect adequacy of exercise. Clinical Trial: NA

  • Artificial Intelligence and the Future of Primary Care: An Exploratory Qualitative Study of UK GPs’ Views

    Date Submitted: Nov 13, 2018

    Open Peer Review Period: Nov 13, 2018 - Jan 8, 2019

    Background: The potential for machine learning to disrupt the medical professions is the subject of ongoing debate within biomedical informatics and related fields. Objective: To explore GPs’ opinions about the potential impact of future technology on key tasks in primary care. Methods: We conducted a web-based survey of 720 UK GPs’ opinions about the likelihood that future technology will be able to fully replace GPs in performing six key primary care tasks; if respondents considered replacement for a particular task likely, they were asked to estimate how soon the technological capacity might emerge. We performed a qualitative descriptive analysis of written responses (‘comments’) to an open-ended question. Results: Comments were classified into three major categories in relation to primary care: (i) limitations of future technology; (ii) potential benefits of future technology; and (iii) social and ethical concerns. Perceived limitations included the beliefs that communication and empathy are exclusively human competencies; many GPs also considered clinical reasoning and the ability to provide value-based care to necessitate physicians’ judgements. Perceived benefits of technology included expectations about improved efficiencies, in particular with respect to the reduction of administrative burdens on physicians. Social and ethical concerns encompassed multiple, divergent themes, including the need to train more doctors to overcome workforce shortfalls and misgivings about the acceptability of future technology to patients. However, some GPs believed that the failure to adopt technological innovations could incur harms to both patients and physicians. Conclusions: This study presents timely information on physicians’ views about the scope of artificial intelligence in primary care. Overwhelmingly, GPs considered the potential of artificial intelligence to be limited. These views differ from the predictions of biomedical informaticians. More extensive, stand-alone qualitative work would provide a more in-depth understanding of GPs’ views. Clinical Trial: (Not applicable)

  • Assessment of CHA2DS2-VASc Score for the Risk Stratification of Hospital Admission in Patients with Cardiovascular Diseases Receiving a Fourth-Generation Synchronous Telehealth Program

    Date Submitted: Nov 11, 2018

    Open Peer Review Period: Nov 11, 2018 - Nov 21, 2018

    Background: Telehealth programs are diverse, with mixed results. A comprehensive and integrated approach is needed to evaluate who benefits from such programs in terms of improved clinical outcomes. Objective: The CHA2DS2-VASc score has been widely used to predict stroke in patients with atrial fibrillation. This study adopted the predictive concept of the CHA2DS2-VASc score and investigated the score for risk stratification of hospital admission in patients with cardiovascular diseases receiving a fourth-generation synchronous telehealth program. (A minimal sketch of how the score is computed appears immediately after this abstract.) Methods: This was a retrospective cohort study. We recruited patients with cardiovascular disease who received the fourth-generation synchronous telehealth program at the National Taiwan University Hospital between October 2012 and June 2015. We enrolled 431 patients who had joined the telehealth program and compared them with 1549 control patients. Cardiovascular hospitalization was estimated with Kaplan-Meier curves. The CHA2DS2-VASc score was used as the composite parameter to stratify patient severity. The association between baseline characteristics and clinical outcomes was assessed with the Cox proportional hazards model. Results: The mean follow-up duration was 886.1 ± 531.0 days in patients receiving the fourth-generation synchronous telehealth program and 707.1 ± 431.4 days in the control group (p<0.0001). The telehealth group had more comorbidities at baseline than the control group. Patients with a higher CHA2DS2-VASc score (≥4) had a lower estimated rate of freedom from cardiovascular hospitalization (46.5% vs. 54.8%, log-rank test p=0.0028). Patients receiving the telehealth program who had a CHA2DS2-VASc score ≥4 were less likely to be admitted for cardiovascular disease (61.5% vs. 41.8%, log-rank test p=0.010). The telehealth program remained a significant prognostic factor in multivariable Cox analysis of patients with a CHA2DS2-VASc score ≥4 (HR 0.36, 95% CI 0.22-0.62; p<0.0001). Conclusions: A higher CHA2DS2-VASc score is associated with more cardiovascular admissions. Patients with a CHA2DS2-VASc score ≥4 benefited most, in terms of freedom from cardiovascular hospitalization, from the fourth-generation telehealth program. Clinical Trial: N/A
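
To make the risk stratification in the preceding abstract concrete, the sketch below computes a CHA2DS2-VASc score from its standard components (congestive heart failure, hypertension, age ≥75 years, diabetes, stroke/TIA/thromboembolism, vascular disease, age 65-74 years, and female sex) and flags the ≥4 threshold used in that submission. This is not the study's code, and the patient record fields are hypothetical.

    # Minimal sketch of the standard CHA2DS2-VASc score; field names are hypothetical.
    def cha2ds2_vasc(patient):
        score = 0
        score += 1 if patient["congestive_heart_failure"] else 0      # C
        score += 1 if patient["hypertension"] else 0                  # H
        if patient["age"] >= 75:
            score += 2                                                # A2: age >= 75 years
        elif patient["age"] >= 65:
            score += 1                                                # A: age 65-74 years
        score += 1 if patient["diabetes"] else 0                      # D
        score += 2 if patient["stroke_tia_thromboembolism"] else 0    # S2
        score += 1 if patient["vascular_disease"] else 0              # V
        score += 1 if patient["sex"] == "female" else 0               # Sc
        return score

    # Example: a 78-year-old woman with hypertension and diabetes scores 5 (>=4 stratum).
    patient = {
        "age": 78, "sex": "female", "congestive_heart_failure": False,
        "hypertension": True, "diabetes": True,
        "stroke_tia_thromboembolism": False, "vascular_disease": False,
    }
    score = cha2ds2_vasc(patient)
    print(score, "high-risk stratum (>=4)" if score >= 4 else "lower-risk stratum")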

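Finally, to illustrate the kind of passive-phenotype pipeline described in the mood prediction submission above, here is a minimal sketch. It is not the study's pipeline: the four circadian-related features, the synthetic data, and the labels are invented purely to show how such features could feed a random forest classifier evaluated with cross-validation.

    # Illustrative sketch only (not the study's pipeline); features, data, and labels are synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_days = 200

    # Hypothetical passive digital phenotypes, one row per patient-day.
    X = np.column_stack([
        rng.normal(7, 1.5, n_days),      # sleep duration (hours)
        rng.normal(23, 1.0, n_days),     # sleep onset clock time (hours)
        rng.normal(8000, 2000, n_days),  # daily step count
        rng.normal(70, 8, n_days),       # mean heart rate (beats per minute)
    ])
    y = rng.integers(0, 2, n_days)       # toy labels: 0 = no episode, 1 = mood episode

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

With random toy labels the accuracy hovers around chance; the point is only to show the feature-matrix-plus-classifier shape of the approach, not to reproduce the reported results.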