Published on 25.10.19 in Vol 21, No 10 (2019): October


    Original Paper

    An Online Survey for Pharmacoepidemiological Investigation (Survey of Non-Medical Use of Prescription Drugs Program): Validation Study

    1Rocky Mountain Poison & Drug Center, Denver Health and Hospital Authority, Denver, CO, United States

    2Injury Prevention Research Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States

    Corresponding Author:

    Joshua Curtis Black, PhD

    Rocky Mountain Poison & Drug Center

    Denver Health and Hospital Authority

    1391 N Speer Blvd, #600, M/C 0180

    Denver, CO

    United States

    Phone: 1 303 389 1652

    Fax: 1 303 389 1482

    Email: joshua.black@rmpds.org


    ABSTRACT

    Background: In rapidly changing fields such as the study of drug use, the need for accurate and timely data is paramount to properly inform policy and intervention decisions. Trends in drug use can change rapidly by month, and using study designs with flexible modules could present advantages. Timely data from online panels can inform proactive interventions against emerging trends, leading to a faster public response. However, threats to validity from using online panels must be addressed to create accurate estimates.

    Objective: The objective of this study was to demonstrate a comprehensive methodological approach that optimizes a nonprobability, online opt-in sample to provide timely, accurate national estimates on prevalence of drug use.

    Methods: The Survey of Non-Medical Use of Prescription Drugs Program from the Researched Abuse, Diversion and Addiction Related Surveillance (RADARS) System is an online, cross-sectional survey of drug use in the United States in which several best practices were implemented. To optimize final estimates, two best practices were investigated in detail: exclusion of respondents showing careless or improbable responding patterns and calibration of weights. Each method was implemented cumulatively, which improved key estimates during the third quarter 2018 survey launch. Cutoffs for five exclusion criteria were tested. Using a series of benchmarks, average relative bias and changes in bias were calculated for 33 different weighting variable combinations.

    Results: There were 148,274 invitations sent to panelists, of whom 40,021 (26.99%) initiated the survey. After eligibility assessment, 20.23% (29,998/148,274) of the completed questionnaires were available for analysis. A total of 0.52% (157/29,998) of respondents were excluded based on careless or improbable responses; however, these exclusions had larger impacts on lower volume drugs. The number of exclusions applied was negatively correlated with total dispensing volume by drug (Spearman ρ=–.88, P<.001). A weighting scheme including three demographic and two health characteristics reduced average relative bias by 31.2%. After weighting, estimates of drug use decreased, reflecting a weighted sample that was healthier on benchmark measures than the unweighted sample.

    Conclusions: Our study illustrates a new approach to using nonprobability online panels to achieve national prevalence estimates for drug abuse. We were able to overcome challenges with using nonprobability internet samples, including misclassification due to improbable responses. Final drug use and health estimates demonstrated concurrent validity to national probability-based drug use and health surveys. Inclusion of multiple best practices cumulatively improved the estimates generated. This method can bridge the information gap when there is a need for prompt, accurate national data.

    J Med Internet Res 2019;21(10):e15830

    doi:10.2196/15830


    Introduction

    Large governmental surveys, such as the National Survey on Drug Use and Health (NSDUH) in the United States, are used for nationwide drug use surveillance, offering researchers substantial statistical power for subgroup analyses, questionnaire consistency over decades, comprehensive and validated questionnaires, and probability-based geographic sampling for nationally representative estimates. However, these types of surveys cost millions of dollars a year to conduct, require training of field agents, and have a 2-year lag for data publication [1].

    Trends in drug use can change rapidly by month, and using study designs with flexible modules could present advantages. Timely data can inform proactive interventions against emerging trends, leading to a faster public response. Large population surveys have used computer-assisted interviewing [2-4] with increased accuracy of self-reported socially stigmatized behaviors [5,6]. Internet-based questionnaires are an extension of computer-assisted interviewing, albeit with additional sampling and validity concerns, but van Gelder et al [7] have specifically suggested that illicit drug use may be a use case for internet-based questionnaires in epidemiology.

    The use of online panels for public health research has grown in recent years [8-11]. Survey panels are groups of individuals who opt in to take surveys for modest compensation on a wide array of topics, maintained by commercial panel-access vendors. The sampling frame is theoretically suitable, since 90% of US adults use the internet [12]. Beyond efficient and rapid recruitment, panels offer superior anonymity and reductions in social desirability bias compared with in-person interviews [8,9].

    However, threats to validity unique to internet surveys must be addressed: removing careless or improbable responses [13], calibrating sample representativeness [14], preventing missing data [15], and managing low response rates [16]. Crucially, representativeness of the sample to the target population requires methodological development, since straightforward approaches such as poststratification demographic weighting are insufficient [17].

    This paper describes the development of a comprehensive methodology that addresses threats to the validity of using survey panels for national drug use estimates. The approach encompasses a mobile-friendly interface, skip logic [18], response randomization [19], careless/improbable response exclusions [13,20], and calibration weighting [21,22]. External validity was assessed by comparison with three probability-based national surveys. To our knowledge, this is the first use of online panel data incorporating multiple best practices to produce nationally valid estimates regarding drug use.


    Methods

    Survey Overview

    The Researched Abuse, Diversion, and Addiction-Related Surveillance (RADARS) System comprises multiple data sources that characterize and monitor drug use [23]. The goal of the Survey of Non-Medical Use of Prescription Drugs (NMURx) Program described here is to provide accurate and timely estimates of prescription drug nonmedical use (NMU) and associated motivations and behaviors in the adult general US population. The NMURx Program employs a cross-sectional, opt-in online self-administered questionnaire drawn from a commercial survey panel. Respondents’ personal information is kept confidential by the survey administrators; personally identifiable information is not collected on the questionnaire, and information held by the survey administrators is not available to the researchers. Following best practices for implementation of online questionnaires [19], three methodological practices are described: reduction of order effect bias by randomization of question order, exclusion criteria based on careless/improbable responses, and calibration weighting for generalizability.

    Questionnaire Development

    The main body of the questionnaire covered motivations and behaviors surrounding prescription drug use of four prescription drug classes (pain relievers, sedatives, stimulants, and cannabinoids), documenting lifetime and last 12-month NMU. NMU of prescription drugs was defined as use “in a way not directed by your health care provider.” Examples of the questionnaire questions can be found in Multimedia Appendix 1, Section A, followed by a list of drug classes and substances included in the questionnaire (Multimedia Appendix 1, Section B). Additional sections in the survey (some not included in this analysis) are: demographics, Drug Abuse Screening Test (DAST-10) measuring severity of problematic drug use [24], motivations and drug use behavior (eg, reasons for use, route of administration, source of acquisition), and health status (eg, substance use disorder treatment history, mental health disorders). Skip logic was used to minimize the number of questions a respondent was required to answer, with focus on preventing motivated underreporting [18]. Due to the large number of drugs included on the questionnaire and the follow-up questions on behaviors, motivated underreporting was of particular concern. The Checklist for Reporting Results of Internet E-Surveys (CHERRIES) is provided for further details on survey development and implementation (Multimedia Appendix 1, Section C).

    Two strategies were used to reduce order effect bias. First, the order of drug classes was randomized, followed by order of substances within each drug class. Block randomization kept together similar substances (eg, all pain relievers), with the order consistently maintained throughout the survey sections. Second, respondents were forced to provide product-specific answers for last 12-month NMU of specific drug products that had been endorsed at the class level. This was intended to improve internal validity and further reduce order effect bias [19].
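    The two-level randomization described above can be sketched as follows. This is a minimal illustration with hypothetical class and substance names, not the production survey code:

```python
import random

# Sketch of block randomization: drug classes are shuffled, then substances
# are shuffled within each class, so similar substances stay grouped together.
def randomized_order(classes):
    """classes: {class_name: [substances]} -> flat presentation order."""
    class_names = list(classes)
    random.shuffle(class_names)
    order = []
    for name in class_names:
        substances = classes[name][:]  # copy so the input is not mutated
        random.shuffle(substances)
        order.extend((name, s) for s in substances)
    return order

# Hypothetical example; the real questionnaire covered 4 classes and 42 drugs.
example = {"pain relievers": ["drug A", "drug B"], "stimulants": ["drug C"]}
presentation = randomized_order(example)
```

    Each respondent receives an independent draw, so any position effect of a given item averages out across the sample.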

    Participant Sampling

    The survey was open from September 28 through November 21, 2018. To be eligible for the survey, respondents must have been aged 18 years or older, and they must not have completed a NMURx Program survey in the same calendar year, excluding a small number of potential respondents who participated in pilot surveys. The panel company recruited panelists from the US population without specific consideration for the NMURx Program survey; selection into the panel was nonprobability-based and was a self-selected population of people who take surveys for compensation. The panel company employs evolving proprietary techniques to ensure panelists are providing reliable responses, with inactive or fraudulent accounts culled regularly. The panel company calculated each panelist’s expected response likelihood based on recent activity. These probabilities were used to select a random sample of panelists expected to yield 30,000 completed questionnaires. The email invitation did not include information about the survey topic to minimize selection bias; the topic was provided once a panelist opened the link during the consenting process.

    To reduce the possibility of extreme analytical weights, 8-stratum sampling quotas were devised proportionally based on the adult residential population from the American Community Survey (ACS) [25], stratified by male/female gender for four Census regions (Northeast, Midwest, South, and West). Based on pilot experience, each stratum was allowed an acceptable range of –25% to +10% around its proportional number of surveys. Once all quotas reached their minimum and at least 30,000 questionnaires were completed, the survey link was closed. Survey respondents were compensated roughly US $1.
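    The quota arithmetic can be sketched as follows; the population shares here are hypothetical stand-ins for the ACS-derived proportions:

```python
# Illustrative sketch of the 8-stratum quota calculation (gender x Census
# region) with the paper's -25%/+10% acceptance range. Shares are made up;
# the real values come from the American Community Survey.
TARGET_COMPLETES = 30_000

acs_shares = {  # hypothetical adult population shares per stratum
    ("female", "Northeast"): 0.09, ("male", "Northeast"): 0.085,
    ("female", "Midwest"):   0.11, ("male", "Midwest"):   0.10,
    ("female", "South"):     0.20, ("male", "South"):     0.185,
    ("female", "West"):      0.12, ("male", "West"):      0.11,
}

def quota_bounds(share, target=TARGET_COMPLETES):
    """Proportional quota plus its -25% floor and +10% ceiling."""
    quota = round(share * target)
    return quota, round(quota * 0.75), round(quota * 1.10)

for stratum, share in acs_shares.items():
    quota, lo, hi = quota_bounds(share)
    print(f"{stratum}: target={quota}, accept {lo}-{hi}")
```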

    Exclusion Criteria Assessment

    Even with programmable internal data consistency checks, an outstanding concern after survey administration was the identification of completed questionnaires exhibiting careless or improbable response patterns (eg, endorsing all drugs at biologically improbable frequency). Methods used to exclude responses were adapted from the literature [13,20] to generate exclusion criteria that were validated against other questionnaire elements.

    Based on previous implementation experience and a literature review of consumer product surveys, seven different metrics were investigated as possible exclusion criteria using multiple hurdles [13], and four were chosen based on performance: (1) consecutive positive use endorsements of up to 42 prescription drugs based on the LongString method [20], (2) alternating patterns of yes/no endorsement of prescription drugs based on the even-odd consistency method [20] with Pearson correlations no more negative than –0.6, (3) alternating patterns of illicitly manufactured drugs with Pearson correlations no more negative than –0.8 for fewer drugs, and (4) total number of specific products endorsed for NMU in the last 12 months via modified outlier analysis (out of 298 possible, most respondents endorsed only a handful) [13]. Completion time of 8 introductory questions in less than 16 seconds [13] did not provide additional discriminatory value (data not shown).
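    As an illustration, the first two screens can be sketched in a few lines. This is a simplified reading of the LongString and even-odd consistency methods [13,20], assuming binary Yes=1/No=0 item responses; the production scoring details may differ:

```python
def longest_run(responses, value=1):
    """LongString metric: longest unbroken run of identical endorsements
    (eg, a long string of "Yes" answers across the drug battery)."""
    best = run = 0
    for r in responses:
        run = run + 1 if r == value else 0
        best = max(best, run)
    return best

def pearson(x, y):
    """Plain Pearson correlation; returns 0.0 when either input is constant."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return 0.0 if vx == 0 or vy == 0 else cov / (vx * vy) ** 0.5

def even_odd_consistency(class_item_responses):
    """Even-odd consistency: within each drug class, average the even- and
    odd-positioned items, then correlate the half-scores across classes.
    Mechanical yes/no alternation produces strongly negative values."""
    evens = [sum(c[0::2]) / max(len(c[0::2]), 1) for c in class_item_responses]
    odds = [sum(c[1::2]) / max(len(c[1::2]), 1) for c in class_item_responses]
    return pearson(evens, odds)
```

    A respondent is flagged when `longest_run` exceeds the chosen cut point or when `even_odd_consistency` falls below the negative-correlation cutoff.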

    Since no gold standard was available for validation of the critical lifetime and 12-month prevalence questions, three internal consistency metrics were developed. Derived from other survey sections, these metrics provided biologically plausible support for responses to NMU: (1) survey response time for lifetime prescription drug use was a proxy for completion speed; (2) Mahalanobis distances were calculated on lifetime prescription, nonprescription, and illicitly manufactured drug use responses, representing deviance compared to the entire sample [13]; and (3) proportion of contradictory answers was calculated. For example, respondents were asked the time frame in which they had first initiated NMU and when they most recently nonmedically used, and the skip logic of the questionnaire allows for contradictory answers. Cut points were identified based on established theories (described in Results), visual inspection, and correlation coefficients [26].
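    For the second internal consistency metric, the Mahalanobis distance measures how far a respondent's response profile sits from the sample centroid after accounting for covariance. A self-contained two-dimensional sketch follows; the actual calculation used the full set of lifetime use responses rather than two summary scores:

```python
# Sketch of a Mahalanobis-distance check on two summary scores per respondent
# (eg, counts of prescription vs illicitly manufactured drug endorsements).
# Pure-Python 2-dimensional case for clarity.

def mahalanobis_2d(point, data):
    """Distance of `point` from the centroid of `data`, scaled by the
    sample covariance matrix (2x2 closed-form inverse)."""
    n = len(data)
    mx = sum(p[0] for p in data) / n
    my = sum(p[1] for p in data) / n
    sxx = sum((p[0] - mx) ** 2 for p in data) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in data) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in data) / (n - 1)
    det = sxx * syy - sxy * sxy
    ixx, iyy, ixy = syy / det, sxx / det, -sxy / det  # inverse covariance
    dx, dy = point[0] - mx, point[1] - my
    return (dx * dx * ixx + 2 * dx * dy * ixy + dy * dy * iyy) ** 0.5
```

    Large distances identify response profiles that deviate strongly from the rest of the sample and merit review.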

    To evaluate internal consistency, cut points were evaluated against demographics, drug use behaviors, and overall drug endorsements. To evaluate external consistency for careless responses, we compared relative endorsements to national opioid dispensing data from the US-based Longitudinal Patient Databases (IQVIA Inc), a standard source that provides estimates of dosage units dispensed in retail pharmacies. Since low-volume drugs should result in fewer endorsements, we hypothesized that careless responses would be roughly proportional to dispensing (using Spearman correlation), and excluded responses would account for a larger proportion of low-volume drugs.

    Calibration Weighting for National Estimates

    A calibration weighting scheme was developed to generate national estimates for the adult population. The goal of the weighting scheme was to reduce the bias in estimates resulting from the self-selection of survey panelists by forcing the distribution of our sample to look similar to national estimates across demographics and health-related variables. Generalized raking using auxiliary information with incomplete stratification was selected as the method of calibration weighting [21,22] because raking has been shown to be equivalent or superior to propensity score methods or sample matching in reducing bias [27,28]. Briefly, this method matches the marginal distributions of each variable in the sample to the marginal distribution from the population by iteratively adjusting the base weights. The base weight (wb) was calculated where N is the adult population in 2017 (N=252,063,800) and n is the sample size (Figure 1).

    Figure 1. Base weight equation: wb = N / n.

    The analytical weight was calculated using established procedures [29]. Maximum tolerance was 0.1 percentage points; convergence occurred at a tolerance of 0.1 weighted frequency. The national marginal values were obtained from two 2017 probability-based national surveys, ACS and National Health Interview Survey (NHIS) [2,25].
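    Generalized raking can be sketched as iterative proportional fitting over the marginal totals. This is a simplified illustration with hypothetical margins; the production weighting used ACS/NHIS margins and established procedures [29]:

```python
# Minimal raking sketch: each respondent starts at the base weight N/n, and
# weights are iteratively rescaled until the weighted margins of every
# weighting variable match the population margins within tolerance.

def rake(respondents, margins, base_weight, tol=0.1, max_iter=100):
    """respondents: list of dicts mapping variable -> category.
    margins: {variable: {category: population total}}.
    Returns calibrated weights, one per respondent."""
    weights = [base_weight] * len(respondents)
    for _ in range(max_iter):
        worst = 0.0  # largest gap between sample and target margins
        for var, targets in margins.items():
            totals = {c: 0.0 for c in targets}
            for r, w in zip(respondents, weights):
                totals[r[var]] += w
            worst = max(worst, max(abs(totals[c] - targets[c]) for c in targets))
            # Scale weights so this variable's margin matches its target
            for i, r in enumerate(respondents):
                weights[i] *= targets[r[var]] / totals[r[var]]
        if worst < tol:  # all margins within tolerance before adjustment
            break
    return weights
```

    With hypothetical margins such as `{"sex": {"m": 60, "f": 40}, "region": {"N": 50, "S": 50}}`, the loop repeatedly matches one margin at a time and converges once no adjustment exceeds the tolerance.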

    Eight potential weighting variables from ACS and NHIS were selected based on associations with three lifetime measures also appearing in our questionnaire: any illicit drug use, any prescription pain reliever NMU, and any prescription NMU. Three demographic variables (age, sex, Census region of residence) and two household characteristics (household income and number of people in the home) were derived from ACS. Three health-related characteristics (self-assessment of general health, limitations in daily activities, and smoking tobacco) came from NHIS. To match basic demographics of the adult population, the three demographic variables were included in every model. Raking was tested against all remaining combinations (33 possible schemes).

    To evaluate the 33 possible schemes, 26 benchmark national estimates were compared between the NMURx Program and four national surveys: ACS, NHIS, NSDUH, and National Health and Nutrition Examination Survey [3,4]. The absolute value of the relative difference (Di) for the ith benchmark between the NMURx Program estimate (pi) and the national survey estimate (πi) were calculated (Figure 2, equation 2). These were averaged across the 26 benchmark estimates (Figure 2, equation 3, where bn is the number of benchmarks).

    Figure 2. Relative difference equations: Di = |(pi – πi) / πi| (equation 2) and average relative difference = (1/bn) Σ Di (equation 3).

    The average relative difference in estimates across the weighting schemes for health-related benchmark estimates (overnight stay in a hospital, pain reliever use, illicit drug use, and alcohol use) compared with nonhealth-related benchmarks (race/ethnicity, marital status, education, employment, and insurance status) was evaluated. Final analytical weights represented the number of adults that a survey respondent would represent in the United States, generating national prevalence estimates, with 95% confidence intervals using variance estimation through Taylor series linearization [30].

    Ethics Review

    The protocol and survey instrument were reviewed and approved by the Colorado Multiple Institutional Review Board; a certificate of exemption was granted on July 5, 2016.


    Results

    Participant Recruitment

    There were 148,274 invitations sent to panelists, and 40,021 (26.99%) people initiated the survey. After eligibility assessment, 74.96% (29,998/40,021) of the completed questionnaires were available for analysis (Figure 3). After careless responses were removed, the final participation rate was 20.13% (29,841/148,274). Of 910 3-digit zip codes in the United States, 883 had at least one respondent. An order effect was present: the likelihood of endorsement of individual active pharmaceutical ingredients was associated with the position in which the item was presented in the questionnaire (Multimedia Appendix 1, Section D).

    Figure 3. Flow diagram of respondents invited to the survey.

    Evaluations of Exclusion Criteria

    There were 157 respondents (0.52% of the sample) excluded based on careless or improbable responses (Multimedia Appendix 1, Section E). The final sample had a median completion time of 10 minutes, 40 seconds. For criterion 1, only 27 responses were identified in which over half of the 42 drug use questions were consecutively endorsed (Table 1). Consistent with recommendations [13], this cut point requires a respondent to endorse at least two separate drug classes in an unbroken string of “Yes” responses to be excluded. Criteria 2 and 3 for alternating responses resulted in 33 and 17 surveys being excluded, respectively. For criterion 4 on total drug endorsements, 91.01% (27,301/29,998) did not endorse any NMU in the last 12 months. Among those endorsing at least one product, the median number of products endorsed was 3 (interquartile range 1 to 7) of 298 possible. Given the highly skewed distribution, visual inspection of Mahalanobis distance, question completion time, and contradictory answers was used to select endorsement of 35 products in the last 12 months as a conservative cut point (Multimedia Appendix 1, Section F), resulting in 96 responses excluded. There was very little overlap across criteria: 6 respondents were identified by both criterion 1 and criterion 4, and 10 respondents were identified by both criterion 2 and criterion 3.

    Table 1 demonstrates that respondents excluded by each criterion also exhibited other behaviors indicative of careless response. Excluded respondents answered questions more quickly and had greater mean Mahalanobis distances, and the proportion of excluded respondents who provided at least one contradictory answer was far higher than among included respondents (67/157, 42.7% vs 224/29,841, 0.75%). Excluded respondents more frequently reported being male, younger, and Hispanic compared with respondents who were not excluded, although statistical tests of differences were not conducted.

    While a small proportion of responses were excluded (157/29,998, 0.52%), these exclusions had a larger impact on unweighted endorsements of lower volume active pharmaceutical ingredients (Table 2). The relative percentage decrease in responses endorsing NMU of opioid ingredients was negatively correlated to dispensing volume (Spearman ρ=–.88, P<.001), confirming our hypothesis that misclassification due to careless response would have a greater impact on low-volume products.
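    The external consistency check above can be illustrated with a pure-Python Spearman rank correlation. The dispensing and decrease values below are made up for illustration; the real values came from IQVIA dispensing data:

```python
# Spearman rank correlation without ties, sketching the check that the
# relative decrease in endorsements after exclusions runs inversely with
# national dispensing volume.

def ranks(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r  # note: no tie handling in this sketch

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

dispensing = [5e8, 2e8, 9e7, 1e7]    # hypothetical dosage units per drug
pct_decrease = [0.1, 0.3, 0.8, 2.5]  # hypothetical % decrease after exclusion
# Perfectly inverse ranks give rho = -1, mirroring the reported rho = -.88
```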

    Table 1. Respondent characteristics of excluded and included respondents.
    Table 2. Relative decrease in prescription drug nonmedical use endorsements after exclusions compared with drug availability.

    Selection of Weighting Scheme

    The remaining 29,841 surveys were used for calibration weighting. The unweighted NMURx Program data had an average relative difference of 36.1% from the benchmark estimates (Table 3). Across the 33 weighting schemes, the addition of variables resulted in decreases in the average relative difference while relative standard error increased (Multimedia Appendix 1, Section G). Little variation in average relative difference was observed among nonhealth-related benchmarks; however, there were large changes among health-related benchmarks (Figure 4). Five-variable weighting schemes appeared to maximize the reduction in relative difference with only minor increases in relative standard error, resulting in selection of a scheme including age, gender, region, limitation in daily activities, and tobacco use. This scheme had a 31.2% reduction in average relative difference compared with unweighted data, resulting in 381 unique weights, none of which appeared extreme. The median weight was 7782.2 (interquartile range 4690.2 to 12,662.9).

    Table 3. Relative difference in benchmark estimates.
    Figure 4. Average relative difference in nonhealth-related (A) and health-related (B) benchmarks with calibration weighting is shown for the 33 weighting schemes.

    External Validation Results

    After weighting, the NMURx Program estimates were closely aligned with the national benchmarks on demographic and other characteristics (Table 4). The unweighted sample had an overrepresentation of older adults, males, higher education, and lower household incomes, possibly a reflection of internet panel participants in general. The health profile of the sample was more similar to the national estimates after weighting. Proportions of good or excellent self-assessed health status and private insurance coverage increased, while DAST-10 scores and the estimated proportion with chronic pain in the last year decreased. After weighting, NMU of any pain reliever decreased from 7.8% to 6.2%, sedatives decreased from 4.5% to 3.4%, and stimulants decreased from 2.4% to 2.0%. In addition, when comparing drug use and health indicator estimates across multiple probability surveys with similar questions, there was variation in estimates across probability-based national surveys, and NMURx Program weighted estimates fell within similar ranges to the national surveys (Figure 5). Between national surveys, estimates of sex, age, and race were similar and confidence intervals generally overlapped; estimates of education varied slightly, and in many cases confidence intervals did not overlap (Multimedia Appendix 1, Section H).

    Table 4. Characteristics and national estimates before and after weighting.
    Figure 5. Comparison of estimates available across national surveys. NMURx: Survey of Non-Medical Use of Prescription Drugs; NSDUH: National Survey on Drug Use and Health; NHIS: National Health Interview Survey; NHANES: National Health and Nutrition Examination Survey.

    Discussion

    Principal Findings

    While use of internet-based questionnaires for epidemiologic research has been previously described [31-33], our study illustrates a new approach to using nonprobability online panels to achieve national prevalence estimates for drug abuse. We were able to overcome challenges with using nonprobability internet samples [17,27,34,35], including misclassification due to careless or improbable responses. External validity of reweighted survey data demonstrated concurrent performance compared with large national probability surveys on demographics, health indicators, and drug use.

    The value of internet samples is increasingly recognized [7,27,35,36], and our approach has strengths that may be relevant to drug use surveillance. Using calibration weights derived from two independent probability-sampled studies provided a hedge against overfitting [28,37]. The survey was fielded over the course of 8 weeks collecting at least 30,000 unique responses at a fraction of the cost of national probability samples. The entire process from fielding the survey to national estimates takes about 6 weeks. The ability to rapidly and inexpensively add new drugs to the survey is a considerable benefit against the background of the opioid crisis, which has evolved into its third phase, characterized by heroin-fentanyl deaths [38]. Beyond opioids, new drugs of abuse are being documented (eg, tianeptine, kratom) [39,40], while problematic drugs of the past are resurging (eg, methamphetamine, cocaine) [41]. Noncontrolled (nonscheduled) prescription drugs with abuse potential such as antidepressants [42], anticonvulsants [43], and novel psychoactive substances are not currently tracked on national probability surveys but could easily be added to online questionnaires. Emerging novel routes of administration (eg, intra-arterial injection), fluctuations in infectious disease risk factors, and uptake of harm reduction strategies could be queried in-depth. Our results also suggest that randomization is useful in mitigating order effects on surveys and skip logic is required to prevent motivated underreporting, neither of which is common practice yet on many national surveys. The method presented here cannot replace traditional probability-based surveys; in fact, it intentionally relies on those surveys to create optimized estimates. But this method can bridge the information gap when there is a need for prompt, accurate national data.

    Limitations

    Ostensibly, the online-only setting creates the perception of anonymity between the respondent and researcher and reduces interviewer bias, but the role of the panel company as an intermediary and fears of data breaches may exert selection bias. There are putative gaps in the sampling frame since not all US adults use the internet. In terms of precision, 95% confidence intervals do not represent true 95% coverage probabilities because the exact selection probability from the sampling frame into the sample is not known, limiting statistical inferences within a purely frequentist context. Rather, the confidence intervals demonstrate precision of the estimates within the sampling framework, and inferences are useful when combined with an understanding of how the sample was obtained and weighted. Finally, a nonresponse adjustment was not included in this method. A drawback of using online panels is that information on nonresponding panelists is not available, and future extension of this work will be to obtain sufficient information in other ways to address this.

    Conclusions

    We describe a practical approach to providing a timely perspective on drug abuse in the United States, with results obtained within 6 weeks of questionnaire deployment. The approach presented mitigates many valid concerns about the use of nonprobability internet panels and could be of use to other subject domains.

    Acknowledgments

    The authors thank Dr Lori A Crane from the University of Colorado, Colorado School of Public Health, for her consultation on question development and questionnaire structure and Joshua Swanson for tremendous work organizing the questionnaire. This work was supported by the RADARS System, which is supported by subscriptions from pharmaceutical manufacturers and government and nongovernment agencies for surveillance, research, and reporting services. RADARS System is the property of Denver Health and Hospital Authority, a political subdivision of the State of Colorado. Denver Health retains exclusive ownership of all data, databases, and systems. No subscriber participated in the conception, analysis, drafting, or review of this manuscript. All authors contributed equally to the work.

    Conflicts of Interest

    Authors were employed by Denver Health and Hospital Authority during this work.

    Multimedia Appendix 1

    Supplemental methodological information and supporting results.


    References

    1. Reports and detailed tables from the 2017 National Survey on Drug Use and Health (NSDUH). Rockville: Substance Abuse and Mental Health Services Administration; 2018.   URL: https://www.samhsa.gov/data/nsduh/reports-detailed-tables-2017-NSDUH [accessed 2019-09-29]
    2. About the National Health Interview Survey. Hyattsville: National Center for Health Statistics   URL: https://www.cdc.gov/nchs/nhis/about_nhis.htm [accessed 2019-09-29]
    3. 2017 National Survey on Drug Use and Health: public use file codebook. Rockville: Substance Abuse and Mental Health Services Administration, Center for Behavioral Health Statistics and Quality   URL: http://samhda.s3-us-gov-west-1.amazonaws.com/s3fs-public/field-uploads-protected/studies/NSDUH-2017/NSDUH-2017-datasets/NSDUH-2017-DS0001/NSDUH-2017-DS0001-info/NSDUH-2017-DS0001-info-codebook.pdf [accessed 2019-09-29]
    4. Zipf G, Chiappa M, Porter KS, Ostchega Y, Lewis BG, Dostal J. National Health and Nutrition Examination Survey: plan and operations, 1999-2010. Vital Health Stat 1 2013 Aug(56):1-37 [FREE Full text] [Medline]
    5. Metzger DS, Koblin B, Turner C, Navaline H, Valenti F, Holte S, et al. Randomized controlled trial of audio computer-assisted self-interviewing: utility and acceptability in longitudinal studies. HIVNET Vaccine Preparedness Study Protocol Team. Am J Epidemiol 2000 Jul 15;152(2):99-106. [CrossRef] [Medline]
    6. Newman JC, Des Jarlais DC, Turner CF, Gribble J, Cooley P, Paone D. The differential effects of face-to-face and computer interview modes. Am J Public Health 2002 Feb;92(2):294-297. [Medline]
    7. van Gelder MMHJ, Bretveld RW, Roeleveld N. Web-based questionnaires: the future in epidemiology? Am J Epidemiol 2010 Dec 1;172(11):1292-1298 [FREE Full text] [CrossRef] [Medline]
    8. Barber J, Kusunoki Y, Gatny H, Schulz P. Participation in an intensive longitudinal study with weekly web surveys over 2.5 years. J Med Internet Res 2016 Jun 23;18(6):e105 [FREE Full text] [CrossRef] [Medline]
    9. Erens B, Burkill S, Couper MP, Conrad F, Clifton S, Tanton C, et al. Nonprobability Web surveys to measure sexual behaviors and attitudes in the general population: a comparison with a probability sample interview survey. J Med Internet Res 2014 Dec 08;16(12):e276 [FREE Full text] [CrossRef] [Medline]
    10. Iachan R, Berman L, Kyle TM, Martin KJ, Deng Y, Moyse DN, et al. Weighting nonprobability and probability sample surveys in describing cancer catchment areas. Cancer Epidemiol Biomarkers Prev 2019 Mar;28(3):471-477. [CrossRef] [Medline]
    11. Miller EA, Berman L, Atienza A, Middleton D, Iachan R, Tortora R, et al. A feasibility study on using an internet-panel survey to measure perceptions of e-cigarettes in 3 metropolitan areas, 2015. Public Health Rep 2017;132(3):336-342 [FREE Full text] [CrossRef] [Medline]
    12. Anderson M, Perrin A, Jiang J, Kumar M. 10% of Americans don't use the internet. Who are they?. Washington: Pew Internet and American Life Project; 2019 Apr 22.   URL: https://www.pewresearch.org/fact-tank/2019/04/22/some-americans-dont-use-the-internet-who-are-they/ [accessed 2019-09-29]
    13. Curran PG. Methods for the detection of carelessly invalid responses in survey data. J Exp Soc Psychol 2016 Sep;66:4-19. [CrossRef]
    14. Craig BM, Hays RD, Pickard AS, Cella D, Revicki DA, Reeve BB. Comparison of US panel vendors for online surveys. J Med Internet Res 2013 Nov 29;15(11):e260 [FREE Full text] [CrossRef] [Medline]
    15. Roster CA, Rogers RD, Albaum G, Klein D. A comparison of response characteristics from web and telephone surveys. Int J Market Res 2018 Jan 30;46(3):359-373. [CrossRef]
    16. Leece P, Bhandari M, Sprague S, Swiontkowski MF, Schemitsch EH, Tornetta P, et al. Internet versus mailed questionnaires: a randomized comparison (2). J Med Internet Res 2004 Sep 24;6(3):e30 [FREE Full text] [CrossRef] [Medline]
    17. Yeager DS, Krosnick JA, Chang L, Javitz HS, Levendusky MS, Simpser A, et al. Comparing the accuracy of RDD telephone surveys and internet surveys conducted with probability and non-probability samples. Pub Opin Q 2011 Oct 05;75(4):709-747. [CrossRef]
    18. Kreuter F, McCulloch S, Presser S, Tourangeau R. The effects of asking filter questions in interleafed versus grouped format. Sociolog Methods Res 2011 Jan 11;40(1):88-104. [CrossRef]
    19. Dillman D, Smyth J, Christian L. Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method. 4th Edition. Hoboken: John Wiley & Sons; 2014.
    20. Meade AW, Craig SB. Identifying careless responses in survey data. Psychol Methods 2012 Sep;17(3):437-455. [CrossRef] [Medline]
    21. Deming WE, Stephan FF. On a least squares adjustment of a sampled frequency table when the expected marginal totals are known. Ann Math Statist 1940 Dec;11(4):427-444. [CrossRef]
    22. Deville J, Sarndal C, Sautory O. Generalized raking procedures in survey sampling. J Am Statist Assoc 1993 Sep;88(423):1013. [CrossRef]
    23. Dart RC, Surratt HL, Cicero TJ, Parrino MW, Severtson SG, Bucher-Bartelson B, et al. Trends in opioid analgesic abuse and mortality in the United States. N Engl J Med 2015 Jan 15;372(3):241-248. [CrossRef] [Medline]
    24. McCabe SE, Boyd CJ, Cranford JA, Morales M, Slayden J. A modified version of the Drug Abuse Screening Test among undergraduate students. J Subst Abuse Treat 2006 Oct;31(3):297-303 [FREE Full text] [CrossRef] [Medline]
    25. American Community Survey: 2014 Design and Methodology Report. Washington: US Census Bureau   URL: http://www2.census.gov/programs-surveys/acs/methodology/design_and_methodology/acs_design_methodology_report_2014.pdf [accessed 2019-09-29]
    26. Davenport E, El-Sanhurry N. Phi/Phimax: review and synthesis. Educ Psychol Meas 1991;51(4):1. [CrossRef]
    27. Dutwin D, Buskirk T. Apples to oranges or gala versus golden delicious? Pub Opin Q 2017;81(S1):213-239. [CrossRef]
    28. Kennedy C, Mercer A, Keeter S, Hatley N, McGeeney K, Gimenez A. Evaluating online nonprobability surveys. Washington: Pew Internet and American Life Project; 2016 May 02.   URL: https://www.pewresearch.org/wp-content/uploads/2016/04/Nonprobability-report-May-2016-FINAL.pdf [accessed 2019-09-29]
    29. Izrael D, Hoaglin D, Battaglia M. A SAS macro for balancing a weighted sample. 2000.   URL: https://support.sas.com/resources/papers/proceedings/proceedings/sugi25/25/st/25p258.pdf [accessed 2019-09-29]
    30. Lohr S. Sampling: Design and Analysis. 2nd Edition. Boston: Cengage; 2009.
    31. Andreeva VA, Deschamps V, Salanave B, Castetbon K, Verdot C, Kesse-Guyot E, et al. Comparison of dietary intakes between a large online cohort study (Etude NutriNet-Santé) and a nationally representative cross-sectional study (Etude Nationale Nutrition Santé) in France: addressing the issue of generalizability in e-epidemiology. Am J Epidemiol 2016 Nov 01;184(9):660-669 [FREE Full text] [CrossRef] [Medline]
    32. De Vera MA, Ratzlaff C, Doerfling P, Kopec J. Reliability and validity of an internet-based questionnaire measuring lifetime physical activity. Am J Epidemiol 2010 Nov 15;172(10):1190-1198 [FREE Full text] [CrossRef] [Medline]
    33. Russell CW, Boggs DA, Palmer JR, Rosenberg L. Use of a web-based questionnaire in the Black Women's Health Study. Am J Epidemiol 2010 Dec 1;172(11):1286-1291 [FREE Full text] [CrossRef] [Medline]
    34. Greenacre ZA. The importance of selection bias in internet surveys. Open J Statist 2016;06(03):397-404. [CrossRef]
    35. Tourangeau R, Conrad FG, Couper MP. The Science of Web Surveys. Oxford: Oxford University Press; 2013.
    36. Elliott MR, Valliant R. Inference for nonprobability samples. Statist Sci 2017 May;32(2):249-264. [CrossRef]
    37. Schonlau M. Will web surveys ever become part of mainstream research? J Med Internet Res 2004 Sep 23;6(3):e31 [FREE Full text] [CrossRef] [Medline]
    38. Dasgupta N, Beletsky L, Ciccarone D. Opioid crisis: no easy fix to its social and economic determinants. Am J Public Health 2018 Feb;108(2):182-186. [CrossRef] [Medline]
    39. Olsen EO, O'Donnell J, Mattson CL, Schier JG, Wilson N. Notes from the field: unintentional drug overdose deaths with kratom detected—27 states, July 2016–December 2017. MMWR Morb Mortal Wkly Rep 2019 Apr 12;68(14):326-327 [FREE Full text] [CrossRef] [Medline]
    40. El Zahran T, Schier J, Glidden E, Kieszak S, Law R, Bottei E, et al. Characteristics of tianeptine exposures reported to the National Poison Data System—United States, 2000-2017. MMWR Morb Mortal Wkly Rep 2018 Aug 03;67(30):815-818 [FREE Full text] [CrossRef] [Medline]
    41. Kariisa M, Scholl L, Wilson N, Seth P, Hoots B. Drug overdose deaths involving cocaine and psychostimulants with abuse potential—United States, 2003-2017. MMWR Morb Mortal Wkly Rep 2019 May 03;68(17):388-395 [FREE Full text] [CrossRef] [Medline]
    42. Evans EA, Sullivan MA. Abuse and misuse of antidepressants. Subst Abuse Rehabil 2014;5:107-120 [FREE Full text] [CrossRef] [Medline]
    43. Hawkins KL, Gidal BE. When adverse effects are seen as desirable: abuse potential of the newer generation antiepileptic drugs. Epilepsy Behav 2017 Dec;77:62-72. [CrossRef] [Medline]


    Abbreviations

    ACS: American Community Survey
    CHERRIES: Checklist for Reporting Results of Internet E-Surveys
    DAST-10: 10-item Drug Abuse Screening Test
    NHIS: National Health Interview Survey
    NMU: nonmedical use
    NMURx: Survey of Non-Medical Use of Prescription Drugs
    NSDUH: National Survey on Drug Use and Health
    RADARS: Researched Abuse, Diversion and Addiction-Related Surveillance


    Edited by G Eysenbach; submitted 13.08.19; peer-reviewed by X Chen, A Dregan; comments to author 05.09.19; revised version received 11.09.19; accepted 18.09.19; published 25.10.19

    ©Joshua Curtis Black, Karilynn Rockhill, Alyssa Forber, Elise Amioka, K Patrick May, Colleen M Haynes, Nabarun Dasgupta, Richard C Dart. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 25.10.2019.

    This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.