Published on 12.05.14 in Vol 16, No 5 (2014): May
Pain-QuILT: Clinical Feasibility of a Web-Based Visual Pain Assessment Tool in Adults With Chronic Pain
Background: Chronic pain is a prevalent and debilitating problem. Accurate and timely pain assessment is critical to pain management. In particular, pain needs to be consistently tracked over time in order to gauge the effectiveness of different treatments. In current clinical practice, paper-based questionnaires are the norm for pain assessment. However, these methods are not conducive to capturing or tracking the complex sensations of chronic pain. Pain-QuILT (previously called the Iconic Pain Assessment Tool) is a Web-based tool for the visual self-report and tracking of pain (quality, intensity, location, tracker) in the form of time-stamped records. It has been iteratively developed and evaluated in adolescents and adults with chronic pain, including usability testing and content validation.
Clinical feasibility is an important stepping-stone toward widespread implementation of a new tool. Our group has demonstrated Pain-QuILT clinical feasibility in the context of a pediatric chronic pain clinic. We sought to extend these findings by evaluating Pain-QuILT clinical feasibility from the perspective of adults with chronic pain, in comparison with standard paper-based methods (McGill Pain Questionnaire [MPQ] and Brief Pain Inventory [BPI]).
Objective: The goals of our study were to assess Pain-QuILT for (1) ease of use, (2) time for completion, and (3) patient preferences, and (4) to explore the patterns of self-reported pain across the Pain-QuILT, MPQ, and BPI.
Methods: Participants were recruited during a scheduled follow-up visit at a hospital-affiliated pain management and physical rehabilitation clinic in southwestern Ontario. Participants self-reported their current pain using the Pain-QuILT, MPQ, and BPI (randomized order). A semistructured interview format was used to capture participant preferences for pain self-report.
Results: The sample consisted of 50 adults (54% female, 27/50) with a mean age of 50 years. Pain-QuILT was rated as significantly easier to use than both the MPQ and BPI (P<.01) and was also associated with the fewest difficulties in completion. On average, the time to complete each tool was less than 5 minutes. A majority of participants (58%, 29/50) preferred Pain-QuILT for reporting their pain over alternate methods (16%, 8/50 for MPQ; 14%, 7/50 for BPI; 12%, 6/50 for “other”). The most commonly chosen pain descriptors on MPQ were matched with Pain-QuILT across 91% of categories. There was a moderate-to-high correlation between Pain-QuILT and BPI scores for pain intensity (r=.70, P<.01).
Conclusions: The results of this clinical feasibility study in adults with chronic pain are consistent with our previously published pediatric findings. Specifically, data indicate that Pain-QuILT is (1) easy to use, (2) quick to complete, (3) preferred by a majority of patients, and (4) correlated as expected with validated pain measures. As a digital, patient-friendly method of assessing and tracking pain, we conclude that Pain-QuILT has potential to add significant value as one standard component of chronic pain management.
J Med Internet Res 2014;16(5):e127
Chronic pain, defined as pain that persists beyond the normal time of healing, is a prevalent and debilitating problem that is now recognized as a disease. Common types of chronic pain include low back, headache, abdominal, musculoskeletal, and neuropathic pain. Pain is a complex sensory and emotional phenomenon that, while intensely experienced, is often difficult to communicate.
Accurate and timely pain assessment is critical to developing and monitoring a pain management plan. Given that there is no medical test to directly measure pain, health care providers rely primarily on patient self-report, including pain quality (what it feels like), intensity (how much it hurts), location (spatial distribution), and temporal nature (how it changes over time). Assessment of pain quality and location is particularly important because this information can be used to distinguish between different diagnostic subgroups (eg, neuropathic versus non-neuropathic pain).
Chronic pain management often takes place across multiple settings (eg, hospitals, clinics) and involves numerous health care providers, including physicians, nurses, physiotherapists, chiropractors, and psychologists. Pain outcomes need to be consistently tracked over time in order to gauge the effectiveness of different management strategies, including physical, psychological, and pharmacological approaches. However, there is often a lack of consistency in the assessment of pain across these different settings and providers. One reason for this lack of consistency is the standard use of paper-based assessment tools, which are not conducive to tracking pain over time. Commonly used paper-based tools include the McGill Pain Questionnaire (MPQ) and the Brief Pain Inventory (BPI). However, there is limited research on the serial use of these measures in clinical pain assessment.
The emergence of Internet and mobile technology has created opportunities for innovation in the field of pain assessment and management. For example, electronic pain diaries offer advantages such as ease of data tracking, improved patient compliance, and capture of real-time pain reports without memory bias.
Pain-QuILT is a Web-based tool for the visual self-report and tracking of pain (quality, intensity, location, tracker) in the form of time-stamped records. Pain quality is expressed by choosing from a validated library of labeled pain icons, such as a matchstick for “burning pain”. Pain intensity is quantified using a 0-10 numerical rating scale (NRS) ranging from “no pain” to “worst pain imaginable”. Pain location is illustrated by “dragging-and-dropping” pain icons onto a detailed virtual body-map that is codified into over 100 regions.
To our knowledge, Pain-QuILT is the only tool that captures the complex sensations of chronic pain by allowing patients to self-report different qualities and intensities of pain across their entire body. For example, they can record the simultaneous experience of a “3/10” burning pain in their shoulder as well as a “5/10” pain in their foot that is both “burning” and “sharp”. All reported data are digitally captured and then populated into a database, which can be used to track changes in pain quality and intensity across different body regions over time.
Health care professionals can use this information to monitor the effectiveness of any pain management practices. Patients can keep track of their pain to help inform self-management in the home setting. By standardizing the assessment of pain outcomes in a digitized format, Pain-QuILT may also improve the coordination of pain management across multiple health care providers.
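The time-stamped, multi-site pain records described above can be pictured as a simple nested data structure. The following is an illustrative sketch only; all class and field names are assumptions for exposition, not the actual Pain-QuILT schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical representation of one pain icon placed on the body-map.
@dataclass
class PainEntry:
    quality: str    # descriptor from the labeled icon library, eg "burning"
    intensity: int  # 0-10 numerical rating scale (NRS)
    region: str     # one of the codified body-map regions

# Hypothetical time-stamped report grouping all current pains.
@dataclass
class PainReport:
    entries: list[PainEntry] = field(default_factory=list)
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# The example from the text: a 3/10 burning shoulder pain, plus a foot
# pain that is simultaneously "burning" and "sharp" at 5/10.
report = PainReport(entries=[
    PainEntry("burning", 3, "shoulder"),
    PainEntry("burning", 5, "foot"),
    PainEntry("sharp", 5, "foot"),
])
```

A record of this shape is what would accumulate in the back-end database, allowing quality and intensity to be tracked per body region over time.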
Pain-QuILT has been iteratively developed and evaluated in adolescents and adults with chronic pain, including usability testing and content validation. Before widespread implementation of Pain-QuILT, it is critical to evaluate clinical feasibility (ie, the ease with which it can be applied in a real-world setting), compared with standard methods of pain assessment. Recently, our group established clinical feasibility in an interdisciplinary pediatric chronic pain clinic that used a semistructured interview method to assess pain. In comparison with this standard method, Pain-QuILT was preferred by a majority of adolescent patients and was perceived to be clinically useful for visually capturing pain and promoting better communication between patients and health providers.
Given that the MPQ and BPI are the standard tools used in adults, the purpose of this study was to extend the findings from our pediatric work to evaluate clinical feasibility of Pain-QuILT among adults with chronic pain in comparison with the MPQ and BPI. In the context of this clinical feasibility study, our primary aims were to assess Pain-QuILT for (1) ease of use, (2) time for completion, and (3) patient preferences. Our secondary aim was to explore the patterns of self-reported pain across the comparator methods of Pain-QuILT, MPQ, and BPI.
This study was conducted at a hospital-affiliated pain management and physical medicine and rehabilitation outpatient clinic in southwestern Ontario. It was staffed by an interdisciplinary team of health care professionals, consisting of a physiatrist, physical therapist, and kinesiologist. Patients who are referred to this outpatient clinic receive a thorough medical evaluation, including assessment of pain, and are then informed of the management plan including pharmacological, injection, and physical therapies. They may also be referred for psychological therapy (eg, group counseling, cognitive behavioral therapy) if needed. All patients are reassessed at timely intervals and treatments are adjusted according to clinical need.
Informed written consent was obtained from all participants, and the study was approved by the locally responsible Research Ethics Boards. A health care provider known to patients identified eligible individuals by screening the patient lists of consecutively scheduled clinic appointments. Individuals were eligible to participate if they were (1) aged 18 years or older, (2) able to speak and read English, and (3) currently experiencing pain of any intensity according to self-report. Individuals were excluded if they had severe cognitive impairment or major comorbid medical or psychiatric illness that could preclude their ability to self-report pain or take part in a verbal interview, according to their health care provider. Individuals were also excluded if they had severe vision or hand dexterity impairments that could prevent independent use of a computer and mouse.
Demographic and Health-Related Data
Following consent, each participant completed a Demographic and Health Questionnaire, which collected data on age, sex, computer comfort, weekly computer use, language proficiencies, education level, and date of pain problem onset.
All participants took part in an individual semistructured interview (20-30 minutes) with a trained investigator (author CL). The investigator was experienced in conducting qualitative interviews and used techniques to minimize the power differential between the interviewer and participant (eg, established rapport, engaged in active listening, used relaxed body language). The investigator also stressed that the research team wished to ensure that Pain-QuILT addressed the needs of adults with chronic pain and thus encouraged participants to freely express opinions about good and bad aspects of the tool. As a first step, participants self-reported their pain using the Pain-QuILT, MPQ, and BPI (described in detail below). These tools were administered in a randomized order for each participant, in order to minimize potential order effects. Investigator observation and participant comments were used to identify any difficulties or confusion with using each tool; these were recorded as field notes. The time required to complete each tool was recorded. Next, a semistructured interview format was used to discuss participant preferences for pain self-report. A 0-10 NRS ranging from “not easy at all” to “very easy” was used to appraise each tool. Qualitative written feedback on the ease of using each tool was also collected. Finally, participants were asked to indicate their preference of methods for self-reporting pain and explain the reason for their choice. All interviews were conducted by the same investigator in a quiet room within the clinic.
Pain Tool Comparison
McGill Pain Questionnaire
This paper-based questionnaire was developed in the 1970s through groundbreaking research that was focused on identifying common word descriptors for the pain experience. At the time, there was no available tool that accounted for the multidimensional nature of pain. The MPQ is composed of 20 subclasses that correspond to sensory, affective, evaluative, and miscellaneous pain. Each subclass consists of a clustered list of 2-6 word descriptors. For example, the first subclass of word descriptors is “sensory-temporal” and is made up of the descriptors: “flickering; quivering; pulsing; throbbing; beating; pounding”. There is a total of 78 descriptors on the MPQ. Participants were instructed to review each discrete cluster of words and then select the one word that best described their current pain. If none of the words within a cluster were descriptive of their pain, then no word was selected. The MPQ is one page in length and was administered by the study investigator.
Brief Pain Inventory Short Form
This paper-based questionnaire was developed in the 1980s for patients with cancer pain, based on research suggesting that existing measures such as the MPQ were burdensome for patients to complete. Since its initial development, the BPI has become one of the most widely used tools for assessing all types of pain in both clinical and research settings. It is designed to assess pain location and severity as well as level of interference with daily life. In the present study, participants used a pen to shade painful areas on a body-manikin diagram. The body-manikin consisted of anterior and posterior aspects and included no regional demarcations. Next, participants were required to rate the intensity of their “pain right now” as well as their “worst”, “least”, and “average” pain from the past 24 hours using separate 0-10 NRS items ranging from “no pain” to “pain as bad as you can imagine”. Finally, participants were asked to rate the extent to which pain had interfered with different parts of their life in the past 24 hours. Each quality of life domain was rated on a separate NRS ranging from 0 (“does not interfere”) to 10 (“completely interferes”). The BPI is one page in length and was administered by the study investigator.
Participants were taught how to use Pain-QuILT via a standard 3-minute demonstration. Following confirmation of understanding, each participant was instructed to use the investigator laptop computer (MacBook Pro) with external mouse to “create a picture” of their current pain, as illustrated in the figure. First, they chose from the library of labeled pain quality icons to describe what their pain felt like. The Pain-QuILT library consisted of 16 icons to represent aching, burning, dull, electrical, freezing, heavy, pinching, pins & needles, pounding, shooting, sharp, stabbing, stiffness, squeezing, throbbing, and “other” pain. They then used the mouse to “drag-and-drop” a miniature copy of this descriptive icon onto a virtual body-map to show pain location. The entire body-map was displayed on a single screen and was made up of anterior and posterior aspects, as well as magnified views of the head (anterior, posterior, side-view). The body-map was codified into 110 distinct regions, and each region became highlighted in blue as the computer mouse hovered over it. Next, after “dropping” the icon onto the appropriate body region, the user assigned a rating of intensity for this pain by using a “pop-up” 0-10 NRS ranging from “no pain” to “worst pain imaginable”. The 0-10 NRS also corresponded with a color scale ranging from green (lower intensity) to red (higher intensity). After the user had chosen an intensity value, the pain icon was added to the body-map, along with the numerical rating. The dropped icon-number pair was enclosed within a square box whose fill color corresponded to the intensity rating (eg, dark green fill for a rating of 1/10). Users continued to “drag-and-drop” numbered icons onto the virtual body-map until all of their current pain or pains had been recorded.
The figure shows a patient reporting multiple pains of different qualities and intensities across their body: specifically, shoulder pain that is both “burning” and “aching”, a painful stiffness in their chest, an “aching” knee pain, and a “pounding” pain in the back of their neck. All user-entered pain data (quality, intensity, location), as well as information on time and date of entry, were automatically uploaded to a back-end database that was accessible to the research team.
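The green-to-red color scale tied to the 0-10 NRS can be implemented as a simple linear interpolation between the two endpoint colors. This is a minimal sketch under assumed endpoint colors, not the actual Pain-QuILT palette or code:

```python
def intensity_to_rgb(intensity: int) -> tuple[int, int, int]:
    """Map a 0-10 NRS rating to an RGB color between green (low)
    and red (high). Illustrative linear interpolation only; the
    tool's real palette is not specified in the article."""
    if not 0 <= intensity <= 10:
        raise ValueError("NRS rating must be between 0 and 10")
    t = intensity / 10  # 0.0 at "no pain", 1.0 at "worst pain imaginable"
    red = round(255 * t)
    green = round(255 * (1 - t))
    return (red, green, 0)

# Endpoints of the scale: pure green at 0/10, pure red at 10/10.
assert intensity_to_rgb(0) == (0, 255, 0)
assert intensity_to_rgb(10) == (255, 0, 0)
```

A mapping of this kind would produce the intensity-matched fill color of each dropped icon-number box (eg, dark green for a 1/10 rating).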
Qualitative written data and field notes from the semistructured interview were transcribed verbatim and imported into the qualitative software program HyperRESEARCH. This software was used to facilitate a simple content analysis of the data. A line-by-line coding analysis was used to identify key concepts from the interview transcripts and field notes. Concepts addressed during the semistructured interviews were used to thematically code and organize participant responses. Participant quotations were selected to illustrate each key interview concept with the aim of representing the balance of opinion among participants.
Quantitative data from the Demographic and Health Questionnaire, MPQ, BPI, and Pain-QuILT were coded, scored, and entered into a Statistical Package for the Social Sciences database. As described by Lalloo and colleagues, the extracted parameters from each Pain-QuILT report were the number of unique painful sites (range 0 to 110) and number of different pain quality descriptors (range 0 to 16) used to express current pain. Additionally, a cumulative mean pain intensity score was calculated across all painful body sites. While this cumulative score provided a convenient indicator of the central tendency of data, it was also sensitive to outliers. Thus, we also extracted the lowest and highest single NRS intensity score to provide an indicator of data dispersion. For example, if a participant reported a 5/10 burning pain in their foot, a 3/10 burning pain in their hand, and a 3/10 stabbing pain in their back, then the number of unique painful sites would be recorded as 3, the number of unique pain quality descriptors would be 2, the cumulative intensity score would be calculated as [(5+3+3)/3]=3.7, the lowest reported NRS score would be 3, and the highest reported NRS score would be 5.
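The parameter extraction described above amounts to a few set and list operations per report. This sketch (illustrative variable names, not the study's actual analysis script) reproduces the worked example from the text:

```python
# Each entry: (quality descriptor, NRS intensity, body site).
# Worked example from the text: 5/10 burning foot, 3/10 burning hand,
# 3/10 stabbing back.
entries = [
    ("burning", 5, "foot"),
    ("burning", 3, "hand"),
    ("stabbing", 3, "back"),
]

n_sites = len({site for _, _, site in entries})      # unique painful sites
n_qualities = len({q for q, _, _ in entries})        # unique descriptors
scores = [score for _, score, _ in entries]
cumulative = round(sum(scores) / len(scores), 1)     # mean across all sites
lowest, highest = min(scores), max(scores)           # dispersion indicators

# Matches the worked example: 3 sites, 2 qualities, mean 3.7, range 3-5.
assert (n_sites, n_qualities, cumulative, lowest, highest) == (3, 2, 3.7, 3, 5)
```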
All data were analyzed descriptively to assess measures of central tendency (mean, median) and dispersion (standard deviation, interquartile range). Data were also evaluated to ensure that they met the assumptions of parametric statistical analysis (ie, the normal distribution). When these assumptions were not met, the non-parametric equivalent test was used. Repeated measure analysis of variance (ANOVA) was used to determine whether there were any differences between Pain-QuILT, MPQ, and BPI in terms of time to complete or ease of use ratings. Pearson correlations were used to examine the association between pain intensity scores on Pain-QuILT and BPI. The a priori criterion for evidence of convergent validity was a moderate correlation of r=.5 between Pain-QuILT and BPI scores for current pain intensity. Using the guidelines from Streiner and Norman pertaining to sample size for correlation coefficients, assuming alpha=.05 and beta=.05, the required sample size was N=50. The level of significance was set at P<.05 for all tests.
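The sample-size requirement for detecting a correlation of a given magnitude is conventionally derived via Fisher's z-transformation. The following sketch assumes a two-tailed alpha of .05 and beta of .05 (the exact critical values and rounding used by the study are not reported); for r=.5 it yields a figure slightly below the recruited N=50:

```python
import math

def n_for_correlation(r: float, z_alpha: float = 1.96,
                      z_beta: float = 1.645) -> int:
    """Required N to detect a population correlation r.

    Uses Fisher's z-transformation; z_alpha is the two-tailed
    critical value for alpha = .05, z_beta the one-tailed value
    for beta = .05 (95% power). Assumed values, for illustration.
    """
    c = 0.5 * math.log((1 + r) / (1 - r))  # Fisher's z of r
    return math.ceil(((z_alpha + z_beta) / c) ** 2 + 3)

n = n_for_correlation(0.5)  # a priori criterion r = .5
```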
A total of 50 adults completed the study over a 5-month period in 2013. Sample characteristics are summarized in the table. Nearly all participants (48/50, 96%) had a computer at home as well as Internet access (45/50, 90%). Of the 50 participants, 84% (42/50) reported being “comfortable” or “very comfortable” with using computers, while 10% (5/50) were “a little comfortable” and 6% (3/50) were “not at all comfortable”. The self-reported frequency of computer use among participants was none (3/50, 6%), once per week (3/50, 6%), twice per week (2/50, 4%), three times per week (4/50, 8%), five times per week (1/50, 2%), and every day (37/50, 74%).
McGill Pain Questionnaire
The relative endorsement of MPQ pain quality descriptors between and within subclasses is illustrated in the figure. The most commonly chosen MPQ words to express current pain were matched with a descriptor in the Pain-QuILT library across all subclasses, except for “miscellaneous”. This pattern was consistent regardless of whether the MPQ was administered before Pain-QuILT (29/50, 58%) or Pain-QuILT was administered before the MPQ (21/50, 42%).
Brief Pain Inventory
The mean score for current reported pain intensity was 6.6 (SD 2.1). The mean scores for recalled pain in the past 24 hours were 7.9 (SD 1.4) for “worst” pain and 4.4 (SD 2.2) for “least” pain.
The mean number of unique painful sites reported was 6.5 (SD 4.0, range 1-22). The mean number of different pain qualities used to describe current pain was 5.0 (SD 2.4, range 1-10). The relative endorsement of Pain-QuILT icons across all participants is illustrated in the figure. The mean reported intensity for current pain (ie, the cumulative calculated score across all body sites) was 6.2 (SD 2.0). The mean lowest reported pain intensity score was 4.8 (SD 2.1), and the mean highest reported pain intensity score was 7.4 (SD 2.1).
The Pearson correlation coefficient between the Pain-QuILT score for current pain (calculated across all body sites) and BPI score for current pain (single NRS rating) was r=.70. The Pearson correlation coefficient between the highest reported intensity score on Pain-QuILT and the BPI score for current pain was r=.76. The Pearson correlation coefficient between the lowest reported intensity score on Pain-QuILT and the BPI score for current pain was r=.55.
Ease of Use
All participants reported the relative ease of using each tool for self-reporting pain. The mean ratings were 5.9 (SD 2.6) for the MPQ, 7.0 (SD 2.6) for the BPI, and 8.3 (SD 2.0) for Pain-QuILT. Overall, there was a significant difference between the tools in terms of perceived ease of use, F(2,96)=20.6, P<.001. Pairwise comparisons also indicated significant differences between the MPQ and BPI (P=.009), MPQ and Pain-QuILT (P<.001), as well as BPI and Pain-QuILT (P=.002).
Participant-Reported Difficulties With Using Each Pain Tool
Overall, 46% (23/50) of participants indicated that they had difficulties in completing the MPQ, while 22% (11/50) reported difficulties with the BPI and 16% (8/50) specified difficulties with Pain-QuILT.
The most commonly reported issue with the MPQ was trouble with understanding the qualitative word descriptors (10/23, 43%) due to language barriers (eg, English as second language) or uncommon vocabulary, such as “taut” and “smarting”. Participants (7/23, 30%) also reported that the available pain words “...weren’t very good to describe [pain]” (ie, lack of descriptiveness). Other participants (7/23, 30%) noted that it was difficult to select the right words to express their pain due to ambiguity (“what is difference between cool, cold, freezing?”), the number of available options (“too many choices”), and the presence of more than one relevant word from certain subclasses. Last, participants (2/23, 9%) expressed concern about potentially misrepresenting their pain to their health care providers: “more fear of not describing your pain properly with this test”.
The most commonly reported issues with the BPI were communicating pain location using the body-manikin (2/11, 18%; “hard to pull out meaning”) and choosing intensity ratings to describe pain (2/11, 18%; “hard time with pain numbers”). Other reported difficulties included recalling pain over the last 24 hours (“hard to simplify pain”), reporting pain from multiple sites (“varying intensities of pain from different injuries”), and questionnaire design (“cumbersome to complete, too general”). One participant also indicated a “fear of not explaining properly what is happening”.
The most commonly reported issues with Pain-QuILT were related to the virtual body-map (3/8, 37.5%). Specifically, participants identified a need for orientation labels (left, right) and to make it easier to isolate specific painful body areas (“hard to find specific regions on [the] back versus a ‘paint’ tool, because some pain radiates”). In addition, participants (2/8, 25%) indicated difficulty in choosing pain quality icons due to “too many choices...sometimes it aches, sometimes it burns”, and a dislike of using descriptors because “pain just hurts”. Other participants (3/8, 37.5%) identified a “bug” in the software related to an inability to remove icons that were mistakenly added to the body-map.
Time to Complete
The mean time required by participants to complete a single pain report using each tool was 4.2 minutes (SD 1.5) for the MPQ, 4.0 minutes (SD 1.4) for the BPI, and 4.1 minutes (SD 2.2) for Pain-QuILT. There was no significant difference between the tools in terms of time to complete, F(1.4,44.8)=0.13, P=.81.
Participant Preferences for Self-Reporting Pain
Overall, 16% (8/50) of participants chose the MPQ as their preferred method for self-reporting pain, while 14% (7/50) chose the BPI, and 58% (29/50) chose Pain-QuILT. Four of the 50 participants (8%) indicated that they preferred the “other” method of verbally explaining pain to their health care provider. Finally, one participant (1/50, 2%) indicated an equal preference between the BPI and Pain-QuILT.
Reasons for selecting the MPQ included a preference for paper versus electronic pain reporting and greater perceived precision in describing pain, for example, “[it has] words that exactly indicate what is happening to [my] leg—bang on”.
Reasons for choosing the BPI included familiarity, the ability to describe how pain changes over time, and ease of choosing ratings on a set scale, for example, “[it] seems more easy to answer personally. Fits the way that I speak”.
Explanations for choosing Pain-QuILT included greater ease of use, ability to pinpoint different locations and types of pain, preference for computer versus paper-based pain reporting, as well as the visual language to express pain, for example, “[I would] feel more confident being treated by a doctor if they used this tool because [they] would know exactly what you are feeling”.
Our previous work has established the acceptability, usability, and content validity of Pain-QuILT in samples of adults with central post-stroke pain, adults in the community with a range of different types of chronic pain, as well as adults and adolescents with arthritis pain. Clinical feasibility testing, the focus of the present study, is an important stepping-stone toward widespread implementation of a new assessment tool. Our group has recently demonstrated clinical feasibility of Pain-QuILT in the context of an interdisciplinary pediatric chronic pain clinic among adolescents aged 12-18 years. The present study sought to extend these findings by evaluating clinical feasibility of Pain-QuILT from the perspective of adults attending an outpatient pain clinic for treatment of chronic pain. This study included a comparison of Pain-QuILT with standard methods of pain assessment.
As a tool for self-reporting pain, Pain-QuILT was rated as significantly easier to use than the MPQ and BPI, which are two of the most commonly used pain assessment tools in research and clinical practice. Almost half (46%) of participants reported difficulties in using the MPQ. Most of these difficulties related to understanding the pain descriptors and finding accurate words to express pain from a large number of options. These findings are consistent with a meta-analysis of 51 studies involving 3624 patients, which found that most MPQ words (75%) are rarely endorsed by patients to describe their pain. Although the BPI was associated with fewer reported difficulties, participants indicated that its design was not conducive to reporting different intensities of pain in different body sites. Numerous studies have demonstrated that chronic pain is rarely confined to a single body region. For instance, in a study involving 2445 patients, Carnes and colleagues found that 73% experienced pain across multiple body sites. Among patients with low back pain, only 13% experienced regionally isolated pain. In terms of implications for pain assessment and management, these authors concluded, “self-reported measures of multi-site pain are problematic with pain measures that are site-specific. Pain in other areas may render them less reliable and responsive. Future intervention studies should consider recording other pain sites to identify predictors of response to treatment” (p. 1170). Overall, Pain-QuILT was associated with the fewest reported difficulties among participants. Most of the identified issues (75%) will be resolved in the next iteration of Pain-QuILT software (eg, adding orientation labels to body map, fixing “bug” related to deleting unwanted icons).
Participant concerns related to the changing nature of pain (“sometimes it aches, sometimes it burns”) will be addressed in future longitudinal studies, which will allow patients to use Pain-QuILT as a diary to document symptoms as they occur. A major identified strength of Pain-QuILT was the ability to record multiple sites, types, and intensities of pain.
The average time required to complete each assessment tool was less than 5 minutes. While there was no significant time difference between the tools, it is important to note that patients can enter Pain-QuILT data independently, while the MPQ and BPI are usually administered by a health care provider in the context of a clinic appointment. Moreover, Pain-QuILT data are generated and stored in a digital format, while information from MPQ and BPI must be manually transcribed into a spreadsheet (paper or computer-based) in order to facilitate tracking over time. Thus, Pain-QuILT has the potential to increase efficiency of clinic appointments by (1) empowering patients to self-report pain on their own time (eg, at home and/or in the clinic waiting room), (2) providing health care providers with digital summaries of tracked pain data to evaluate and inform their management plan, and (3) standardizing the assessment of pain outcomes for use across multiple providers.
Given the inherently personal nature of pain, it is important to consider patient preferences regarding the most effective way of expressing symptoms. The majority of participants (58%) indicated positive preference for Pain-QuILT over alternate methods. It is well recognized that patient engagement is a critical factor in the successful management of chronic disease. In particular, effective doctor-patient communication is known to enhance the health outcomes of pain management. The interactive and dynamic format of Pain-QuILT may also help patients forge a stronger emotional connection to the tool as a means for portraying and conveying their pain experience, compared to static questionnaires. Moreover, there is a growing body of literature documenting the rise of “self-tracking” among people living with chronic illness. A recent Pew Research Center report found that 40% of adults with 1 chronic condition and 62% of adults with 2 chronic conditions currently self-track their symptoms. In terms of patient benefits, respondents indicated that self-tracking influenced their overall approach to maintaining health (56%), prompted them to ask their doctor new questions (53%), or influenced a treatment decision (45%). Thus, by providing a user-friendly method for communicating with health care providers and self-tracking painful symptoms, Pain-QuILT may encourage greater patient involvement in the long-term management of their own disease.
There is a growing number of patient-oriented mobile applications (apps) designed to aid the self-tracking of pain. In 2011, Rosser and Eccleston identified 111 pain management apps, of which 24% included a self-monitoring function. A more recent scoping review, conducted in 2013, identified 224 pain apps, of which 14% allowed users to self-track their symptoms. Unfortunately, both studies identified major limitations in the current field of pain apps, including a lack of formal evaluation and limited involvement of health care professionals and patients in their development. Pain-QuILT has been iteratively evaluated and refined through consultation with patients as well as health care professionals and thus has potential to address these identified gaps in the field, as one component of chronic pain management.
Given that participants were asked to self-report their current pain using three different methods, we expected to observe consistency in reported pain. Using the MPQ, participants were presented with a choice of 46 qualitative descriptors across 11 subclasses. Interestingly, the most frequently chosen MPQ words were consistent with the icon descriptors on Pain-QuILT. This relationship was independent of the order of tool assessment. Pain-QuILT icons and word descriptors have been iteratively refined based on patient interviews to ensure that they are representative of the pain experience. The observation that the icons correspond with the most frequently endorsed MPQ descriptors provides further evidence of validity. In terms of pain intensity scores, we examined correlations between Pain-QuILT (body site-specific pain scores) and the BPI (single global score for current pain). There were high correlations (r≥.70) observed between BPI score and (1) the calculated average pain score across all body sites, and (2) the single highest reported pain score across all body sites. There was also a moderate correlation (r=.55) observed between BPI score and the single lowest reported pain score across all body sites. Along with our previous pediatric study, which compared Pain-QuILT scores with a verbal NRS (r=.61), the current data provide further evidence of convergent validity. Importantly, in terms of clinical usefulness, we suggest that the greater level of detail elicited by Pain-QuILT (eg, observing how treatment affects pain quality and intensity scores within specific body sites) may better inform pain management strategies than a single global intensity score.
Limitations and Future Directions
The present clinical feasibility study was conducted at a single interdisciplinary pain management and rehabilitation clinic in southwestern Ontario. Although the organization and treatment model of this site was consistent with other Canadian multidisciplinary pain treatment facilities [ ], we acknowledge that future work is needed to evaluate the clinical feasibility of Pain-QuILT in other settings. Further, given the interview component of this study, it was necessary for all participants to be able to speak and read English. Although 38% of participants spoke multiple languages, future work is needed to formally evaluate Pain-QuILT in non-English speaking groups. Given the visual nature of Pain-QuILT reporting, the tool may prove especially useful for enhancing pain communication among individuals with limited verbal or cognitive skills.
Participants in this study completed only a single Pain-QuILT report. Future work will evaluate whether patient perceptions regarding ease of use and preferences, as well as time to complete, are affected by repeated usage.
Conclusions
The results of this clinical feasibility study in adults with chronic pain are consistent with our previously published pediatric findings [ ]. Specifically, data indicate that Pain-QuILT is (1) easy to use, (2) quick to complete, (3) preferred by a majority of adults with chronic pain, and (4) correlated as expected with validated pain measures. As a digital, patient-friendly method of assessing and tracking pain, we conclude that Pain-QuILT has potential to add significant value as one standard component of chronic pain management.
The tool will be licensed for clinical use and research studies through the McMaster Industry Liaison Office [, ]. Updated information on availability will be provided on the author website [ ] and Twitter account (@PainQuILT).
Acknowledgments
The PhD work of author CL has received salary support from an Ontario Graduate Scholarship (Ontario Ministry of Training), a PhD Graduate Training Award from The Arthritis Society / Canadian Arthritis Network, and a PhD Research Stipend from the Pain in Child Health Strategic Training Initiative in Health Research (Canadian Institutes of Health Research). Software development of Pain-QuILT has been supported by a Discovery Advancement Project Grant from the Canadian Arthritis Network.
The authors thank Dr David Streiner, PhD, CPsych (Department of Psychiatry & Behavioural Neuroscience, McMaster University) for providing guidance on study design. The authors thank Émilie McMahon-Lacharité, MSc (Biomedical Communications, University of Toronto), creator of the original Iconic Pain Assessment Tool, for supporting this study.
Author CL thanks the members of her graduate supervisory committee at McMaster University: Dr Joy MacDermid, PT, PhD (Department of Rehabilitation Science, McMaster University), Dr Del Harnish, PhD (Department of Pathology, Molecular Medicine & Biology), Dr Kristina Trim, PhD (Bachelor of Health Sciences Program, McMaster University), Dr Jennifer Stinson, RN, PhD (The Hospital for Sick Children, University of Toronto), and Dr James L Henry, PhD (Department of Psychiatry & Behavioural Neuroscience, McMaster University).
The authors thank Kelsie Korneyi, Navraj Rhandhawa, MD, Jessie Grewal, MSc(kin), and Amy Saward, MSc(kin), for facilitating participant recruitment.
Finally, the authors thank our study participants for their contribution to this research.
All authors contributed substantially to (1) study conception, design, and data interpretation, (2) drafting the manuscript or revising it critically for important intellectual content, and (3) final approval of the version to be published. Participant interviews were conducted by author CL.
Conflicts of Interest
- Merskey H, Bogduk N. Classification of chronic pain: Description of chronic pain syndromes and definitions of pain terms. Seattle: IASP Press; 2002.
- Schopflocher D, Taenzer P, Jovey R. The prevalence of chronic pain in Canada. Pain Res Manag 2011;16(6):445-450 [FREE Full text] [Medline]
- Tracey I, Bushnell MC. How neuroimaging studies have challenged us to rethink: is chronic pain a disease? J Pain 2009 Nov;10(11):1113-1120. [CrossRef] [Medline]
- Boulanger A, Clark AJ, Squire P, Cui E, Horbay GL. Chronic pain in Canada: have we improved our management of chronic noncancer pain? Pain Res Manag 2007;12(1):39-47 [FREE Full text] [Medline]
- Merskey H, Bogduk N. Part III: Pain Terms, A Current List with Definitions and Notes on Usage. In: Classification of chronic pain: descriptions of chronic pain syndromes and definitions of pain terms. Seattle: IASP Press; 1994.
- Jensen MP, Karoly P. Self-Report Scales And Procedures For Assessing Pain In Adults. In: Handbook Of Pain Assessment. New York: The Guilford Press; 2001.
- Jensen MP. The validity and reliability of pain measures in adults with cancer. J Pain 2003 Feb;4(1):2-21. [Medline]
- Hochman JR, French MR, Bermingham SL, Hawker GA. The nerve of osteoarthritis pain. Arthritis Care Res (Hoboken) 2010 Jul;62(7):1019-1023. [CrossRef] [Medline]
- Hasselström J, Liu-Palmgren J, Rasjö-Wrååk G. Prevalence of pain in general practice. Eur J Pain 2002;6(5):375-385. [Medline]
- Todd KH, Ducharme J, Choiniere M, Crandall CS, Fosnocht DE, Homel P, PEMI Study Group. Pain in the emergency department: results of the pain and emergency medicine initiative (PEMI) multicenter study. J Pain 2007 Jun;8(6):460-466. [CrossRef] [Medline]
- Otis JD, Macdonald A, Dobscha SK. Integration and coordination of pain management in primary care. J Clin Psychol 2006 Nov;62(11):1333-1343. [CrossRef] [Medline]
- Melzack R. The McGill Pain Questionnaire: major properties and scoring methods. Pain 1975 Sep;1(3):277-299. [Medline]
- Cleeland CS, Ryan KM. Pain assessment: global use of the Brief Pain Inventory. Ann Acad Med Singapore 1994 Mar;23(2):129-138. [Medline]
- Stinson JN, Stevens BJ, Feldman BM, Streiner D, McGrath PJ, Dupuis A, et al. Construct validity of a multidimensional electronic pain diary for adolescents with arthritis. Pain 2008 Jun;136(3):281-292. [CrossRef] [Medline]
- Stone AA, Shiffman S, Schwartz JE, Broderick JE, Hufford MR. Patient compliance with paper and electronic diaries. Control Clin Trials 2003 Apr;24(2):182-199. [Medline]
- Palermo TM, Valenzuela D. Use of pain diaries to assess recurrent and chronic pain in children. Suffer Child 2003;3:1-19.
- Jamison RN, Raymond SA, Levine JG, Slawsby EA, Nedeljkovic SS, Katz NP. Electronic diaries for monitoring chronic pain: 1-year validation study. Pain 2001 Apr;91(3):277-285. [Medline]
- Stinson JN, Jibb LA, Nguyen C, Nathan PC, Maloney AM, Dupuis LL, et al. Development and testing of a multidimensional iPhone pain assessment application for adolescents with cancer. J Med Internet Res 2013;15(3):e51 [FREE Full text] [CrossRef] [Medline]
- Spyridonis F, Ghinea G, Frank AO. Attitudes of patients toward adoption of 3D technology in pain assessment: qualitative perspective. J Med Internet Res 2013;15(4):e55 [FREE Full text] [CrossRef] [Medline]
- Baggott C, Gibson F, Coll B, Kletter R, Zeltzer P, Miaskowski C. Initial evaluation of an electronic symptom diary for adolescents with cancer. JMIR Res Protoc 2012;1(2):e23 [FREE Full text] [CrossRef] [Medline]
- McMahon E, Wilson-Pauwels L, Henry JL, Jenkinson J, Sutherland B, Brierley M. The Iconic Pain Assessment Tool: Facilitating the Translation of Pain Sensations and Improving Patient-Physician Dialogue. J Bio Communication 2008;34:E20-E24.
- Lalloo C, Henry JL. Evaluation of the Iconic Pain Assessment Tool by a heterogeneous group of people in pain. Pain Res Manag 2011;16(1):13-18 [FREE Full text] [Medline]
- Lalloo C, Stinson JN, Hochman JR, Adachi JD, Henry JL. Adapting the Iconic Pain Assessment Tool Version 2 (IPAT2) for adults and adolescents with arthritis pain through usability testing and refinement of pain quality icons. Clin J Pain 2013 Mar;29(3):253-264. [CrossRef] [Medline]
- Lalloo C, Stinson JN, Brown SC, Campbell F, Isaac L, Henry JL. Pain-QuILT: Assessing clinical feasibility of a web-based tool for the visual self-report of pain in an interdisciplinary pediatric chronic pain clinic. Clin J Pain 2014 (forthcoming). [Medline]
- Morgan DL. Focus groups as qualitative research. Planning and research design for focus groups. Thousand Oaks, CA: Sage; 1997:1-80.
- Melzack R, Torgerson WS. On the language of pain. Anesthesiology 1971 Jan;34(1):50-59 [FREE Full text] [Medline]
- Melzack R, Katz J. The McGill Pain Questionnaire: Appraisal and current status. In: Handbook of pain assessment (2nd ed.). New York: Guilford Press; 2001:35-52.
- Daut RL, Cleeland CS, Flanery RC. Development of the Wisconsin Brief Pain Questionnaire to assess pain in cancer and other diseases. Pain 1983 Oct;17(2):197-210. [Medline]
- Dworkin RH, Turk DC, Farrar JT, Haythornthwaite JA, Jensen MP, Katz NP, IMMPACT. Core outcome measures for chronic pain clinical trials: IMMPACT recommendations. Pain 2005 Jan;113(1-2):9-19. [CrossRef] [Medline]
- HyperRESEARCH: qualitative analysis tool computer program, Version 2.8. Massachusetts: ResearchWare Inc; 2009. URL: http://www.researchware.com/products/hyperresearch.html [accessed 2014-05-08] [WebCite Cache]
- Neuendorf K. The content analysis guidebook. Thousand Oaks, CA: Sage Publications; 2002.
- PASW Statistics computer program, Version 17. Chicago: SPSS Inc; 2009. URL: http://www.spss.com.hk/statistics/ [accessed 2014-05-08] [WebCite Cache]
- Norman GR, Streiner D. Simple regression and correlation. In: Biostatistics: The Bare Essentials. Hamilton, ON: BC Decker Inc; 2000.
- Lalloo C, Stinson JN, Hochman JR, Adachi JD, Henry JL. Adapting the Iconic Pain Assessment Tool Version 2 (IPAT2) for adults and adolescents with arthritis pain through usability testing and refinement of pain quality icons. Clin J Pain 2013 Mar;29(3):253-264. [CrossRef] [Medline]
- Feeley N, Cossette S, Côté J, Héon M, Stremler R, Martorella G, et al. The importance of piloting an RCT intervention. Can J Nurs Res 2009 Jun;41(2):85-99. [Medline]
- Wilkie DJ, Savedra MC, Holzemer WL, Tesler MD, Paul SM. Use of the McGill Pain Questionnaire to measure pain: a meta-analysis. Nurs Res 1990;39(1):36-41. [Medline]
- Carnes D, Parsons S, Ashby D, Breen A, Foster NE, Pincus T, et al. Chronic musculoskeletal pain rarely presents in a single body site: results from a UK population study. Rheumatology (Oxford) 2007 Jul;46(7):1168-1170 [FREE Full text] [CrossRef] [Medline]
- Kamaleri Y, Natvig B, Ihlebaek CM, Bruusgaard D. Localized or widespread musculoskeletal pain: does it matter? Pain 2008 Aug 15;138(1):41-46. [CrossRef] [Medline]
- Nakamura I, Nishioka K, Usui C, Osada K, Ichibayashi H, Ishida M, et al. An epidemiological internet survey of fibromyalgia and chronic pain in Japan. Arthritis Care Res (Hoboken) 2014 Jan 8:- (forthcoming). [CrossRef] [Medline]
- Dorflinger L, Kerns RD, Auerbach SM. Providers' roles in enhancing patients' adherence to pain self management. Transl Behav Med 2013 Mar;3(1):39-46 [FREE Full text] [CrossRef] [Medline]
- Butow P, Sharpe L. The impact of communication on adherence in pain management. Pain 2013 Dec;154 Suppl 1:S101-S107. [CrossRef] [Medline]
- Fox S, Duggan M. Pew Internet Internet Project. 2013. Tracking for Health URL: http://www.pewinternet.org/2013/01/28/tracking-for-health/ [accessed 2014-05-08] [WebCite Cache]
- Rosser BA, Eccleston C. Smartphone applications for pain management. J Telemed Telecare 2011;17(6):308-312. [CrossRef] [Medline]
- Lalloo C, Jibb LA, Agarwal A, Stinson JN. There's a pain app for that: review of patient-targeted smartphone applications for pain management. Clin J Pain 2014:- (forthcoming).
- Peng P, Choiniere M, Dion D, Intrater H, Lefort S, Lynch M, STOPPAIN Investigators Group. Challenges in accessing multidisciplinary pain treatment facilities in Canada. Can J Anaesth 2007 Dec;54(12):977-984. [CrossRef] [Medline]
- McMaster Industry Liaison Office (MILO). URL: http://ip.mcmaster.ca/ [accessed 2014-01-30] [WebCite Cache]
- Pain-QuILT Website. 2013. URL: http://painquilt.mcmaster.ca/ [accessed 2014-01-30] [WebCite Cache]
Abbreviations
BPI: Brief Pain Inventory
MPQ: McGill Pain Questionnaire
NRS: Numerical Rating Scale
SD: standard deviation
Edited by G Eysenbach; submitted 31.01.14; peer-reviewed by C von Baeyer, J Kelly; comments to author 15.03.14; revised version received 21.03.14; accepted 24.03.14; published 12.05.14
©Chitra Lalloo, Dinesh Kumbhare, Jennifer N Stinson, James L Henry. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 12.05.2014.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.