For many years, clinicians have been seeking objective pain assessment solutions based on neuroimaging techniques, focusing on the brain to detect human pain. Unfortunately, most of those techniques are not applicable in the clinical environment or lack accuracy.
This study aimed to test the feasibility of a mobile neuroimaging-based clinical augmented reality (AR) and artificial intelligence (AI) framework, CLARAi, for objective pain detection and localization directly from the patient’s brain in real time.
Clinical dental pain was triggered in 21 patients by hypersensitive tooth stimulation with 20 consecutive descending cold stimulations (32°C-0°C). We used a portable optical neuroimaging technology, functional near-infrared spectroscopy, to gauge their cortical activity during evoked acute clinical pain. The data were decoded using a neural network (NN)–based AI algorithm to classify hemodynamic response data into pain and no-pain brain states in real time. We tested the performance of several networks (convolutional NNs with 7, 6, and 5 layers; a 3-layer artificial NN; a recurrent NN; and a long short-term memory network) on reorganized data features for pain detection and localization in a simulated real-time environment. In addition, we tested the feasibility of transmitting the neuroimaging data to an AR device, the HoloLens, in the same simulated environment, allowing visualization of the ongoing cortical activity on a 3-dimensional brain template virtually plotted on the patient’s head during a clinical consultation.
The artificial neural network (3-layer NN) achieved optimal classification accuracy of 80.37% (126,000/156,680) for pain and no-pain discrimination, with a positive likelihood ratio (PLR) of 2.35. We further explored a 3-class localization task of left-side pain, right-side pain, and no-pain states, in which the 6-layer convolutional NN (CNN-6) achieved the highest classification accuracy of 74.23% (1040/1401), with a PLR of 2.02.
Additional studies are needed to optimize and validate our prototype CLARAi framework for other pain conditions and neurologic disorders. However, we presented an innovative and feasible neuroimaging-based AR/AI concept that can potentially transform the human brain into an objective target to visualize and precisely measure and localize pain in real time where it is most needed: in the doctor’s office.
RR1-10.2196/13594
Accurate pain assessment is crucial across a wide range of acute and chronic pain conditions to provide proper diagnosis and treatment, especially when patients have a limited ability to express their ongoing suffering. The estimated economic impact of pain, from direct medical costs to lost productive time, is US $560 to $635 billion every year [
The pain field has progressed by quantifying patients’ suffering with more holistic pain questionnaires and measurement scales (eg, the McGill Pain Questionnaire and the Face Rating Pain Scale), which are prevalent, useful, and convenient. However, these subjective reports still carry limitations: first, they are inconsistent across patient groups of different ages and cultures. For instance, the words patients use today to express the severity of their pain have evolved over time and might differ from those articulated by past generations [
To address these limitations, researchers have started to analyze the neurological signature of pain using neuroimaging [
In previous studies, our group examined the hemodynamic cortical responses detected by fNIRS in patients with hypersensitive teeth in the dental chair [
The University of Michigan Institutional Review Board approval was obtained before study initiation. We recruited 21 participants (8 male; age: mean 27.6, SD 3.5 years) with hypersensitive teeth. We collected neuroimaging data from a thermal stimulation session [
The data were acquired with a TechEn CW6 fNIRS system (Milford, MA, United States) at a 20 Hz sampling rate. The setup included 8 near-infrared light emitters and 28 detectors spaced 3 cm apart, yielding 40 data channels deployed over the bilateral prefrontal cortex (PFC) and primary sensory cortex (S1). The probe set was designed based on the international 10-10 transcranial positioning system [
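To make the data layout concrete, the sketch below shows one plausible way to slice such a recording (20 Hz, 40 channels, with HbO and HbR values per channel) into short “data cubes” of recent history for classification. The array layout, function name, and the 2-second window are illustrative assumptions drawn from the experiment descriptions that follow, not the study’s exact pipeline.

```python
import numpy as np

FS = 20           # sampling rate reported above (Hz)
N_CHANNELS = 40   # data channels over the bilateral PFC and S1
WINDOW_S = 2      # 2-second data history block (see experiment 1)

def make_data_cubes(recording: np.ndarray, window_s: int = WINDOW_S,
                    fs: int = FS) -> np.ndarray:
    """Slice a (time, channels, 2) HbO/HbR recording into one
    (window, channels, 2) cube per time point, sliding by one sample,
    so each cube contains only present and past samples."""
    win = window_s * fs
    n = recording.shape[0] - win + 1
    return np.stack([recording[i:i + win] for i in range(n)])

# Example: 60 s of synthetic data -> 1161 cubes of shape (40, 40, 2).
cubes = make_data_cubes(np.random.randn(60 * FS, N_CHANNELS, 2))
```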
In this study, we conducted 2 experiments to test the feasibility of pain/no-pain prediction and of left/right pain localization (
The NN design, training, and testing were completed in a Python-based toolbox, Keras (Chollet et al) [
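To make the tooling concrete, here is a minimal Keras sketch of a 3-layer fully connected network of the kind referred to below as the ANN; the layer widths, activations, and optimizer are illustrative assumptions rather than the study’s reported hyperparameters.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_ann(input_dim: int) -> keras.Model:
    """A 3-layer fully connected classifier for pain/no-pain states."""
    model = keras.Sequential([
        layers.Input(shape=(input_dim,)),
        layers.Dense(128, activation="relu"),   # hidden layer 1 (assumed width)
        layers.Dense(64, activation="relu"),    # hidden layer 2 (assumed width)
        layers.Dense(1, activation="sigmoid"),  # output: probability of pain
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage: flatten each (window, channels, 2) data cube into a vector first,
# eg, build_ann(input_dim=40 * 40 * 2).
```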
Experiment flow chart. The green line indicates the convolutional neural network with 7 layers (CNN-7), the blue line the CNN with 6 layers (CNN-6), the orange line the CNN with 5 layers (CNN-5), the red line the long short-term memory (LSTM) network, the dark blue line the recurrent NN (RNN), and the yellow line the artificial NN with 3 layers (ANN), for experiment 1 (pain/no-pain prediction) and experiment 2 (left/right pain localization). Experiment 1 included the data collected from N=12 participants (239 trials in total), whereas experiment 2 included the data collected from N=2 participants (20 trials in total). CNN: convolutional neural network.
Study framework.
The aim of this experiment was to test the feasibility of pain/no-pain prediction at the individual patient level. We tested network configurations at 3 different depths, namely 7 layers (convolutional NN, CNN-7), 5 layers (CNN-5), and 3 layers (artificial NN, ANN), to evaluate their performance on the same datasets (
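As an illustration of what a shallower convolutional configuration could look like over the windowed HbO/HbR data cubes, consider the hedged sketch below; the filter counts, kernel sizes, and layer-counting convention are demonstration assumptions, not the study’s exact architectures.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn(window: int = 40, n_channels: int = 40) -> keras.Model:
    """An illustrative CNN over (window, channels, 2) HbO/HbR data cubes."""
    model = keras.Sequential([
        layers.Input(shape=(window, n_channels, 2)),
        layers.Conv2D(16, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
        layers.Flatten(),
        layers.Dense(1, activation="sigmoid"),  # pain vs no-pain
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```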
The aim of this experiment was to further test the feasibility of predicting left/right pain and no-pain states (3-class classification) on merged and permuted patient data (the data were collected separately from patients 3 and 19, who received right- and left-side tooth stimulation, respectively). We permuted the merged data by randomly including and excluding data cubes along the time course. We tested all of the networks applied in experiment 1 and, in addition, a 6-layer CNN (CNN-6) on type I data cubes, with the time series preprocessed with a custom real-time normalization algorithm (
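The study’s exact normalization algorithm is not reproduced here, but a causal (real-time) variant can be sketched as follows: each incoming sample is z-scored against running statistics computed only from past samples, using Welford’s online update, so no future data leak into the normalization. The function name and array layout are illustrative assumptions.

```python
import numpy as np

def realtime_normalize(stream: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Causally z-score a (time, channels) stream using only samples
    observed so far (Welford's online mean/variance update)."""
    out = np.empty_like(stream, dtype=float)
    mean = np.zeros(stream.shape[1])
    m2 = np.zeros(stream.shape[1])      # running sum of squared deviations
    for t, x in enumerate(stream, start=1):
        delta = x - mean
        mean += delta / t               # online mean update
        m2 += delta * (x - mean)       # online variance accumulator
        std = np.sqrt(m2 / t) + eps     # eps avoids division by zero at t=1
        out[t - 1] = (x - mean) / std
    return out
```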
We developed a display terminal for the framework using an AR device, the HoloLens (Microsoft Corporation, Redmond, WA, United States). AR is a computer vision–based technology that expands the real world by adding a layer of virtual and digital information to it, and it is becoming prevalent in fields such as construction, gaming, and medicine. The HoloLens is a headset-shaped AR computer developed by Microsoft that allows users to visualize 3-dimensional (3D) holographic images on top of the real physical world. In this study, the functional hemodynamic response data acquired from the patient’s brain at multiple cortical regions of interest were wirelessly transmitted to the HoloLens device. Afterward, we used in-house–developed software to display the patient’s ongoing cortical function, updated in real time on the brain template modeled in the software (
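As a sketch of the transmission step, per-channel hemodynamic values and the predicted pain class could be serialized and pushed over UDP to the headset; the address, port, and message schema below are hypothetical, since the in-house HoloLens software defines its own protocol.

```python
import json
import socket

HOLOLENS_ADDR = ("192.168.1.50", 9000)   # hypothetical headset IP and port

def send_frame(sock: socket.socket, hbo: list, hbr: list, label: int) -> None:
    """Send one frame of per-channel HbO/HbR values plus the predicted
    pain class (hypothetical JSON schema) to the AR display."""
    msg = json.dumps({"hbo": hbo, "hbr": hbr, "pain_class": label})
    sock.sendto(msg.encode("utf-8"), HOLOLENS_ADDR)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# send_frame(sock, hbo_values, hbr_values, predicted_class)
```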
Of the 21 participants, data from 12 were further preprocessed to enter the feasibility testing in experiment 1 (
Participant demographics with classification performance.
Participant | Pain (data points) | No-pain (data points) | Classification accuracy (%) | Reported NRSa | Stimulation side
3 | 2000 | 14,980 | 84.58 | 5.5 | Right
5 | 2000 | 13,080 | 76.83 | 3.9 | Right
10 | 2000 | 12,140 | 78.84 | 5.8 | Right
11 | 2000 | 11,520 | 80.98 | 1.9 | Left
12 | 2000 | 13,320 | 81.53 | 3.4 | Left
13 | 2000 | 13,700 | 76.25 | 8.5 | Right
15 | 2000 | 11,480 | 76.12 | 5.8 | Left
16 | 2000 | 12,600 | 79.55 | 6.6 | Left
17 | 1900 | 14,360 | 80.74 | 2.6 | Left
18 | 2000 | 13,020 | 82.29 | 3.8 | Left
19 | 2000 | 11,840 | 81.00 | 5.1 | Left
20 | 2000 | 14,640 | 85.78 | 3.3 | Left
aNRS: numerical rating scale.
Representative averaged oxygenated hemoglobin (HbO) and deoxygenated hemoglobin (HbR) heat maps from all data channels. The upper and lower panels show the hemodynamic responses during the pain and no-pain states, respectively. The left and right panels show the HbO and HbR responses, respectively. The red and blue circles highlight the 2 regions of interest, the sensory and prefrontal cortices. HbO: oxygenated hemoglobin; HbR: deoxygenated hemoglobin; PFC: prefrontal cortex.
Performance of different network setups in experiment 1.
Network setup | Overall accuracy (%) | Sensitivity | Specificity | PPVa | NPVb | PLRc | Kappa
CNNd-7 | 79.62 | 0.144 | 0.896 | 0.169 | 0.872 | 1.39 | 0.04
CNN-5 | 79.25 | 0.153 | 0.891 | 0.183 | 0.872 | 1.40 | 0.05
ANNe | 79.17 | 0.192 | 0.884 | 0.205 | 0.877 | 1.65 | 0.08
ANN+2 portion | 80.37 | 0.326 | 0.861 | 0.266 | 0.893 | 2.35 | 0.17
ANN+2 portion + oversample | 75.93 | 0.409 | 0.801 | 0.242 | 0.898 | 2.06 | 0.16
ANN+2 portion + oversample (HbOf only) | 77.19 | 0.379 | 0.819 | 0.245 | 0.895 | 2.10 | 0.16
RNNg+2 portion + oversample | 76.31 | 0.332 | 0.815 | 0.211 | 0.888 | 1.80 | 0.11
LSTMh+2 portion + oversample | 77.29 | 0.319 | 0.828 | 0.220 | 0.887 | 1.86 | 0.12
aPPV: positive predictive value.
bNPV: negative predictive value.
cPLR: positive likelihood ratio.
dCNN: convolutional neural network.
eANN: artificial neural network.
fHbO: oxygenated hemoglobin.
gRNN: recurrent neural network.
hLSTM: long short-term memory.
Performance of different network setups in experiment 2.
Network | Accuracy (%) | Sensitivity | Specificity | PPVa | NPVb | PLRc | Kappa
ANNd | 70.88 | 0.443 | 0.777 | 0.339 | 0.844 | 1.99 | 0.20
CNNe-5 | 65.37 | 0.375 | 0.723 | 0.250 | 0.824 | 1.35 | 0.08
CNN-6 | 74.23 | 0.279 | 0.862 | 0.342 | 0.823 | 2.02 | 0.15
CNN-7 | 73.23 | 0.540 | 0.782 | 0.389 | 0.868 | 2.48 | 0.28
aPPV: positive predictive value.
bNPV: negative predictive value.
cPLR: positive likelihood ratio.
dANN: artificial neural network.
eCNN: convolutional neural network.
The ANN performed on split data history segments achieved the best results, with a prediction accuracy of 80.37% (145,210/180,580) and a PLR of 2.35 (sensitivity=0.326, specificity=0.861). The ANN performed on split data history blocks with a reweighted loss function achieved the highest sensitivity (0.409), with a specificity of 0.801 and a PLR of 2.06. In addition, CNN-7 achieved the highest specificity (0.896), although with a PLR of only 1.39 and a sensitivity of 0.144. A detailed performance summary of experiment 1 for different network setups can be found in
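For reference, all of the reported measures follow directly from the confusion matrix; the sketch below covers the binary pain/no-pain case (for the 3-class task in experiment 2, a per-class one-vs-rest tally would be one plausible convention, stated here as an assumption). For example, the ANN+2 portion row above satisfies PLR = sensitivity / (1 - specificity) = 0.326 / 0.139 ≈ 2.35.

```python
def confusion_metrics(tp: int, fn: int, fp: int, tn: int) -> dict:
    """Compute the table metrics from binary confusion-matrix counts
    (tp = pain classified as pain, tn = no-pain classified as no-pain)."""
    n = tp + fn + fp + tn
    acc = (tp + tn) / n
    sens = tp / (tp + fn)               # sensitivity
    spec = tn / (tn + fp)               # specificity
    ppv = tp / (tp + fp)                # positive predictive value
    npv = tn / (tn + fn)                # negative predictive value
    plr = sens / (1 - spec)             # positive likelihood ratio
    # Cohen's kappa: agreement beyond chance expectation.
    pe = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / (n * n)
    kappa = (acc - pe) / (1 - pe)
    return {"accuracy": acc, "sensitivity": sens, "specificity": spec,
            "ppv": ppv, "npv": npv, "plr": plr, "kappa": kappa}
```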
The CLARAi framework, which integrates clinical real-time neuroimaging, augmented reality, and artificial intelligence, provides an augmented clinical environment by displaying neuroimaging data with the patient’s predicted and localized pain. The classification codes for no-pain, right-side pain, and left-side pain were defined as 0, 1, and 2, respectively, for model training purposes.
In our previous study, we observed clinical pain expectation and pain-related responses at the PFC and S1 cortices, respectively [
Herein, in experiment 1, we tested several NNs on differently reorganized brain activation data to predict pain and no-pain conditions. We first tested 3 networks on data including a 2-second data history block and found that CNN-7 achieved the highest overall classification accuracy. In recent years, CNNs have become deeper and deeper, with state-of-the-art networks going from the 7-layer AlexNet [
Considering the relatively high spatial resolution of fNIRS imaging, in experiment 2, we further tested the feasibility of localizing pain. We introduced a 3-class pain localization problem by merging the data from 2 selected patients, one with tooth pain hypersensitivity on the left side and the other with the same clinical condition on a right-side tooth during cold stimulation. To eliminate baseline and signal magnitude differences, we applied a simulated real-time normalization algorithm to the data. We then tested this dataset with several NNs of different depths and found that CNN-6 achieved the best overall classification accuracy. Although further validation is needed, the preliminary discrimination results demonstrated the strong potential of our framework for localizing pain in different body regions. Moreover, the results demonstrated the feasibility of training a universal model that can localize pain conditions across patients based on S1 homuncular activation by side and major body region. This is biologically feasible because the somatotopic homuncular S1 representation for pain in the orofacial region is quite large, like those of other major functional body regions, including the thumb/hand, trunk, and feet [
Combined with the pain prediction module, we developed a clinical AR-based data display interface for the framework. The data collected in this study, together with the predicted results, were transferred to a HoloLens device. The magnitudes of the hemodynamic response changes at multiple locations on a 3D brain template were superimposed on the participants’ heads in reality via the HoloLens (
Finally, all of the data preprocessing, classification, transmission, and display methods selected in this study can be implemented in real time. However, the CLARAi framework is in its initial stages. Future improvements of this work include: (1) optimizing the framework’s sensitivity, potentially by adding short-separation channels during data acquisition to better model interfering physiological signals; (2) expanding the current participant-specific model to a general model with learning ability that requires only individualization to precisely adapt to variations in each patient; and (3) further expanding the model to fit other types of pain conditions and neurologic disorders, including depression and anxiety. In summary, we tested the feasibility of a prototype mobile neuroimaging-based clinical AR and AI (CLARAi) framework for objective pain detection and localization in the clinical environment in real time. This framework predicted when and where there was physical pain based on the brain states in our study data and displayed the neuroimaging data interactively in real time. Although extensive validation work remains to be done, the CLARAi framework might turn the goal of precisely “seeing and believing” the biologic pain suffering of our patients in the doctor’s office into reality.
Video: section 1, the collected oxygenated hemoglobin and deoxygenated hemoglobin data were displayed on an MNI152 brain template in real time. Section 2, the 3-dimensional (3D) virtual brain activation image was superimposed onto a participant’s head through the HoloLens. Beside the 3D brain activation, an animated human body with modulating red areas indicated the predicted pain regions by side, in either the left or right cranio-orofacial region.
3D: 3-dimensional
AI: artificial intelligence
ANN: artificial neural network
AR: augmented reality
CNN: convolutional neural network
fMRI: functional magnetic resonance imaging
fNIRS: functional near-infrared spectroscopy
HbO: oxygenated hemoglobin
HbR: deoxygenated hemoglobin
NN: neural network
PLR: positive likelihood ratio
S1: primary sensory cortex
This research was sponsored by an unrestricted grant from the Colgate-Palmolive Company and an fNIRS pilot grant award (University of Michigan, UM). Drs Hu, DaSilva, Hall, Petty, and Maslowski are the co-creators of CLARAi. We would also like to thank Andrew Racek for his contribution to data collection.
The content described within this study was developed at the UM and disclosed to the UM Office of Technology Transfer. All intellectual property rights, including but not limited to patents/patent applications, trademarks, and copyright of software, algorithms, reports, displays, and visualizations, are owned by the Regents of the University of Michigan. Drs DaSilva and Maslowski are the co-creators of CLARAi and co-founders of MoxyTech Inc, which has optioned the CLARAi technology from the University of Michigan. Dr Roger P Ellwood was previously an employee of the Colgate-Palmolive Company, the funding agency of this work.