Currently submitted to: Journal of Medical Internet Research

Date Submitted: Oct 29, 2019
Open Peer Review Period: Oct 29, 2019 - Dec 24, 2019
(currently open for review)

Bringing Home Cognitive Assessment: Initial Validation of Unsupervised Web-based Cognitive Testing on the Cambridge Neuropsychological Test Automated Battery (CANTAB) using a within-subjects counterbalanced design

  • Rosa Backx
  • Caroline Skirrow
  • Pasquale Dente
  • Jennifer H Barnett
  • Francesca K Cormack

ABSTRACT

Background:

Computerised assessments already confer advantages for deriving accurate and reliable measures of cognitive function, including test standardisation, accuracy of response recordings and automated scoring. Web-based cognitive assessment could improve accessibility and flexibility of research and clinical assessment, widen participation and promote research recruitment whilst simultaneously reducing costs. However, differences between lab-based and unsupervised cognitive assessment may influence task performance. Validation is required to establish reliability, equivalency and agreement with respect to gold-standard lab-based assessments.

Objective:

The current study validates an unsupervised web-based version of the Cambridge Neuropsychological Test Automated Battery (CANTAB) against a typical in-person lab-based assessment, using a within-subjects counterbalanced design. The study tests: 1) reliability, the correlation between measurements across participants; 2) equivalence, the extent to which testing in different settings produces similar or, by contrast, different overall results; and 3) agreement, quantifying bias and acceptable limits of difference between the measurement environments.

Methods:

Fifty-one healthy adults (32 women, 19 men; mean age 37 years) completed two testing sessions, on average one week apart. Assessments included equivalent tests of emotion recognition (Emotion Recognition Task: ERT), visual recognition (Pattern Recognition Memory: PRM), episodic memory (Paired Associate Learning: PAL), working memory and spatial planning (Spatial Working Memory: SWM; One-Touch Stockings of Cambridge: OTS), and sustained attention (Rapid Visual Information Processing: RVP). Participants were randomly allocated to one of two groups, assessed either in person first (n=33) or via web-based assessment first (n=18). Performance measures (errors, correct trials, response sensitivity) and median reaction times were extracted. Analyses included intra-class correlations (ICC) to examine reliability, linear mixed models and Bayesian paired-samples t-tests to test for equivalence, and Bland-Altman plots to examine agreement.
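The reliability and agreement statistics named above can be sketched in code. The following is a minimal illustration, not the authors' analysis code: a two-way random-effects, absolute-agreement intra-class correlation (Shrout & Fleiss ICC(2,1)) and Bland-Altman bias with 95% limits of agreement, applied to hypothetical paired session scores. The toy data and variable names are assumptions for illustration only.

```python
from math import sqrt

def icc_2_1(scores):
    """ICC(2,1): two-way random-effects, absolute-agreement, single-measure
    intra-class correlation for an n-subjects x k-sessions table."""
    n = len(scores)
    k = len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(scores[i][j] for i in range(n)) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    # Mean squares from a two-way ANOVA decomposition
    ms_rows = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    ms_cols = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)
    ss_err = ss_total - ms_rows * (n - 1) - ms_cols * (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

def bland_altman(a, b):
    """Bias (mean difference) and 95% limits of agreement for paired scores."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = sum(diffs) / len(diffs)
    sd = sqrt(sum((d - bias) ** 2 for d in diffs) / (len(diffs) - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired scores: in-person (lab) vs web-based sessions
lab = [10.0, 12.0, 14.0, 16.0, 18.0]
web = [10.5, 11.5, 14.5, 15.5, 18.5]
icc = icc_2_1([[l, w] for l, w in zip(lab, web)])
bias, lower, upper = bland_altman(lab, web)
```

Equivalence testing in the study used linear mixed models and Bayesian paired-samples t-tests, which depend on the full modelling framework and are not sketched here.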

Results:

Intra-class correlation coefficients ranged from 0.23 to 0.67, with high correlations (≥0.60) in three performance measures (from the PAL, SWM and RVP tasks). High intra-class correlations (≥0.60) were also seen for reaction time measures from two tasks (PRM and ERT). However, reaction times were slower during web-based assessments, which undermined both equivalence and agreement for reaction time measures. Performance measures did not differ between assessment modalities and generally showed satisfactory agreement.

Conclusions:

Our results support the use of CANTAB performance measures (errors, correct trials, response sensitivity) in unsupervised web-based assessments. Reaction times are not as easily translatable from in-person to web-based testing, likely due to variation in home computer hardware. The results underline the importance of examining more than one index to ascertain validity, since high correlations can be present alongside consistent, systematic differences produced by the measurement environments. Further work is now needed to validate web-based assessments in clinical populations, and in larger samples to improve sensitivity for detecting subtler differences between test settings.


Citation

Please cite as:

Backx R, Skirrow C, Dente P, Barnett JH, Cormack FK

Bringing Home Cognitive Assessment: Initial Validation of Unsupervised Web-based Cognitive Testing on the Cambridge Neuropsychological Test Automated Battery (CANTAB) using a within-subjects counterbalanced design

JMIR Preprints. 29/10/2019:16792

DOI: 10.2196/preprints.16792

URL: https://preprints.jmir.org/preprint/16792


© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.