Currently submitted to: Journal of Medical Internet Research

Date Submitted: Oct 29, 2019
Open Peer Review Period: Oct 29, 2019 - Dec 24, 2019

NOTE: This is an unreviewed Preprint

Warning: This is an unreviewed preprint (What is a preprint?). Readers are warned that the document has not been peer-reviewed by expert/patient reviewers or an academic editor, may contain misleading claims, and is likely to undergo changes before final publication, if accepted, or may have been rejected/withdrawn (a note “no longer under consideration” will appear above).

Peer-review me: Readers with interest and expertise are encouraged to sign up as a peer reviewer if the paper is within an open peer-review period (in this case, a “Peer-Review Me” button to sign up as a reviewer is displayed above). All preprints currently open for review are listed here. Outside of the formal open peer-review period, we encourage you to tweet about the preprint.

Citation: Please cite this preprint only for review purposes or for grant applications and CVs (if you are the author).

Final version: If our system detects a final peer-reviewed “version of record” (VoR) published in any journal, a link to that VoR will appear below. Readers are then encouraged to cite the VoR instead of this preprint.

Settings: If you are the author, you can log in and change the preprint display settings, but the preprint URL/DOI is intended to be stable and citable, so it should not be removed once posted.

Submit: To post your own preprint, simply submit to any JMIR journal, and choose the appropriate settings to expose your submitted version as preprint.

Bringing Home Cognitive Assessment: Initial Validation of Unsupervised Web-based Cognitive Testing on the Cambridge Neuropsychological Test Automated Battery (CANTAB) using a within-subjects counterbalanced design

  • Rosa Backx; 
  • Caroline Skirrow; 
  • Pasquale Dente; 
  • Jennifer H Barnett; 
  • Francesca K Cormack; 



Computerised assessments already confer advantages for deriving accurate and reliable measures of cognitive function, including test standardisation, accuracy of response recordings and automated scoring. Web-based cognitive assessment could improve accessibility and flexibility of research and clinical assessment, widen participation and promote research recruitment whilst simultaneously reducing costs. However, differences between lab-based and unsupervised cognitive assessment may influence task performance. Validation is required to establish reliability, equivalency and agreement with respect to gold-standard lab-based assessments.


The current study validates an unsupervised web-based version of the Cambridge Neuropsychological Test Automated Battery (CANTAB) against a typical in-person lab-based assessment, using a within-subjects counterbalanced design. The study tests: 1) reliability, the correlation between measurements across participants, 2) equivalence, the extent to which test results in different settings produce similar, or by contrast, different overall results, and 3) agreement, by quantifying acceptable limits to bias and differences between the different measurement environments.


Fifty-one healthy adults (32 women, 19 men; mean age 37 years) completed two testing sessions on average one week apart. Assessments included equivalent tests of emotion recognition (Emotion Recognition Task: ERT), visual recognition (Pattern Recognition Memory: PRM), episodic memory (Paired Associate Learning: PAL), working memory and spatial planning (Spatial Working Memory: SWM; One-Touch Stockings of Cambridge: OTS), and sustained attention (Rapid Visual Information Processing: RVP). Participants were randomly allocated to one of two groups, either assessed in-person first (n=33) or using web-based assessment first (n=18). Performance measures (errors, correct trials, response sensitivity) and median reaction times were extracted. Analyses included intra-class correlations (ICC) to examine reliability, linear mixed models and Bayesian paired samples t-tests to test for equivalence, and Bland-Altman plots to examine agreement.
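The reliability and agreement analyses described above can be sketched in code. The following is a minimal illustration (not the authors' actual analysis pipeline) of a two-way random-effects, absolute-agreement ICC(2,1) and Bland-Altman bias with 95% limits of agreement, for paired scores from two test settings; function names and the simulated data are illustrative assumptions.

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    scores: array of shape (n_subjects, k_sessions)."""
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-subject means
    col_means = scores.mean(axis=0)   # per-session means
    ss_total = ((scores - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols
    ms_r = ss_rows / (n - 1)              # between-subjects mean square
    ms_c = ss_cols / (k - 1)              # between-sessions mean square
    ms_e = ss_err / ((n - 1) * (k - 1))   # residual mean square
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

def bland_altman(a, b):
    """Mean difference (bias) and 95% limits of agreement for paired measures."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Illustrative use with simulated paired scores from two settings
rng = np.random.default_rng(0)
trait = rng.normal(0, 1, 51)                 # stable subject-level ability
lab = trait + rng.normal(0, 0.3, 51)         # lab-based measurement noise
web = trait + rng.normal(0, 0.3, 51)         # web-based measurement noise
icc = icc_2_1(np.column_stack([lab, web]))
bias, lo, hi = bland_altman(web, lab)
```

A high ICC alone does not establish agreement: a constant offset between settings (as the paper reports for reaction times) leaves correlations intact while shifting the Bland-Altman bias away from zero, which is why both indices are examined.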


Intra-class correlation coefficients ranged from 0.23-0.67, with high correlations in three performance measures (from PAL, SWM and RVP tasks, ≥0.60). High intra-class correlations were also seen for reaction time measures from two tasks (PRM and ERT tasks, ≥0.60). However, reaction times were slower during web-based assessments, which undermined both equivalence and agreement for reaction time measures. Performance measures did not differ between assessment modalities, and generally showed satisfactory agreement.


Our results support the use of CANTAB performance measures (errors, correct trials, response sensitivity) in unsupervised web-based assessments. Reaction times are not as easily translatable from in-person to web-based testing, likely due to variation in home computer hardware. Results underline the importance of examining more than one index to ascertain validity, since high correlations can be present in the context of consistent, systematic differences which are a product of differences between measurement environments. Further work is now needed to validate web-based assessments in clinical populations, and in larger samples to improve sensitivity for detecting subtler differences between test settings.


Please cite as:

Backx R, Skirrow C, Dente P, Barnett JH, Cormack FK

Bringing Home Cognitive Assessment: Initial Validation of Unsupervised Web-based Cognitive Testing on the Cambridge Neuropsychological Test Automated Battery (CANTAB) using a within-subjects counterbalanced design

JMIR Preprints. 29/10/2019:16792

DOI: 10.2196/preprints.16792



© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.