  • Augmenting cognitive assess...
    Mengoudi, Kyriaki; Ravi, Daniele; Yong, Keir XX; Primativo, Silvia; Pavisic, Ivanna M; Brotherhood, Emilie V; Lu, Kirsty; Schott, Jonathan M; Crutch, Sebastian J; Alexander, Daniel C.

    Alzheimer's & dementia, 12/2020, Volume: 16, Issue: S4
    Journal Article

    Abstract

    Background: Eye-tracking technology is an innovative tool that holds promise for enhanced dementia screening, offering the potential of brief, quantitative assessment of cognitive functions. Critically, instruction-less eye-tracking tests may ameliorate some of the issues with complex test instructions and linguistic variation associated with traditional cognitive tests, and capture additional sensitive metrics of task performance. However, the extraction of relevant biomarkers from large, complex eye-tracking datasets is non-trivial. In this work, we introduce a novel automated way of extracting abnormal oculomotor biomarkers from raw eye-tracking data acquired during an instruction-less cognitive test, using machine learning.

    Method: A free-viewing, instruction-less cognitive battery (5 minutes) was administered to healthy controls (N=553) and patients with a range of dementias (N=30; Figure 1). Our method is based on self-supervised representation learning: a deep neural network is first trained to solve a pretext task for which labels are readily available. Here the pretext task is to identify distinct tasks (scene perception, reading, episodic memory for scenes) in healthy individuals from their eye-tracking patterns. Figure 2 visualises some features of eye-tracking patterns that correspond to particular tasks. Once trained, this network encodes high-level semantic information that is useful for solving other problems of interest, e.g. dementia classification (Figure 3). We then explore the extent to which the eye-tracking features of patients with dementia deviate from healthy behaviour, and compare self-supervised against handcrafted representations on discriminating between controls and patients.
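    The abstract does not specify the network architecture, so the pipeline can only be sketched. The following is a minimal, hypothetical illustration of the self-supervised recipe it describes: train a small network on a pretext task with freely available labels (here, synthetic stand-ins for the three viewing tasks), then reuse the hidden-layer activations as learned features for a downstream classifier. All data, sizes, and labels are invented for illustration.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic stand-in for per-window eye-tracking features; three pretext
    # "tasks" (scene perception, reading, episodic memory) with shifted means.
    n, d = 600, 16
    pretext_y = rng.integers(0, 3, size=n)
    X = rng.normal(size=(n, d)) + pretext_y[:, None] * 0.8

    # Step 1: train a network on the pretext task (labels come for free).
    net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    net.fit(X, pretext_y)

    # Step 2: reuse the trained hidden layer as a learned representation,
    # i.e. ReLU(X @ W1 + b1), the penultimate activations of the MLP.
    def embed(x):
        return np.maximum(0, x @ net.coefs_[0] + net.intercepts_[0])

    Z = embed(X)

    # Step 3: a downstream classifier on the learned features. The label here
    # is a toy proxy, not a real control/patient label.
    down_y = (pretext_y == 2).astype(int)
    clf = LogisticRegression(max_iter=1000).fit(Z, down_y)
    print(Z.shape)  # → (600, 32)
    ```

    The key property this sketch mirrors is that the downstream classifier never sees raw data, only the representation learned from the pretext task.
    
    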
    Result: Based on the handcrafted features, patients with dementia had significantly shorter scanpath lengths than controls (z = -276.56, SE = 97.09, p = 0.00439), consistent with less extensive and efficient scanning of the presented stimuli. The self-supervised features discriminated dementia patients from controls better than the standard handcrafted features (F1 score 95% CI: 0.78-0.82 vs 0.62-0.67).

    Conclusion: These results suggest that instruction-less eye-tracking tests can detect dementia status, even in the absence of explicit task instructions. We reveal novel self-supervised learning features that are more sensitive than handcrafted features in detecting performance differences between participants with and without dementia across a variety of eye-tracking-based cognitive tasks.
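    Scanpath length, the handcrafted feature reported above, is conventionally the total Euclidean distance the gaze travels across successive samples. A minimal implementation of that definition (the function name and toy coordinates are illustrative, not from the study):

    ```python
    import numpy as np

    def scanpath_length(gaze_xy):
        """Total Euclidean distance travelled by the gaze over a sequence
        of (x, y) samples, e.g. in pixels or degrees of visual angle."""
        gaze_xy = np.asarray(gaze_xy, dtype=float)
        steps = np.diff(gaze_xy, axis=0)  # per-sample (dx, dy) displacements
        return float(np.hypot(steps[:, 0], steps[:, 1]).sum())

    # Toy path: three 100-px moves along a square give a length of 300.
    path = [(0, 0), (100, 0), (100, 100), (0, 100)]
    print(scanpath_length(path))  # → 300.0
    ```

    A shorter total path over the same stimulus and duration indicates less extensive scanning, which is the direction of the group difference reported in the abstract.
    
    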