The bisector function is an important tool for analyzing and filtering Euclidean skeletons. In this paper, we propose a new way to compute the 2D and 3D discrete bisector function based on annuli. From a continuous point of view, a point that belongs to the medial axis is the center of a maximal ball that touches the background in more than one point. The maximal angle subtended at the center by those background points is the bisector angle, and it is expected to be large for most object points. This reasoning does not transfer directly to discrete space, where some background points may be missed, which can lead to artificially small bisector angles. In this work, we use annuli to find the background points from which the bisector angle is computed. Our approach allows the thickness of the annulus to vary at a given point and is thus flexible when computing skeletons. The method extends naturally to nD, and we give the nD algorithm.
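A minimal sketch of the annulus idea, assuming a finite set of discrete background points and the Euclidean metric (the function and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def bisector_angle(point, background, thickness=1.0):
    """Approximate the bisector angle at an object point.

    Collects the background points whose distance from `point` lies in
    the annulus [d, d + thickness], where d is the distance to the
    nearest background point, then returns the largest angle (radians)
    subtended at `point` by any pair of them.
    """
    point = np.asarray(point, dtype=float)
    background = np.asarray(background, dtype=float)
    dists = np.linalg.norm(background - point, axis=1)
    d = dists.min()
    ring = background[(dists >= d) & (dists <= d + thickness)]
    # Unit vectors from the point toward each annulus hit.
    units = (ring - point) / np.linalg.norm(ring - point, axis=1, keepdims=True)
    best = 0.0
    for i in range(len(units)):
        for j in range(i + 1, len(units)):
            c = np.clip(np.dot(units[i], units[j]), -1.0, 1.0)
            best = max(best, np.arccos(c))
    return best

# A point equidistant from two opposite background points: angle near 180 deg.
bg = [(0.0, 5.0), (0.0, -5.0), (20.0, 0.0)]
print(np.degrees(bisector_angle((0.0, 0.0), bg)))
```

Increasing `thickness` widens the annulus, trading robustness to missed background points against angular precision, which is the flexibility mentioned above. The same code works unchanged for 3D or nD coordinates.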
Background
Story recall is a simple and sensitive cognitive test that is commonly used to measure changes in episodic memory function in early Alzheimer disease (AD). Recent advances in digital technology and natural language processing methods make this test a candidate for automated administration and scoring. Multiple parallel test stimuli are required for higher-frequency disease monitoring.
Objective
This study aims to develop and validate a remote and fully automated story recall task, suitable for longitudinal assessment, in a population of older adults with and without mild cognitive impairment (MCI) or mild AD.
Methods
The “Amyloid Prediction in Early Stage Alzheimer’s disease” (AMYPRED) studies recruited participants in the United Kingdom (AMYPRED-UK: NCT04828122) and the United States (AMYPRED-US: NCT04928976). Participants were asked to complete optional daily self-administered assessments remotely on their smart devices over 7 to 8 days. Assessments included immediate and delayed recall of 3 stories from the Automatic Story Recall Task (ASRT), a test with multiple parallel stimuli (18 short stories and 18 long stories) balanced for key linguistic and discourse metrics. Verbal responses were recorded and securely transferred from participants’ personal devices and automatically transcribed and scored using text similarity metrics between the source text and retelling to derive a generalized match score. Group differences in adherence and task performance were examined using logistic and linear mixed models, respectively. Correlational analysis examined parallel-forms reliability of ASRTs and convergent validity with cognitive tests (Logical Memory Test and Preclinical Alzheimer’s Cognitive Composite with semantic processing). Acceptability and usability data were obtained using a remotely administered questionnaire.
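The abstract specifies only that text similarity metrics between the source text and the retelling are combined into a generalized match score. As an illustrative stand-in (not the study's actual metric), a simple unigram-recall similarity could look like:

```python
import re

def match_score(source: str, retelling: str) -> float:
    """Toy text-similarity score: the fraction of source-text tokens
    that reappear in the transcribed retelling. A unigram-recall
    stand-in for the generalized match score; the published scoring
    is more sophisticated.
    """
    tokenize = lambda s: re.findall(r"[a-z']+", s.lower())
    src_tokens = tokenize(source)
    told = set(tokenize(retelling))
    if not src_tokens:
        return 0.0
    return sum(t in told for t in src_tokens) / len(src_tokens)

story = "The fox crossed the frozen river at dawn"
recall = "A fox crossed a river at dawn"
print(match_score(story, recall))  # 5 of 8 source tokens recovered: 0.625
```

A higher score indicates a retelling closer to the source; scores from immediate and delayed recall can then be compared across days and groups.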
Results
Of the 200 participants recruited in the AMYPRED studies, 151 (75.5%)—78 cognitively unimpaired (CU) and 73 with MCI or mild AD—engaged in optional remote assessments. Adherence to daily assessment was moderate and did not decline over time, but was higher in CU participants (ASRTs were completed each day by 73/106, 68.9%, of participants with MCI or mild AD and 78/94, 83.0%, of CU participants). Participants reported favorable task usability: infrequent technical problems, easy use of the app, and broad interest in the tasks. Task performance improved modestly across the week and was better for immediate recall. Generalized match scores were lower in participants with MCI or mild AD (Cohen d=1.54). Parallel-forms reliability of ASRT stories was moderate to strong for immediate recall (mean rho=0.73, range 0.56-0.88) and delayed recall (mean rho=0.73, range 0.54-0.86). The ASRTs showed moderate convergent validity with established cognitive tests.
Conclusions
The unsupervised, self-administered ASRT task is sensitive to cognitive impairments in MCI and mild AD. The task showed good usability, high parallel-forms reliability, and high convergent validity with established cognitive tests. Remote, low-cost, low-burden, and automatically scored speech assessments could support diagnostic screening, health care, and treatment monitoring.
Background
Vocal and linguistic changes in Alzheimer’s dementia have been documented. The current study assesses whether a fully automated speech‐based artificial intelligence (AI) system can detect early clinical impairment and amyloid positivity, which characterise the earliest stages of Alzheimer’s disease (AD).
Method
Two studies were completed in the UK and USA: AMYPRED‐UK (NCT04828122) and AMYPRED‐US (NCT04928976). A total of 200 participants with established amyloid beta (Aβ) and clinical diagnostic status (97 Aβ+, 103 Aβ‐ from prior PET scan or CSF tests; 94 cognitively unimpaired (CU), 106 with mild cognitive impairment (MCI) or mild AD) were recruited and completed automated assessments with the Automatic Story Recall Task (ASRT), either in‐clinic or via telemedicine appointment. The AI text‐pair evaluation model ParaBLEU produced vector‐based representations of the generalized patterns of difference between the original story text and transcribed retellings. These were fed into logistic regression models trained with tournament leave‐pair‐out cross‐validation to predict Aβ status and MCI/mild AD. Potential benefits of screening with the ASRT system were examined via simulation, including: (1) identifying MCI in primary care, and (2) reducing the number of PET scans required in clinical research studies and trials.
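The tournament scheme builds on plain leave-pair-out cross-validation. A simplified leave-pair-out sketch on synthetic features (standing in for the ParaBLEU representations, and omitting the tournament refinement) is:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def leave_pair_out_auc(X, y):
    """Leave-pair-out AUC estimate: for every (positive, negative) pair,
    refit the classifier on the remaining samples and check whether the
    held-out positive scores higher than the held-out negative. The
    fraction of pairs won is an unbiased AUC estimate for small samples.
    """
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    wins = 0.0
    for i in pos:
        for j in neg:
            keep = np.ones(len(y), dtype=bool)
            keep[[i, j]] = False
            clf = LogisticRegression().fit(X[keep], y[keep])
            s_pos, s_neg = clf.predict_proba(X[[i, j]])[:, 1]
            wins += 1.0 if s_pos > s_neg else 0.5 if s_pos == s_neg else 0.0
    return wins / (len(pos) * len(neg))

# Synthetic two-class data with separated means.
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(0, 1, (10, 2)), rng.normal(2, 1, (10, 2))])
y = np.array([0] * 10 + [1] * 10)
print(leave_pair_out_auc(X, y))
```

Pairwise held-out comparisons use the data efficiently, which matters in studies of this size, where a single train/test split would leave too few participants per class.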
Result
Using an average of only 6.56 minutes of speech per participant, the ASRT system predicted Aβ positivity in the full sample (area under the receiver operating characteristic curve (AUC) = 0.77) and in diagnostic subsamples (MCI/mild AD: AUC = 0.82; CU: AUC = 0.71), as well as MCI/mild AD (AUC = 0.83) in the full sample. Simulation indicated that screening with the ASRT system could: (1) increase correct referrals (+8.5%) and reduce incorrect referrals (−59.1%) compared with the Mini‐Mental State Exam; and (2) enrich samples for Aβ positivity prior to PET scan (35.3% and 35.5% fewer scans required in MCI and CU individuals, respectively).
Conclusion
With the first disease‐modifying treatment for AD now available, there is an urgent need for improved screening to identify individuals at risk for AD dementia. The ASRT system is a brief, effective, automated, speech‐based cognitive assessment offering scalable screening for MCI/mild AD and amyloid beta biomarker positivity.
Introduction
Artificial intelligence (AI) systems leveraging speech and language changes could support timely detection of Alzheimer's disease (AD).
Methods
The AMYPRED study (NCT04828122) recruited 133 subjects with an established amyloid beta (Aβ) biomarker (66 Aβ+, 67 Aβ–) and clinical status (71 cognitively unimpaired (CU), 62 with mild cognitive impairment (MCI) or mild AD). Daily story recall tasks were administered via smartphones and analyzed with an AI system to predict MCI/mild AD and Aβ positivity.
Results
Eighty‐six percent of participants (115/133) completed remote assessments. The AI system predicted MCI/mild AD (area under the curve (AUC) = 0.85 ± 0.07) but not Aβ (AUC = 0.62 ± 0.11) in the full sample, and predicted Aβ in clinical subsamples (MCI/mild AD: AUC = 0.78 ± 0.14; CU: AUC = 0.74 ± 0.13) on short story variants (immediate recall). Long stories and delayed retellings delivered broadly similar results.
Discussion
Speech‐based testing offers simple and accessible screening for early‐stage AD.
Abstract
Early detection of Alzheimer’s disease is required to identify patients suitable for disease-modifying medications and to improve access to non-pharmacological preventative interventions. Prior research shows detectable changes in speech in Alzheimer’s dementia and its clinical precursors. The current study assesses whether a fully automated speech-based artificial intelligence system can detect cognitive impairment and amyloid beta positivity, which characterize early stages of Alzheimer’s disease. Two hundred participants (age 54–85, mean 70.6; 114 female, 86 male) from sister studies in the UK (NCT04828122) and the USA (NCT04928976) completed the same assessments and were combined in the current analyses. Participants were recruited from prior clinical trials in which amyloid beta status (97 amyloid positive, 103 amyloid negative, as established via PET or CSF test) and clinical diagnostic status (94 cognitively unimpaired, 106 with mild cognitive impairment or mild Alzheimer’s disease) were known. The automatic story recall task was administered during supervised in-person or telemedicine assessments, in which participants were asked to recall stories immediately and after a brief delay. An artificial intelligence text-pair evaluation model produced vector-based outputs from the original story text and the recorded and transcribed participant recalls, quantifying the differences between them. Vector-based representations were fed into logistic regression models, trained with tournament leave-pair-out cross-validation, to predict amyloid beta status (primary endpoint) and mild cognitive impairment and amyloid beta status in diagnostic subgroups (secondary endpoints). Predictions were assessed by the area under the receiver operating characteristic curve for the test result in comparison with reference standards (diagnostic and amyloid status).
Simulation analysis evaluated two potential benefits of speech-based screening: (i) mild cognitive impairment screening in primary care compared with the Mini-Mental State Exam, and (ii) pre-screening prior to PET scanning when identifying an amyloid positive sample. Speech-based screening predicted amyloid beta positivity (area under the curve = 0.77) and mild cognitive impairment or mild Alzheimer’s disease (area under the curve = 0.83) in the full sample, and predicted amyloid beta in subsamples (mild cognitive impairment or mild Alzheimer’s disease: area under the curve = 0.82; cognitively unimpaired: area under the curve = 0.71). Simulation analyses indicated that in primary care, speech-based screening could modestly improve detection of mild cognitive impairment (+8.5%), while reducing false positives (−59.1%). Furthermore, speech-based amyloid pre-screening was estimated to reduce the number of PET scans required by 35.3% and 35.5% in individuals with mild cognitive impairment and cognitively unimpaired individuals, respectively. Speech-based assessment offers accessible and scalable screening for mild cognitive impairment and amyloid beta positivity.
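The scan-reduction arithmetic behind the pre-screening simulation can be sketched with standard screening-test identities. The sensitivity, specificity, and prevalence values below are illustrative placeholders, not the study's estimates:

```python
def scans_saved(prevalence, sensitivity, specificity):
    """Expected fractional reduction in PET scans when a speech test
    pre-screens for amyloid positivity before confirmatory scanning.

    Without pre-screening, finding one amyloid-positive participant
    takes 1/prevalence scans on average; with pre-screening, only
    test-positives are scanned, so the expected scan count per
    confirmed positive is 1/PPV (positive predictive value) instead.
    """
    ppv = (sensitivity * prevalence) / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    )
    scans_without = 1.0 / prevalence  # scans per positive found, no screen
    scans_with = 1.0 / ppv            # scans per positive found, screened
    return 1.0 - scans_with / scans_without

# Illustrative: 50% amyloid prevalence, 80% sensitivity, 70% specificity.
print(scans_saved(0.5, 0.8, 0.7))
```

The saving grows with the specificity of the pre-screen, since false positives are what send amyloid-negative participants to unnecessary scans; screened-out true positives reduce yield rather than scan count in this simplified accounting.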
We introduce ParaBLEU, a paraphrase representation learning model and evaluation metric for text generation. Unlike previous approaches, ParaBLEU learns to understand paraphrasis using generative conditioning as a pretraining objective. ParaBLEU correlates more strongly with human judgements than existing metrics, obtaining new state-of-the-art results on the 2017 WMT Metrics Shared Task. We show that our model is robust to data scarcity, exceeding previous state-of-the-art performance using only \(50\%\) of the available training data and surpassing BLEU, ROUGE and METEOR with only \(40\) labelled examples. Finally, we demonstrate that ParaBLEU can be used to conditionally generate novel paraphrases from a single demonstration, which we use to confirm our hypothesis that it learns abstract, generalized paraphrase representations.
We propose a method for learning de-identified prosody representations from raw audio using a contrastive self-supervised signal. Whereas prior work has relied on conditioning models on bottlenecks, we introduce a set of inductive biases that exploit the natural structure of prosody to minimize timbral information and decouple prosody from speaker representations. Despite aggressive downsampling of the input and having no access to linguistic information, our model performs comparably to state-of-the-art speech representations on DAMMP, a new benchmark we introduce for spoken language understanding. We use minimum description length probing to show that our representations have selectively learned the subcomponents of non-timbral prosody, and that the product quantizer naturally disentangles them without using bottlenecks. We derive an information-theoretic definition of speech de-identifiability and use it to demonstrate that our prosody representations are less identifiable than other speech representations.