Parkinson’s disease (PD) is a debilitating neurodegenerative disorder. Its symptoms are typically treated with levodopa or dopamine receptor agonists, but these drugs lack specificity owing to the wide distribution of dopamine receptors in the central nervous system and periphery. Here, we report the development of a gene therapy strategy to selectively manipulate PD-affected circuitry. Targeting striatal D1 medium spiny neurons (MSNs), whose activity is chronically suppressed in PD, we engineered a therapeutic strategy comprising a highly efficient retrograde adeno-associated virus (AAV), promoter elements with strong D1-MSN activity, and a chemogenetic effector that enables precise D1-MSN activation after systemic ligand administration. This therapeutic approach rescues locomotion, tremor, and motor skill deficits in both mouse and primate models of PD, supporting the feasibility of targeted circuit-modulation tools for the treatment of PD in humans.
•AAV8R12 efficiently transduces striatal D1-, but not D2-, MSNs after nigral delivery
•G88P2/3/7 promoters derived from the GPR88 gene induce robust gene expression in MSNs
•Systemic ligand infusion selectively activates D1-MSNs after nigral AAV8R12/rM3Ds delivery
•A single dose of DCZ rescues PD symptoms in primates for at least 24 h without dyskinesia
An AAV-based circuit modulation approach that consists of a designer retrograde AAV capsid, a medium spiny neuron-enriched promoter, and a selected chemogenetic effector specifically modulates direct pathway neurons and rescues parkinsonian symptoms in mouse and macaque Parkinson’s disease models.
Although data-free incremental learning methods are memory-friendly, accurately estimating and counteracting representation shifts is challenging in the absence of historical data. This paper addresses this thorny problem by proposing a novel incremental learning method inspired by the human capacity for analogy. Specifically, we design an analogy-making mechanism that remaps new data onto an old class via prompt tuning: using only samples of the new classes, it mimics the feature distribution of the target old class on the old model. The learned prompts are then used to estimate and counteract the representation shift that fine-tuning induces in the historical prototypes. The proposed method sets a new state of the art on four incremental learning benchmarks under both the class- and domain-incremental settings. It consistently outperforms data-replay methods while saving only feature prototypes for each class, and it nearly reaches the empirical upper bound set by joint training on the Core50 benchmark. The code will be released at \url{https://github.com/ZhihengCV/A-Prompts}.
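The core prototype-correction idea in the abstract can be illustrated with a minimal, self-contained sketch. This is not the authors' implementation: the "models" below are stand-in linear maps, and the prompt-tuned analogy samples are simulated directly; in the actual method, prompts are optimized so that new-class inputs reproduce an old class's feature distribution on the frozen old model. The sketch shows only the drift-compensation step: the same analogy samples are passed through the old and the fine-tuned model, their mean feature difference estimates the representation shift, and that shift updates the stored prototype.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Hypothetical stand-ins for the frozen old model and the fine-tuned new model:
# each "model" is a linear map, with fine-tuning modeled as a small perturbation.
W_old = rng.normal(size=(dim, dim))
W_new = W_old + 0.1 * rng.normal(size=(dim, dim))

def features(W, x):
    return x @ W.T

# Analogy samples: in the paper, prompt tuning makes new-class inputs mimic the
# old class's feature distribution on the old model; here we simulate them.
analogy_inputs = rng.normal(size=(32, dim))

# Stored prototype of an old class = mean old-model feature (old data discarded).
prototype_old = features(W_old, analogy_inputs).mean(axis=0)

# Estimate the representation shift from the analogy samples alone, then
# counteract it by translating the historical prototype.
shift = (features(W_new, analogy_inputs)
         - features(W_old, analogy_inputs)).mean(axis=0)
prototype_corrected = prototype_old + shift

# Reference: the prototype the new model would produce on the same samples.
prototype_true = features(W_new, analogy_inputs).mean(axis=0)
print(np.allclose(prototype_corrected, prototype_true))
```

Because the correction adds the mean feature difference over one shared sample set, the corrected prototype matches the new-model prototype exactly in this toy setting; with real prompts, the quality of the estimate depends on how closely the analogy samples match the old class's feature distribution.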