  • Return of the normal distri...
    Hong, Yongwon; Mundt, Martin; Park, Sungho; Uh, Yungjung; Byun, Hyeran

    Neural Networks, October 2022, Volume 154
    Journal Article

    Learning continually from sequentially arriving data has been a long-standing challenge in machine learning. An emergent body of deep learning literature suggests various solutions, but only by introducing significant simplifications to the problem statement. As a consequence of a growing focus on particular tasks and their respective benchmark assumptions, these efforts are becoming increasingly tailored to specific settings. Approaches that leverage variational Bayesian techniques seem to provide a more general perspective on key continual learning mechanisms, yet they entail caveats of their own. Inspired by prior theoretical work on resolving the prevalent mismatch between the prior and the aggregate posterior in deep generative models, we return to a generic variational auto-encoder based formulation and investigate its utility for continual learning. Specifically, we adapt a two-stage training framework into a context-conditioned variant for continual learning, and formulate mechanisms to alleviate catastrophic forgetting through either generative rehearsal or well-motivated extraction of data exemplar subsets. Although the proposed generic two-stage variational auto-encoder is not tailored towards a particular task and allows for flexible amounts of supervision, we empirically demonstrate that it surpasses task-tailored methods in both supervised classification and unsupervised representation learning.
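
    As a rough illustration of the ideas sketched in the abstract, the following PyTorch snippet trains a context-conditioned variational auto-encoder with generative rehearsal: when a new task arrives, a frozen copy of the previous model replays samples for earlier contexts and mixes them into the current batch. This is a minimal sketch under assumed names (ContextVAE, vae_loss, train_task) and architecture choices, not the authors' implementation; in particular, the paper's two-stage training and exemplar-subset extraction are omitted.

        # Hypothetical sketch of a context-conditioned VAE with generative rehearsal.
        import copy
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class ContextVAE(nn.Module):
            """VAE whose encoder and decoder are conditioned on a one-hot task context."""
            def __init__(self, x_dim=784, z_dim=32, n_contexts=5, h_dim=256):
                super().__init__()
                self.z_dim, self.n_contexts = z_dim, n_contexts
                self.enc = nn.Sequential(nn.Linear(x_dim + n_contexts, h_dim), nn.ReLU())
                self.mu, self.logvar = nn.Linear(h_dim, z_dim), nn.Linear(h_dim, z_dim)
                self.dec = nn.Sequential(
                    nn.Linear(z_dim + n_contexts, h_dim), nn.ReLU(),
                    nn.Linear(h_dim, x_dim), nn.Sigmoid(),
                )

            def forward(self, x, c):
                h = self.enc(torch.cat([x, c], dim=1))
                mu, logvar = self.mu(h), self.logvar(h)
                z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
                return self.dec(torch.cat([z, c], dim=1)), mu, logvar

            @torch.no_grad()
            def sample(self, n, context_id):
                """Generate rehearsal data for a previously seen context."""
                c = F.one_hot(torch.full((n,), context_id), self.n_contexts).float()
                z = torch.randn(n, self.z_dim)
                return self.dec(torch.cat([z, c], dim=1)), c

        def vae_loss(x_hat, x, mu, logvar):
            # Reconstruction term plus KL divergence to the standard normal prior.
            rec = F.binary_cross_entropy(x_hat, x, reduction="sum")
            kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
            return rec + kld

        def train_task(model, loader, task_id, epochs=1, lr=1e-3):
            """Train on the current task while replaying samples from a frozen snapshot."""
            old = copy.deepcopy(model).eval() if task_id > 0 else None
            opt = torch.optim.Adam(model.parameters(), lr=lr)
            for _ in range(epochs):
                for x, _ in loader:
                    x = x.view(x.size(0), -1)
                    c = F.one_hot(torch.full((x.size(0),), task_id), model.n_contexts).float()
                    if old is not None:  # generative rehearsal of earlier contexts
                        for prev in range(task_id):
                            x_r, c_r = old.sample(max(1, x.size(0) // task_id), prev)
                            x, c = torch.cat([x, x_r]), torch.cat([c, c_r])
                    x_hat, mu, logvar = model(x, c)
                    loss = vae_loss(x_hat, x, mu, logvar) / x.size(0)
                    opt.zero_grad(); loss.backward(); opt.step()

    In this sketch the frozen snapshot of the previous model stands in for stored data from earlier tasks, which is the essence of generative rehearsal: past contexts are revisited through samples drawn from the model itself rather than from a memory buffer.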