Peer-reviewed · Open access
  • Distributed Learning in Non-Convex Environments, Part II: Polynomial Escape From Saddle Points
    Vlaski, Stefan; Sayed, Ali H.

    IEEE Transactions on Signal Processing, 2021, Volume: 69
    Journal Article

    The diffusion strategy for distributed learning from streaming data employs local stochastic gradient updates along with exchange of iterates over neighborhoods. In Part I [3] of this work we established that agents cluster around a network centroid and proceeded to study the dynamics of this point. We established expected descent in non-convex environments in the large-gradient regime and introduced a short-term model to examine the dynamics over finite-time horizons. Using this model, we establish in this work that the diffusion strategy is able to escape from strict saddle-points in O(1/μ) iterations, where μ denotes the step-size; it is also able to return approximately second-order stationary points in a polynomial number of iterations. Relative to prior works on the polynomial escape from saddle-points, most of which focus on centralized perturbed or stochastic gradient descent, our approach requires less restrictive conditions on the gradient noise process.
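
    To make the opening sentence concrete, below is a minimal NumPy sketch of one adapt-then-combine (ATC) form of the diffusion strategy: each agent takes a local stochastic-gradient step ("adapt") and then averages iterates over its neighborhood ("combine"). The ring topology, quadratic local risks, combination weights, step-size and gradient-noise level are illustrative assumptions, not the setup analyzed in the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    K, d, mu, iters = 5, 3, 0.01, 2000       # agents, dimension, step-size, iterations

    # Doubly-stochastic combination matrix over an assumed ring topology.
    A = np.zeros((K, K))
    for k in range(K):
        A[k, k] = 0.5
        A[k, (k - 1) % K] = 0.25
        A[k, (k + 1) % K] = 0.25

    # Each agent k has a toy quadratic local risk 0.5 * ||w - targets[k]||^2
    # (hypothetical objective chosen only to make the sketch runnable).
    targets = rng.normal(size=(K, d))

    def stochastic_grad(k, w):
        # Noisy gradient of agent k's local risk (additive Gaussian gradient noise).
        return (w - targets[k]) + 0.1 * rng.normal(size=d)

    W = np.zeros((K, d))                     # one iterate per agent (rows)
    for _ in range(iters):
        # Adapt: local stochastic-gradient update at every agent.
        Psi = np.array([W[k] - mu * stochastic_grad(k, W[k]) for k in range(K)])
        # Combine: exchange iterates and average over each neighborhood.
        W = A @ Psi

    print("network centroid:", W.mean(axis=0))
    print("aggregate-risk minimizer:", targets.mean(axis=0))

    In this toy convex example the agents' iterates cluster around the network centroid, which tracks the minimizer of the aggregate risk; the abstract's results concern the harder non-convex case, where the same dynamics escape strict saddle-points in O(1/μ) iterations.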