Optimization

Most machine learning algorithms build an optimization model and learn its parameters from the given data. Thus, developing effective and efficient optimization methods is essential.


Publications

  • A View of Mini-batch SGD via Generating Functions: Conditions of Convergence, Phase Transitions, Benefit from Negative Momenta

    Optimization
    Maksim Velikanov
    Denis Kuznedelev
    Dmitry Yarotsky
    ICLR, 2023

    Mini-batch SGD with momentum is a fundamental algorithm for learning large predictive models. In this paper we develop a new analytic framework to analyze noise-averaged properties of mini-batch SGD for linear models at constant learning rates, momenta, and batch sizes. Our key idea is to consider the dynamics of the second moments of model parameters for a special family of "Spectrally Expressible" approximations. This allows us to obtain an explicit expression for the generating function of the sequence of loss values. By analyzing this generating function, we find, in particular, that 1) the SGD dynamics exhibits several convergent and divergent regimes depending on the spectral distributions of the problem; 2) the convergent regimes admit explicit stability conditions, and explicit loss asymptotics in the case of power-law spectral distributions; 3) the optimal convergence rate can be achieved at negative momenta. We verify our theoretical predictions by extensive experiments with MNIST and synthetic problems, and find good quantitative agreement. (A simplified sketch of this mini-batch SGD setting appears after the publication list.)

  • Decentralized Local Stochastic Extra-Gradient for Variational Inequalities

    Optimization, Machine learning theory
    Aleksandr Beznosikov
    Pavel Dvurechensky
    Anastasia Koloskova
    Valentin Samokhin
    Sebastian U. Stich
    Alexander Gasnikov
    NeurIPS, 2022

    We consider distributed stochastic variational inequalities (VIs) on unbounded domains, where the problem data is heterogeneous (non-IID) and distributed across many devices. We make a very general assumption on the computational network that, in particular, covers the settings of fully decentralized calculations with time-varying networks and the centralized topologies commonly used in Federated Learning. Moreover, multiple local updates can be made on the workers to reduce the communication frequency between them. We extend the stochastic extragradient method to this very general setting and theoretically analyze its convergence rate in the strongly monotone, monotone, and non-monotone (when a Minty solution exists) settings. The provided rates explicitly exhibit the dependence on network characteristics (e.g., mixing time), iteration counter, data heterogeneity, variance, number of devices, and other standard parameters. As a special case, our method and analysis apply to distributed stochastic saddle-point problems (SPPs), e.g., to the training of Deep Generative Adversarial Networks (GANs), for which decentralized training has been reported to be extremely challenging. In experiments on the decentralized training of GANs, we demonstrate the effectiveness of our proposed approach. (A simplified single-node sketch of the extragradient step appears after the publication list.)

  • Optimal Gradient Sliding and its Application to Distributed Optimization Under Similarity

    Optimization, Machine learning theory
    Dmitry Kovalev
    Aleksandr Beznosikov
    Ekaterina Borodich
    Alexander Gasnikov
    Gesualdo Scutari
    NeurIPS, 2022

    We study structured convex optimization problems with additive objective r := p + q, where r is (μ-strongly) convex, q is Lq-smooth and convex, and p is Lp-smooth, possibly nonconvex. For such a class of problems, we propose an inexact accelerated gradient sliding method that can skip the gradient computation for one of these components while still achieving optimal complexity of gradient calls of p and q, that is, O(√(Lp/μ)) and O(√(Lq/μ)), respectively. This result is much sharper than the classic black-box complexity O(√((Lp+Lq)/μ)), especially when the difference between Lp and Lq is large. We then apply the proposed method to solve distributed optimization problems over master-worker architectures under agents' function similarity, due to statistical data similarity or otherwise. The distributed algorithm achieves for the first time lower complexity bounds on both communication and local gradient calls, the former having been a long-standing open problem. Finally, the method is extended to distributed saddle-point problems (under function similarity) by means of solving a class of variational inequalities, achieving lower communication and computation complexity bounds. (A simplified, non-accelerated sketch of the gradient-skipping idea appears after the publication list.)
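
For illustration, the setting analyzed in "A View of Mini-batch SGD via Generating Functions" can be sketched as heavy-ball SGD on a linear least-squares model with a constant learning rate, a constant (possibly negative) momentum, and a fixed batch size. The function name, hyperparameter values, and loss definition below are illustrative assumptions, not the paper's framework or notation:

```python
import numpy as np

def sgd_momentum_linear(X, y, lr=0.1, beta=-0.2, batch_size=32, steps=1000, seed=0):
    """Mini-batch SGD with heavy-ball momentum on a linear least-squares model.

    A negative `beta` corresponds to the "negative momentum" regime discussed
    in the abstract; all hyperparameter values here are illustrative only.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)      # model parameters
    v = np.zeros(d)      # momentum buffer
    losses = []
    for _ in range(steps):
        idx = rng.integers(0, n, size=batch_size)         # sample a mini-batch
        Xb, yb = X[idx], y[idx]
        grad = Xb.T @ (Xb @ w - yb) / batch_size          # mini-batch gradient
        v = beta * v - lr * grad                          # heavy-ball momentum update
        w = w + v
        losses.append(0.5 * np.mean((X @ w - y) ** 2))    # full-data quadratic loss
    return w, losses
```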
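
The extragradient step extended in "Decentralized Local Stochastic Extra-Gradient for Variational Inequalities" can be sketched, in its simplest single-node stochastic form, as an extrapolation followed by an update at the extrapolated point. The paper's decentralized communication, local updates, and time-varying mixing are omitted here, and all names and step sizes are assumptions for illustration:

```python
import numpy as np

def stochastic_extragradient(oracle, z0, lr=0.05, steps=500, seed=0):
    """Single-node stochastic extragradient for a variational inequality with
    operator F: find z* such that <F(z*), z - z*> >= 0 for all feasible z.

    `oracle(z, rng)` must return an unbiased stochastic estimate of F(z).
    Only the core extrapolate-then-update step is shown.
    """
    rng = np.random.default_rng(seed)
    z = np.asarray(z0, dtype=float)
    for _ in range(steps):
        z_half = z - lr * oracle(z, rng)       # extrapolation (look-ahead) step
        z = z - lr * oracle(z_half, rng)       # update using the look-ahead point
    return z

# Toy example: the bilinear saddle point min_x max_y x*y has operator F(x, y) = (y, -x).
def bilinear_oracle(z, rng, noise=0.1):
    x, y = z
    return np.array([y, -x]) + noise * rng.standard_normal(2)

z_approx = stochastic_extragradient(bilinear_oracle, z0=[1.0, 1.0])  # ends up close to (0, 0)
```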
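
Finally, the gradient-skipping idea behind "Optimal Gradient Sliding" can be conveyed by a deliberately simplified, non-accelerated sketch in which the expensive gradient of p is frozen while several inner steps query only the gradient of q. The paper's actual method is an inexact accelerated sliding scheme with different step rules and guarantees, so the code below is a sketch of the idea only, with all names and step counts assumed for illustration:

```python
import numpy as np

def gradient_sliding_sketch(grad_p, grad_q, x0, lr=0.01, outer_steps=100, inner_steps=10):
    """Simplified illustration of "sliding" for r(x) = p(x) + q(x): the
    expensive gradient of p is computed once per outer iteration and reused
    while the inner loop re-evaluates only grad_q.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(outer_steps):
        gp = grad_p(x)                        # expensive gradient, evaluated rarely
        for _ in range(inner_steps):
            x = x - lr * (gp + grad_q(x))     # only grad_q is re-evaluated here
    return x
```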