Publications

Explore our scientific papers on fundamental problems in machine learning
Showing 5 of 194 publications
  • Decentralized Local Stochastic Extra-Gradient for Variational Inequalities

    Optimization, Machine learning theory
    Aleksandr Beznosikov
    Pavel Dvurechensky
    Anastasia Koloskova
    Valentin Samokhin
    Sebastian U. Stich
    Alexander Gasnikov
    NeurIPS, 2022

    We consider distributed stochastic variational inequalities (VIs) on unbounded domains with problem data that is heterogeneous (non-IID) and distributed across many devices. We make a very general assumption on the computational network that, in particular, covers the settings of fully decentralized calculations with time-varying networks and the centralized topologies commonly used in Federated Learning. Moreover, multiple local updates can be made on the workers to reduce the communication frequency between them. We extend the stochastic extragradient method to this very general setting and theoretically analyze its convergence rate in the strongly monotone, monotone, and non-monotone (when a Minty solution exists) settings. The provided rates explicitly exhibit the dependence on network characteristics (e.g., mixing time), iteration counter, data heterogeneity, variance, number of devices, and other standard parameters. As a special case, our method and analysis apply to distributed stochastic saddle-point problems (SPPs), e.g., to the training of Deep Generative Adversarial Networks (GANs), for which decentralized training has been reported to be extremely challenging. In experiments on the decentralized training of GANs, we demonstrate the effectiveness of our proposed approach.
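
    The core update behind the method is the extragradient step for a variational-inequality operator F. Below is a minimal single-node NumPy sketch on a toy bilinear saddle-point problem min_x max_y x^T A y; the decentralized, local-update version analyzed in the paper adds communication over the network and multiple local steps on top of this, and the problem, step size, and iteration count here are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      A = rng.standard_normal((5, 5))   # coupling matrix of a toy bilinear saddle-point problem
      gamma = 0.1                       # step size (illustrative; should satisfy gamma < 1 / ||A||_2)

      def F(x, y):
          # VI operator of min_x max_y x^T A y: (grad wrt x, minus grad wrt y)
          return A @ y, -A.T @ x

      x, y = rng.standard_normal(5), rng.standard_normal(5)
      for _ in range(1000):
          gx, gy = F(x, y)                              # 1) extrapolation ("look-ahead") step
          x_half, y_half = x - gamma * gx, y - gamma * gy
          gx, gy = F(x_half, y_half)                    # 2) update uses the look-ahead operator value
          x, y = x - gamma * gx, y - gamma * gy

      # Operator residual; it decreases toward zero as the iterates approach the solution (0, 0).
      print(np.linalg.norm(np.concatenate(F(x, y))))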

  • Optimal Gradient Sliding and its Application to Distributed Optimization Under Similarity

    Optimization, Machine learning theory
    Dmitry Kovalev
    Aleksandr Beznosikov
    Ekaterina Borodich
    Alexander Gasnikov
    Gesualdo Scutari
    NeurIPS, 2022

    We study structured convex optimization problems with additive objective r := p + q, where r is (μ-strongly) convex, q is Lq-smooth and convex, and p is Lp-smooth, possibly nonconvex. For such a class of problems, we propose an inexact accelerated gradient sliding method that can skip the gradient computation for one of these components while still achieving optimal complexity of gradient calls of p and q, that is, O(√(Lp/μ)) and O(√(Lq/μ)), respectively. This result is much sharper than the classic black-box complexity O(√((Lp+Lq)/μ)), especially when the difference between Lp and Lq is large. We then apply the proposed method to solve distributed optimization problems over master-worker architectures under agents' function similarity, due to statistical data similarity or otherwise. The distributed algorithm achieves for the first time lower complexity bounds on both communication and local gradient calls, with the former having been a long-standing open problem. Finally, the method is extended to distributed saddle-point problems (under function similarity) by means of solving a class of variational inequalities, achieving lower communication and computation complexity bounds.
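
    In the abstract's notation, the claimed separation can be written compactly as follows (constants and logarithmic factors omitted; this is only a restatement of the quantities above, not an additional result):

      \min_{x}\; r(x) := p(x) + q(x), \quad
      r \ \mu\text{-strongly convex}, \;
      p \ L_p\text{-smooth}, \;
      q \ L_q\text{-smooth and convex},

      \#\{\nabla p \text{ calls}\} = O\!\big(\sqrt{L_p/\mu}\big), \qquad
      \#\{\nabla q \text{ calls}\} = O\!\big(\sqrt{L_q/\mu}\big),
      \qquad \text{vs. the black-box } O\!\big(\sqrt{(L_p + L_q)/\mu}\big) \text{ calls to both.}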

  • Optimal Algorithms for Decentralized Stochastic Variational Inequalities

    Optimization, Machine learning theory
    Dmitry Kovalev
    Aleksandr Beznosikov
    Abdurakhmon Sadiev
    Michael Persiianov
    Peter Richtárik
    Alexander Gasnikov
    NeurIPS, 2022

    Variational inequalities are a formalism that includes games, minimization, saddle point, and equilibrium problems as special cases. Methods for variational inequalities are therefore universal approaches for many applied tasks, including machine learning problems. This work concentrates on the decentralized setting, which is increasingly important but not well understood. In particular, we consider decentralized stochastic (sum-type) variational inequalities over fixed and time-varying networks. We present lower complexity bounds for both communication and local iterations and construct optimal algorithms that match these lower bounds. Our algorithms are the best in the available literature not only in the decentralized stochastic case, but also in the decentralized deterministic and non-distributed stochastic cases. Experimental results confirm the effectiveness of the presented algorithms.
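
    For reference, the variational-inequality (VI) formalism mentioned above can be stated as: given an operator F and a feasible set Z, find z* such that

      \langle F(z^*),\, z - z^* \rangle \ge 0 \quad \text{for all } z \in \mathcal{Z}.

    Minimization of a function f is recovered with F = \nabla f, and the saddle-point problem \min_x \max_y g(x, y) corresponds to F(x, y) = (\nabla_x g(x, y),\, -\nabla_y g(x, y)); this is how games and equilibrium problems enter as special cases.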

  • Distributed Methods with Compressed Communication for Solving Variational Inequalities, with Theoretical Guarantees

    Optimization, Machine learning theory
    Aleksandr Beznosikov
    Peter Richtárik
    Michael Diskin
    Max Ryabinin
    Alexander Gasnikov
    NeurIPS, 2022

    Variational inequalities in general, and saddle point problems in particular, are increasingly relevant in machine learning applications, including adversarial learning, GANs, transport, and robust optimization. With the increasing data and problem sizes necessary to train high-performing models across various applications, we need to rely on parallel and distributed computing. However, in distributed training, communication among the compute nodes is a key bottleneck, and this problem is exacerbated for high-dimensional and over-parameterized models. Due to these considerations, it is important to equip existing methods with strategies that reduce the volume of transmitted information during training while obtaining a model of comparable quality. In this paper, we present the first theoretically grounded distributed methods for solving variational inequalities and saddle point problems using compressed communication: MASHA1 and MASHA2. Our theory and methods allow for the use of both unbiased (such as Randk; MASHA1) and contractive (such as Topk; MASHA2) compressors. The new algorithms support bidirectional compression and can also be modified for the stochastic setting with batches and for federated learning with partial participation of clients. We empirically validate our conclusions using two experimental setups: a standard bilinear min-max problem and large-scale distributed adversarial training of transformers.
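
    A minimal NumPy sketch of the two compressor families named above: Rand-k is unbiased after rescaling by d/k (the unbiased type paired with MASHA1), while Top-k keeps the largest-magnitude entries and is contractive but biased (the type paired with MASHA2). The vector dimension and k below are illustrative assumptions, not the paper's settings.

      import numpy as np

      rng = np.random.default_rng(0)

      def rand_k(v, k):
          # Unbiased Rand-k compressor: keep k random coordinates, rescale by d/k
          # so that the compressed vector equals v in expectation.
          d = v.size
          idx = rng.choice(d, size=k, replace=False)
          out = np.zeros_like(v)
          out[idx] = v[idx] * (d / k)
          return out

      def top_k(v, k):
          # Contractive (biased) Top-k compressor: keep the k largest-magnitude coordinates.
          out = np.zeros_like(v)
          idx = np.argpartition(np.abs(v), -k)[-k:]
          out[idx] = v[idx]
          return out

      g = rng.standard_normal(10)
      print(rand_k(g, 3))   # sparse; unbiased in expectation
      print(top_k(g, 3))    # sparse; keeps the largest entries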

  • On Embeddings for Numerical Features in Tabular Deep Learning

    Tabular data
    Yury Gorishniy
    Ivan Rubachev
    Artem Babenko
    NeurIPS, 2022

    Recently, Transformer-like deep architectures have shown strong performance on tabular data problems. Unlike traditional models, e.g., MLP, these architectures map scalar values of numerical features to high-dimensional embeddings before mixing them in the main backbone. In this work, we argue that embeddings for numerical features are an underexplored degree of freedom in tabular DL, which allows constructing more powerful DL models and competing with gradient boosted decision trees (GBDT) on some GBDT-friendly benchmarks (that is, where GBDT outperforms conventional DL models). We start by describing two conceptually different approaches to building embedding modules: the first one is based on a piecewise linear encoding of scalar values, and the second one utilizes periodic activations. Then, we empirically demonstrate that these two approaches can lead to significant performance boosts compared to the embeddings based on conventional blocks such as linear layers and ReLU activations. Importantly, we also show that embedding numerical features is beneficial for many backbones, not only for Transformers. Specifically, after proper embeddings, simple MLP-like models can perform on par with the attention-based architectures. Overall, we highlight embeddings for numerical features as an important design aspect with good potential for further improvements in tabular DL.
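
    A minimal NumPy sketch of the two embedding ideas described above. The bin boundaries and frequency initialization are illustrative placeholders; the paper's exact parameterization (e.g., quantile- or tree-based bins, trainable frequencies, and subsequent linear layers) may differ.

      import numpy as np

      rng = np.random.default_rng(0)

      def piecewise_linear_encoding(x, bins):
          # Encode a scalar with bin boundaries b_0 < ... < b_T: component t is 1 if x has
          # passed bin t, a fraction in (0, 1) if x falls inside bin t, and 0 otherwise.
          lo, hi = bins[:-1], bins[1:]
          frac = (x - lo) / (hi - lo)
          return np.clip(frac, 0.0, 1.0)

      def periodic_embedding(x, freqs):
          # Map a scalar to [sin(2*pi*c_i*x), cos(2*pi*c_i*x)] over a set of frequencies c_i
          # (trainable in the paper; fixed random values here).
          angles = 2.0 * np.pi * freqs * x
          return np.concatenate([np.sin(angles), np.cos(angles)])

      x = 0.37                                   # a single (normalized) numerical feature value
      bins = np.linspace(0.0, 1.0, 9)            # 8 bins; in practice derived from the data
      freqs = rng.normal(0.0, 1.0, size=8)       # placeholder frequency scale

      print(piecewise_linear_encoding(x, bins))  # 8-dimensional piecewise linear code
      print(periodic_embedding(x, freqs))        # 16-dimensional periodic embedding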
