Tabular data

Tabular data consists of two-dimensional tables with objects (rows) and features (columns), which arise in numerous applied tasks such as classification, regression, ranking, and many others.

Publications

  • TabR: Tabular Deep Learning Meets Nearest Neighbors

    Tabular data
    Yury Gorishniy
    Ivan Rubachev
    Nikolay Kartashev
    Daniil Shlenskii
    Akim Kotelnikov
    Artem Babenko
    ICLR, 2024

Deep learning (DL) models for tabular data problems (e.g. classification, regression) are currently receiving increasing attention from researchers. However, despite the recent efforts, non-DL algorithms based on gradient-boosted decision trees (GBDT) remain a strong go-to solution for these problems. One of the research directions aimed at improving the position of tabular DL involves designing so-called retrieval-augmented models. For a target object, such models retrieve other objects (e.g. the nearest neighbors) from the available training data and use their features and labels to make a better prediction.

In this work, we present TabR: essentially, a feed-forward network with a custom k-Nearest-Neighbors-like component in the middle. On a set of public benchmarks with datasets of up to several million objects, TabR marks a big step forward for tabular DL: it demonstrates the best average performance among tabular DL models, sets a new state of the art on several datasets, and even outperforms GBDT models on the recently proposed “GBDT-friendly” benchmark. Among the findings and technical details powering TabR, the main one is the attention-like mechanism that retrieves the nearest neighbors and extracts a valuable signal from them. In addition to its higher performance, TabR is simple and significantly more efficient than prior retrieval-based tabular DL models.
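
    To make the retrieval idea concrete, below is a minimal, illustrative sketch of an attention-like retrieval step over training candidates. The function name, tensor shapes, and dot-product similarity are our assumptions for illustration, not the paper's actual implementation.

        import torch
        import torch.nn.functional as F

        def retrieve_context(query, cand_keys, cand_labels, k=16):
            # query:       (d,)   encoded target object
            # cand_keys:   (n, d) encoded training objects
            # cand_labels: (n, c) label embeddings of the training objects
            sims = cand_keys @ query              # similarity to every candidate
            top_sims, top_idx = sims.topk(k)      # keep the k nearest neighbors
            weights = F.softmax(top_sims, dim=0)  # attention-like weights
            # Aggregate the neighbors' label information into a context vector
            # that a feed-forward backbone can then consume.
            return weights @ cand_labels[top_idx]  # (c,)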

  • TabDDPM: Modelling Tabular Data with Diffusion Models

    Tabular data · Generative models
    Akim Kotelnikov
    Dmitry Baranchuk
    Ivan Rubachev
    Artem Babenko
    ICML, 2023

    Denoising diffusion probabilistic models are becoming the leading generative modeling paradigm for many important data modalities. Most prevalent in the computer vision community, diffusion models have recently gained some attention in other domains, including speech, NLP, and graph-like data. In this work, we investigate whether the framework of diffusion models can be advantageous for general tabular problems, where data points are typically represented by vectors of heterogeneous features. The inherent heterogeneity of tabular data makes accurate modeling quite challenging, since the individual features can be of completely different natures, i.e., some can be continuous and some can be discrete. To address such data types, we introduce TabDDPM, a diffusion model that can be universally applied to any tabular dataset and handles any feature types. We extensively evaluate TabDDPM on a wide set of benchmarks and demonstrate its superiority over existing GAN/VAE alternatives, which is consistent with the advantage of diffusion models in other fields.
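
    As a point of reference for how diffusion applies to the continuous features of a table, here is a minimal sketch of the standard DDPM forward (noising) step. It illustrates the general mechanism diffusion-based tabular models build on; it is not TabDDPM's code, and the handling of discrete features (which TabDDPM also supports) is omitted.

        import torch

        def gaussian_forward_step(x0, t, alphas_cumprod):
            # Standard DDPM forward process q(x_t | x_0) for continuous features:
            # x_t = sqrt(a_bar_t) * x_0 + sqrt(1 - a_bar_t) * noise
            a_bar = alphas_cumprod[t]       # cumulative noise schedule at step t
            noise = torch.randn_like(x0)
            xt = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise
            return xt, noise                # the model learns to predict `noise`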

  • On Embeddings for Numerical Features in Tabular Deep Learning

    Tabular data
    Yury Gorishniy
    Ivan Rubachev
    Artem Babenko
    NeurIPS, 2022

    Recently, Transformer-like deep architectures have shown strong performance on tabular data problems. Unlike traditional models, e.g., MLP, these architectures map scalar values of numerical features to high-dimensional embeddings before mixing them in the main backbone. In this work, we argue that embeddings for numerical features are an underexplored degree of freedom in tabular DL that allows constructing more powerful DL models and competing with gradient-boosted decision trees (GBDT) on some GBDT-friendly benchmarks (that is, where GBDT outperforms conventional DL models). We start by describing two conceptually different approaches to building embedding modules: the first is based on a piecewise linear encoding of scalar values, and the second utilizes periodic activations. Then, we empirically demonstrate that these two approaches can lead to significant performance boosts compared to embeddings based on conventional blocks such as linear layers and ReLU activations. Importantly, we also show that embedding numerical features is beneficial for many backbones, not only for Transformers. Specifically, with proper embeddings, simple MLP-like models can perform on par with attention-based architectures. Overall, we highlight embeddings for numerical features as an important design aspect with good potential for further improvements in tabular DL.
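
    As an illustration of the periodic approach, here is a minimal sketch of a periodic embedding module for a scalar feature. The class name, initialization, and exact parameterization are assumptions for illustration rather than the paper's reference code.

        import torch
        import torch.nn as nn

        class PeriodicEmbedding(nn.Module):
            # Maps a scalar x to [sin(2*pi*c_i*x), cos(2*pi*c_i*x)] with
            # trainable frequencies c_i, in the spirit of the
            # periodic-activation embeddings described above.
            def __init__(self, n_frequencies=8, sigma=1.0):
                super().__init__()
                self.freq = nn.Parameter(sigma * torch.randn(n_frequencies))

            def forward(self, x):  # x: (batch,) raw feature values
                v = 2 * torch.pi * x[:, None] * self.freq  # (batch, n_frequencies)
                return torch.cat([torch.sin(v), torch.cos(v)], dim=-1)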

Datasets

  • Shifts Dataset

    Distributional shift · Uncertainty estimation · Tabular data · Machine translation · Natural language processing
    Andrey Malinin
    Neil Band
    Yarin Gal
    Mark J. F. Gales
    Alexander Ganshin
    German Chesnokov
    Alexey Noskov
    Andrey Ploskonosov
    Liudmila Prokhorenkova
    Ivan Provilkov
    Vatsal Raina
    Vyas Raina
    Denis Roginskiy
    Mariya Shmatova
    Panos Tigas
    Boris Yangel

    The Shifts Dataset contains curated and labeled examples of real, 'in-the-wild' distributional shifts across three large-scale tasks. Specifically, it contains the data for the tabular weather prediction, machine translation, and vehicle motion prediction tasks used in the Shifts Challenge 2021. Dataset shift is ubiquitous in all of these tasks and modalities.