Natural language processing

Language is one of the key forms of communication. We study methods of language representation and understanding to simplify human-computer interactions.

Publications

  • CrowdSpeech and Vox DIY: Benchmark Dataset for Crowdsourced Audio Transcription

    Speech processing, Natural language processing
    Nikita Pavlichenko
    Ivan Stelmakh
    Dmitry Ustalov
    NeurIPS Benchmarks, 2021

    Domain-specific data is the crux of the successful transfer of machine learning systems from benchmarks to real life. In simple problems such as image classification, crowdsourcing has become one of the standard tools for cheap and time-efficient data collection, thanks in large part to advances in research on aggregation methods. However, the applicability of crowdsourcing to more complex tasks (e.g., speech recognition) remains limited due to the lack of principled aggregation methods for these modalities. The main obstacle to designing aggregation methods for more advanced applications is the absence of training data, and in this work, we focus on bridging this gap in speech recognition. To this end, we collect and release CrowdSpeech, the first publicly available large-scale dataset of crowdsourced audio transcriptions. Evaluation of existing and novel aggregation methods on our data shows room for improvement, suggesting that our work may entail the design of better algorithms. At a higher level, we also contribute to the more general challenge of developing the methodology for reliable data collection via crowdsourcing. To that end, we design a principled pipeline for constructing datasets of crowdsourced audio transcriptions in any novel domain, and we show its applicability on an under-resourced language by constructing VoxDIY, a counterpart of CrowdSpeech for the Russian language. We also release the code that allows a full replication of our data collection pipeline and share various insights on best practices of data collection via crowdsourcing.
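
    The aggregation problem described above can be made concrete with a toy baseline (not one of the paper's methods): given several workers' transcriptions of the same recording, select the version with the smallest total word-level edit distance to all the others. A minimal, self-contained Python sketch:

    ```python
    def word_edit_distance(ref: list, hyp: list) -> int:
        """Levenshtein distance over word tokens (insert/delete/substitute cost 1)."""
        d = list(range(len(hyp) + 1))
        for i, r in enumerate(ref, 1):
            prev, d[0] = d[0], i
            for j, h in enumerate(hyp, 1):
                prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (r != h))
        return d[-1]

    def medoid_transcription(candidates: list) -> str:
        """Pick the transcription closest, in total word edit distance,
        to all other workers' versions -- a simple consensus baseline."""
        tokenized = [c.split() for c in candidates]
        costs = [sum(word_edit_distance(a, b) for b in tokenized) for a in tokenized]
        return candidates[costs.index(min(costs))]

    versions = [
        "the quick brown fox jumps over the lazy dog",
        "the quick brown fox jumped over a lazy dog",
        "quick brown fox jumps over the lazy dog",
    ]
    print(medoid_transcription(versions))  # -> the first version
    ```

    Stronger aggregation methods merge the workers' versions word by word instead of selecting one whole answer; training and evaluating that class of algorithms is what the released datasets are meant to enable.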

  • Shifts: A Dataset of Real Distributional Shift Across Multiple Large-Scale Tasks

    Machine translation, Natural language processing, Tabular data, Distributional shift, Uncertainty estimation
    Andrey Malinin
    Neil Band
    Yarin Gal
    Mark J. F. Gales
    Alexander Ganshin
    German Chesnokov
    Alexey Noskov
    Andrey Ploskonosov
    Liudmila Prokhorenkova
    Ivan Provilkov
    Vatsal Raina
    Vyas Raina
    Denis Roginskiy
    Mariya Shmatova
    Panos Tigas
    Boris Yangel
    NeurIPS Benchmarks, 2021

    Published at the NeurIPS Datasets and Benchmarks Track.

    Significant research has been done on developing methods for improving robustness to distributional shift and for uncertainty estimation. In contrast, only limited work has examined developing standard datasets and benchmarks for assessing these approaches. Additionally, most work on uncertainty estimation and robustness has developed new techniques based on small-scale regression or image classification tasks. However, many tasks of practical interest have different modalities, such as tabular data, audio, text, or sensor data, which offer significant challenges involving regression and discrete or continuous structured prediction. Thus, given the current state of the field, a standardized large-scale dataset of tasks across a range of modalities affected by distributional shifts is necessary. This will enable researchers to meaningfully evaluate the plethora of recently developed uncertainty quantification methods, as well as assessment criteria and state-of-the-art baselines. In this work, we propose the Shifts Dataset for evaluation of uncertainty estimates and robustness to distributional shift. The dataset, which has been collected from industrial sources and services, is composed of three tasks, each corresponding to a particular data modality: tabular weather prediction, machine translation, and self-driving car (SDC) vehicle motion prediction. All of these data modalities and tasks are affected by real, 'in-the-wild' distributional shifts and pose interesting challenges with respect to uncertainty estimation. In this work, we provide a description of the dataset and baseline results for all tasks.
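
    A standard family of uncertainty measures used when benchmarking ensembles under distributional shift (shown here for illustration; this is not the benchmark's official scoring code) decomposes total predictive uncertainty into data uncertainty and knowledge uncertainty. A minimal NumPy sketch:

    ```python
    import numpy as np

    def entropy(p, axis=-1):
        """Shannon entropy in nats; clipping avoids log(0)."""
        p = np.clip(p, 1e-12, 1.0)
        return -(p * np.log(p)).sum(axis=axis)

    def uncertainty_decomposition(member_probs):
        """member_probs: (n_members, n_classes) predictive distributions
        produced by an ensemble for a single input."""
        total = entropy(member_probs.mean(axis=0))    # entropy of the mean
        data = entropy(member_probs).mean()           # mean member entropy
        knowledge = total - data                      # mutual information
        return total, data, knowledge

    # Members that agree -> low knowledge uncertainty; disagreement -> high.
    agree = np.array([[0.9, 0.05, 0.05]] * 5)
    disagree = np.array([[0.9, 0.05, 0.05], [0.05, 0.9, 0.05], [0.05, 0.05, 0.9]])
    print(uncertainty_decomposition(agree))
    print(uncertainty_decomposition(disagree))
    ```

    The knowledge-uncertainty term (mutual information) grows when ensemble members disagree, which is the behaviour one hopes to observe on shifted inputs.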

  • Scaling Ensemble Distribution Distillation to Many Classes with Proxy Targets

    Natural language processing, Machine translation, Speech processing, Computer vision, Optimization, Distributional shift, Uncertainty estimation, Probabilistic machine learning
    Max Ryabinin
    Andrey Malinin
    Mark Gales
    NeurIPS, 2021

    Ensembles of machine learning models yield improved system performance as well as robust and interpretable uncertainty estimates; however, their inference costs can be prohibitively high. Ensemble Distribution Distillation (EnD²) is an approach that allows a single model to efficiently capture both the predictive performance and uncertainty estimates of an ensemble. For classification, this is achieved by training a Dirichlet distribution over the ensemble members' output distributions via the maximum likelihood criterion. Although theoretically principled, this criterion exhibits poor convergence when applied to large-scale tasks where the number of classes is very high. Specifically, we show that for the Dirichlet log-likelihood criterion, classes with low probability induce larger gradients than high-probability classes. Hence, during training, the model focuses on the distribution of the ensemble tail-class probabilities rather than on the probability of the correct and closely related classes. We propose a new training objective which minimizes the reverse KL-divergence to a Proxy-Dirichlet target derived from the ensemble. This loss resolves the gradient issues of EnD², as we demonstrate both theoretically and empirically on the ImageNet, LibriSpeech, and WMT17 En-De datasets, containing 1,000, 5,000, and 40,000 classes, respectively.
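
    The reverse KL loss mentioned above has a closed form, since the KL divergence between two Dirichlet distributions is analytic. The PyTorch sketch below computes KL(Dir(alpha_model) || Dir(beta_target)); the proxy-target construction shown, the ensemble mean scaled by a precision hyperparameter plus one, is a simplified stand-in for the paper's exact parameterization:

    ```python
    import torch

    def dirichlet_reverse_kl(alpha, beta):
        """Closed-form KL( Dir(alpha) || Dir(beta) ), averaged over the batch.
        alpha: model concentrations, shape (batch, n_classes)
        beta:  target concentrations, shape (batch, n_classes)"""
        a0 = alpha.sum(-1, keepdim=True)
        b0 = beta.sum(-1, keepdim=True)
        kl = (torch.lgamma(a0.squeeze(-1)) - torch.lgamma(alpha).sum(-1)
              - torch.lgamma(b0.squeeze(-1)) + torch.lgamma(beta).sum(-1)
              + ((alpha - beta) * (torch.digamma(alpha) - torch.digamma(a0))).sum(-1))
        return kl.mean()

    def proxy_dirichlet_target(ensemble_probs, precision=100.0):
        """Simplified proxy target (an assumption, not the paper's exact formula):
        concentration = 1 + precision * ensemble mean prediction."""
        return 1.0 + precision * ensemble_probs.mean(dim=1)

    # ensemble_probs: (batch, n_members, n_classes)
    ensemble_probs = torch.softmax(torch.randn(4, 8, 1000), dim=-1)
    model_alpha = torch.exp(torch.randn(4, 1000))  # stand-in for exp of network logits
    loss = dirichlet_reverse_kl(model_alpha, proxy_dirichlet_target(ensemble_probs))
    print(loss.item())
    ```

    In a real system the concentrations alpha would come from the distilled network's head, and the precision would be tuned or derived from the ensemble; both are hypothetical choices in this sketch.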