Natural language processing

Language is one of the key forms of communication. We study methods of language representation and understanding to simplify human-computer interactions.


Publications

  • Distributed Inference and Fine-tuning of Large Language Models Over The Internet

    Large-scale machine learning, Natural language processing
    Alexander Borzunov
    Max Ryabinin
    Artem Chumachenko
    Dmitry Baranchuk
    Tim Dettmers
    Younes Belkada
    Pavel Samygin
    Colin Raffel
    NeurIPS, 2023

    Large language models (LLMs) are useful in many NLP tasks and become more capable with size, with the best open-source models having over 50 billion parameters. However, using these 50B+ models requires high-end hardware, making them inaccessible to most researchers. In this work, we investigate methods for cost-efficient inference and fine-tuning of LLMs, comparing local and distributed strategies. We observe that a large enough model (50B+) can run efficiently even on geodistributed devices in a consumer-grade network. This could allow running LLMs efficiently by pooling together idle compute resources of multiple research groups and volunteers. We address two open problems: (1) how to perform inference and fine-tuning reliably if any device can disconnect abruptly and (2) how to partition LLMs between devices with uneven hardware that can join and leave at will. To do that, we develop special fault-tolerant inference algorithms and load-balancing protocols that automatically assign devices to maximize the total system throughput. We showcase these algorithms in Petals, a decentralized system that runs Llama 2 (70B) and BLOOM (176B) over the Internet up to 10× faster than offloading for interactive generation. We evaluate the performance of our system in simulated conditions and a real-world setup spanning two continents.
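
    From the client side, Petals exposes the distributed model through a familiar Hugging Face-style interface. Below is a minimal usage sketch following the publicly documented Petals Python API; the model name is a placeholder, and exact class or argument names may differ between Petals versions.

    ```python
    # Minimal client-side inference sketch with Petals (assumes the `petals` and
    # `transformers` packages are installed; names follow the public Petals docs
    # and may differ between versions).
    from transformers import AutoTokenizer
    from petals import AutoDistributedModelForCausalLM

    model_name = "petals-team/StableBeluga2"  # placeholder: any model served by a public swarm

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    # Only the embeddings and LM head run locally; the transformer blocks are
    # executed by remote peers discovered through the swarm.
    model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

    inputs = tokenizer("Distributed inference over the Internet", return_tensors="pt")["input_ids"]
    outputs = model.generate(inputs, max_new_tokens=16)
    print(tokenizer.decode(outputs[0]))
    ```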

  • CrowdSpeech and Vox DIY: Benchmark Dataset for Crowdsourced Audio Transcription

    Natural language processing, Speech processing
    Nikita Pavlichenko
    Ivan Stelmakh
    Dmitry Ustalov
    NeurIPS Benchmarks, 2021

    Domain-specific data is the crux of the successful transfer of machine learning systems from benchmarks to real life. In simple problems such as image classification, crowdsourcing has become one of the standard tools for cheap and time-efficient data collection, thanks in large part to advances in research on aggregation methods. However, the applicability of crowdsourcing to more complex tasks (e.g., speech recognition) remains limited due to the lack of principled aggregation methods for these modalities. The main obstacle to designing aggregation methods for more advanced applications is the absence of training data, and in this work, we focus on bridging this gap in speech recognition. For this, we collect and release CROWDSPEECH, the first publicly available large-scale dataset of crowdsourced audio transcriptions. Evaluation of existing and novel aggregation methods on our data shows room for improvement, suggesting that our work may entail the design of better algorithms. At a higher level, we also contribute to the more general challenge of developing the methodology for reliable data collection via crowdsourcing. Specifically, we design a principled pipeline for constructing datasets of crowdsourced audio transcriptions in any novel domain. We show its applicability on an under-resourced language by constructing VOXDIY, a counterpart of CROWDSPEECH for the Russian language. We also release the code that allows a full replication of our data collection pipeline and share various insights on best practices of data collection via crowdsourcing.
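
    To illustrate the aggregation problem the dataset targets (this is only a toy baseline, not one of the methods evaluated in the paper), a simple way to combine several crowdsourced transcriptions of the same recording is to return the answer with the smallest average edit distance to the others:

    ```python
    # Illustrative baseline for aggregating crowdsourced transcriptions:
    # pick the worker answer closest (by edit distance) to the other answers
    # for the same recording. A sketch of the problem, not the paper's method.

    def edit_distance(a: str, b: str) -> int:
        """Levenshtein distance between two strings."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            curr = [i]
            for j, cb in enumerate(b, start=1):
                curr.append(min(prev[j] + 1,                 # deletion
                                curr[j - 1] + 1,             # insertion
                                prev[j - 1] + (ca != cb)))   # substitution
            prev = curr
        return prev[-1]

    def aggregate(transcriptions: list[str]) -> str:
        """Return the transcription closest, on average, to all the others."""
        return min(
            transcriptions,
            key=lambda t: sum(edit_distance(t, other) for other in transcriptions),
        )

    answers = [
        "the quick brown fox jumps over the lazy dog",
        "the quick brown fox jumps over a lazy dog",
        "quick brown fox jump over the lazy dog",
    ]
    print(aggregate(answers))  # -> the first answer in this toy example
    ```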

  • Distributed Deep Learning In Open Collaborations

    Computer vision, Natural language processing, Large-scale machine learning
    Michael Diskin
    Alexey Bukhtiyarov
    Max Ryabinin
    Lucile Saulnier
    Quentin Lhoest
    Anton Sinitsin
    Dmitry Popov
    Dmitry Pyrkin
    Maxim Kashirin
    Alexander Borzunov
    Albert Villanova del Moral
    Denis Mazur
    Ilia Kobelev
    Yacine Jernite
    Thomas Wolf
    Gennady Pekhimenko
    NeurIPS, 2021

    Modern deep learning applications require increasingly more compute to train state-of-the-art models. To address this demand, large corporations and institutions use dedicated High-Performance Computing clusters, whose construction and maintenance are both environmentally costly and well beyond the budget of most organizations. As a result, some research directions become the exclusive domain of a few large industrial and even fewer academic actors. To alleviate this disparity, smaller groups may pool their computational resources and run collaborative experiments that benefit all participants. This paradigm, known as grid- or volunteer computing, has seen successful applications in numerous scientific areas. However, using this approach for machine learning is difficult due to high latency, asymmetric bandwidth, and several challenges unique to volunteer computing. In this work, we carefully analyze these constraints and propose a novel algorithmic framework designed specifically for collaborative training. We demonstrate the effectiveness of our approach for SwAV and ALBERT pretraining in realistic conditions and achieve performance comparable to traditional setups at a fraction of the cost. Finally, we provide a detailed report of successful collaborative language model pretraining with nearly 50 participants.
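
    The core scheduling idea can be sketched in a few lines: peers with very different throughput accumulate gradients locally, and a shared optimizer step happens only once the collective number of processed samples reaches a target batch size. The toy, single-process simulation below illustrates this; the real system does it over a DHT with fault tolerance and compression, and all names here are made up for the example.

    ```python
    # Toy simulation of collaborative gradient accumulation: heterogeneous peers
    # contribute micro-batches of different sizes, and the "global" optimizer
    # step fires only when the collective sample count reaches a target.
    import random

    TARGET_BATCH_SIZE = 1024                              # global samples per optimizer step
    peers = {"peer_a": 8, "peer_b": 32, "peer_c": 128}    # samples per local micro-batch

    weight = 0.0                                          # a single "model parameter" for the demo
    accumulated_grad, accumulated_samples = 0.0, 0

    for step in range(1000):
        name = random.choice(list(peers))                 # some peer finishes a micro-batch
        batch = peers[name]
        grad = random.gauss(mu=weight - 3.0, sigma=1.0)   # noisy gradient pulling weight toward 3
        accumulated_grad += grad * batch                  # weight each contribution by its batch size
        accumulated_samples += batch
        if accumulated_samples >= TARGET_BATCH_SIZE:      # collective optimizer step
            weight -= 0.1 * accumulated_grad / accumulated_samples
            accumulated_grad, accumulated_samples = 0.0, 0

    print(f"final weight ~ {weight:.2f}")                 # should drift toward 3.0
    ```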

Datasets

  • Shifts Dataset

    Distributional shift, Uncertainty estimation, Tabular data, Machine translation, Natural language processing
    Andrey Malinin
    Neil Band
    Yarin Gal
    Mark J. F. Gales
    Alexander Ganshin
    German Chesnokov
    Alexey Noskov
    Andrey Ploskonosov
    Liudmila Prokhorenkova
    Ivan Provilkov
    Vatsal Raina
    Vyas Raina
    Denis Roginskiy
    Mariya Shmatova
    Panos Tigas
    Boris Yangel

    The Shifts Dataset contains curated and labeled examples of real, 'in-the-wild' distributional shifts across three large-scale tasks. Specifically, it contains data for the tabular weather prediction, machine translation, and vehicle motion prediction tasks used in the Shifts Challenge 2021. Dataset shift is ubiquitous in all of these tasks and modalities.

  • Text-to-Image dataset for billion-scale similarity search

    Nearest neighbor search, Natural language processing, Computer vision
    Dmitry Baranchuk
    Artem Babenko

    The Yandex Text-to-Image (T2I) dataset was collected to foster research in billion-scale nearest neighbor search (NNS) in the setting where the query distribution differs from the indexing distribution. In particular, this dataset addresses the cross-domain setting: a user specifies a textual query and requests the search engine to retrieve the most relevant images for that query. Notably, current large-scale indexing methods perform poorly in this setting. Therefore, novel highly performant indexing solutions robust to out-of-domain queries are in high demand.

    The dataset represents a snapshot of the Yandex visual search engine and contains 1 billion 200-dimensional image embeddings for indexing. The image embeddings are produced by the SE-ResNeXt-101 model. The embeddings for textual queries are extracted by a variant of the DSSM model.

    Read more about the data format and how to download the dataset in the related post.
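
    As a point of reference for the task itself (not the dataset's file format, which is described in the related post), the sketch below runs an exhaustive maximum inner product search over an in-memory sample; random vectors stand in for the real 200-dimensional image and text embeddings, and the function name is made up for the example.

    ```python
    # Brute-force maximum inner product search over a small in-memory sample,
    # the naive baseline that billion-scale indexes are meant to approximate.
    import numpy as np

    rng = np.random.default_rng(0)
    DIM = 200
    index_vectors = rng.standard_normal((100_000, DIM), dtype=np.float32)  # stand-in "image" embeddings
    queries = rng.standard_normal((5, DIM), dtype=np.float32)              # stand-in "text" embeddings

    def knn_inner_product(queries: np.ndarray, base: np.ndarray, k: int = 10) -> np.ndarray:
        """Return indices of the top-k base vectors by inner product for each query."""
        scores = queries @ base.T                              # (num_queries, num_base)
        top_k = np.argpartition(-scores, k, axis=1)[:, :k]     # unordered top-k candidates
        row = np.arange(queries.shape[0])[:, None]
        order = np.argsort(-scores[row, top_k], axis=1)        # sort candidates by descending score
        return top_k[row, order]

    print(knn_inner_product(queries, index_vectors, k=5))
    ```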