Large-scale machine learning

Today, training the most powerful models often requires significant resources. Our research aims to make large-scale training more efficient and accessible to the entire machine learning community.

Publications

  • PV-Tuning: Beyond Straight-Through Estimation for Extreme LLM Compression

    Model compression, Large-scale machine learning, Natural language processing
    Vladimir Malinovskii
    Denis Mazur
    Ivan Ilin
    Denis Kuznedelev
    Konstantin Burlachenko
    Kai Yi
    Dan Alistarh
    Peter Richtarik
    NeurIPS, 2024

    There has been significant interest in "extreme" compression of large language models (LLMs), i.e. to 1-2 bits per parameter, which allows such models to be executed efficiently on resource-constrained devices. Existing work focused on improved one-shot quantization techniques and weight representations; yet, purely post-training approaches are reaching diminishing returns in terms of the accuracy-vs-bit-width trade-off. State-of-the-art quantization methods such as QuIP# and AQLM include fine-tuning (part of) the compressed parameters over a limited amount of calibration data; however, such fine-tuning techniques over compressed weights often make exclusive use of straight-through estimators (STE), whose performance is not well-understood in this setting. In this work, we question the use of STE for extreme LLM compression, showing that it can be sub-optimal, and perform a systematic study of quantization-aware fine-tuning strategies for LLMs. We propose PV-Tuning - a representation-agnostic framework that generalizes and improves upon existing fine-tuning strategies, and provides convergence guarantees in restricted cases. On the practical side, when used for 1-2 bit vector quantization, PV-Tuning outperforms prior techniques for highly-performant models such as Llama and Mistral. Using PV-Tuning, we achieve the first Pareto-optimal quantization for Llama-2 family models at 2 bits per parameter.
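
    To make the baseline concrete, here is a minimal PyTorch sketch of quantization-aware fine-tuning with a straight-through estimator (STE), the technique the abstract questions. The 2-bit uniform quantizer, the layer shapes, and the STEQuantize/QuantizedLinear names are placeholders of ours for illustration; PV-Tuning itself targets codebook-based vector quantization and replaces this plain STE update, which the sketch does not reproduce.

    # Sketch of STE fine-tuning: forward uses quantized weights,
    # backward pretends the quantizer is the identity.
    import torch
    import torch.nn as nn

    class STEQuantize(torch.autograd.Function):
        @staticmethod
        def forward(ctx, w, n_bits=2):
            # Toy uniform symmetric quantization to 2^n_bits levels.
            scale = w.abs().max() / (2 ** (n_bits - 1) - 0.5)
            q = torch.round(w / scale).clamp(-(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1)
            return q * scale

        @staticmethod
        def backward(ctx, grad_output):
            # STE: pass gradients straight through the non-differentiable rounding.
            return grad_output, None

    class QuantizedLinear(nn.Module):
        def __init__(self, in_features, out_features):
            super().__init__()
            self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)

        def forward(self, x):
            w_q = STEQuantize.apply(self.weight)  # quantized weights in the forward pass
            return x @ w_q.t()                    # gradients update the latent full-precision weights

    layer = QuantizedLinear(16, 8)
    loss = layer(torch.randn(4, 16)).pow(2).mean()
    loss.backward()   # layer.weight.grad is populated despite the rounding step

    The mismatch between the quantized forward pass and the full-precision gradient step is exactly the gap the paper studies; the sketch above only fixes the point of comparison.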

  • Sequoia: Scalable, Robust, and Hardware-aware Speculative Decoding

    Speculative and parallel decoding, Natural language processing, Large-scale machine learning
    Zhuoming Chen
    Avner May
    Ruslan Svirschevski
    Yuhsun Huang
    Max Ryabinin
    Zhihao Jia
    Beidi Chen
    NeurIPS, 2024

    As the usage of large language models (LLMs) grows, it becomes increasingly important to serve them quickly and efficiently. While speculative decoding has recently emerged as a promising direction for accelerating LLM serving, existing methods are limited in their ability to scale to larger speculation budgets and adapt to different hyperparameters. This paper introduces Sequoia, a scalable and robust algorithm for speculative decoding. To improve scalability, Sequoia introduces a dynamic programming algorithm to find an optimal tree structure for the speculated tokens. To achieve robust speculative decoding, Sequoia uses a novel sampling and verification method that outperforms prior work across different decoding temperatures. Sequoia improves the decoding speed of Llama2-7B, Llama2-13B, and Vicuna-33B on an A100 GPU by up to 4.04×, 3.73×, and 2.27×. To serve Llama3-70B-Instruct on a single L40 GPU through offloading, Sequoia reduces the per-token decoding latency to 0.60 s/token, 9.5× faster than DeepSpeed-Zero-Inference.
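
    For readers unfamiliar with tree-based speculative decoding, the small self-contained sketch below verifies a speculated token tree against a target model using plain greedy matching. It only illustrates the general mechanism Sequoia builds on; the paper's actual contributions (the dynamic-programming tree construction and the sampling-based verification across temperatures) are not reproduced, and DraftNode, verify_greedy, and the stub target model are our own illustrative names.

    # Greedy verification of a draft-model token tree against a target model.
    from dataclasses import dataclass, field

    @dataclass
    class DraftNode:
        token: int                      # token id proposed by the draft model
        children: list = field(default_factory=list)

    def verify_greedy(root_children, target_next_token, prefix):
        """Walk the draft tree, keeping tokens while they match the target's argmax.

        target_next_token(prefix) -> token id the target model would emit next.
        Returns the accepted continuation as a list of token ids.
        """
        accepted = []
        nodes = root_children
        while nodes:
            want = target_next_token(prefix + accepted)
            match = next((n for n in nodes if n.token == want), None)
            if match is None:
                break                   # target disagrees with every branch: stop here
            accepted.append(match.token)
            nodes = match.children      # descend into the accepted branch
        return accepted

    # Tiny example: a two-level draft tree and a stub "target model".
    tree = [DraftNode(5, [DraftNode(7), DraftNode(8)]), DraftNode(6)]
    target = lambda prefix: {0: 5, 1: 7}.get(len(prefix), 9)
    print(verify_greedy(tree, target, prefix=[]))   # -> [5, 7]

    The wider the tree, the more target disagreements it can survive per verification step, which is why choosing the tree structure (Sequoia's dynamic program) matters as the speculation budget grows.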

  • SpecExec: Massively Parallel Speculative Decoding for Interactive LLM Inference on Consumer Devices

    Speculative and parallel decoding, Natural language processing, Large-scale machine learning
    Ruslan Svirschevski
    Avner May
    Zhuoming Chen
    Beidi Chen
    Zhihao Jia
    Max Ryabinin
    NeurIPS, 2024

    As large language models gain widespread adoption, running them efficiently becomes a crucial task. Recent works on LLM inference use speculative decoding to achieve extreme speedups. However, most of these works implicitly design their algorithms for high-end datacenter hardware. In this work, we ask the opposite question: how fast can we run LLMs on consumer machines? Consumer GPUs can no longer fit the largest available models and must offload them to RAM or SSD. With parameter offloading, hundreds or thousands of tokens can be processed in batches within the same time as just one token, making it a natural fit for speculative decoding. We propose SpecExec (Speculative Execution), a simple parallel decoding method that can generate up to 20 tokens per target model iteration for popular LLM families. SpecExec takes the most probable continuations from the draft model to build a "cache" tree for the target model, which then gets validated in a single pass. Using SpecExec, we demonstrate inference of 50B+ parameter LLMs on consumer GPUs with RAM offloading at 4-6 tokens per second with 4-bit quantization or 2-3 tokens per second with 16-bit weights.