Research areas
Our research covers the most significant areas in modern machine learning.
Tabular data
Tabular data involves two-dimensional tables with objects (rows) and features (columns), which are used in numerous applied tasks such as classification, regression, ranking, and many others.
7 publications · 2 posts · 1 dataset

Large-scale machine learning
Today, training the most powerful models often takes significant resources. Our research aims to make large-scale training more efficient and accessible to the entire machine learning community.
11 publications · 2 posts

Generative models
Generative models in computer vision are a powerful tool for various applications.
13 publications · 6 posts

Graph machine learning
Graphs are a natural way to represent data from various domains such as social networks, molecules, text, code, etc. We develop and analyze algorithms for graph-structured data.
13 publications · 5 posts · 1 dataset

Neural algorithmic reasoning
Algorithmic reasoning focuses on building models that can execute classic algorithms. It allows one to combine the advantages of neural networks with the theoretical guarantees of algorithms.
1 publication · 2 posts

Computer vision
The Yandex Research team regularly contributes to the computer vision research community, mostly in the fields of image retrieval and generative modelling.
35 publications · 5 posts · 1 dataset

Machine learning theory
We study various aspects related to the theoretical understanding of ML models and algorithms.
29 publications · 2 posts

Natural language processing
Language is one of the key forms of communication. We study methods of language representation and understanding to simplify human-computer interactions.
25 publications · 3 posts · 2 datasets

Nearest neighbor search
Nearest neighbor search is a long-standing problem arising in a large number of machine learning applications, such as recommender services, information retrieval, and others.
14 publications · 3 posts · 1 dataset

Optimization
Most machine learning algorithms build an optimization model and learn its parameters from the given data. Thus, developing effective and efficient optimization methods is essential.
17 publications

Ranking
Learning to rank is a central problem in information retrieval. The objective is to rank a given set of items to optimize the overall utility of the list.
17 publications

Gradient boosting
Gradient boosting iteratively combines weak learners (usually decision trees) to create a stronger model. It achieves state-of-the-art results on tabular data with heterogeneous features.
13 publications · 1 post

Uncertainty estimation
Uncertainty estimation enables detecting when ML models make mistakes. This is of critical importance in high-risk machine learning applications, such as autonomous vehicles and medical ML.
9 publications · 3 posts · 1 dataset

Distributional shift
Distributional shift is the mismatch between training and deployment data that is ubiquitous in the real world. Studying this phenomenon can enable safer and more reliable ML systems.
6 publications · 3 posts · 1 dataset

Machine translation
Language barriers hinder global communication and access to worldwide knowledge. By improving machine translation systems, we hope to facilitate the exchange of culture and information.
9 publications · 1 dataset

Distributed ML
We study how to train and run large neural networks efficiently across many devices, whether in a GPU cluster or a swarm of poorly connected consumer devices.
7 publications · 1 post

Speech processing
Speech is an important data modality that relates to applications such as speech recognition and speech synthesis, which are core technologies in products such as voice assistants.
5 publications

Model compression
Deep learning models are outgrowing the hardware that runs them. We try to make large models fit on smaller devices through quantization, pruning, factorization, and other means.
3 publications

Speculative and parallel decoding
Modern LLMs are autoregressive models that generate one token at a time, which is inefficient on parallel hardware. These works accelerate generation by processing multiple tokens per forward pass.
1 post
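The draft-then-verify idea behind speculative decoding can be sketched in a few lines. This is a toy illustration only, not any system described above: the "models" are hypothetical next-token lookup tables, acceptance is greedy (real speculative sampling uses a probabilistic accept/reject rule that preserves the target distribution), and the "parallel" verification pass is simulated with a loop.

```python
def greedy_next(model, prefix):
    """Next token under a toy model: a dict mapping the last token to its
    successor (0 stands in for end-of-text)."""
    return model.get(prefix[-1], 0)

def speculative_decode(target, draft, prompt, max_new=8, k=4):
    """Generate max_new tokens from `target`, using `draft` to propose
    k tokens per iteration."""
    out = list(prompt)
    while len(out) < len(prompt) + max_new:
        # 1. The cheap draft model proposes k tokens autoregressively.
        proposal = list(out)
        for _ in range(k):
            proposal.append(greedy_next(draft, proposal))
        # 2. The target model checks all k positions; on real hardware this
        #    is a single batched forward pass. Keep the agreeing prefix.
        accepted = 0
        for i in range(k):
            if greedy_next(target, proposal[:len(out) + i]) == proposal[len(out) + i]:
                accepted += 1
            else:
                break
        out = proposal[:len(out) + accepted]
        # 3. The target model emits one token itself, so every iteration
        #    makes progress even when the draft is rejected immediately.
        out.append(greedy_next(target, out))
    return out[:len(prompt) + max_new]
```

When the draft agrees with the target on most transitions, each iteration accepts several tokens for the cost of one target pass; when it disagrees, the output still matches what the target alone would have generated.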