Improving Web Ranking with Human-in-the-Loop: Methodology, Scalability, Evaluation

Tutorial at TheWebConf 2021 (WWW '21)

Room: TBA
Starts: TBA

Date: TBA

Modern Web services widely employ sophisticated Machine Learning techniques to rank news, posts, products, and other items presented to users or contributed by them. These techniques are usually built on offline data pipelines and rely on a numerical approximation of the relevance of the displayed content. In our hands-on tutorial, we present a systematic view of using Human-in-the-Loop to build scalable offline evaluation processes and, in particular, to obtain high-quality relevance judgements. We will introduce the ranking problem to the attendees, discuss commonly used ranking quality metrics, and then focus on a Human-in-the-Loop approach to obtaining relevance judgements at scale. More precisely, we will present a thorough introduction to pairwise comparisons, demonstrate how these comparisons can be obtained using Crowdsourcing, and run a hands-on practice session in which the attendees will obtain high-quality relevance judgements for search quality evaluation. Finally, we will discuss the obtained relevance judgements, point out directions for further study, and answer questions asked during the tutorial.
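
To make the pipeline above concrete, the sketch below shows one common way to turn crowdsourced pairwise comparisons into per-document relevance scores (here via the Bradley-Terry model fitted with a simple minorization-maximization loop) and then score a hypothetical system ranking with NDCG. The document names, the comparison data, and the choice of Bradley-Terry and NDCG are assumptions made for this illustration only; they are not part of the tutorial materials.

```python
"""Illustrative sketch: pairwise comparisons -> relevance scores -> NDCG.

All data and model choices here are assumptions for the example, not the
tutorial's actual methodology.
"""
from collections import defaultdict
from math import log2

# Hypothetical crowdsourced judgements: each tuple means "the worker preferred
# the first document over the second one for a given query".
comparisons = [
    ("doc_a", "doc_b"), ("doc_a", "doc_c"), ("doc_b", "doc_c"),
    ("doc_a", "doc_b"), ("doc_c", "doc_b"), ("doc_a", "doc_c"),
]

def bradley_terry(pairs, iterations=100):
    """Estimate item scores from pairwise wins with an MM-style update."""
    items = sorted({item for pair in pairs for item in pair})
    wins = defaultdict(int)    # wins[i]: how many comparisons item i won
    games = defaultdict(int)   # games[{i, j}]: comparisons between i and j
    for winner, loser in pairs:
        wins[winner] += 1
        games[frozenset((winner, loser))] += 1
    scores = {i: 1.0 for i in items}
    for _ in range(iterations):
        new_scores = {}
        for i in items:
            denom = sum(
                games[frozenset((i, j))] / (scores[i] + scores[j])
                for j in items if j != i and games[frozenset((i, j))]
            )
            new_scores[i] = wins[i] / denom if denom else scores[i]
        total = sum(new_scores.values())
        scores = {i: s / total for i, s in new_scores.items()}  # normalize
    return scores

def dcg(relevances):
    """Discounted cumulative gain for a ranked list of relevance values."""
    return sum(rel / log2(rank + 2) for rank, rel in enumerate(relevances))

# Crowd-derived relevance scores serve as labels for offline evaluation.
scores = bradley_terry(comparisons)

# A hypothetical ranking produced by the system under evaluation.
system_ranking = ["doc_b", "doc_a", "doc_c"]
ideal = sorted(scores.values(), reverse=True)
ndcg = dcg([scores[d] for d in system_ranking]) / dcg(ideal)
print(scores, round(ndcg, 3))
```

The pairwise design is attractive at scale because each crowd worker answers a simple "which of the two is better" question, and an aggregation model such as the one sketched here reconciles the individual answers into graded relevance labels.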

Speakers

Alexey Drutsa

Crowdsourcing Department, Yandex

Dmitry Ustalov

Crowdsourcing Department, Yandex

Nikita Popov

Search Department, Yandex

Daria Baidakova

Crowdsourcing Department, Yandex