We invite software engineers, designers, analysts, and service or product managers, whether beginners, advanced specialists, or researchers, to join us at The Web Conference 2018, which will take place in Lyon from 23 to 27 April, to learn how to make web service development data-driven and how to do it effectively.
The extended abstract and the full list of references are available in the following overview article. If you wish to refer to the tutorial in a publication, please cite that paper.
Part 1: Statistical foundation
- Statistics for online experiments 101 (statistical hypothesis testing, causal relationships); a minimal test is sketched below
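As a concrete illustration of the Part 1 material, here is a minimal sketch of a two-sample hypothesis test comparing a metric between control and treatment. The data are simulated and all parameters are illustrative assumptions, not material from the tutorial itself.

```python
# Minimal sketch: Welch's two-sample t-test on a per-user metric.
# All numbers below are simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=10.0, scale=2.0, size=10_000)     # metric values, group A
treatment = rng.normal(loc=10.05, scale=2.0, size=10_000)  # metric values, group B

# Welch's t-test does not assume equal variances in the two groups.
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
# Reject the null hypothesis of equal means at significance level alpha = 0.05.
print("significant" if p_value < 0.05 else "not significant")
```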
Part 2: Experimentation pipeline and workflow in industrial practice
- Conducting an A/B experiment the Yandex way (what to analyze before starting an experiment, reviewing experiments, making decisions based on the results); a sample-size sketch follows the list
- Cases, pitfalls, and lessons learned
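One pre-launch analysis of the kind mentioned above is checking that the experiment can detect the smallest effect you care about. Below is a hedged sketch of such a sample-size calculation; the minimum detectable effect, variance, and power values are illustrative assumptions, not Yandex's actual settings.

```python
# Sketch of a pre-experiment sample-size calculation for a two-sample t-test.
from statsmodels.stats.power import TTestIndPower

mde = 0.01               # minimum detectable effect, in metric units (assumed)
std = 2.0                # metric standard deviation from historical data (assumed)
effect_size = mde / std  # standardized effect size (Cohen's d)

n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.8, alternative="two-sided"
)
print(f"required users per group: {n_per_group:,.0f}")
```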
Part 3: Development of online metrics
- Main components of an online metric
- Main metric properties (sensitivity and directionality)
- Evaluation criteria beyond the difference of averages (periodicity, trends, quantiles, etc.)
- Product-driven ideas for metrics (loyalty and interaction metrics, dwell-time-based metric patching, session metrics and session division)
- Effective criteria for ratio metrics (see the sketch after this list)
- Reducing noise in metric measurements
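Ratio metrics such as CTR need special care because per-user numerators and denominators are correlated. The sketch below uses the delta method, one standard way to build a criterion for a ratio of averages; the simulated click and query counts are assumptions for illustration, and the tutorial may cover different or additional techniques.

```python
# Delta-method sketch for a ratio metric such as CTR = sum(clicks) / sum(queries),
# where the randomization unit is the user. Data are simulated per user.
import numpy as np

def delta_method_ratio_var(x, y):
    """Approximate variance of mean(x)/mean(y) via a first-order Taylor expansion."""
    n = len(x)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(ddof=1), y.var(ddof=1)
    cov = np.cov(x, y, ddof=1)[0, 1]
    # Var(mean(x)/mean(y)) ~ (1/n) * (vx/my^2 - 2*mx*cov/my^3 + mx^2*vy/my^4)
    return (vx / my**2 - 2 * mx * cov / my**3 + mx**2 * vy / my**4) / n

rng = np.random.default_rng(0)
queries = rng.poisson(5, size=5_000) + 1   # per-user query counts (assumed)
clicks = rng.binomial(queries, 0.3)        # per-user click counts (assumed)

ctr = clicks.sum() / queries.sum()
se = np.sqrt(delta_method_ratio_var(clicks.astype(float), queries.astype(float)))
print(f"CTR = {ctr:.4f} +/- {1.96 * se:.4f} (95% CI, delta method)")
```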
Part 4: Interleaving for online ranking evaluation
- Classic interleaving methods, including their comparison to other evaluation methods (sketched below)
- Optimized interleaving
- Multileaving
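To make the interleaving idea concrete, here is a sketch of team-draft interleaving, one of the classic methods: the two rankers alternately "draft" documents into a single merged list shown to the user, and a click is credited to the ranker that drafted the clicked document. The function and the example rankings below are illustrative, not the tutorial's code.

```python
# Sketch of team-draft interleaving for two rankings of documents.
import random

def team_draft_interleave(ranking_a, ranking_b, rng=random):
    """Merge two rankings; return the merged list and per-document team credit."""
    interleaved, team = [], {}
    count_a = count_b = 0

    def remaining(ranking):
        return [doc for doc in ranking if doc not in team]

    while remaining(ranking_a) or remaining(ranking_b):
        # The team that has drafted fewer documents picks next; ties are random.
        pick_a = count_a < count_b or (count_a == count_b and rng.random() < 0.5)
        # If the preferred ranking is exhausted, draft from the other one.
        if pick_a and not remaining(ranking_a):
            pick_a = False
        elif not pick_a and not remaining(ranking_b):
            pick_a = True
        doc = (remaining(ranking_a) if pick_a else remaining(ranking_b))[0]
        team[doc] = "A" if pick_a else "B"
        interleaved.append(doc)
        count_a, count_b = count_a + pick_a, count_b + (not pick_a)
    return interleaved, team

merged, credit = team_draft_interleave(["d1", "d2", "d3"], ["d2", "d4", "d1"])
print(merged)  # e.g. ['d1', 'd2', 'd3', 'd4']
print(credit)  # a click on a document counts for the team that drafted it
```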
Part 5: Machine learning driven A/B testing
- Randomized experiments vs. observational studies
- Variance reduction based on subtraction of a prediction (see the sketch below)
- Heterogeneous treatment effects
- Learning sensitive metric combinations
- Future-prediction-based metrics
- Smart scheduling of online experiments
- Stopping experiments early: sequential testing
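The "variance reduction based on subtraction of a prediction" item can be illustrated with a CUPED-style adjustment: predict the experiment metric from pre-experiment data and subtract that prediction, which leaves the treatment effect intact but lowers the variance, making the test more sensitive. The sketch below simulates data and uses a simple linear predictor; it is one possible instantiation under stated assumptions, not the tutorial's method.

```python
# CUPED-style sketch: subtract a linear prediction from a pre-experiment covariate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 20_000
pre = rng.normal(10, 3, size=n)        # pre-experiment metric per user (assumed)
group = rng.integers(0, 2, size=n)     # 0 = control, 1 = treatment
post = pre * 0.8 + rng.normal(0, 1, size=n) + 0.05 * group  # experiment metric

# OLS coefficient of post on pre; subtracting the prediction removes the
# variance explained by pre-experiment behavior without biasing the effect.
theta = np.cov(post, pre, ddof=1)[0, 1] / pre.var(ddof=1)
adjusted = post - theta * (pre - pre.mean())

for name, metric in [("raw", post), ("adjusted", adjusted)]:
    t, p = stats.ttest_ind(metric[group == 1], metric[group == 0], equal_var=False)
    print(f"{name:>8}: t = {t:.2f}, p = {p:.2e}")
# The adjusted metric has the same expected treatment effect but lower variance,
# so the same true effect yields a larger t-statistic.
```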