Statistics & AB Testing



Why do A/B testing and statistics questions get asked?

A/B testing

A/B testing, also known as “bucket testing” or “split testing,” is a user experience research methodology. At its core, an A/B test is a randomized experiment with two variants, named A and B, which are usually identical except for one variation that might affect user behavior.
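To make the definition concrete, here is a minimal sketch of the two basic pieces of such an experiment: deterministic 50/50 bucketing by user id, and a two-proportion z-test comparing conversion rates between the variants. The function names and numbers are illustrative assumptions, not part of any particular company's tooling.

```python
import hashlib
import math

def assign_bucket(user_id: str) -> str:
    """Deterministic 50/50 split: the same user always lands in the same bucket."""
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference in conversion rates between variants."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF, computed via erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Example: 10% vs. 12% conversion with 2,000 users per bucket
z, p = two_proportion_z_test(conv_a=200, n_a=2000, conv_b=240, n_b=2000)
```

Hashing the user id (rather than calling a random number generator per request) keeps assignment stable across sessions, which matters for measuring behavior over time.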

Against the long history of statistics, modern A/B testing is a relatively new phenomenon: it emerged roughly two decades ago as internet usage matured. Before the web, it was simply not possible to run A/B tests at large scale with accurate data collection.

There are a few things we have to remember before diving into A/B testing and statistics for interviewing.

Interviewers expect different depths of knowledge on A/B testing and statistics.

Expect varying levels of rigor across A/B testing and statistics interviews. For example, Google’s data science role currently requires data scientists to be extremely scientific in their approach to experimental design. Google’s experimentation infrastructure makes the engineering cost of launching a test very low, so much of the responsibility for sound experimental design falls on the data scientists themselves.

In other cases, startups may place less emphasis on experimentation because confounding factors are hard to control and they need to ship changes quickly. Ultimately, it all depends on the situation.

A/B testing is not a one-trick wonder.

Most A/B tests fail, partly by selection: when a company strongly believes in a hypothesis, it generally won’t A/B test it and will just launch the change. It’s in the cases where teams cannot reach consensus on the correct path forward that they run an experiment.

A/B tests can also fail for a variety of external reasons that come with running experiments on the internet. Many results cannot be replicated when the same experiment is rerun. Is this because of bad science? Long-term behavioral changes? We may never know, but it highlights why a deeper understanding of experiment design is what lets us add nuance and demonstrate practical command of the subject.

We can’t apply A/B testing to all situations.

While A/B testing is a tempting answer to most hypothetical product questions, it doesn’t work in many cases. In the next section, we’ll dive into when A/B testing should be used, and when it should be avoided.
