This course will cover the design and analysis of A/B tests, also known as split tests: online experiments used to test potential improvements to a website or mobile application. Two versions of the website are shown to different users, usually the existing website and a potential change, and the results are analyzed to determine whether the change is an improvement worth launching. This course will cover how to choose and characterize metrics to evaluate your experiments, how to design an experiment with enough statistical power, how to analyze the results and draw valid conclusions, and how to ensure that the participants of your experiments are adequately protected.
Overview of A/B Testing
This lesson will cover:
- What A/B testing is and what it can be used for.
- How to construct a binomial confidence interval for the results.
- How to decide whether the change is worth the launch cost.
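To give a feel for the kind of calculation the course works through, here is a minimal sketch of a binomial confidence interval using the normal (Wald) approximation. The function name and the example numbers are illustrative, not taken from the course materials.

```python
import math

def binomial_confidence_interval(successes, trials, z=1.96):
    """Normal-approximation (Wald) confidence interval for a binomial
    proportion; z = 1.96 gives a roughly 95% interval."""
    p_hat = successes / trials
    se = math.sqrt(p_hat * (1 - p_hat) / trials)
    return p_hat - z * se, p_hat + z * se

# Hypothetical example: 120 clicks out of 1000 impressions
low, high = binomial_confidence_interval(120, 1000)
print(f"95% CI for the click-through probability: [{low:.4f}, {high:.4f}]")
```

The approximation is reasonable when both the success and failure counts are large; for small samples the course's statistics prerequisites cover when a more careful interval is needed.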
Policy and Ethics for Experiments
- How to make sure the participants of your experiments are adequately protected.
- What questions you should be asking regarding the ethicality of experiments.
- The four main ethics principles to consider when designing experiments.
Choosing and Characterizing Metrics
- Techniques for brainstorming metrics.
- What to do when you can't measure directly.
- Characteristics to consider when validating metrics.
Designing an Experiment
- How to choose which users will be in your experiment and control groups.
- When to limit your experiment to a subset of your entire user base.
- How design decisions affect the size of your experiment.
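One way design decisions affect experiment size is through the required sample: a smaller minimum detectable effect, a stricter significance level, or higher power all demand more users per group. Below is a rough per-group sample-size sketch using the standard normal approximation for comparing two proportions; the function name and example values are assumptions for illustration, not the course's own calculator.

```python
import math
from statistics import NormalDist

def sample_size_per_group(p_baseline, min_detectable_effect,
                          alpha=0.05, power=0.8):
    """Approximate subjects needed per group to detect an absolute
    change of `min_detectable_effect` in a proportion, using the
    normal approximation for a two-sided test."""
    p_new = p_baseline + min_detectable_effect
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. ~1.96 for alpha=0.05
    z_beta = NormalDist().inv_cdf(power)           # e.g. ~0.84 for 80% power
    variance = p_baseline * (1 - p_baseline) + p_new * (1 - p_new)
    n = (z_alpha + z_beta) ** 2 * variance / min_detectable_effect ** 2
    return math.ceil(n)

# Hypothetical example: 10% baseline conversion, detect a 2-point lift
print(sample_size_per_group(0.10, 0.02))
```

Halving the minimum detectable effect roughly quadruples the required sample, which is why choosing the practical significance boundary is such a consequential design decision.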
Analyzing Results
- How to analyze the results of your experiments.
- How to run sanity checks to catch problems with the experiment set-up.
- How to check conclusions with multiple methods, including a binomial sign test.
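The sign test mentioned above can be sketched briefly: if the experiment and control groups are compared over, say, several days, then under the null hypothesis each day is equally likely to favor either group, and the number of "experiment wins" follows a Binomial(n, 0.5) distribution. The function and example counts below are illustrative assumptions, not data from the course.

```python
from math import comb

def sign_test_p_value(positive, n):
    """Two-sided p-value for the sign test: under H0 each of the n
    paired comparisons independently favors either group with p = 0.5."""
    k = max(positive, n - positive)
    # Probability of a result at least this extreme in one tail
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical example: experiment beat control on 12 of 14 days
print(sign_test_p_value(12, 14))
```

Because the sign test ignores the size of each day's difference, it is less powerful than the overall binomial comparison, but agreement between the two methods is a useful robustness check on the conclusion.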
The lecturers explain the concept of A/B testing quite well and I could get a good grasp of the material. The course is a mix of lessons and quizzes, topped off by a final project. The quizzes were at the right level of difficulty, and the subsequent explanations by the lecturers were clear. The final project was interesting and challenging, and I found the forums quite helpful. In total, I guess I spent about 18 hours on the course. All in all, the course is definitely worth taking.
A few other details are worth mentioning, though. During the quizzes, I occasionally had to assume something to get the right answer. Personally, I found this a bit annoying: I fill in an answer, Udacity says it is wrong, I assume something and select a different answer, pass the verification, and then hear that my first attempt was right after all. But it was just a minor annoyance, and it did force me to think things through.
The lectures in the course take the form of a discussion between the three lecturers, rather than the classic use of slides. Usually I don't like this discussion style (see, e.g., the machine learning course on Udacity), but in this case the lecturers managed it well.
One thing the creators of the course should change (imho) is the shortness of several of the videos, some lasting just 30-45 seconds. I really don't mind watching 10-15 minute videos, and many of the videos in this course should be combined in my opinion. This is particularly so when using the app to watch the videos. When viewing the lecture in full screen, you can't move to the next video at the end of the previous one - you have to rotate the device first, then select the next video, and then rotate back to full screen. Pretty annoying if you have to do this every 45 seconds.
Finally, a few words about the statistical side of A/B testing. This course is not a course in statistics, which is why the prerequisites mention several statistical Udacity courses. The lecturers do explain everything you need, so you can probably get through the A/B testing course without that background. However, I am convinced that you can't (shouldn't?) work with A/B testing in practice without a solid grasp of the underlying statistics. For me, having prior knowledge of the statistical concepts freed me to focus my thinking on how hypothesis testing is applied in the web environment.
Thanks to the people at Udacity and Google for making this course available to us!