If you want to break into competitive data science, then this course is for you! Participating in predictive modelling competitions can help you gain practical experience and sharpen your data modelling skills in various domains such as credit, insurance, marketing, natural language processing, sales forecasting and computer vision, to name a few. At the same time, you get to do it in a competitive context against thousands of participants, each trying to build the most predictive algorithm. Pushing each other to the limit can result in better performance and smaller prediction errors. Being able to achieve high ranks consistently can help you accelerate your career in data science.
In this course, you will learn to analyse and competitively solve such predictive modelling tasks.
When you finish this class, you will:
- Understand how to solve predictive modelling competitions efficiently and learn which of the skills you obtain are applicable to real-world tasks.
- Learn how to preprocess the data and generate new features from various sources such as text and images.
- Learn advanced feature engineering techniques such as mean encodings, aggregated statistical measures and nearest-neighbor features as a means to improve your predictions.
- Be able to form reliable cross-validation methodologies that help you benchmark your solutions and avoid overfitting or underfitting when tested on unobserved (test) data.
- Gain experience analysing and interpreting data. You will become aware of inconsistencies, high noise levels, errors and other data-related issues such as leakages, and you will learn how to overcome them.
- Acquire knowledge of different algorithms and learn how to efficiently tune their hyperparameters and achieve top performance.
- Master the art of combining different machine learning models and learn how to ensemble.
- Get exposed to past (winning) solutions and code, and learn how to read them.
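As a taste of the aggregated statistical measures mentioned above, here is a minimal pandas sketch: per-group statistics of a numeric column are broadcast back to each row as new features. The table and column names are made up for illustration and are not from the course.

```python
import pandas as pd

# Toy transaction table; column names are illustrative, not from the course.
df = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 2, 3],
    "amount":  [10.0, 30.0, 5.0, 5.0, 20.0, 7.0],
})

# Per-group aggregated statistics broadcast back to each row as new features.
grp = df.groupby("user_id")["amount"]
df["user_amount_mean"] = grp.transform("mean")
df["user_amount_max"] = grp.transform("max")
df["user_tx_count"] = grp.transform("count")
```

Features like these often help tree-based models, which cannot easily compute cross-row statistics on their own.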
Disclaimer: this is not a machine learning course in the general sense. This course will teach you how to achieve high-rank solutions against thousands of competitors, with a focus on the practical usage of machine learning methods rather than the theoretical underpinnings behind them.
Prerequisites:
- Python: work with DataFrames in pandas, plot figures in matplotlib, import and train models from scikit-learn, XGBoost and LightGBM.
- Machine Learning: basic understanding of linear models, K-NN, random forest, gradient boosting and neural networks.
Do you have technical problems? Write to us: email@example.com
Introduction & Recap
This week we will introduce you to competitive data science. You will learn about the mechanics of competitions, the differences between competitions and real-life data science, and the hardware and software people usually use in competitions. We will also briefly recap the major ML models frequently used in competitions.
Feature Preprocessing and Generation with Respect to Models
In this module we will summarize approaches to working with features: preprocessing, generation and extraction. We will see that the choice of machine learning model impacts both the preprocessing we apply to the features and our approach to generating new ones. We will also discuss feature extraction from text with Bag of Words and Word2vec, and feature extraction from images with Convolutional Neural Networks.
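To illustrate the Bag of Words idea, here is a minimal sketch using scikit-learn's `CountVectorizer` on two made-up documents: each document becomes a row of word counts, and word order is discarded.

```python
from sklearn.feature_extraction.text import CountVectorizer

# Two made-up documents just to show the mechanics.
docs = ["the cat sat", "the cat sat on the mat"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)  # sparse document-term count matrix

# Each column counts one vocabulary word; word order is discarded.
vocab = vectorizer.vocabulary_  # maps word -> column index
```

Swapping `CountVectorizer` for `TfidfVectorizer` gives the TF-IDF weighting often preferred for linear models.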
Final Project Description
This is just a reminder that it is better to start the final project in this course early! The final project is in fact a competition; in this module you can find information about it.
Exploratory Data Analysis
We will start this week with Exploratory Data Analysis (EDA). It is a very broad and exciting topic and an essential component of the solution process. Besides the regular videos, you will find a walk-through of the EDA process for the Springleaf competition data and an example of prolific EDA for the NumerAI competition with extraordinary findings.
Validation
In this module we will discuss various validation strategies. We will see that the strategy we choose depends on the competition setup and that a correct validation scheme is one of the building blocks of any winning solution.
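The most basic of these strategies, K-fold cross-validation, can be sketched with scikit-learn as follows. The data and model here are synthetic stand-ins, not from the course; the point is that every sample is used for validation exactly once and the fold scores are averaged.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import KFold

# Synthetic regression data standing in for competition data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=100)

# 5-fold cross-validation: every sample is used for validation exactly once.
kf = KFold(n_splits=5, shuffle=True, random_state=42)
fold_scores = []
for train_idx, val_idx in kf.split(X):
    model = Ridge().fit(X[train_idx], y[train_idx])
    preds = model.predict(X[val_idx])
    fold_scores.append(mean_squared_error(y[val_idx], preds))

cv_score = float(np.mean(fold_scores))  # averaged validation error
```

For time-ordered or grouped competition data, `TimeSeriesSplit` or `GroupKFold` would replace plain `KFold` to mirror the train/test split of the competition.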
Data Leakages
Finally, in this module we will cover something very unique to data science competitions: we will see examples of how it is sometimes possible to get a top position in a competition with very little machine learning, just by exploiting a data leakage.
Metrics Optimization
This week we will first study another component of competitions: evaluation metrics. We will recap the most prominent ones and then see how we can efficiently optimize the metric given in a competition.
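As one common example of metric optimization (a hedged sketch with synthetic data, not taken from the course videos): when the competition metric is RMSLE, a frequent trick is to train on a log-transformed target, so that a model minimizing ordinary MSE in log space is optimizing the competition metric directly.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic positive, skewed target; the setup is purely illustrative.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(200, 1))
y = np.exp(3.0 * X[:, 0]) * rng.lognormal(sigma=0.1, size=200)

# Train on log1p(y): minimizing MSE in log space targets RMSLE directly.
model = LinearRegression().fit(X, np.log1p(y))
preds = np.expm1(model.predict(X))  # invert the transform for submission

rmsle = float(np.sqrt(np.mean((np.log1p(preds) - np.log1p(y)) ** 2)))
```

The same "transform the target to match the metric" idea applies to other metrics whose loss is not natively supported by the model.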
Advanced Feature Engineering I
In this module we will study a very powerful technique for feature generation. It goes by many names, but here we call it "mean encodings". We will see the intuition behind them and how to construct, regularize and extend them.
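A minimal sketch of the idea, with made-up data: replace each category with the mean of the target over that category, computed out-of-fold so that a row never sees its own target. The out-of-fold scheme shown here is one of the regularization strategies the module covers.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import KFold

# Made-up categorical feature and binary target.
df = pd.DataFrame({
    "city":   ["A", "A", "A", "B", "B", "B", "C", "C", "C", "C"],
    "target": [1, 1, 0, 0, 0, 1, 1, 0, 1, 1],
})

global_mean = df["target"].mean()
df["city_mean_enc"] = np.nan

# Out-of-fold encoding: each row gets target means computed on the other
# folds only, which regularizes the encoding against target leakage.
kf = KFold(n_splits=5, shuffle=True, random_state=42)
for trn_idx, val_idx in kf.split(df):
    fold_means = df.iloc[trn_idx].groupby("city")["target"].mean()
    df.loc[val_idx, "city_mean_enc"] = df.loc[val_idx, "city"].map(fold_means)

# Categories unseen in the training part of a fold fall back to the global mean.
df["city_mean_enc"] = df["city_mean_enc"].fillna(global_mean)
```

Naively encoding with means computed on all rows (including the row itself) leaks the target and overfits badly, which is exactly why regularization is needed.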
Hyperparameter Optimization
In this module we will talk about the hyperparameter optimization process. We will also have a special video with practical tips and tricks, recorded by four instructors.
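One standard way to run such a search is random sampling over a parameter space, sketched here with scikit-learn's `RandomizedSearchCV` on synthetic data. The search space and model are illustrative assumptions; the same pattern applies to XGBoost or LightGBM estimators.

```python
import numpy as np
from scipy.stats import randint, uniform
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import RandomizedSearchCV

# Synthetic data; in a competition you would use the training set instead.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=200)

# A small, hypothetical search space; real searches tune more parameters.
param_distributions = {
    "n_estimators": randint(50, 200),
    "max_depth": randint(2, 5),
    "learning_rate": uniform(0.01, 0.3),
}

search = RandomizedSearchCV(
    GradientBoostingRegressor(random_state=0),
    param_distributions,
    n_iter=5,   # number of random configurations to try
    cv=3,       # each configuration is scored by 3-fold cross-validation
    random_state=0,
)
search.fit(X, y)
best = search.best_params_
```

Random search scales better than exhaustive grid search when only a few hyperparameters actually matter.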
Advanced Feature Engineering II
In this module we will learn about a few more advanced feature engineering techniques.
Ensembling
Nowadays it is hard to find a competition won by a single model! Every winning solution incorporates ensembles of models. In this module we will talk about the main ensembling techniques in general and, of course, how best to ensemble models in practice.
Competition Walk-throughs
For the 5th week we've prepared several "walk-through" videos, in which we discuss solutions to competitions where we won prizes. The video content is quite short this week to let you spend more time on the final project. Good luck!
Final project for the course.
Alexander Guschin, Dmitry Ulyanov, Marios Michailidis, Dmitry Altukhov and Mikhail Trofimov
Reviews of How to Win a Data Science Competition: Learn from Top Kagglers
Anonymous is taking this course right now.
One of the best courses on Coursera ever! Highly recommended! A lot of really useful info without tons of blah-blah-blah as in most courses, from real professionals.
The only downside is the strong accent of one of the professors. And the subtitles (I guess due to this accent) are awful, sometimes changing the meaning dramatically. So, not recommended for those who can understand oral speech only when reading subtitles.
Anonymous is taking this course right now.
It is really a very useful course where we get to learn from experienced people who achieved top ranks on Kaggle. It is a practical course with many handy tips. However, the accent of some instructors is not very easy to understand, and the subtitles are auto-generated, which sometimes makes it hard to understand what's being said.