
Machine Learning

via Brilliant

Overview

Machine learning swoops in where humans fail — such as when there are hundreds (or hundreds of thousands) of variables to keep track of and millions (or billions, or trillions) of pieces of data to process.

This course develops the mathematical foundation needed to deeply understand how classification and estimation problems work. By the end, you'll have the techniques to analyze data and apply them to real-world problems.

Syllabus

  • Linear Regression: Get your basics in line. (A short code sketch follows this syllabus.)
    • Introduction to Linear Regression: The basics of prediction with a very simple model: a line.
    • Statistics of Linear Regression: Dive into the math behind linear regression.
    • Linear Algebra in Linear Regression: Brush up on linear algebra, a key tool throughout machine learning.
    • Higher Dimensional Regression: What happens when you need to do a regression with more than two variables? Hyperplanes!
    • Limitations of Linear Regression: When variables are related non-linearly, linear regression falls short.
    • Alternatives to Linear Regression: Get familiar with ridge regression, lasso, nearest neighbors, and other approaches.
  • Linear Classification: Classifying both quantitative and qualitative data. (Sketched in code below.)
    • Indicator Matrix: Add this clever relationship representation to your tool kit.
    • Logistic Classification: Instead of giving a definitive 'yes' or 'no', this method predicts probabilities of 'yes' or 'no'.
    • Linear Discriminant Analysis: Explore this powerful tool for separating classes of normally distributed data.
    • KNN Classification: "My neighbors are my friends", as a classification algorithm.
    • Perceptrons: The judge and jury for classification.
    • Naive Bayes: Bayes' theorem - a classic tool of probability - guides this classification method.
  • Trees: Explore this versatile model and related ideas like bagging, random forests, and boosting. (A bagging sketch follows the syllabus.)
    • Tree Regression: A versatile tool, best applied when there are strong distinctions between cases.
    • Tree Classification: The basics of classification via a tree.
    • Trees: Pros, Cons, and Best Practices: A major advantage of trees is their interpretability. What are the drawbacks?
    • Bagging: Reduce the model variance by averaging across many trees!
    • Boosting: "Teammates who complement each other's weaknesses", trees edition.
  • Support Vector Machine: Divide classes with the widest possible margin. (A margin sketch follows the syllabus.)
    • Hard Margin Support Vector Machines: The wall of SVMs: you're either in or you're out.
    • Soft Margin Support Vector Machines: Explore this SVM that works even when some points end up on the "wrong side of the wall".
    • Nonlinear Decision Boundaries: Sometimes, the best wall isn't a straight line.
    • More than Two Classes: Learn how to combine several classifiers to handle data sets with many classes.
    • Connection to Logistic Regression: SVMs are similar to logistic regression - but not exactly the same! Find out why.
  • Kernels: It's time to upgrade the dot product. (A kernel-trick sketch follows the syllabus.)
    • Intro To Kernels: Get down the basics of this tool, which measures the similarity of vectors.
    • Kernel Boundaries: Use kernels to classify new data by comparing it to existing data.
    • Kernel Support Vector Machines: See why SVMs are one of the best models for employing kernels.
    • Using the Kernel Trick: Explore the power of the kernel trick, and the drawbacks and pitfalls of using kernels.
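
The course teaches these units interactively rather than through code, but a rough sketch can ground the ideas. For the Linear Regression unit, here is a minimal ordinary-least-squares fit using only NumPy; the toy data and its true coefficients (slope 2, intercept 1) are illustrative assumptions, not taken from the course.

```python
# A minimal sketch of linear regression via least squares, assuming only NumPy.
# Fits y ≈ X @ w by solving the normal equations (X^T X) w = X^T y.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 2x + 1 plus noise (illustrative values, not from the course).
X = rng.uniform(-5, 5, size=(100, 1))
y = 2 * X[:, 0] + 1 + rng.normal(scale=0.5, size=100)

# Append a column of ones so the model learns an intercept.
X_design = np.column_stack([X, np.ones(len(X))])

# Solve the least-squares problem; lstsq is more stable than an explicit inverse.
w, *_ = np.linalg.lstsq(X_design, y, rcond=None)
print("slope, intercept:", w)  # should land close to (2, 1)
```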
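
For the Linear Classification unit, here is a minimal k-nearest-neighbors classifier ("my neighbors are my friends") in plain NumPy; the two Gaussian clusters are made-up example data.

```python
# A minimal k-nearest-neighbors classifier, assuming only NumPy.
# A query point takes the majority label of its k closest training points.
import numpy as np

def knn_predict(X_train, y_train, X_query, k=3):
    preds = []
    for q in X_query:
        dists = np.linalg.norm(X_train - q, axis=1)   # distance to every training point
        nearest = np.argsort(dists)[:k]               # indices of the k closest
        labels, counts = np.unique(y_train[nearest], return_counts=True)
        preds.append(labels[np.argmax(counts)])       # majority vote
    return np.array(preds)

# Two illustrative clusters around (0, 0) and (3, 3).
rng = np.random.default_rng(1)
X_train = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
y_train = np.array([0] * 20 + [1] * 20)
print(knn_predict(X_train, y_train, np.array([[0.2, 0.1], [2.8, 3.2]])))  # expected: [0 1]
```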
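
For the Trees unit, here is a sketch of bagging, assuming scikit-learn is installed; the make_moons dataset, the 100-tree ensemble size, and the train/test split are illustrative choices, not the course's.

```python
# A minimal sketch of tree bagging, assuming scikit-learn is available.
# Bagging trains many trees on bootstrap resamples and averages their votes,
# which reduces the high variance of a single deep tree.
from sklearn.datasets import make_moons
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_moons(n_samples=500, noise=0.3, random_state=0)  # toy two-class data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

single_tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
bagged = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100,
                           random_state=0).fit(X_tr, y_tr)

print("one tree:", single_tree.score(X_te, y_te))
print("bagged:  ", bagged.score(X_te, y_te))  # typically higher accuracy
```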
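
For the Support Vector Machine unit, here is a sketch of the hard- versus soft-margin trade-off, again assuming scikit-learn; the blob data and the values of C are arbitrary illustrations.

```python
# A minimal sketch of soft-margin SVMs, assuming scikit-learn is available.
# The parameter C trades margin width against misclassified points:
# a very large C approximates a hard margin, a small C tolerates violations.
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=200, centers=2, cluster_std=1.5, random_state=0)

for C in (0.01, 1.0, 1000.0):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    print(f"C={C}: {clf.n_support_.sum()} support vectors")
# Smaller C typically means a wider, softer margin and more support vectors.
```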
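
For the Kernels unit, here is a sketch of the kernel trick, assuming scikit-learn: concentric circles are not linearly separable, so a linear SVM stays near chance while an RBF-kernel SVM separates them without ever computing the higher-dimensional mapping explicitly. The gamma value is an arbitrary illustration.

```python
# A minimal sketch of the kernel trick, assuming scikit-learn is available.
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Two concentric rings: no straight line can separate these classes.
X, y = make_circles(n_samples=300, factor=0.4, noise=0.1, random_state=0)

linear = SVC(kernel="linear").fit(X, y)
rbf = SVC(kernel="rbf", gamma=2.0).fit(X, y)

print("linear kernel accuracy:", linear.score(X, y))  # near chance (~0.5)
print("RBF kernel accuracy:   ", rbf.score(X, y))     # near 1.0
```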

