A reader is interested in a specific news article and you want to find similar articles to recommend. What is the right notion of similarity? Moreover, what if there are millions of other documents? Each time you want to retrieve a new document, do you need to search through all other documents? How do you group similar documents together? How do you discover new, emerging topics that the documents cover?
In this third case study, finding similar documents, you will examine similarity-based algorithms for retrieval. In this course, you will also examine structured representations for describing the documents in the corpus, including clustering and mixed membership models, such as latent Dirichlet allocation (LDA). You will implement expectation maximization (EM) to learn the document clusterings, and see how to scale the methods using MapReduce.
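As a taste of the "notion of similarity" question, here is a minimal plain-Python sketch (not course code, and simplified from what the labs do) of comparing documents with TF-IDF weighting and cosine similarity. The tiny toy corpus and the helper names are made up for illustration.

```python
import math
from collections import Counter

def tf_idf_vectors(docs):
    """Build simple TF-IDF vectors (dicts) for a list of tokenized documents."""
    n = len(docs)
    # document frequency: in how many documents each word appears
    df = Counter(word for doc in docs for word in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        # term frequency times inverse document frequency
        vectors.append({w: tf[w] * math.log(n / df[w]) for w in tf})
    return vectors

def cosine_similarity(u, v):
    """Cosine similarity between two sparse (dict) vectors."""
    dot = sum(weight * v.get(word, 0.0) for word, weight in u.items())
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    if norm_u == 0.0 or norm_v == 0.0:
        return 0.0
    return dot / (norm_u * norm_v)

docs = [
    "the cat sat on the mat".split(),
    "the cat chased the dog".split(),
    "stocks rallied on earnings news".split(),
]
vecs = tf_idf_vectors(docs)
sim_cats = cosine_similarity(vecs[0], vecs[1])   # two cat documents
sim_mixed = cosine_similarity(vecs[0], vecs[2])  # cat document vs. finance document
```

Under this metric the two cat documents score noticeably higher than the cat/finance pair, which is the behavior a retrieval system wants before any k-nearest-neighbor search is layered on top.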
Learning Outcomes: By the end of this course, you will be able to:
-Create a document retrieval system using k-nearest neighbors.
-Identify various similarity metrics for text data.
-Reduce computations in k-nearest neighbor search by using KD-trees.
-Produce approximate nearest neighbors using locality sensitive hashing.
-Compare and contrast supervised and unsupervised learning tasks.
-Cluster documents by topic using k-means.
-Describe how to parallelize k-means using MapReduce.
-Examine probabilistic clustering approaches using mixtures models.
-Fit a mixture of Gaussian model using expectation maximization (EM).
-Perform mixed membership modeling using latent Dirichlet allocation (LDA).
-Describe the steps of a Gibbs sampler and how to use its output to draw inferences.
-Compare and contrast initialization techniques for non-convex optimization objectives.
-Implement these techniques in Python.
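To give a flavor of the clustering outcomes above, here is a bare-bones Lloyd's-algorithm k-means on 2-D points, written in plain Python. This is an illustrative sketch, not the course's implementation (the labs work on high-dimensional TF-IDF vectors, not toy 2-D data), and the data below is invented.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal Lloyd's-algorithm k-means on 2-D points."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # random initialization from the data
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # assignment step: attach each point to its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[j].append(p)
        # update step: move each center to the mean of its cluster
        for j, cl in enumerate(clusters):
            if cl:
                centers[j] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return centers, clusters

# two well-separated blobs, which k-means should recover
points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers, clusters = kmeans(points, k=2)
```

The hard assignment in the first step is exactly what the course later softens: Gaussian mixture models fit with EM replace it with a probabilistic responsibility for each cluster.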
Gregory completed this course and found the course difficulty to be medium.
Machine Learning: Clustering & Retrieval is the fourth course in the University of Washington's 6-part machine learning specialization on Coursera. The 6-week course covers several popular techniques for grouping unlabeled data and retrieving items similar to items of interest. After a short intro in week 1, the course covers k-nearest neighbor search, k-means clustering, Gaussian mixture models, latent Dirichlet allocation and hierarchical clustering. It is recommended that you complete the first 3 courses in the specialization track before taking this course, but you could take it as a standalone course as long as you know a bit of Python and probability. Grading is based on a series of comprehension quizzes and labs, but you must pay for a verified certificate to gain access to graded assignments. Thankfully you can still download and complete the labs without doing the associated quizzes, so you won't miss too much as a free auditing student.
Clustering and Retrieval has a good balance of lecture content and labs that illustrate concepts covered in lecture. The professor is easy to understand and the lecture slides are well done. The course generally has good pacing and devotes plenty of time to each of the main weekly topics, taking care to explain important considerations like different algorithmic approaches to each method and similarities between different techniques. It does, however, go off on a couple of tangents, introducing MapReduce and hidden Markov models, neither of which is covered in much detail or addressed in the labs.
The labs use a data set of Wikipedia articles about famous people as an example to illustrate clustering and retrieval. Using the same data set for multiple labs is always a good idea because it lets students focus on the techniques themselves instead of having to familiarize themselves with new data. The amount of actual coding you have to do in the labs is minimal. The labs are more like interactive explorations of machine learning techniques with occasional one-line fill-in-the-blanks than full-on coding assignments. You'll spend more time reading text, running provided code and analyzing results than writing code yourself. You can look at and answer the lab quiz questions as you go along, but you can't actually submit them and get graded feedback without joining the verified track.
Machine Learning: Clustering & Retrieval is a great course that covers many of the most common clustering techniques with adequate depth while remaining accessible. Although the coding required is minimal, it is not an easy course: some of the concepts may take a couple of watch-throughs to sink in, and you may struggle with certain concepts if you don't have prior knowledge of probability. Aside from the need to pay to gain access to graded quizzes and a few topics that felt tacked on, there's not much to dislike about this course.
I give Machine Learning: Clustering & Retrieval 4.5 out of 5 stars: Great.
Jason completed this course, spending 3 hours a week on it and found the course difficulty to be medium.
Another phenomenal machine learning class by the University of Washington! This one is a little lighter on the math and programming, mostly because the concepts (especially in the last two modules) get extremely abstract. However, the concepts are explained well enough to recreate the functions as custom programs, which is what I love about these classes.