

Self-Supervision & Contrastive Frameworks - A Vision-Based Review

Stanford University via YouTube

Overview

This course covers self-supervision and contrastive frameworks in computer vision, focusing on learning representations from unlabelled data. It walks through the methodology of leading frameworks, including SimCLR, MoCo V2, BYOL, SwAV, DINO, and Barlow Twins, and discusses their strengths, weaknesses, and suitability for medical applications. For each framework, the review covers its architecture, loss function, and key findings. The course is intended for anyone interested in machine-learning methodology for medical applications and in leveraging unlabelled data toward clinical deployment.
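To make the "loss function" discussions concrete, here is a minimal NumPy sketch of the NT-Xent (normalized temperature-scaled cross-entropy) objective that SimCLR popularized. This is an illustrative implementation, not course material; the pairing convention (rows 2k and 2k+1 are the two augmented views of example k) and the default temperature are assumptions.

```python
import numpy as np

def nt_xent_loss(z, temperature=0.5):
    """NT-Xent loss as used in SimCLR-style contrastive learning.

    `z` holds 2N embeddings: rows 2k and 2k+1 are assumed to be the
    two augmented views of example k (an illustrative convention).
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize embeddings
    sim = z @ z.T / temperature                        # scaled cosine similarities
    n = z.shape[0]
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    pos = np.arange(n) ^ 1                             # row 2k pairs with 2k+1 and vice versa
    logits = sim - sim.max(axis=1, keepdims=True)      # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(n), pos].mean()         # cross-entropy toward the positive
```

The key contrastive idea the course highlights is visible here: every other embedding in the batch serves as a negative, so large batches give SimCLR more negatives per positive pair.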

Syllabus

Intro
SS Learning: Invariant Representations
Pre-text Tasks: A Deeper Dive
Contrastive Learning: Instance Discrimination
Contrastive Learning: Problem
SimCLR: Simple Framework for Contrastive Learning of Visual Representations
SimCLR: Architecture
SimCLR: Loss Function
SimCLR: Findings
MoCo V2: Momentum Contrast
MoCo V2: Architecture
MoCo V2: Main Principle
MoCo V2: Loss Function
MoCo V2: Findings
BYOL: Bootstrap Your Own Latent
BYOL: Architecture
BYOL: Main Principle
BYOL: Findings
SwAV: Swapping Assignments between Views
SwAV: Architecture
SwAV: Loss Function
SwAV: Main Principle
SwAV: Multi-crop
SwAV: Additional Findings
DINO: Self-Distillation with NO labels
DINO: Attention-Maps
ViT (Vision Transformer): Architecture
DINO: Architecture
DINO: Loss Function
DINO: Main Principle
DINO: Multi-crop
DINO: Additional Findings and Compute
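Several frameworks in the syllabus (MoCo V2, BYOL, DINO) rely on a momentum-updated target or teacher network: a slowly moving exponential moving average (EMA) of the online/student weights. A minimal sketch of that update, with illustrative parameter lists standing in for real network weights:

```python
def ema_update(teacher_params, student_params, momentum=0.99):
    """One momentum (EMA) step: teacher <- m * teacher + (1 - m) * student.

    This is the update rule MoCo, BYOL, and DINO use to keep a stable
    teacher/target network; parameter lists here are illustrative floats
    standing in for per-layer weight tensors.
    """
    return [momentum * t + (1.0 - momentum) * s
            for t, s in zip(teacher_params, student_params)]
```

A high momentum (e.g. 0.99 to 0.999+) makes the teacher change slowly, which is what gives these methods stable targets without negative pairs (BYOL, DINO) or a consistent key encoder for the queue (MoCo).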

Taught by

Stanford MedAI

