
YouTube

Self - Cross, Hard - Soft Attention and the Transformer

Alfredo Canziani via YouTube

Overview

This course covers self-attention, cross-attention, hard and soft attention, and the Transformer architecture. Students learn about set encoding, key-value stores, and the roles of queries, keys, and values in both self- and cross-attention. The course includes implementation details and a hands-on PyTorch implementation of a Transformer encoder in a Jupyter Notebook. It is intended for anyone interested in deep learning, natural language processing, and neural networks.
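The queries-keys-values mechanism the overview mentions fits in a few lines of PyTorch. The sketch below is not the course's notebook; it is a minimal illustration of soft self-attention, where the projection matrices w_q, w_k, w_v and all sizes are placeholders chosen for the example.

```python
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """Soft self-attention over a set of feature vectors x."""
    # Project the same input into queries, keys, and values.
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    # Scaled dot-product scores: how strongly each query matches each key.
    scores = q @ k.transpose(-2, -1) / k.size(-1) ** 0.5
    # "Soft" attention: softmax yields a convex combination over the values.
    weights = F.softmax(scores, dim=-1)
    # Each output element is a weighted sum of the values.
    return weights @ v

t, d = 5, 16                                   # a set of 5 elements, 16-dim each
x = torch.randn(t, d)
w_q, w_k, w_v = (torch.randn(d, d) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)         # shape: (5, 16)
```

Cross-attention follows the same pattern, except the queries come from one input while the keys and values come from another.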

Syllabus

– Welcome to class
– Listening to YouTube from the terminal
– Summarising papers with @Notion
– Reading papers collaboratively
– Attention! Self / cross, hard / soft
– Use cases: set encoding!
– Self-attention
– Key-value store
– Queries, keys, and values → self-attention
– Queries, keys, and values → cross-attention
– Implementation details
– The Transformer: an encoder-predictor-decoder architecture
– The Transformer encoder
– The Transformer “decoder” which is an encoder-predictor-decoder module
– Jupyter Notebook and PyTorch implementation of a Transformer encoder
– Goodbye
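The syllabus closes with a PyTorch implementation of a Transformer encoder. As a rough sketch of the kind of module involved (again, not the course's actual notebook), PyTorch's built-in encoder layers can be stacked as follows; the sizes here are arbitrary placeholders.

```python
import torch
from torch import nn

# Hypothetical sizes, chosen only for illustration.
d_model, nhead, num_layers = 64, 4, 2

# One encoder block: multi-head self-attention followed by a feed-forward net.
layer = nn.TransformerEncoderLayer(
    d_model=d_model, nhead=nhead, dim_feedforward=128, batch_first=True
)
encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

x = torch.randn(8, 10, d_model)   # batch of 8 sequences, 10 tokens each
h = encoder(x)                    # output keeps the shape: (8, 10, 64)
```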

Taught by

Alfredo Canziani

