Overview
Syllabus
Intro
Conditional Text Generation
Modeling: Conditional Language Models
What if we don't have parallel data?
Can't we just collect/generate the data?
Outline
Initialization: Unsupervised Word Translation
Unsupervised Word Translation: Adversarial Training
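As a concrete illustration of the adversarial approach (Conneau et al., 2018, "Word Translation Without Parallel Data"): learn a linear map W from the source embedding space to the target space so that a discriminator cannot tell mapped source vectors from real target vectors. A minimal PyTorch sketch on toy random embeddings; the dimensions, learning rates, and orthogonality step size are illustrative assumptions, not the paper's exact settings:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d, n_src, n_tgt, batch = 64, 1000, 1000, 32
src_emb = torch.randn(n_src, d)  # stand-in source-language embeddings
tgt_emb = torch.randn(n_tgt, d)  # stand-in target-language embeddings

W = nn.Linear(d, d, bias=False)  # the mapping to be learned
D = nn.Sequential(nn.Linear(d, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))
opt_w = torch.optim.SGD(W.parameters(), lr=0.1)
opt_d = torch.optim.SGD(D.parameters(), lr=0.1)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    src = src_emb[torch.randint(n_src, (batch,))]
    tgt = tgt_emb[torch.randint(n_tgt, (batch,))]
    # 1) discriminator learns to tell mapped source (label 0) from target (label 1)
    opt_d.zero_grad()
    d_loss = bce(D(W(src).detach()), torch.zeros(batch, 1)) + \
             bce(D(tgt), torch.ones(batch, 1))
    d_loss.backward()
    opt_d.step()
    # 2) mapping W learns to fool the discriminator
    opt_w.zero_grad()
    g_loss = bce(D(W(src)), torch.ones(batch, 1))
    g_loss.backward()
    opt_w.step()
    # keep W near-orthogonal, as in MUSE: W <- (1+b)W - b (W W^T) W
    with torch.no_grad():
        Wt = W.weight
        Wt.copy_(1.01 * Wt - 0.01 * Wt @ Wt.t() @ Wt)
```

The final no-grad step nudges W toward an orthogonal matrix, which the MUSE authors found stabilizes adversarial training.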
Back-translation
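Back-translation turns monolingual target-side text into synthetic parallel data: translate it into the source language with the current target-to-source model, then train the source-to-target model on the resulting (synthetic source, real target) pairs. A schematic Python sketch; `toy_tgt2src` is a hypothetical stand-in for the reverse model:

```python
from typing import Callable, List, Tuple

def back_translate(tgt_mono: List[str],
                   translate_tgt2src: Callable[[str], str]) -> List[Tuple[str, str]]:
    """Build synthetic (source, target) pairs from target-side monolingual text."""
    return [(translate_tgt2src(t), t) for t in tgt_mono]

# hypothetical stand-in for the current target->source model
def toy_tgt2src(sentence: str) -> str:
    return " ".join(reversed(sentence.split()))

tgt_mono = ["das ist ein test", "noch ein satz"]
for src, tgt in back_translate(tgt_mono, toy_tgt2src):
    # a real system would now take a src->tgt training step on (src, tgt)
    print(f"synthetic src: {src!r}  real tgt: {tgt!r}")
```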
One-slide primer on phrase-based statistical MT
Unsupervised Statistical MT
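One way unsupervised SMT (e.g., Artetxe et al., 2018) gets off the ground is by inducing an initial phrase table from cross-lingual embeddings, turning similarity scores between source and target units into translation probabilities with a softmax. A numpy sketch over toy word embeddings; the vocabularies, embeddings, and temperature are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
src_vocab = ["haus", "katze", "hund"]
tgt_vocab = ["house", "cat", "dog", "car"]
d = 32
# stand-in cross-lingual embeddings; in practice these come from
# the unsupervised word-translation step above
E_src = rng.standard_normal((len(src_vocab), d))
E_tgt = rng.standard_normal((len(tgt_vocab), d))

def translation_probs(E_src, E_tgt, temperature=0.1):
    # cosine similarity between every (src, tgt) pair
    s = E_src / np.linalg.norm(E_src, axis=1, keepdims=True)
    t = E_tgt / np.linalg.norm(E_tgt, axis=1, keepdims=True)
    sim = s @ t.T
    # softmax over target words gives p(tgt | src)
    z = np.exp(sim / temperature)
    return z / z.sum(axis=1, keepdims=True)

P = translation_probs(E_src, E_tgt)
for i, w in enumerate(src_vocab):
    j = P[i].argmax()
    print(f"p({tgt_vocab[j]} | {w}) = {P[i, j]:.2f}")
```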
Bidirectional Modeling: Same Encoder-Decoder Used for Both Languages, Initialized with Cross-Lingual Word Embeddings
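A minimal PyTorch sketch of this setup: one encoder-decoder whose shared embedding table is initialized from the cross-lingual word embeddings, so the same parameters serve both translation directions. The architecture, sizes, and `pretrained` tensor here are stand-ins; real systems use attentional models:

```python
import torch
import torch.nn as nn

vocab, d = 8000, 64
pretrained = torch.randn(vocab, d)  # stand-in cross-lingual word embeddings

class SharedSeq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        # one embedding table shared by both languages,
        # initialized from cross-lingual word embeddings
        self.emb = nn.Embedding(vocab, d)
        self.emb.weight.data.copy_(pretrained)
        self.encoder = nn.GRU(d, d, batch_first=True)
        self.decoder = nn.GRU(d, d, batch_first=True)
        self.out = nn.Linear(d, vocab)

    def forward(self, src_ids, tgt_ids):
        # same parameters regardless of which language src/tgt is;
        # a language-token id prepended to tgt_ids selects the output language
        _, h = self.encoder(self.emb(src_ids))
        dec, _ = self.decoder(self.emb(tgt_ids), h)
        return self.out(dec)

model = SharedSeq2Seq()
logits = model(torch.randint(vocab, (2, 7)), torch.randint(vocab, (2, 5)))
print(logits.shape)  # torch.Size([2, 5, 8000])
```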
Unsupervised NMT: Training Objective 1
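The first objective in this line of work is typically denoising autoencoding (Lample et al., 2018): corrupt a monolingual sentence with word dropout and local shuffling, and train the shared model to reconstruct the original. A sketch of such a noise function; the dropout rate and shuffle window are assumed values in the spirit of the paper's C(x):

```python
import random

def add_noise(tokens, p_drop=0.1, k=3, seed=None):
    """Corrupt a sentence for denoising autoencoding: drop each word
    with prob p_drop, then shuffle words only locally, so no word
    moves more than about k positions."""
    rng = random.Random(seed)
    kept = [t for t in tokens if rng.random() > p_drop]
    # local shuffle: sort by (original index + uniform noise in [0, k])
    keys = [i + rng.uniform(0, k) for i in range(len(kept))]
    return [t for _, t in sorted(zip(keys, kept))]

sent = "the cat sat on the mat".split()
print(add_noise(sent, seed=0))
# training step: the model reads add_noise(x) and must reconstruct x
```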
How does it work?
Unsupervised NMT: Training Objective 3
In summary
When Does Unsupervised Machine Translation Work?
Reasons for this poor performance
Open Problems
Better Initialization: Cross-Lingual Language Models
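XLM-style initialization (Lample and Conneau, 2019) pretrains the parameters with masked language modeling over the concatenated monolingual corpora of both languages before any translation training. A sketch of BERT-style input masking for the MLM objective; the token ids are toy values, the 80/10/10 replacement split follows BERT, and -100 is PyTorch's default ignore_index for cross-entropy:

```python
import torch

MASK_ID, VOCAB = 3, 1000

def mask_tokens(ids: torch.Tensor, p: float = 0.15):
    """Return (masked_input, labels) for the MLM objective."""
    labels = ids.clone()
    picked = torch.rand(ids.shape) < p      # positions the model must predict
    labels[~picked] = -100                  # everything else ignored by the loss
    out = ids.clone()
    r = torch.rand(ids.shape)
    out[picked & (r < 0.8)] = MASK_ID       # 80%: replace with [MASK]
    rand_ids = torch.randint(VOCAB, ids.shape)
    swap = picked & (r >= 0.8) & (r < 0.9)  # 10%: replace with a random token
    out[swap] = rand_ids[swap]              # final 10%: left unchanged
    return out, labels

x = torch.randint(4, VOCAB, (2, 10))  # toy batch of token ids
inp, lab = mask_tokens(x)
print(inp)
print(lab)
```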
Better Initialization: Multilingual BART
Better Initialization: Masked Sequence-to-Sequence Model (MASS), an Encoder-Decoder Formulation of Masked Language Modeling
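MASS masks one contiguous span in the encoder input and trains the decoder to predict exactly that span, pretraining encoder, decoder, and cross-attention together. A sketch of the data preparation; the roughly 50% span fraction follows the paper, while the tokens and mask symbol are toy values:

```python
import random

MASK = "[M]"

def mass_example(tokens, frac=0.5, seed=None):
    """Mask one contiguous span; the decoder's target is exactly that span."""
    rng = random.Random(seed)
    n = max(1, int(len(tokens) * frac))
    start = rng.randrange(len(tokens) - n + 1)
    enc_input = tokens[:start] + [MASK] * n + tokens[start + n:]
    dec_target = tokens[start:start + n]
    return enc_input, dec_target

enc, dec = mass_example("the cat sat on the mat".split(), seed=0)
print(enc)  # the sentence with a contiguous masked span
print(dec)  # the span the decoder must generate
```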
Multilingual Unsupervised MT
Multilingual UNMT
How practical is the strict unsupervised scenario?
Related Area: Style Transfer
Discussion Question
Taught by
Graham Neubig