Overview
This course explores OpenAI's GPT-2 model and the controversies surrounding its release. Learning outcomes include understanding how language models can learn tasks without explicit supervision and the potential of zero-shot task transfer. The course also covers the capacity of language models and their performance across a range of language processing tasks. Teaching proceeds by discussing the model's capabilities and presenting samples of text it generates. The course is intended for anyone interested in natural language processing, machine learning, and artificial intelligence.
Syllabus
GPT-2: Language Models are Unsupervised Multitask Learners
Taught by
Yannic Kilcher