Author Interview - Transformer Memory as a Differentiable Search Index

Yannic Kilcher via YouTube


Classroom Contents


  1. Intro
  2. Start of Interview
  3. How did this idea start?
  4. How does memorization play into this?
  5. Why did you not compare to cross-encoders?
  6. Instead of the ID, could one reproduce the document itself?
  7. Passages vs. documents
  8. Where can this model be applied?
  9. Can we make this work on large collections?
  10. What's up with the NQ100K dataset?
  11. What is going on inside these models?
  12. What's the smallest scale to obtain meaningful results?
  13. Investigating the document identifiers
  14. What's the end goal?
  15. What are the hardest problems currently?
  16. Final comments & how to get started
