This course is an introduction to sequence models and their applications, including an overview of sequence model architectures and how to handle inputs of variable length.
• Predict future values of a time series
• Classify free-form text
• Address time-series and text problems with recurrent neural networks
• Choose between RNNs/LSTMs and simpler models
• Train and reuse word embeddings in text problems
You will get hands-on practice building and optimizing your own text classification and sequence models on a variety of public datasets in the labs we’ll work on together.
Prerequisites: Basic SQL, familiarity with Python and TensorFlow
Working with Sequences
-In this module, you’ll learn what a sequence is, see how to prepare sequence data for modeling, and get an introduction to some classical approaches to sequence modeling, with practice applying them.
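As a flavor of the classical approaches mentioned above, here is a minimal sketch (not the course's lab code; all names are illustrative) of a moving-average baseline, which forecasts each next value of a series as the mean of the previous few observations:

```python
import numpy as np

def moving_average_forecast(series, window):
    """Forecast each next value as the mean of the previous `window` values."""
    series = np.asarray(series, dtype=float)
    return np.array([series[i - window:i].mean()
                     for i in range(window, len(series))])

# Example: forecast a short series using a window of 3 past values.
series = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
print(moving_average_forecast(series, window=3))  # -> [2. 3. 4.]
```

Baselines like this are worth building first: if a learned model can't beat a simple moving average, the extra complexity isn't paying off.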
Recurrent Neural Networks
-In this module, we introduce recurrent neural nets, explain how they address the variable-length sequence problem, explain how our traditional optimization procedure applies to RNNs, and review what RNNs can and can’t represent.
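To see how an RNN handles variable-length input, here is a minimal NumPy sketch (illustrative names and sizes, not the course's code): the same weights are applied at every time step, so the loop works for a sequence of any length.

```python
import numpy as np

def rnn_forward(x_seq, W_x, W_h, b, h0):
    """Run a simple RNN over a sequence.

    x_seq: (T, input_dim) -- T may differ from example to example.
    Returns the final hidden state, shape (hidden_dim,).
    """
    h = h0
    for x_t in x_seq:  # the same weights are reused at every step
        h = np.tanh(x_t @ W_x + h @ W_h + b)
    return h

rng = np.random.default_rng(0)
input_dim, hidden_dim = 4, 3
W_x = rng.normal(size=(input_dim, hidden_dim))
W_h = rng.normal(size=(hidden_dim, hidden_dim))
b = np.zeros(hidden_dim)
h0 = np.zeros(hidden_dim)

# Two sequences of different lengths share one set of parameters.
short_seq = rng.normal(size=(2, input_dim))
long_seq = rng.normal(size=(7, input_dim))
print(rnn_forward(short_seq, W_x, W_h, b, h0).shape)  # (3,)
print(rnn_forward(long_seq, W_x, W_h, b, h0).shape)   # (3,)
```

Because the output is a fixed-size state regardless of sequence length, it can feed a standard dense layer for prediction or classification.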
Dealing with Longer Sequences
-In this module, we dive deeper into RNNs. We’ll talk about LSTMs, deep RNNs, working with real-world data, and more.
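As a preview of why LSTMs cope better with longer sequences, here is a hedged NumPy sketch of a single LSTM step (illustrative names, not the course's implementation): the gates control an additive cell-state update, which helps gradients survive over many steps.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h, c, W, U, b):
    """One LSTM step. W: (input_dim, 4*hid), U: (hid, 4*hid), b: (4*hid,)."""
    z = x_t @ W + h @ U + b
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)  # input / forget / output gates
    c = f * c + i * np.tanh(g)                    # additive cell-state update
    h = o * np.tanh(c)                            # gated hidden output
    return h, c

rng = np.random.default_rng(1)
input_dim, hid = 4, 3
W = rng.normal(size=(input_dim, 4 * hid))
U = rng.normal(size=(hid, 4 * hid))
b = np.zeros(4 * hid)
h = np.zeros(hid)
c = np.zeros(hid)
for x_t in rng.normal(size=(10, input_dim)):  # a 10-step sequence
    h, c = lstm_step(x_t, h, c, W, U, b)
print(h.shape, c.shape)  # (3,) (3,)
```

In practice you'd use a framework's built-in LSTM layer; this sketch only shows where the cell state and gates sit in the computation.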
Text Classification
-In this module, we look at different ways of working with text and how to create your own text classification models.
Reusable Embeddings
-Labeled data for our classification models is expensive and precious. Here we will address how we can reuse pre-trained embeddings in our models with TensorFlow Hub.
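The idea behind reusing pre-trained embeddings can be sketched without TensorFlow Hub at all (Hub packages this pattern up as a ready-made layer). Below is a minimal NumPy illustration with a made-up five-word "pre-trained" table: look up frozen word vectors and average them into a sentence feature for a downstream classifier.

```python
import numpy as np

# A tiny, made-up "pre-trained" embedding table: one row per vocabulary word.
vocab = {"the": 0, "movie": 1, "was": 2, "great": 3, "terrible": 4}
pretrained = np.array([
    [0.1, 0.0], [0.2, 0.5], [0.0, 0.1], [0.9, 0.8], [-0.9, -0.7],
])

def sentence_embedding(tokens):
    """Average the frozen word vectors -- a cheap, reusable text feature."""
    ids = [vocab[t] for t in tokens if t in vocab]
    return pretrained[ids].mean(axis=0)

feature = sentence_embedding("the movie was great".split())
print(feature.shape)  # (2,) -- ready to feed a small classifier on top
```

Because the embedding rows are frozen, only the small classifier on top needs labeled data, which is the payoff when labels are scarce.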
Encoder-Decoder Models
-In this module, we focus on a sequence-to-sequence model called the encoder-decoder network and apply it to tasks such as machine translation, text summarization, and question answering.
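The encoder-decoder shape can be sketched in a few lines of NumPy (untrained, with illustrative names; real implementations use framework layers): the encoder compresses a variable-length source sequence into one state vector, and the decoder unrolls from that state to emit a target sequence of a possibly different length.

```python
import numpy as np

rng = np.random.default_rng(2)
in_dim, hid, out_steps = 4, 3, 5

def step(x, h, W_x, W_h):
    """One tanh RNN step; the same form serves encoder and decoder."""
    return np.tanh(x @ W_x + h @ W_h)

# Encoder: compress a length-6 source sequence into a single state vector.
We_x = rng.normal(size=(in_dim, hid))
We_h = rng.normal(size=(hid, hid))
h = np.zeros(hid)
for x_t in rng.normal(size=(6, in_dim)):
    h = step(x_t, h, We_x, We_h)

# Decoder: start from the encoder's summary and feed each output back in.
Wd_x = rng.normal(size=(in_dim, hid))
Wd_h = rng.normal(size=(hid, hid))
W_out = rng.normal(size=(hid, in_dim))
d = h                          # decoder state initialized from the encoder
y_prev = np.zeros(in_dim)      # stand-in for a <start> token
outputs = []
for _ in range(out_steps):
    d = step(y_prev, d, Wd_x, Wd_h)
    y_prev = d @ W_out         # untrained output projection, fed back in
    outputs.append(y_prev)
outputs = np.array(outputs)
print(outputs.shape)  # (5, 4): target length differs from source length
```

The decoupling of the two loops is the point: source and target lengths are independent, which is exactly what translation and summarization require.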
Summary
-In this final module, we review what you have learned so far about sequence modeling for time-series and natural language data.