Reinforcement learning is becoming increasingly important in today's world, and understanding its algorithms is a key part of working with it effectively.
Traditional machine learning algorithms are used for prediction and classification. Reinforcement learning, by contrast, is about training agents to take decisions that maximize cumulative rewards. In this course, Understanding Algorithms for Reinforcement Learning, you'll learn the basic principles of reinforcement learning algorithms, the RL taxonomy, and specific policy search techniques such as Q-learning and SARSA. First, you'll discover the objective of reinforcement learning: to find an optimal policy that allows agents to make the right decisions to maximize long-term rewards. You'll study how to model the environment so that RL algorithms are computationally tractable. Next, you'll explore dynamic programming, an important technique that caches intermediate results, simplifying the computation of complex problems. You'll understand and implement temporal difference policy search techniques, namely Q-learning and SARSA, which help you converge to an optimal policy for your RL algorithm. Finally, you'll build reinforcement learning platforms that allow study, prototyping, and development of policies, and work with both Q-learning and SARSA on OpenAI Gym. By the end of this course, you should have a solid understanding of reinforcement learning techniques, including Q-learning and SARSA, and be able to implement basic RL algorithms.
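To give a flavor of the kind of policy search covered in the course, here is a minimal tabular Q-learning sketch on an OpenAI Gym environment. It is illustrative only: it assumes the classic Gym step/reset API (gym versions before 0.26), the "FrozenLake-v1" environment id, and arbitrarily chosen hyperparameters.

```python
# Minimal tabular Q-learning sketch (illustrative; assumes the classic Gym API, i.e. gym < 0.26).
import numpy as np
import gym

env = gym.make("FrozenLake-v1")
n_states, n_actions = env.observation_space.n, env.action_space.n
Q = np.zeros((n_states, n_actions))          # Q-table: expected return per (state, action)

alpha, gamma, epsilon = 0.1, 0.99, 0.1       # learning rate, discount factor, exploration rate

for episode in range(5000):
    state = env.reset()
    done = False
    while not done:
        # epsilon-greedy action selection: explore occasionally, otherwise act greedily
        if np.random.rand() < epsilon:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(Q[state]))

        next_state, reward, done, _ = env.step(action)

        # Q-learning (off-policy TD) update: bootstrap from the best next action
        Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
        # SARSA (on-policy) would instead bootstrap from the action actually taken next:
        # Q[state, action] += alpha * (reward + gamma * Q[next_state, next_action] - Q[state, action])

        state = next_state
```

The commented-out line highlights the difference the course explores between the two techniques: Q-learning updates toward the greedy next action, while SARSA updates toward the action the current policy actually selects next.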