Chapter 1: Introduction to Deep Reinforcement Learning
Chapter Goal: Introduce the reader to the field of reinforcement learning and set the context for what they will learn in the rest of the book.
Sub-Topics:
1. Deep reinforcement learning
2. Examples and case studies
3. Types of algorithms, with a mind map
4. Libraries and environment setup (sketched below)
5. Summary
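A minimal sketch of the environment setup covered in topic 4, assuming the gym package is installed (pip install gym); note that the reset and step signatures changed in later Gym releases, so treat this as illustrative rather than definitive:

    import gym

    env = gym.make("CartPole-v1")            # classic-control task
    obs = env.reset()                        # older Gym API: returns obs only
    for _ in range(100):
        action = env.action_space.sample()   # random action for now
        obs, reward, done, info = env.step(action)
        if done:
            obs = env.reset()
    env.close()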
Chapter 2: Markov Decision Processes
Chapter Goal: Help the reader understand Markov decision processes, the models and foundations on which all reinforcement learning algorithms are built.
Sub-Topics:
1. Agent and environment
2. Rewards
3. Markov reward and decision processes
4. Policies and value functions
5. Bellman equations (previewed below)
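As a preview of topic 5, the chapter builds up to the Bellman expectation equation for the state-value function under a policy pi, in standard Sutton-and-Barto notation:

    v_\pi(s) = \sum_{a} \pi(a \mid s) \sum_{s', r} p(s', r \mid s, a)
               \left[ r + \gamma \, v_\pi(s') \right]

Reading it aloud: the value of a state is the expected one-step reward plus the discounted value of wherever the agent lands next, averaged over the policy's action choices and the environment's dynamics.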
Chapter 3: Model-Based Algorithms
Chapter Goal: Introduce the reader to dynamic programming and related algorithms.
Sub-Topics:
1. Introduction to OpenAI Gym environment
2. Policy evaluation/prediction
3. Policy iteration and improvement
3. Generalized policy iteration
5. Value iteration (sketched below)
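A minimal value-iteration sketch for topic 5, assuming transitions come as P[s][a] = list of (prob, next_state, reward, done) tuples, the format Gym's toy-text environments expose via env.P:

    import numpy as np

    def value_iteration(P, n_states, n_actions, gamma=0.99, tol=1e-6):
        """Apply the Bellman optimality backup until values converge."""
        V = np.zeros(n_states)
        while True:
            delta = 0.0
            for s in range(n_states):
                # One-step lookahead: evaluate Q(s, a) for every action
                q = [sum(p * (r + gamma * V[s2] * (not done))
                         for p, s2, r, done in P[s][a])
                     for a in range(n_actions)]
                best = max(q)
                delta = max(delta, abs(best - V[s]))
                V[s] = best
            if delta < tol:
                return V

The greedy policy with respect to the converged V is then optimal, which is the bridge to policy improvement.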
Chapter 4: Model-Free Approaches
Chapter Goal: Introduce the reader to model-free methods, which form the basis of the majority of current solutions.
Sub-Topics:
1. Prediction and control with Monte Carlo methods
2. Exploration vs exploitation
3. TD learning methods
4. TD control
5. On-policy learning using SARSA
6. Off-policy learning using Q-learning (sketched below)
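A minimal sketch of the off-policy Q-learning update from topic 6; Q is assumed to be a NumPy table of shape (n_states, n_actions), with alpha and gamma as illustrative hyperparameters:

    import numpy as np

    def q_learning_update(Q, s, a, r, s_next, done, alpha=0.1, gamma=0.99):
        # Bootstrap from the greedy action in the next state, regardless
        # of the behaviour policy that actually collected the transition.
        target = r + gamma * (0.0 if done else np.max(Q[s_next]))
        Q[s, a] += alpha * (target - Q[s, a])

Swapping np.max(Q[s_next]) for Q[s_next, a_next], where a_next is the action the behaviour policy actually takes next, turns this into the on-policy SARSA update of topic 5.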
Chapter 5: Function Approximation
Chapter Goal: Help readers understand value function approximation and the use of deep learning in reinforcement learning.
Sub-Topics:
1. Limitations of tabular methods studied so far
2. Value function approximation
3. Linear methods and features used (sketched below)
4. Nonlinear function approximation using deep learning
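A minimal sketch of semi-gradient TD(0) with a linear value function, previewing topic 3; features is an assumed helper mapping a state to its feature vector:

    import numpy as np

    def td0_linear_update(w, features, s, r, s_next, done,
                          alpha=0.01, gamma=0.99):
        """One semi-gradient TD(0) step for v(s) ~ w . x(s)."""
        x = features(s)
        v_next = 0.0 if done else np.dot(w, features(s_next))
        td_error = r + gamma * v_next - np.dot(w, x)
        w += alpha * td_error * x   # gradient flows through the prediction only
        return w

Replacing the dot products with a neural network's forward pass is exactly the jump to topic 4.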
Chapter 6: Deep Q-Learning
Chapter Goal: Help readers understand the core use of deep learning in reinforcement learning. Deep Q-learning and many of its variants are introduced here with in-depth code exercises.
Sub-Topics:
1. Deep Q-networks (DQN)
2. Issues in naive DQN
3. Experience replay and target networks (sketched below)
4. Double Q-learning (DDQN)
5. Dueling DQN
6. Categorical 51-atom DQN (C51)
7. Quantile regression DQN (QR-DQN)
8. Hindsight experience replay (HER)
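A minimal PyTorch sketch of the two stabilization tricks from topics 1-3, experience replay plus a frozen target network; q_net and target_net are assumed to be identically shaped torch.nn.Module instances, and the hyperparameters are illustrative:

    import random
    from collections import deque
    import numpy as np
    import torch

    class ReplayBuffer:
        """Fixed-size store of (s, a, r, s2, done) transitions."""
        def __init__(self, capacity=100_000):
            self.buf = deque(maxlen=capacity)

        def push(self, s, a, r, s2, done):
            self.buf.append((s, a, r, s2, done))

        def sample(self, batch_size):
            s, a, r, s2, d = zip(*random.sample(self.buf, batch_size))
            as_t = lambda x: torch.as_tensor(np.array(x), dtype=torch.float32)
            return as_t(s), torch.as_tensor(a), as_t(r), as_t(s2), as_t(d)

    def dqn_loss(q_net, target_net, batch, gamma=0.99):
        s, a, r, s2, d = batch
        q = q_net(s).gather(1, a.view(-1, 1)).squeeze(1)   # Q(s, a)
        with torch.no_grad():              # target network gets no gradients
            q_next = target_net(s2).max(dim=1).values
        target = r + gamma * q_next * (1 - d)
        return torch.nn.functional.mse_loss(q, target)

    # Every C training steps: target_net.load_state_dict(q_net.state_dict())

Sampling uniformly from the buffer breaks the correlation between consecutive transitions, and freezing the target network keeps the regression target from chasing its own updates.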
Chapter 7: Policy Gradient Algorithms
Chapter Goal: Introduce the reader to the concept of policy gradients and the related theory. Gain in-depth knowledge of common policy gradient methods through hands-on exercises.
Sub-Topics:
1. Policy gradient approach and its advantages
2. The policy gradient theorem
3. REINFORCE algorithm (sketched below)
4. REINFORCE with baseline
5. Actor-critic methods
6. Advantage actor-critic (A2C/A3C)
7. Proximal policy optimization (PPO)
8. Trust region policy optimization (TRPO)
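A minimal PyTorch sketch of the REINFORCE loss from topics 3 and 4, with return normalization as a simple baseline; log_probs and returns are assumed to be collected while rolling out one episode:

    import torch

    def reinforce_loss(log_probs, returns):
        """Negative of sum_t log pi(a_t | s_t) * G_t over one episode."""
        G = torch.as_tensor(returns, dtype=torch.float32)
        # Normalizing returns is a common variance-reduction baseline.
        G = (G - G.mean()) / (G.std() + 1e-8)
        return -(torch.stack(log_probs) * G).sum()

Minimizing this loss with any optimizer performs stochastic gradient ascent on the expected return, which is what the policy gradient theorem licenses.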
Chapter 8: Combining Policy Gradients and Q-Learning
Chapter Goal: Introduce the reader to the trade-offs between the two approaches and to ways of connecting these seemingly dissimilar approaches. Gain in-depth knowledge of some landmark approaches.
Sub-Topics:
1. Trade-off between policy gradients and Q-learning
2. The connection
3. Deep deterministic policy gradient (DDPG)
4. Twin delayed DDPG (TD3)
5. Soft actor-critic (SAC), with a shared target-update sketch below
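One building block shared by DDPG, TD3, and SAC is the Polyak-averaged ("soft") target-network update, sketched minimally below in PyTorch with an illustrative tau:

    import torch

    @torch.no_grad()
    def soft_update(target_net, online_net, tau=0.005):
        # target <- (1 - tau) * target + tau * online, parameter by parameter
        for tp, op in zip(target_net.parameters(), online_net.parameters()):
            tp.mul_(1.0 - tau).add_(op, alpha=tau)

Unlike DQN's periodic hard copy, the target drifts slowly toward the online network on every step, which these continuous-control methods rely on for stability.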
Chapter 9: Integrated Learning and Planning
Chapter Goal: Introduce the reader to approaches that combine learning and planning, making them sample efficient enough to scale to larger problems.
Sub-Topics:
1. Model based reinforcement learning
2. Dyna and its variants
3. Guided policy search
4. Monte Carlo tree search (MCTS), with a selection-rule sketch below
5. AlphaGo
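A minimal sketch of the UCT selection rule at the heart of MCTS (topic 4); node.N (visit count) and node.W (total value) are attributes of an assumed, illustrative tree-node class:

    import math

    def uct_select(children, N_parent, c=1.4):
        """Pick the child balancing average value against visit count."""
        def uct(node):
            if node.N == 0:
                return float("inf")       # always expand unvisited moves first
            exploit = node.W / node.N     # mean value from simulations so far
            explore = c * math.sqrt(math.log(N_parent) / node.N)
            return exploit + explore
        return max(children, key=uct)

AlphaGo's variant replaces the count-based bonus with one shaped by a learned policy network's prior probabilities.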
Chapter 10: Further Exploration and Next Steps
Chapter Goal: With most of the popular algorithms covered, readers now revisit the exploration-vs-exploitation dilemma, which is central to reinforcement learning.
Sub-Topics:
1. Multi-armed bandits
2. Upper confidence bound (sketched below)
3. Thompson sampling
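A minimal sketch of the UCB1 rule for a multi-armed bandit, previewing topic 2; counts and values are assumed per-arm tallies of pulls and mean rewards, and t is the total number of pulls so far:

    import math

    def ucb1_action(counts, values, t, c=2.0):
        """Pick the arm maximizing mean reward plus an exploration bonus."""
        for a, n in enumerate(counts):
            if n == 0:
                return a                  # try every arm at least once
        return max(range(len(counts)),
                   key=lambda a: values[a]
                       + math.sqrt(c * math.log(t) / counts[a]))

Arms pulled rarely keep a large bonus, so the rule explores systematically rather than at random, in contrast to epsilon-greedy.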
Nimish is a passionate technical leader who brings to the table an intense focus on using technology to solve customer problems. He has over 25 years of work experience in software and consulting. Nimish has held leadership roles with P&L responsibilities at PwC, IBM, and Oracle. In 2006 he set out on his entrepreneurial journey in software consulting at SOAIS, with offices in Boston, Chicago, and Bangalore. Today the firm provides automation and digital transformation services to Fortune 100 companies, helping them make the transition from on-premise applications to the cloud.
He is also an angel investor in the space of AI- and automation-driven startups. He co-founded Paybooks, a SaaS HR and payroll platform for the Indian market. He also co-founded a Boston-based startup that offers ZipperAgent and ZipperHQ, a suite of AI-driven workflow and video marketing automation platforms. He currently holds the position of CTO and Chief Data Scientist for both these platforms.
Nimish has an MBA from the Indian Institute of Management, Ahmedabad, India, and a BS in Electrical Engineering from the Indian Institute of Technology, Kanpur, India. He also holds multiple certifications in AI and deep learning.
Deep reinforcement learning is a fast-growing discipline that is making a significant impact in the fields of autonomous vehicles, robotics, healthcare, finance, and many more. This book covers deep reinforcement learning using deep Q-learning and policy gradient models, with coding exercises.
You'll begin by reviewing Markov decision processes, Bellman equations, and dynamic programming, which form the core concepts and foundation of deep reinforcement learning. Next, you'll study model-free learning, followed by function approximation using neural networks and deep learning. This is followed by deep reinforcement learning algorithms such as deep Q-networks, various flavors of actor-critic methods, and other policy-based methods.
You'll also look at the exploration-vs-exploitation dilemma, a key consideration in reinforcement learning algorithms, along with Monte Carlo tree search (MCTS), which played a key role in the success of AlphaGo. The final chapters conclude with deep reinforcement learning implementations using popular deep learning frameworks such as TensorFlow and PyTorch. By the end, you'll understand deep reinforcement learning along with deep Q-network and policy gradient model implementations in TensorFlow, PyTorch, and OpenAI Gym.
You will:
Examine deep reinforcement learning
Implement deep reinforcement learning algorithms using OpenAI’s Gym environment
Code your own game-playing agents for Atari using actor-critic algorithms
Apply best practices for model building and algorithm training