The Reinforcement Learning Workshop
eBook - ePub

Alessandro Palmas, Emanuele Ghelfi, Dr. Alexandra Galina Petre, Mayur Kulkarni, Anand N.S., Quan Nguyen, Aritra Sen, Anthony So, Saikat Basak

  • 822 pages
  • English
  • ePUB (mobile friendly)
  • Available on iOS & Android

About This Book

Start with the basics of reinforcement learning and explore deep learning concepts such as deep Q-learning, deep recurrent Q-networks, and policy-based methods with this practical guide

Key Features

  • Use TensorFlow to write reinforcement learning agents for performing challenging tasks
  • Learn how to solve finite Markov decision problems
  • Train models to play popular video games such as Breakout

Book Description

Various intelligent applications such as video games, inventory management software, warehouse robots, and translation tools use reinforcement learning (RL) to make decisions and perform actions that maximize the probability of the desired outcome. This book will help you to get to grips with the techniques and the algorithms for implementing RL in your machine learning models.

Starting with an introduction to RL, you'll be guided through different RL environments and frameworks. You'll learn how to implement your own custom environments and use OpenAI Baselines to run RL algorithms. Once you've explored classic RL techniques such as Dynamic Programming, Monte Carlo, and TD Learning, you'll understand when to apply the different deep learning methods in RL and advance to deep Q-learning. The book will even help you understand the different stages of machine-based problem-solving by using DARQN on the popular video game Breakout. Finally, you'll find out when to use a policy-based method to tackle an RL problem.

By the end of The Reinforcement Learning Workshop, you'll be equipped with the knowledge and skills needed to solve challenging problems using reinforcement learning.

What you will learn

  • Use OpenAI Gym as a framework to implement RL environments
  • Find out how to define and implement a reward function
  • Explore Markov chains, Markov decision processes, and the Bellman equation
  • Distinguish between Dynamic Programming, Monte Carlo, and Temporal Difference Learning
  • Understand the multi-armed bandit problem and explore various strategies to solve it
  • Build a deep Q-network (DQN) for playing the video game Breakout

Who this book is for

If you are a data scientist, machine learning enthusiast, or a Python developer who wants to learn basic to advanced deep reinforcement learning algorithms, this workshop is for you. A basic understanding of the Python language is necessary.


Information

Year: 2020
ISBN: 9781800209961

1. Introduction to Reinforcement Learning

Overview
This chapter introduces the Reinforcement Learning (RL) framework, one of the most exciting fields of machine learning and artificial intelligence. You will learn to describe the characteristics and advanced applications of RL, showing what can be achieved within this framework, and to differentiate between RL and other learning approaches. You will study the main concepts of this discipline from both a theoretical and a practical point of view, using Python and other useful libraries.
By the end of the chapter, you will understand what RL is and know how to use the Gym toolkit and Baselines, two popular libraries in this field, to interact with an environment and implement a simple learning loop.
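To give a concrete taste of that learning loop, here is a minimal sketch of an agent-environment interaction using the Gym interface (the CartPole-v1 environment and the random-action agent are illustrative choices, and the code assumes the classic pre-0.26 Gym API that was current when the book was published):

    import gym

    # Create a simple control environment; CartPole-v1 is an illustrative choice.
    env = gym.make("CartPole-v1")

    obs = env.reset()  # initial observation of the environment's state
    total_reward = 0.0
    done = False

    while not done:
        # A learned agent would map the observation to an action via its policy;
        # here we simply sample a random action from the action space.
        action = env.action_space.sample()
        obs, reward, done, info = env.step(action)
        total_reward += reward

    env.close()
    print("Episode finished with total reward:", total_reward)

Replacing the random sampling with a trained policy is, in essence, what the rest of the book is about.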

Introduction

Learning and adapting to new circumstances is a crucial process for humans and, in general, for all animals. Usually, learning is understood as a process of trial and error through which we improve our performance in particular tasks. Our life is a continuous learning process, that is, we start from simple goals (for example, walking), and we end up pursuing difficult and complex tasks (for example, playing a sport). As humans, we are always driven by our reward mechanism, which rewards good behaviors and punishes bad ones.
Reinforcement Learning (RL), inspired by the human learning process, is a subfield of machine learning and deals with learning from interaction. With the term "interaction," we mean the process of trial and error through which we, as humans, understand the consequences of our actions and build up our own experiences.
RL, in particular, considers sequential decision-making problems. These are problems in which an agent has to take a sequence of decisions, that is, actions, to maximize a certain performance measure.
RL considers tasks to be Markov Decision Processes (MDPs), which are problems arising in many real-world scenarios. In this setting, the decision-maker, referred to as the agent, has to make decisions accounting for environmental uncertainty and experience. Agents are goal-directed; they need only a notion of a goal, such as a numerical signal to be maximized. Unlike supervised learning, in RL, there is no need to provide good examples; it is the agent who learns how to map situations to actions. The mapping from situations (states) to actions is called a "policy" in the literature, and it represents the agent's behavior or strategy. Solving an MDP means finding a policy for the agent that maximizes the desired outcome (that is, the total reward). We will study MDPs in more detail in future chapters.
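In the standard notation of the field (developed fully in the book's later chapters; the discount factor gamma below is the usual convention and is not introduced in this excerpt), the policy and the objective can be written as follows:

    % A (stochastic) policy maps each state to a distribution over actions:
    %   \pi(a \mid s) = \Pr(A_t = a \mid S_t = s)
    % Solving the MDP means finding the policy that maximizes the expected
    % discounted total reward, with discount factor 0 \le \gamma \le 1:
    \pi^{*} = \arg\max_{\pi} \; \mathbb{E}_{\pi}\!\left[ \sum_{t=0}^{\infty} \gamma^{t} R_{t+1} \right]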
RL has been successfully applied to various kinds of problems and domains, showing exciting results. This chapter is an introduction to RL. It aims to explain some applications and describe concepts both from an intuitive perspective and from a mathematical point of view. Both of these aspects are very important when learning new disciplines. Without intuitive understanding, it is impossible to make sense of formulas and algorithms; without mathematical background, it is tough to implement existing or new algorithms.
In this chapter, we will first compare the three main machine learning paradigms, namely supervised learning, RL, and unsupervised learning. We will discuss their differences and similarities and define some example problems.
Second, we will move on to a section that contains the theory of RL and its notations. We will learn about concepts such as what an agent is, what an environment is, and how to parameterize different policies. This section represents the fundamentals of this discipline.
Third, we will begin using two RL frameworks, namely Gym and Baselines. We will learn that interacting with a Gym environment is extremely simple, as is learning a task using Baselines algorithms.
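As an illustrative sketch of the second point (this assumes the deepq.learn entry point found in the OpenAI Baselines repository; exact signatures vary between releases, and the hyperparameter values here are placeholders, not the book's), training a DQN agent can be as short as the following:

    import gym
    from baselines import deepq

    env = gym.make("CartPole-v1")

    # Train a DQN agent on the environment; the values below are illustrative.
    act = deepq.learn(
        env,
        network="mlp",               # a small fully connected network
        lr=1e-3,                     # learning rate
        total_timesteps=100000,      # total environment steps for training
        buffer_size=50000,           # replay buffer capacity
        exploration_fraction=0.1,    # fraction of training spent annealing epsilon
        exploration_final_eps=0.02,  # final epsilon for the epsilon-greedy policy
        print_freq=10,
    )

    env.close()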
Finally, we will explore some RL applications to motivate you to study this discipline, showing various techniques that can be used to face real-world problems. RL is not confined to the academic world; it is also crucial from an industrial point of view, making it possible to solve problems that are almost impossible to solve using other techniques.

Learning Paradigms

In this section, we will discuss the similarities and differences between the three main learning paradigms under the umbrella of machine learning. We will analyze some representative problems in order to understand the characteristics of these frameworks better.

Introduction to Learning Paradigms

A learning paradigm comprises a problem and a solution method. Usually, learning paradigms deal with data and rephrase the problem in a way that can be solved by finding parameters that maximize an objective function. In this setting, the problem can be tackled using mathematical and optimization tools, allowing a formal study. The term "learning" is often used to represent a dynamic process of adapting the algorithm's parameters in such a way as to optimize its performance (that is, to learn) on a given task. Tom Mitchell defined learning in a precise way, as follows:
"A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E."
Let's rephrase the preceding definition more intuitively. To decide whether a program is learning, we need to set a task, that is, the goal of the program. The task can be anything we want the program to do, for example, playing chess, driving autonomously, or classifying images. The task should be accompanied by a performance measure, that is, a function that returns how well the program is performing on that task. For the game of chess, a performance function can simply be represented by the following:
Figure 1.1: A performance function for a game of chess
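The figure itself is not reproduced in this preview; a natural performance measure of this kind, offered here as an illustrative reconstruction rather than the book's exact formula, is the fraction of games won:

    P = \frac{\text{number of games won}}{\text{number of games played}}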
In this context, the experience is the amount of data collected by the program at a specific moment. For chess, the experience is represented by the set of games played by the program.
The same input presented at the beginning or at the end of the learning phase can result in different responses (that is, outputs) from the algorithm; the difference is caused by the algorithm's parameters being updated during the process.
In the following table, we can see some examples of the experience, task, and performance tuples to better understand their concrete instantiations:
Figure 1.2: Table for instantiations
It is possible to classify the learning algorithms based on the input they have and on the feedback they receive. In the following section, we will look at the three main learning paradigms in the context of machine learning based on this classification.

Supervised versus Unsupervised versus RL

The three main learning paradigms are supervised learning, unsupervised...

Table of contents

  1. The Reinforcement Learning Workshop
  2. Preface
  3. 1. Introduction to Reinforcement Learning
  4. 2. Markov Decision Processes and Bellman Equations
  5. 3. Deep Learning in Practice with TensorFlow 2
  6. 4. Getting Started with OpenAI and TensorFlow for Reinforcement Learning
  7. 5. Dynamic Programming
  8. 6. Monte Carlo Methods
  9. 7. Temporal Difference Learning
  10. 8. The Multi-Armed Bandit Problem
  11. 9. What Is Deep Q-Learning?
  12. 10. Playing an Atari Game with Deep Recurrent Q-Networks
  13. 11. Policy-Based Methods for Reinforcement Learning
  14. 12. Evolutionary Strategies for RL
  15. Appendix

Citation styles for The Reinforcement Learning Workshop

APA 6 Citation

Palmas, A., Ghelfi, E., Petre, A. G., Kulkarni, M., Anand, NS., Nguyen, Q., … Basak, S. (2020). The Reinforcement Learning Workshop (1st ed.). Packt Publishing. Retrieved from https://www.perlego.com/book/1694612/the-reinforcement-learning-workshop-pdf (Original work published 2020)

Chicago Citation

Palmas, Alessandro, Emanuele Ghelfi, Alexandra Galina Petre, Mayur Kulkarni, NS. Anand, Quan Nguyen, Aritra Sen, Anthony So, and Saikat Basak. (2020) 2020. The Reinforcement Learning Workshop. 1st ed. Packt Publishing. https://www.perlego.com/book/1694612/the-reinforcement-learning-workshop-pdf.

Harvard Citation

Palmas, A. et al. (2020) The Reinforcement Learning Workshop. 1st edn. Packt Publishing. Available at: https://www.perlego.com/book/1694612/the-reinforcement-learning-workshop-pdf (Accessed: 14 October 2022).

MLA 7 Citation

Palmas, Alessandro et al. The Reinforcement Learning Workshop. 1st ed. Packt Publishing, 2020. Web. 14 Oct. 2022.