Hands-On Meta Learning with Python

Meta learning using one-shot learning, MAML, Reptile, and Meta-SGD with TensorFlow

Sudharsan Ravichandiran

  1. 226 pages
  2. English
  3. ePUB (available on the app)
  4. Available on iOS and Android

About This Book

Explore a diverse set of meta-learning algorithms and techniques to enable human-like cognition for your machine learning models using various Python frameworks

Key Features

  • Understand the foundations of meta learning algorithms
  • Work through practical examples of various one-shot learning algorithms and their applications in TensorFlow
  • Master state-of-the-art meta learning algorithms such as MAML, Reptile, and Meta-SGD

Book Description

Meta learning is an exciting research trend in machine learning that enables a model to understand the learning process itself. Unlike other ML paradigms, meta learning lets you learn faster from small datasets.

Hands-On Meta Learning with Python starts by explaining the fundamentals of meta learning and helps you understand the concept of learning to learn. You will delve into various one-shot learning algorithms, like siamese, prototypical, relation and memory-augmented networks by implementing them in TensorFlow and Keras. As you make your way through the book, you will dive into state-of-the-art meta learning algorithms such as MAML, Reptile, and CAML. You will then explore how to learn quickly with Meta-SGD and discover how you can perform unsupervised learning using meta learning with CACTUs. In the concluding chapters, you will work through recent trends in meta learning such as adversarial meta learning, task agnostic meta learning, and meta imitation learning.

By the end of this book, you will be familiar with state-of-the-art meta learning algorithms and able to enable human-like cognition for your machine learning models.

What you will learn

  • Understand the basics of meta learning methods, algorithms, and types
  • Build voice and face recognition models using a siamese network
  • Learn the prototypical network along with its variants
  • Build relation networks and matching networks from scratch
  • Implement MAML and Reptile algorithms from scratch in Python
  • Work through imitation learning and adversarial meta learning
  • Explore task agnostic meta learning and deep meta learning

Who this book is for

Hands-On Meta Learning with Python is for machine learning enthusiasts, AI researchers, and data scientists who want to explore meta learning as an advanced approach for training machine learning models. Working knowledge of machine learning concepts and Python programming is necessary.

Information

Year
2018
ISBN
9781789537024
Edition
1
Subject
Informatique

MAML and Its Variants

In the previous chapter, we learned about the Neural Turing Machine (NTM) and how it stores and retrieves information from memory. We also learned about a variant of NTM called the memory-augmented neural network, which is extensively used in one-shot learning. In this chapter, we will learn about one of the most interesting and widely used meta learning algorithms, called Model Agnostic Meta Learning (MAML). We will see what model agnostic meta learning is, and how it is used in supervised and reinforcement learning settings. We will also learn how to build MAML from scratch, and then we will learn about Adversarial Meta Learning (ADML). We will see how ADML is used to find a robust model parameter. Following that, we will learn how to implement ADML for a classification task. Lastly, we will learn about Context Adaptation for Meta Learning (CAML).
In this chapter, you will learn about the following:
  • MAML
  • MAML algorithm
  • MAML in supervised and reinforcement learning settings
  • Building MAML from scratch
  • ADML
  • Building ADML from scratch
  • CAML

MAML

MAML is one of the most recently introduced and widely used meta learning algorithms, and it has created a major breakthrough in meta learning research. Learning to learn is the key focus of meta learning: we learn from various related tasks, each containing only a small number of data points, and the meta learner produces a quick learner that can generalize well on a new related task even with fewer training samples.
The basic idea of MAML is to find a better initial parameter so that, with good initial parameters, the model can learn quickly on new tasks with fewer gradient steps.
So, what do we mean by that? Let's say we are performing a classification task using a neural network. How do we train the network? We start off by initializing random weights and train the network by minimizing the loss. How do we minimize the loss? We do so using gradient descent. Okay, but how do we use gradient descent to minimize the loss? We use gradient descent to find the optimal weights that will give us the minimal loss, taking multiple gradient steps until we reach convergence.
In MAML, we try to find these optimal weights by learning from a distribution of similar tasks. So, for a new task, we don't have to start with randomly initialized weights; instead, we can start with weights that are already close to optimal, which means fewer gradient steps to reach convergence and fewer data points needed for training.
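To make the point concrete, here is a minimal sketch, not taken from the book, of gradient descent on a toy one-parameter regression task, run once from a random initialization and once from an initialization close to the optimum. The task, learning rate, and tolerance are illustrative assumptions; the run that starts near the optimum reaches convergence in far fewer gradient steps.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=20)
y = 3.0 * x                      # this toy task's true parameter is 3.0

def loss(theta):
    # mean squared error of the one-parameter model theta * x
    return np.mean((theta * x - y) ** 2)

def grad(theta):
    # gradient of the mean squared error with respect to theta
    return np.mean(2.0 * (theta * x - y) * x)

def steps_to_converge(theta, lr=0.1, tol=1e-4):
    # take gradient steps until the loss drops below the tolerance
    for step in range(1000):
        if loss(theta) < tol:
            return step
        theta = theta - lr * grad(theta)
    return None

print(steps_to_converge(theta=rng.normal()))  # random initialization: many steps
print(steps_to_converge(theta=2.9))           # initialization near the optimum: few steps
```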
Let's understand MAML in simple terms; let's say we have three related tasks: T1, T2, and T3. First, we randomly initialize our model parameter, θ. We train our network on task T1. Then, we try to minimize the loss L by gradient descent. We minimize the loss by finding the optimal parameter, θ'1. Similarly, for tasks T2 and T3, we will start off with a randomly initialized model parameter, θ, and minimize the loss by finding the right set of parameters by gradient descent. Let's say θ'2 and θ'3 are the optimal parameters for tasks T2 and T3, respectively.
As you can see in the following diagram, we start off each task with the randomly initialized parameter θ and minimize the loss by finding the optimal parameters θ'1, θ'2, and θ'3 for each of the tasks T1, T2, and T3, respectively:
However, instead of initializing θ in a random position (that is, with random values), if we initialize θ in a position that is common to all three tasks, we don't need to take as many gradient steps and training will take less time. MAML tries to do exactly this: it tries to find this optimal parameter θ that is common to many of the related tasks, so we can adapt to a new related task quickly, with only a few gradient steps and a few data points.
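The following is a minimal sketch of that idea, assuming toy one-parameter regression tasks y = a * x whose slopes a are drawn from a task distribution. It is an illustrative, first-order approximation (the outer update uses the gradient evaluated at the adapted parameter rather than differentiating through the inner loop), not the book's implementation; the inner loop adapts the shared initialization θ to each sampled task, and the outer loop slowly moves θ so that this adaptation works well across tasks.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Return (x, y) for a toy regression task y = a * x with a random slope a."""
    a = rng.uniform(1.0, 5.0)
    x = rng.uniform(-1.0, 1.0, size=10)
    return x, a * x

def grad(theta, x, y):
    """Gradient of the mean squared error with respect to theta."""
    return np.mean(2.0 * (theta * x - y) * x)

theta = rng.normal()              # the shared initialization we meta-learn
inner_lr, meta_lr = 0.1, 0.01

for _ in range(2000):             # outer (meta) loop over sampled tasks
    x, y = sample_task()
    # Inner loop: adapt the shared initialization to this particular task.
    theta_task = theta
    for _ in range(5):
        theta_task = theta_task - inner_lr * grad(theta_task, x, y)
    # First-order meta-update: move theta using the gradient evaluated
    # at the adapted parameter theta_task.
    theta = theta - meta_lr * grad(theta_task, x, y)

print(theta)  # the learned initialization ends up near the average task slope (~3.0)
```

Starting a new task from this meta-learned θ, rather than from a random value, then needs only a handful of inner gradient steps to reach that task's optimum.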

Table of Contents

  1. Title Page
  2. Copyright and Credits
  3. Dedication
  4. About Packt
  5. Contributors
  6. Preface
  7. Introduction to Meta Learning
  8. Face and Audio Recognition Using Siamese Networks
  9. Prototypical Networks and Their Variants
  10. Relation and Matching Networks Using TensorFlow
  11. Memory-Augmented Neural Networks
  12. MAML and Its Variants
  13. Meta-SGD and Reptile
  14. Gradient Agreement as an Optimization Objective
  15. Recent Advancements and Next Steps
  16. Assessments
  17. Other Books You May Enjoy
Citation Styles for Hands-On Meta Learning with Python

APA 6 Citation

Ravichandiran, S. (2018). Hands-On Meta Learning with Python (1st ed.). Packt Publishing. Retrieved from https://www.perlego.com/book/868341/handson-meta-learning-with-python-meta-learning-using-oneshot-learning-maml-reptile-and-metasgd-with-tensorflow-pdf (Original work published 2018)

Chicago Citation

Ravichandiran, Sudharsan. (2018) 2018. Hands-On Meta Learning with Python. 1st ed. Packt Publishing. https://www.perlego.com/book/868341/handson-meta-learning-with-python-meta-learning-using-oneshot-learning-maml-reptile-and-metasgd-with-tensorflow-pdf.

Harvard Citation

Ravichandiran, S. (2018) Hands-On Meta Learning with Python. 1st edn. Packt Publishing. Available at: https://www.perlego.com/book/868341/handson-meta-learning-with-python-meta-learning-using-oneshot-learning-maml-reptile-and-metasgd-with-tensorflow-pdf (Accessed: 14 October 2022).

MLA 7 Citation

Ravichandiran, Sudharsan. Hands-On Meta Learning with Python. 1st ed. Packt Publishing, 2018. Web. 14 Oct. 2022.