Hands-On Meta Learning with Python
eBook - ePub

Meta learning using one-shot learning, MAML, Reptile, and Meta-SGD with TensorFlow

Sudharsan Ravichandiran

  1. 226 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available on iOS and Android

About This Book

Explore a diverse set of meta-learning algorithms and techniques to enable human-like cognition for your machine learning models using various Python frameworks

Key Features

  • Understand the foundations of meta learning algorithms
  • Work through practical examples of various one-shot learning algorithms and their applications in TensorFlow
  • Master state-of-the-art meta learning algorithms such as MAML, Reptile, and Meta-SGD

Book Description

Meta learning is an exciting research trend in machine learning that enables a model to understand the learning process itself. Unlike other ML paradigms, meta learning lets you learn from small datasets, and learn faster.

Hands-On Meta Learning with Python starts by explaining the fundamentals of meta learning and helps you understand the concept of learning to learn. You will delve into various one-shot learning algorithms, like siamese, prototypical, relation and memory-augmented networks by implementing them in TensorFlow and Keras. As you make your way through the book, you will dive into state-of-the-art meta learning algorithms such as MAML, Reptile, and CAML. You will then explore how to learn quickly with Meta-SGD and discover how you can perform unsupervised learning using meta learning with CACTUs. In the concluding chapters, you will work through recent trends in meta learning such as adversarial meta learning, task agnostic meta learning, and meta imitation learning.

By the end of this book, you will be familiar with state-of-the-art meta learning algorithms and able to enable human-like cognition for your machine learning models.

What you will learn

  • Understand the basics of meta learning methods, algorithms, and types
  • Build voice and face recognition models using a siamese network
  • Learn the prototypical network along with its variants
  • Build relation networks and matching networks from scratch
  • Implement MAML and Reptile algorithms from scratch in Python
  • Work through imitation learning and adversarial meta learning
  • Explore task agnostic meta learning and deep meta learning

Who this book is for

Hands-On Meta Learning with Python is for machine learning enthusiasts, AI researchers, and data scientists who want to explore meta learning as an advanced approach for training machine learning models. Working knowledge of machine learning concepts and Python programming is necessary.


Information

Year
2018
ISBN
9781789537024
Edition
1

MAML and Its Variants

In the previous chapter, we learned about the Neural Turing Machine (NTM) and how it stores and retrieves information from memory. We also learned about a variant of the NTM called the memory-augmented neural network, which is extensively used in one-shot learning. In this chapter, we will learn about one of the most interesting and widely used meta learning algorithms, called Model Agnostic Meta Learning (MAML). We will see what model agnostic meta learning is and how it is used in supervised and reinforcement learning settings. We will also learn how to build MAML from scratch, and then we will learn about Adversarial Meta Learning (ADML). We will see how ADML is used to find a robust model parameter. Following that, we will learn how to implement ADML for a classification task. Lastly, we will learn about Context Adaptation for Meta Learning (CAML).
In this chapter, you will learn about the following:
  • MAML
  • MAML algorithm
  • MAML in supervised and reinforcement learning settings
  • Building MAML from scratch
  • ADML
  • Building ADML from scratch
  • CAML

MAML

MAML is one of the most recently introduced and widely used meta learning algorithms, and it has been a major breakthrough in meta learning research. Learning to learn is the key focus of meta learning: we learn from various related tasks, each containing only a small number of data points, and the meta learner produces a quick learner that can generalize well to a new related task even with fewer training samples.
The basic idea of MAML is to find better initial parameters so that, starting from those parameters, the model can learn quickly on a new task with fewer gradient steps.
So, what do we mean by that? Let's say we are performing a classification task using a neural network. How do we train the network? We start off by initializing random weights and train the network by minimizing the loss. How do we minimize the loss? We do so using gradient descent. Okay, but how do we use gradient descent to minimize the loss? We use gradient descent to find the optimal weights that give us the minimal loss, taking multiple gradient steps until we reach convergence.
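To make this concrete, here is a minimal, hypothetical NumPy sketch of that process: random initial weights for a toy linear model are refined by repeated gradient descent steps on a mean squared error loss. The model, data, and learning rate here are illustrative assumptions, not code from the book.

```python
import numpy as np

# Hypothetical single-task training loop: start from random weights and
# take repeated gradient steps to minimize a mean squared error loss.
np.random.seed(0)
X = np.random.randn(100, 3)          # toy inputs
y = X @ np.array([2.0, -1.0, 0.5])   # toy targets

theta = np.random.randn(3)           # random initialization
lr = 0.1                             # learning rate

for step in range(100):
    preds = X @ theta
    loss = np.mean((preds - y) ** 2)
    grad = 2 * X.T @ (preds - y) / len(X)   # gradient of the MSE loss w.r.t. theta
    theta = theta - lr * grad               # one gradient descent step
```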
In MAML, we try to find these optimal weights by learning from a distribution of similar tasks. So, for a new task, we don't have to start with randomly initialized weights; instead, we can start with weights that are already close to optimal, which means fewer gradient steps to reach convergence and fewer data points needed for training.
Let's understand MAML in simple terms. Let's say we have three related tasks: T1, T2, and T3. First, we randomly initialize our model parameter, Ξ. We train our network on task T1 and try to minimize the loss L by gradient descent, that is, by finding the optimal parameter Ξ'1. Similarly, for tasks T2 and T3, we start off with the randomly initialized model parameter Ξ and minimize the loss by finding the right set of parameters by gradient descent. Let's say Ξ'2 and Ξ'3 are the optimal parameters for tasks T2 and T3, respectively.
As you can see in the following diagram, we start off each task with the randomly initialized parameter Ξ and minimize the loss by finding the optimal parameters Ξ'1, Ξ'2, and Ξ'3 for each of the tasks T1, T2, and T3, respectively:
However, instead of initializing Ξ at a random position, that is, with random values, if we initialize Ξ at a position that is common to all three tasks, we don't need to take as many gradient steps and training takes less time. MAML tries to do exactly this. MAML tries to find this optimal parameter Ξ that is common to many of the related tasks, so we c...
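To make this shared-initialization idea concrete, the following is a minimal, hypothetical NumPy sketch using a first-order approximation of the meta-gradient on toy linear regression tasks. The task sampler, learning rates, and single-batch inner update are illustrative assumptions, not the book's implementation, which is built step by step later in this chapter.

```python
import numpy as np

def sample_task():
    """Hypothetical task: linear regression with a random weight vector."""
    w = np.random.randn(3)
    X = np.random.randn(10, 3)
    return X, X @ w

def loss_and_grad(theta, X, y):
    preds = X @ theta
    grad = 2 * X.T @ (preds - y) / len(X)
    return np.mean((preds - y) ** 2), grad

theta = np.random.randn(3)   # shared initialization that MAML learns
alpha, beta = 0.01, 0.001    # inner and outer (meta) learning rates

for meta_step in range(1000):
    meta_grad = np.zeros_like(theta)
    for _ in range(3):                       # a few tasks per meta-batch (e.g. T1, T2, T3)
        X, y = sample_task()
        _, g = loss_and_grad(theta, X, y)
        theta_prime = theta - alpha * g      # inner loop: task-specific adaptation
        _, g_prime = loss_and_grad(theta_prime, X, y)
        meta_grad += g_prime                 # first-order approximation of the meta-gradient
    theta -= beta * meta_grad / 3            # outer loop: update the shared initialization
```

After meta-training, adapting to a new related task starts from this learned Ξ rather than from random values, so only a few inner gradient steps are needed.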

Table of Contents

  1. Title Page
  2. Copyright and Credits
  3. Dedication
  4. About Packt
  5. Contributors
  6. Preface
  7. Introduction to Meta Learning
  8. Face and Audio Recognition Using Siamese Networks
  9. Prototypical Networks and Their Variants
  10. Relation and Matching Networks Using TensorFlow
  11. Memory-Augmented Neural Networks
  12. MAML and Its Variants
  13. Meta-SGD and Reptile
  14. Gradient Agreement as an Optimization Objective
  15. Recent Advancements and Next Steps
  16. Assessments
  17. Other Books You May Enjoy