Deep Learning with TensorFlow 2 and Keras

eBook - ePub
Regression, ConvNets, GANs, RNNs, NLP, and more with TensorFlow 2 and the Keras API, 2nd Edition

Antonio Gulli, Amita Kapoor, Sujit Pal

  1. 646 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available on iOS and Android

About this book

Build machine and deep learning systems with the newly released TensorFlow 2 and Keras for the lab, production, and mobile devices

Key Features

  • Introduces and then uses TensorFlow 2 and Keras right from the start
  • Teaches key machine and deep learning techniques
  • Covers the fundamentals of machine and deep learning through clear explanations and extensive code samples

Book Description

Deep Learning with TensorFlow 2 and Keras, Second Edition teaches neural networks and deep learning techniques alongside TensorFlow (TF) and Keras. You'll learn how to write deep learning applications in the most powerful, popular, and scalable machine learning stack available.

TensorFlow is the machine learning library of choice for professional applications, while Keras offers a simple and powerful Python API for accessing TensorFlow. TensorFlow 2 provides full Keras integration, making advanced machine learning easier and more convenient than ever before.

This book also introduces neural networks with TensorFlow, runs through the main applications (regression, ConvNets (CNNs), GANs, RNNs, NLP), covers two working example apps, and then dives into TF in production, TF mobile, and using TensorFlow with AutoML.

What you will learn

  • Build machine learning and deep learning systems with TensorFlow 2 and the Keras API
  • Use regression analysis, the most popular approach to machine learning
  • Understand ConvNets (convolutional neural networks) and how they are essential for deep learning systems such as image classifiers
  • Use GANs (generative adversarial networks) to create new data that fits with existing patterns
  • Discover RNNs (recurrent neural networks) that can process sequences of input intelligently, using one part of a sequence to correctly interpret another
  • Apply deep learning to natural human language and interpret natural language texts to produce an appropriate response
  • Train your models on the cloud and put TF to work in real environments
  • Explore how Google tools can automate simple ML workflows without the need for complex modeling

Who this book is for

This book is for Python developers and data scientists who want to build machine learning and deep learning systems with TensorFlow. This book gives you the theory and practice required to use Keras, TensorFlow 2, and AutoML to build machine learning systems. Some knowledge of machine learning is expected.


Information

Year
2019
ISBN
9781838827724

2

TensorFlow 1.x and 2.x

The intent of this chapter is to explain the differences between TensorFlow 1.x and TensorFlow 2.0. We'll start by reviewing the traditional programming paradigm for 1.x and then we'll move on to all the new features and paradigms available in 2.x.

Understanding TensorFlow 1.x

Tradition holds that the first program one writes in any computer language is "hello world," and we maintain that convention in this book! Let's begin with a Hello World program:
import tensorflow as tf

message = tf.constant('Welcome to the exciting world of Deep Neural Networks!')
with tf.Session() as sess:
    print(sess.run(message).decode())
Let's look at this simple code in more depth. The first line imports tensorflow. The second line defines the message using tf.constant. The third line creates a Session() inside a with block, and the fourth runs the session using run(). Running the session returns a byte string; to remove the string quotes and the b (for byte) prefix, we use the decode() method.
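For contrast with the 2.x paradigm covered later in this chapter, the same program needs no session under TF 2.x, which executes eagerly (a minimal sketch, assuming TensorFlow 2.x is installed):

```python
import tensorflow as tf

# TF 2.x executes eagerly: no Session is required, and the tensor's
# value is available immediately via .numpy().
message = tf.constant('Welcome to the exciting world of Deep Neural Networks!')
decoded = message.numpy().decode()
print(decoded)
```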

TensorFlow 1.x computational graph program structure

TensorFlow 1.x is unlike other programming languages. We first need to build a blueprint of whatever neural network we want to create. This is accomplished by dividing the program into two separate parts: a definition of a computational graph, and its execution.

Computational graphs

A computational graph is a network of nodes and edges. In this graph, all the data to be used – that is, tensor objects (constants, variables, placeholders) – and all the computations to be performed – that is, operation objects – are defined. Each node can have zero or more inputs but only one output. Nodes in the network represent objects (tensors and operations), and edges represent the tensors that flow between operations. The computational graph defines the blueprint of the neural network, but the tensors in it have no "value" associated with them yet.
A placeholder is simply a variable that we will assign data to at a later time. It allows us to create our computational graph, without needing the data.
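As a minimal sketch of this idea (written against the 1.x API, which is reachable as tf.compat.v1 under TensorFlow 2.x), a placeholder is declared without data and fed only at run time via feed_dict:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()  # restore 1.x graph-mode semantics under TF 2.x

# The placeholder declares only dtype and shape; no data is attached yet.
x = tf.placeholder(tf.float32, shape=[3])
y = x * 2

with tf.Session() as sess:
    # The data is bound to the placeholder only at execution time.
    result = sess.run(y, feed_dict={x: [1.0, 2.0, 3.0]})
print(result)
```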
To build a computational graph, we define all the constants, variables, and operations that we need to perform. In the following sections we describe the structure using a simple example of defining and executing a graph to add two vectors.

Execution of the graph

The execution of the graph is performed using the session object, which encapsulates the environment in which tensor and operation objects are evaluated. This is the place where actual calculations and transfers of information from one layer to another take place. The values of different tensor objects are initialized, accessed, and saved in a session object only. Until this point, the tensor objects were just abstract definitions. Here, they come to life.

Why do we use graphs at all?

There are several reasons why we use graphs. First of all, they are a natural metaphor for describing (deep) networks. Secondly, graphs can be automatically optimized by removing common sub-expressions, fusing kernels, and cutting redundant expressions. Thirdly, graphs can be distributed easily during training, and deployed to different environments such as CPUs, GPUs, or TPUs, as well as to the cloud, IoT, mobile, or traditional servers. Finally, computational graphs will feel familiar if you know functional programming: they can be seen as compositions of simple primitives. TensorFlow borrowed many concepts from computational graphs, and internally it performs several optimizations on our behalf.

An example to start with

We'll consider a simple example of adding two vectors. The graph we want to build feeds two constant vectors into a single add node. The corresponding code to define the computational graph is:
v_1 = tf.constant([1,2,3,4])
v_2 = tf.constant([2,1,5,3])
v_add = tf.add(v_1, v_2)  # You can also write v_1 + v_2 instead
Next, we execute the graph in the session:
with tf.Session() as sess:
    print(sess.run(v_add))
or
sess = tf.Session()
print(sess.run(v_add))
sess.close()
This results in printing the sum of two vectors:
[3 3 8 7] 
Remember, each session opened without a with block needs to be explicitly closed using close().
Building a computational graph is very simple – you keep adding variables and operations and letting the tensors flow through them. In this way you build your neural network layer by layer. TensorFlow also allows you to assign specific devices (CPU/GPU) to different objects of the computational graph using tf.device(). In our example, the computational graph consists of three nodes: v_1 and v_2, representing the two vectors, and v_add, the operation to be performed on them. Now, to bring this graph to life, we first need to define a session object using tf.Session(); we named our session object sess. Next, we run it using the run method defined in the Session class:
run(fetches, feed_dict=None, options=None, run_metadata=None)
This evaluates the tensor in the fetches parameter. Our example has tensor v_add in fetches. The run me...
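To illustrate how fetches and feed_dict interact, here is a hedged sketch (1.x API via tf.compat.v1) that evaluates two tensors in a single run call; the placeholder names are illustrative:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()  # use 1.x graph-mode semantics under TF 2.x

a = tf.placeholder(tf.int32)
b = tf.placeholder(tf.int32)
total = a + b
product = a * b

with tf.Session() as sess:
    # fetches may be a single tensor or a list of tensors; feed_dict
    # supplies the placeholder values for this particular run.
    s, p = sess.run([total, product], feed_dict={a: 3, b: 4})
print(s, p)
```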
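The tf.device() placement mentioned above can be sketched as follows for the same vector addition (assuming only a CPU is present; with a GPU, a string such as '/device:GPU:0' could be used instead):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()  # use 1.x graph-mode semantics under TF 2.x

# Pin the graph's nodes to the CPU.
with tf.device('/cpu:0'):
    v_1 = tf.constant([1, 2, 3, 4])
    v_2 = tf.constant([2, 1, 5, 3])
    v_add = tf.add(v_1, v_2)

with tf.Session() as sess:
    result = sess.run(v_add)
print(result)
```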

Table of Contents

  1. Preface
  2. Neural Network Foundations with TensorFlow 2.0
  3. TensorFlow 1.x and 2.x
  4. Regression
  5. Convolutional Neural Networks
  6. Advanced Convolutional Neural Networks
  7. Generative Adversarial Networks
  8. Word Embeddings
  9. Recurrent Neural Networks
  10. Autoencoders
  11. Unsupervised Learning
  12. Reinforcement Learning
  13. TensorFlow and Cloud
  14. TensorFlow for Mobile and IoT and TensorFlow.js
  15. An introduction to AutoML
  16. The Math Behind Deep Learning
  17. Tensor Processing Unit
  18. Other Books You May Enjoy
  19. Index