Hands-On Neural Networks with TensorFlow 2.0

Understand TensorFlow, from static graph to eager execution, and design neural networks

Paolo Galeone

  1. 358 pages
  2. English
  3. ePUB (available on the app)
  4. Available on iOS and Android

About the book

A comprehensive guide to developing neural network-based solutions using TensorFlow 2.0

Key Features

  • Understand the basics of machine learning and discover the power of neural networks and deep learning
  • Explore the structure of the TensorFlow framework and understand how to transition to TF 2.0
  • Solve any deep learning problem by developing neural network-based solutions using TF 2.0

Book Description

TensorFlow, the most popular and widely used machine learning framework, has made it possible for almost anyone to develop machine learning solutions with ease. With TensorFlow (TF) 2.0, you'll explore a revamped framework structure, offering a wide variety of new features aimed at improving productivity and ease of use for developers.

This book covers machine learning with a focus on developing neural network-based solutions. You'll start by getting familiar with the concepts and techniques required to build solutions to deep learning problems. As you advance, you'll learn how to create classifiers, build object detection and semantic segmentation networks, train generative models, and speed up the development process using TF 2.0 tools such as TensorFlow Datasets and TensorFlow Hub.

By the end of this TensorFlow book, you'll be ready to solve any machine learning problem by developing solutions using TF 2.0 and putting them into production.

What you will learn

  • Grasp machine learning and neural network techniques to solve challenging tasks
  • Apply the new features of TF 2.0 to speed up development
  • Use TensorFlow Datasets (tfds) and the tf.data API to build high-efficiency data input pipelines
  • Perform transfer learning and fine-tuning with TensorFlow Hub
  • Define and train networks to solve object detection and semantic segmentation problems
  • Train Generative Adversarial Networks (GANs) to generate images and data distributions
  • Use the SavedModel file format to put a model, or a generic computational graph, into production

Who this book is for

If you're a developer who wants to get started with machine learning and TensorFlow, or a data scientist interested in developing neural network solutions in TF 2.0, this book is for you. Experienced machine learning engineers who want to master the new features of the TensorFlow framework will also find this book useful.

Basic knowledge of calculus and a strong understanding of Python programming will help you grasp the topics covered in this book.


Information

Year: 2019
ISBN: 9781789613797
Edition: 1

Section 1: Neural Network Fundamentals

This section provides a basic introduction to machine learning and the important concepts of neural networks and deep learning.
This section comprises the following chapters:
  • Chapter 1, What is Machine Learning?
  • Chapter 2, Neural Networks and Deep Learning

What is Machine Learning?

Machine learning (ML) is a branch of artificial intelligence in which we define algorithms that aim to learn a model that describes and extracts meaningful information from data.
Exciting applications of ML can be found in fields such as predictive maintenance in industrial environments, image analysis for medical applications, time series forecasting for finance, face detection and identification for security purposes, autonomous driving, text comprehension, speech recognition, and recommendation systems. The applications of ML are countless, and we probably use them daily without even knowing it!
Just think about the camera application on your smartphone: when you open the app and point the camera toward a person, you see a square drawn around the person's face. How is this possible? For a computer, an image is just a set of three stacked matrices. How can an algorithm detect that a specific subset of those pixels represents a face?
There's a high chance that the algorithm (also called a model) used by the camera application has been trained to detect that pattern. This task is known as face detection, and it can be solved using an ML algorithm that falls into the broad category of supervised learning.
ML tasks are usually classified into three broad categories, all of which we are going to analyze in the following sections:
  • Supervised learning
  • Unsupervised learning
  • Semi-supervised learning
Every category has its own peculiarities and set of algorithms, but all of them share the same goal: learning from data. In particular, every ML algorithm aims to learn an unknown function that maps the data to the (expected) response.
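To make this goal concrete, the unknown function can be written as a mapping from examples to expected responses. The notation below is a standard formalization and an assumption of this rewrite, not a formula taken from the excerpt:

```latex
% Learning from data: find a function f that maps each example to its expected response.
f : \mathcal{X} \to \mathcal{Y}, \qquad f(x) \approx y \quad \text{for each observed pair } (x, y)
```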
The dataset is probably the most critical part of the entire ML pipeline; its quality, structure, and size are key to the success of deep learning algorithms, as we will see in upcoming chapters.
For instance, the aforementioned face detection task can be solved by training a model on thousands and thousands of labeled examples, so that the algorithm learns that a specific input corresponds to what we call a face.
The same algorithm can achieve a different performance if it's trained on a different dataset of faces, and the more high-quality data we have, the better the algorithm's performance will be.
In this chapter, we will cover the following topics:
  • The importance of the dataset
  • Supervised learning
  • Unsupervised learning
  • Semi-supervised learning

The importance of the dataset

Since the concept of the dataset is essential in ML, let's look at it in detail, with a focus on how to create the required splits for building a complete and correct ML pipeline.
A dataset is nothing more than a collection of data. Formally, we can describe a dataset as a set of pairs, $(x_i, y_i)$, where $x_i$ is the i-th example and $y_i$ is its label, with a finite cardinality, $k$:

$$\text{dataset} = \{(x_i, y_i)\}_{i=1}^{k}, \qquad |\text{dataset}| = k$$
A dataset has a finite number of elements, and our ML algorithm will loop over this dataset several times, trying to understand the data structure, until it solves the task it is asked to address. As shown in Chapter 2, Neural Networks and Deep Learning, some algorithms will consider all the data at once, while other algorithms will iteratively look at a small subset of the data at each training iteration.
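To make the distinction concrete, here is a minimal sketch (not taken from the book) that builds a toy in-memory dataset with tf.data and consumes it either as a single full batch or as small mini-batches; all tensor shapes, values, and batch sizes here are illustrative assumptions:

```python
import tensorflow as tf

# Toy in-memory dataset: 8 examples with 4 features each and one integer label per example.
features = tf.random.normal(shape=(8, 4))
labels = tf.constant([0, 1, 0, 1, 1, 0, 1, 0])
dataset = tf.data.Dataset.from_tensor_slices((features, labels))

# "All the data at once": a single batch containing every example.
full_batch = dataset.batch(8)

# "A small subset at each training iteration": mini-batches of 2 examples.
mini_batches = dataset.batch(2)

for x, y in mini_batches:
    print(x.shape, y.shape)  # (2, 4) and (2,) at every iteration
```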
A typical supervised learning task is the classification of the dataset. We train a model on the data, making it learn that a specific set of features extracted from the example $x_i$ (or the example $x_i$ itself) corresponds to a label, $y_i$.
It is worth getting acquainted with the concepts of datasets, dataset splits, and epochs from the very beginning of your journey into the ML world, so that these concepts are already familiar when we discuss them in the chapters that follow.
Right now, you already know, at a very high level, what a dataset is. But let's dig into the basic concepts of a dataset split. A dataset contains all the data that's at your disposal. As we mentioned previously, the ML algorithm needs to loop over the dataset several times and look at the data in order to learn how to solve a task (for example, the classification task).
If we use the same dataset to train and test the performance of our algorithm, how can we guarantee that our algorithm performs well, even on unseen data? Well, we can't.
The most common practice is to split the dataset into three parts:
  • Training set: The subset used to train the model.
  • Validation set: The subset used to measure the model's performance during training and to perform hyperparameter tuning/searches.
  • Test set: The subset that is never touched during the training or validation phases; it is used only for the final performance evaluation.
All three parts are disjoint subsets of the dataset, as shown in the following Venn diagram:
Venn diagram representing how a dataset should be divided: the training, validation, and test sets must not overlap
The training set is usually the largest subset, since it must be a meaningful representation of the whole dataset. The validation and test sets are smaller and generally the same size. Of course, this is only a general guideline; there are no constraints on each split's cardinality. In fact, the only thing that matters is that each split is large enough to be representative and to allow the algorithm to be trained and evaluated properly.
We will make our model learn from the training set, evaluate its performance during the training process using the validation set, and run the final performance evaluation on the test set. This allows us to correctly define and train supervised learning algorithms that generalize well, and therefore work well even on unseen data.
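As a concrete illustration of the split just described, here is a minimal sketch; the 80/10/10 ratio and the helper name split_dataset are assumptions made for illustration, not prescriptions from the text:

```python
import random

def split_dataset(pairs, train_frac=0.8, val_frac=0.1, seed=0):
    """Shuffle a list of (example, label) pairs and split it into disjoint
    training, validation, and test sets."""
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)
    n = len(pairs)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    train = pairs[:n_train]
    validation = pairs[n_train:n_train + n_val]
    test = pairs[n_train + n_val:]  # never touched during training or validation
    return train, validation, test

# Usage example with a toy dataset of 10 (x, y) pairs.
toy = [(i, i % 2) for i in range(10)]
train, validation, test = split_dataset(toy)
print(len(train), len(validation), len(test))  # 8 1 1
```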
An epoch is one complete pass of the learning algorithm over the entire training set. Hence, if our training set has 60,000 examples, once the ML algorithm uses...
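The excerpt breaks off here, but the relationship between epochs and training iterations can be made concrete with a small calculation; the batch size of 32 below is an assumption, not a value given in the text:

```python
import math

training_examples = 60_000   # as in the text's example
batch_size = 32              # assumed; not specified in the excerpt

# One epoch = one full pass over the training set, so the number of training
# iterations per epoch equals the number of mini-batches it contains.
iterations_per_epoch = math.ceil(training_examples / batch_size)
print(iterations_per_epoch)  # 1875
```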

Table of Contents

  1. Title Page
  2. Copyright and Credits
  3. About Packt
  4. Contributors
  5. Preface
  6. Section 1: Neural Network Fundamentals
  7. What is Machine Learning?
  8. Neural Networks and Deep Learning
  9. Section 2: TensorFlow Fundamentals
  10. TensorFlow Graph Architecture
  11. TensorFlow 2.0 Architecture
  12. Efficient Data Input Pipelines and Estimator API
  13. Section 3: The Application of Neural Networks
  14. Image Classification Using TensorFlow Hub
  15. Introduction to Object Detection
  16. Semantic Segmentation and Custom Dataset Builder
  17. Generative Adversarial Networks
  18. Bringing a Model to Production
  19. Other Books You May Enjoy
Citation styles for Hands-On Neural Networks with TensorFlow 2.0

APA 6 Citation

Galeone, P. (2019). Hands-On Neural Networks with TensorFlow 2.0 (1st ed.). Packt Publishing. Retrieved from https://www.perlego.com/book/1204120/handson-neural-networks-with-tensorflow-20-understand-tensorflow-from-static-graph-to-eager-execution-and-design-neural-networks-pdf (Original work published 2019)

Chicago Citation

Galeone, Paolo. (2019) 2019. Hands-On Neural Networks with TensorFlow 2.0. 1st ed. Packt Publishing. https://www.perlego.com/book/1204120/handson-neural-networks-with-tensorflow-20-understand-tensorflow-from-static-graph-to-eager-execution-and-design-neural-networks-pdf.

Harvard Citation

Galeone, P. (2019) Hands-On Neural Networks with TensorFlow 2.0. 1st edn. Packt Publishing. Available at: https://www.perlego.com/book/1204120/handson-neural-networks-with-tensorflow-20-understand-tensorflow-from-static-graph-to-eager-execution-and-design-neural-networks-pdf (Accessed: 14 October 2022).

MLA 7 Citation

Galeone, Paolo. Hands-On Neural Networks with TensorFlow 2.0. 1st ed. Packt Publishing, 2019. Web. 14 Oct. 2022.