Deep Learning with Microsoft Cognitive Toolkit Quick Start Guide
eBook - ePub

A practical guide to building neural networks using Microsoft's open source deep learning framework

Willem Meints

  1. 208 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available on iOS and Android

Book Information

Learn how to train popular deep learning architectures such as autoencoders, convolutional and recurrent neural networks, and discover how you can use deep learning models in your software applications with Microsoft Cognitive Toolkit.

Key Features

  • Understand the fundamentals of Microsoft Cognitive Toolkit and set up the development environment
  • Train different types of neural networks using Cognitive Toolkit and deploy them to production
  • Evaluate the performance of your models and improve your deep learning skills

Book Description

Cognitive Toolkit is a popular, recently open-sourced deep learning toolkit from Microsoft, used to train fast and effective deep learning models. This book is a quick introduction to Cognitive Toolkit and will teach you how to train and validate different types of neural networks, such as convolutional and recurrent neural networks.

This book will help you understand the basics of deep learning. You will learn how to use Microsoft Cognitive Toolkit to build deep learning models and discover what makes this framework unique, so that you know when to use it. This book will be a quick, no-nonsense introduction to the library and will teach you how to train different types of neural networks, such as convolutional neural networks, recurrent neural networks, autoencoders, and more, using Cognitive Toolkit. Then we will look at two scenarios in which deep learning can be used to enhance human capabilities. The book will also demonstrate how to evaluate your models' performance to ensure they train and run smoothly and give you the most accurate results. Finally, you will get a short overview of how Cognitive Toolkit fits into a DevOps environment.

What you will learn

  • Set up your deep learning environment for the Cognitive Toolkit on Windows and Linux
  • Pre-process and feed your data into neural networks
  • Use neural networks to make efficient predictions and recommendations
  • Train and deploy efficient neural networks such as CNNs and RNNs
  • Detect problems in your neural network using TensorBoard
  • Integrate Cognitive Toolkit with Azure ML Services for effective deep learning

Who this book is for

Data scientists, machine learning developers, and AI developers who wish to train and deploy effective deep learning models using Microsoft CNTK will find this book useful. Readers need to have experience in Python or a similar object-oriented language, such as C# or Java.

Information

Year
2019
ISBN
9781789803198
Edition
1
Category
Neural Networks

Validating Model Performance

When you've built a deep learning model using neural networks, you are left with the question of how well it can predict when presented with new data. Are the predictions made by the model accurate enough to be usable in a real-world scenario? In this chapter, we will look at how to measure the performance of your deep learning models. We'll also dive into tooling to monitor and debug your models.
By the end of this chapter, you'll have a solid understanding of different validation techniques you can use to measure the performance of your model. You'll also know how to use a tool such as TensorBoard to get into the details of your neural network. Finally, you will know how to apply different visualizations to debug your neural network.
The following topics will be covered in this chapter:
  • Choosing a good strategy to validate model performance
  • Validating the performance of a classification model
  • Validating the performance of a regression model
  • Measuring the performance of models for out-of-memory datasets
  • Monitoring your model

Technical requirements

We assume you have a recent version of Anaconda installed on your computer and have followed the steps in Chapter 1, Getting Started with CNTK, to install CNTK on your computer. The sample code for this chapter can be found in our GitHub repository at https://github.com/PacktPublishing/Deep-Learning-with-Microsoft-Cognitive-Toolkit-Quick-Start-Guide/tree/master/ch4.
In this chapter, we'll work on a few examples stored in Jupyter Notebooks. To access the sample code, run the following commands inside an Anaconda prompt in the directory where you've downloaded the code:
cd ch4
jupyter notebook
We'll mention relevant notebooks in each of the sections so you can follow along and try out different techniques yourself.
Check out the following video to see the code in action:
http://bit.ly/2TVuoR3

Choosing a good strategy to validate model performance

Before we dive into different validation techniques for various kinds of models, let's talk a little bit about validating deep learning models in general.
When you build a machine learning model, you're training it with a set of data samples. The machine learning model learns these samples and derives general rules from them. When you feed the same samples to the model, it will perform pretty well on those samples. However, when you feed new samples to the model that you haven't used in training, the model will behave differently. It will most likely be worse at making a good prediction on those samples. This happens because your model will always tend to lean toward data it has seen before.
But we don't want our model to be good at predicting the outcome for samples it has seen before. It needs to work well for samples that are new to the model, because in a production environment you will get different input that you need to predict an outcome for. To make sure that our model works well, we need to validate it using a set of samples that we didn't use for training.
Let's take a look at two different techniques for creating a dataset for validating a neural network. First, we'll explore how to use a hold-out dataset. After that we'll focus on a more complex method of creating a separate validation dataset.

Using a hold-out dataset for validation

The first and easiest method to create a dataset to validate a neural network is to use a hold-out set. You hold back a portion of the samples from training and use those samples to measure the performance of your model after training is done.
The ratio between training and validation samples is usually around 80% training samples versus 20% test samples. This ensures that you have enough data to train the model and a reasonable amount of samples to get a good measurement of the performance.
Usually, you choose random samples from the main dataset to include in the training and test set. This ensures that you get an even distribution between the sets.
You can produce your own hold-out set using the train_test_split function from the scikit-learn library. It accepts any number of datasets and splits them into two segments based on either the train_size or the test_size keyword parameter:
from sklearn.model_selection import train_test_split

# Reserve 20% of the samples as the hold-out test set; without a fixed
# random_state, the split is shuffled differently on every call.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
It is good practice to randomly split your dataset each time you run a training session. Deep learning algorithms, such as the ones used in CNTK, are strongly influenced by random-number generation and by the order in which samples are fed to the neural network during training. To even out the effect of sample order, randomize the order of your dataset each time you train the model.
Using a hold-out set works well when you want to quickly measure the performance of your model. It's also great when you have a large dataset or a model that takes a long time to train. But there are downsides to using the hold-out technique.
Your model is sensitive to the order in which samples are provided during training. Also, each time you start a new training session, the random-number generator in your computer will provide different values to initialize the parameters in your neural network. This can cause large swings in the performance metrics: sometimes you will get really good results, and sometimes really bad ones. In the end, this makes a single hold-out measurement unreliable.
Be careful when randomizing datasets that contain sequences of samples that should be handled as a single input, such as when working with a time series dataset. Libraries such as scikit-learn don't handle this kind of dataset correctly and you may need to write your own randomization logic.
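As a sketch of one way to handle this, assuming every sample can be tagged with a group id (for example, the id of the time series it belongs to), scikit-learn's GroupShuffleSplit keeps all samples that share a group id in the same partition:

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Ten samples forming five sequences of two samples each; samples in the
# same sequence share a group id and must stay in the same partition.
X = np.arange(20).reshape(10, 2)
y = np.array([0, 0, 1, 1, 0, 0, 1, 1, 0, 0])
groups = np.repeat([0, 1, 2, 3, 4], 2)

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2)
train_idx, test_idx = next(splitter.split(X, y, groups=groups))

# No sequence is split across the training and test sets.
train_groups = set(groups[train_idx])
test_groups = set(groups[test_idx])
print(train_groups.isdisjoint(test_groups))  # True
```

If your sequences must also keep their internal order (as in most time series problems), randomize at the group level only and leave the samples inside each group untouched.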

Using k-fold cross-validation

You can increase the reliability of the performance metrics for your model by using a technique called k-fold cross-validation. Cross-validation applies the same technique as the hold-out set, but repeats it a number of times, usually about 5 to 10 times.
The process of k-fold cross-validation works like this: First, you split the dataset into a training and test set. You then train the model using the training set. Finally, you use the test set to calculate the performance metrics for your model. This process then gets repeated as many times as needed—usually 5 to 10 times. At the end of the cross-validation process, the average is calculated over all the performance metrics, which gives you the final performance metrics. Most tools will also give you the individual values so you can see how much variation there is between different training runs.
Cross-validation gives you a much more stable performance measurement because it simulates a more realistic scenario: in production, the order of incoming samples isn't fixed, and running the same training process several times evens out that effect. Also, each run uses a separate hold-out set to simulate unseen data.
Using k-fold cross-validation takes a lot of time when validating deep learning models, so use it wisely. If you're still experimenting with the setup of your model, you're better off using the basic hold-out technique. Later, when you're done experimenting, you can use k-fold cross-validation to make sure that the model performs well in a production environment.
Note that CNTK doesn't include support for running k-fold cross-validation. You need to write your own scripts to do so.
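A minimal sketch of such a hand-rolled cross-validation loop, using scikit-learn's KFold and a scikit-learn classifier as a stand-in where your own CNTK training code would go, could look like this:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression

# Toy dataset standing in for your real training data.
rng = np.random.RandomState(0)
X = rng.rand(100, 4)
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

kfold = KFold(n_splits=5, shuffle=True, random_state=0)
scores = []

for train_idx, test_idx in kfold.split(X):
    # Train a fresh model on this fold's training part...
    model = LogisticRegression().fit(X[train_idx], y[train_idx])
    # ...and measure its performance on the held-out part.
    scores.append(model.score(X[test_idx], y[test_idx]))

# The final metric is the average over all folds; the individual values
# show how much variation there is between training runs.
print('per-fold:', scores)
print('mean accuracy:', np.mean(scores))
```

To adapt this to CNTK, replace the two lines inside the loop with your own training function and performance metric; the splitting and averaging logic stays the same.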

What about underfitting and overfitting?

When you start to collect metrics for a neural network, using either a hold-out dataset or k-fold cross-validation, you'll discover that the metrics differ between the training dataset and the validation dataset. In this section, we'll take a look at how to use the collected metrics to detect overfitting and underfitting problems in your model.
When a model is overfit, it perform...

Table of Contents

  1. Title Page
  2. Copyright and Credits
  3. Dedication
  4. About Packt
  5. Contributors
  6. Preface
  7. Getting Started with CNTK
  8. Building Neural Networks with CNTK
  9. Getting Data into Your Neural Network
  10. Validating Model Performance
  11. Working with Images
  12. Working with Time Series Data
  13. Deploying Models to Production
  14. Other Books You May Enjoy