Hands-On Deep Learning with Go
eBook - ePub

A practical guide to building and implementing neural network models using Go

Gareth Seneque, Darrell Chua

  1. 242 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available on iOS and Android

About the Book

Apply modern deep learning techniques to build and train deep neural networks using Gorgonia

Key Features

  • Gain a practical understanding of deep learning using Golang
  • Build complex neural network models using Go libraries and Gorgonia
  • Take your deep learning model from design to deployment with this handy guide

Book Description

Go is an open source programming language designed by Google for handling large-scale projects efficiently. The Go ecosystem now includes powerful deep learning tools such as Gorgonia, along with support for techniques like Deep Q-Networks (DQN) and CUDA-accelerated computation. With this book, you'll be able to use these tools to train and deploy scalable deep learning models from scratch.

This deep learning book begins by introducing you to a variety of tools and libraries available in Go. It then takes you through building neural networks, including activation functions and the learning algorithms that make neural networks tick. In addition to this, you'll learn how to build advanced architectures such as autoencoders, restricted Boltzmann machines (RBMs), convolutional neural networks (CNNs), recurrent neural networks (RNNs), and more. You'll also understand how you can scale model deployments on the AWS cloud infrastructure for training and inference.

By the end of this book, you'll have mastered the art of building, training, and deploying deep learning models in Go to solve real-world problems.

What you will learn

  • Explore the Go ecosystem of libraries and communities for deep learning
  • Get to grips with neural networks, their history, and how they work
  • Design and implement deep neural networks in Go
  • Build a strong foundation in concepts such as backpropagation and momentum
  • Build variational autoencoders and restricted Boltzmann machines using Go
  • Build models with CUDA and benchmark CPU and GPU models

Who this book is for

This book is for data scientists, machine learning engineers, and AI developers who want to build state-of-the-art deep learning models using Go. Familiarity with basic machine learning concepts and Go programming is required to get the best out of this book.


Information

Year
2019
ISBN
9781789347883
Edition
1
Category
Neural Networks

Section 1: Deep Learning in Go, Neural Networks, and How to Train Them

This section introduces you to deep learning (DL) and the libraries in Go that are needed to design, implement, and train deep neural networks (DNNs). We also cover the implementation of an autoencoder for unsupervised learning, and a restricted Boltzmann machine (RBM) for a Netflix-style collaborative filtering system.
The following chapters are included in this section:
  • Chapter 1, Introduction to Deep Learning in Go
  • Chapter 2, What is a Neural Network and How Do I Train One?
  • Chapter 3, Beyond Basic Neural Networks - Autoencoders and Restricted Boltzmann Machines
  • Chapter 4, CUDA - GPU-Accelerated Training

Introduction to Deep Learning in Go

This book will very quickly jump into the practicalities of implementing Deep Neural Networks (DNNs) in Go. Simply put, this book's title contains its aim. This means there will be a lot of technical detail, a lot of code, and (not too much) math. By the time you finally close this book or turn off your Kindle, you'll know how (and why) to implement modern, scalable DNNs and be able to repurpose them for your needs in whatever industry or mad science project you're involved in.
Our choice of Go reflects the maturing of the landscape of Go libraries built for the kinds of operations our DNNs perform. There is, of course, much debate about the trade-offs made when selecting languages or libraries, and we will devote a section of this chapter to our views and argue for the choices we've made.
However, what is code without context? Why do we care about this seemingly convoluted mix of linear algebra, calculus, statistics, and probability? Why use computers to recognize things in images or identify aberrant patterns in financial data? And, perhaps most importantly, what do the approaches to these tasks have in common? The initial sections of this book will try to provide some of this context.
Scientific endeavor, when broken up into the disciplines that represent their institutional and industry specialization, is governed by an idea of progress. By this, we mean a kind of momentum, a moving forward, toward some end. For example, the ideal goal of medicine is to be able to identify and cure any ailment or disease. Physicists aim to understand completely the fundamental laws of nature. Progress trends in this general direction. Science is itself an optimization method. So, what might the ultimate goal of Machine Learning (ML) be?
We'll be upfront. We think it's the creation of Artificial General Intelligence (AGI). That's the prize: a general-purpose learning computer to take care of the jobs and leave life to people. As we will see when we cover the history of Deep Learning (DL) in detail, founders of the top Artificial Intelligence (AI) labs agree that AGI represents a meta-solution to many of the complex problems in our world today, from economics to medicine to government.
This chapter will cover the following topics:
  • Why DL?
  • DL – history and applications
  • Overview of ML in Go
  • Using Gorgonia

Introducing DL

We will now offer a high-level view of why DL is important and how it fits into the discussion about AI. Then, we will look at the historical development of DL, as well as current and future applications.

Why DL?

So, who are you, dear reader? Why are you interested in DL? Do you have your private vision for AI? Or do you have something more modest? What is your origin story?
In our survey of colleagues, teachers, and meetup acquaintances, the origin story of someone with a more formal interest in machines has a few common features. It doesn't matter much if you grew up playing games against the computer, an invisible enemy who sometimes glitched out, or if you chased down actual bots in id Software's Quake back in the late 1990s; the idea of some combination of software and hardware thinking and acting independently had an impact on each of us early on in life.
And then, as time passed, with age, education, and exposure to pop culture, your ideas grew refined and maybe you ended up as a researcher, engineer, hacker, or hobbyist, and now you're wondering how you might participate in booting up the grand machine.
If your interests are more modest, say you are a data scientist looking to understand cutting-edge techniques, but are ambivalent about all of this talk of sentient software and science fiction, you are, in many ways, better prepared for the realities of ML in 2019 than most. Each of us, regardless of the scale of our ambition, must understand the logic of code and hard work through trial and error. Thankfully, we have very fast graphics cards.
And what is the result of these basic truths? Right now, in 2019, DL has had an impact on our lives in numerous ways. Hard problems are being solved. Some trivial, some not. Yes, Netflix has a model of your most embarrassing movie preferences, but Facebook has automatic image annotation for the visually impaired. Understanding the potential impact of DL is as simple as watching the expression of joy on the face of someone who has just seen a photo of a loved one for the first time.

DL – a history

We will now briefly cover the history of DL and the historical context from which it emerged, including the following:
  • The idea of AI
  • The beginnings of computer science/information theory
  • Current academic work about the state/future of DL systems
While we are specifically interested in DL, the field didn't emerge out of nothing. It is a group of models/algorithms within ML itself, a branch of computer science. It forms one approach to AI. The other, so-called symbolic AI, revolves around hand-crafted (rather than learned) features and rules written in code, rather than a weighted model that contains patterns extracted from data algorithmically.
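The distinction is easy to see in code. Below is a small, self-contained Go sketch (our own illustration, not from the book): a hand-crafted rule for logical AND sits next to a single perceptron whose weights are learned from labeled examples by the classic perceptron learning rule.

```go
package main

import "fmt"

// Symbolic approach: a hand-crafted rule, written directly by a programmer.
func ruleBasedAND(a, b float64) float64 {
	if a == 1 && b == 1 {
		return 1
	}
	return 0
}

// Learned approach: a single weighted unit whose parameters are
// extracted from labeled data rather than written by hand.
type perceptron struct {
	w1, w2, bias float64
}

func (p *perceptron) predict(a, b float64) float64 {
	if p.w1*a+p.w2*b+p.bias > 0 {
		return 1
	}
	return 0
}

// train applies the perceptron learning rule: nudge each weight in
// proportion to the prediction error and the corresponding input.
func (p *perceptron) train(data [][3]float64, epochs int, lr float64) {
	for i := 0; i < epochs; i++ {
		for _, d := range data {
			err := d[2] - p.predict(d[0], d[1])
			p.w1 += lr * err * d[0]
			p.w2 += lr * err * d[1]
			p.bias += lr * err
		}
	}
}

func main() {
	// Truth table for logical AND: inputs a, b and the target label.
	data := [][3]float64{{0, 0, 0}, {0, 1, 0}, {1, 0, 0}, {1, 1, 1}}
	p := &perceptron{}
	p.train(data, 20, 0.1)
	for _, d := range data {
		fmt.Printf("rule=%v learned=%v\n", ruleBasedAND(d[0], d[1]), p.predict(d[0], d[1]))
	}
}
```

Both functions end up computing the same truth table, but only the second one got there by extracting a pattern from data, which is the essence of the learned (rather than symbolic) approach.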
The idea of thinking machines, before becoming a science, was very much a fiction that began in antiquity. The Greek god of arms manufacturing, Hephaestus, built automatons out of gold and silver. They served his whims and are an early example of human imagination naturally considering what it might take to replicate an embodied form of itself.
Bringing the history forward a few thousand years, there are several key figures in 20th-century information theory and computer science that built the platform that allowed the development of AI as a distinct field, including the recent work in DL we will be covering.
The first major figure, Claude Shannon, offered us a general theory of communication. Specifically, he described, in his landmark paper, A Mathematical Theory of Communication, how to guard against information loss when transmitting over an imperfect medium (like, say, using vacuum tubes to perform computation). This notion, particularly his noisy-channel coding theorem, proved crucial for handling arbitrarily large quantities of data and algorithms reliably, without the errors of the medium itself being introduced into the communications channel.
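As a toy illustration of Shannon's insight (a deliberate simplification, not his actual coding scheme), the following Go sketch adds redundancy by transmitting each bit three times and decoding with a majority vote, so a single flipped bit per group cannot corrupt the message:

```go
package main

import "fmt"

// encode repeats every bit three times, adding the redundancy that
// lets the receiver recover from a single flipped bit per group.
func encode(bits []int) []int {
	var out []int
	for _, b := range bits {
		out = append(out, b, b, b)
	}
	return out
}

// decode takes a majority vote over each group of three received bits.
func decode(received []int) []int {
	var out []int
	for i := 0; i+2 < len(received); i += 3 {
		if received[i]+received[i+1]+received[i+2] >= 2 {
			out = append(out, 1)
		} else {
			out = append(out, 0)
		}
	}
	return out
}

func main() {
	msg := []int{1, 0, 1, 1}
	tx := encode(msg)
	tx[4] ^= 1 // the noisy channel flips one bit in transit
	fmt.Println(decode(tx)) // the single-bit error is corrected
}
```

The trade-off Shannon formalized is visible even here: we bought reliability by tripling the number of transmitted bits, and his theorem characterizes how much redundancy a given noisy channel actually requires.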
Alan Turing described his Turing machine in 1936, offering us a universal model of computation. With the fundamental building blocks he described, he defined the limits of what a machine might compute; his work later informed John Von Neumann's stored-program concept. The key insight from Turing's work is that digital computers can simulate any process of formal reasoning (the Church-Turing thesis). The following diagram shows the Turing machine process:
So, you mean to tell us, Mr. Turing, that computers might be made to reason…like us?!
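The model is simple enough to sketch in a few lines of Go. The following illustrative simulator (our own, not from the book) captures the essentials: a tape of symbols, a read/write head, and a finite table of transition rules keyed by the current state and the symbol under the head.

```go
package main

import "fmt"

// rule describes one transition: what to write, where to move the
// head (+1 right, -1 left), and which state to enter next.
type rule struct {
	write byte
	move  int
	next  string
}

// run executes the machine until it reaches the "halt" state or no
// rule matches the current (state, symbol) pair.
func run(tape []byte, start string, rules map[string]map[byte]rule) []byte {
	state, head := start, 0
	for state != "halt" {
		r, ok := rules[state][tape[head]]
		if !ok {
			break
		}
		tape[head] = r.write
		head += r.move
		state = r.next
	}
	return tape
}

func main() {
	// A one-state machine that inverts each bit, halting on blank ('_').
	rules := map[string]map[byte]rule{
		"invert": {
			'0': {write: '1', move: 1, next: "invert"},
			'1': {write: '0', move: 1, next: "invert"},
			'_': {write: '_', move: 0, next: "halt"},
		},
	}
	fmt.Println(string(run([]byte("1011_"), "invert", rules))) // prints "0100_"
}
```

Everything the machine "knows" lives in that transition table; Turing's result is that a suitably constructed table can simulate any other such machine, which is why this humble device serves as a universal model of computation.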
John Von Neumann was himself influenced by Turing's 1936 paper. Before the development of the transistor, when vacuum tubes were the only means of computation available (in systems such as ENIAC and its derivatives), John Von Neumann wrote his final work, The Computer and the Brain. Left incomplete at his death, it nonetheless gave early co...

Table of Contents

  1. Title Page
  2. Copyright and Credits
  3. About Packt
  4. Contributors
  5. Preface
  6. Section 1: Deep Learning in Go, Neural Networks, and How to Train Them
  7. Introduction to Deep Learning in Go
  8. What Is a Neural Network and How Do I Train One?
  9. Beyond Basic Neural Networks - Autoencoders and RBMs
  10. CUDA - GPU-Accelerated Training
  11. Section 2: Implementing Deep Neural Network Architectures
  12. Next Word Prediction with Recurrent Neural Networks
  13. Object Recognition with Convolutional Neural Networks
  14. Maze Solving with Deep Q-Networks
  15. Generative Models with Variational Autoencoders
  16. Section 3: Pipeline, Deployment, and Beyond!
  17. Building a Deep Learning Pipeline
  18. Scaling Deployment
  19. Other Books You May Enjoy