Hands-On Neuroevolution with Python

Build high-performing artificial neural network architectures using neuroevolution-based algorithms

Iaroslav Omelianenko

368 pages · English · ePub

About This Book

Increase the performance of various neural network architectures using NEAT, HyperNEAT, ES-HyperNEAT, Novelty Search, SAFE, and deep neuroevolution

Key Features

  • Implement neuroevolution algorithms to improve the performance of neural network architectures
  • Understand evolutionary algorithms and neuroevolution methods with real-world examples
  • Learn essential neuroevolution concepts and how they are used in domains including games, robotics, and simulations

Book Description

Neuroevolution is a form of artificial intelligence learning that uses evolutionary algorithms to simplify the process of solving complex tasks in domains such as games, robotics, and the simulation of natural processes. This book will give you comprehensive insights into essential neuroevolution concepts and equip you with the skills you need to apply neuroevolution-based algorithms to solve practical, real-world problems.

You'll start by learning the key neuroevolution concepts and methods by writing Python code. You'll also get hands-on experience with popular Python libraries and cover examples of classical reinforcement learning, path planning for autonomous agents, and developing agents that autonomously play Atari games. Next, you'll learn to solve common and not-so-common challenges in natural computing using neuroevolution-based algorithms. Later, you'll understand how to apply neuroevolution strategies to existing neural network designs to improve training and inference performance. Finally, you'll gain clear insights into the topology of neural networks and how neuroevolution allows you to develop complex networks, starting from simple ones.

By the end of this book, you will not only have explored existing neuroevolution-based algorithms, but also have the skills you need to apply them in your research and work assignments.

What you will learn

  • Discover the most popular neuroevolution algorithms – NEAT, HyperNEAT, and ES-HyperNEAT
  • Explore how to implement neuroevolution-based algorithms in Python
  • Get up to speed with advanced visualization tools to examine evolved neural network graphs
  • Understand how to examine the results of experiments and analyze algorithm performance
  • Delve into neuroevolution techniques to improve the performance of existing methods
  • Apply deep neuroevolution to develop agents for playing Atari games

Who this book is for

This book is for machine learning practitioners, deep learning researchers, and AI enthusiasts who are looking to implement neuroevolution algorithms from scratch. Working knowledge of the Python programming language and basic knowledge of deep learning and neural networks are mandatory.


Information

Year
2019
ISBN
9781838822002

Section 1: Fundamentals of Evolutionary Computation Algorithms and Neuroevolution Methods

This section introduces the core concepts of evolutionary computation, discusses the particulars of neuroevolution-based algorithms, and covers the Python libraries that can be used to implement them. You will become familiar with the fundamentals of neuroevolution methods and get practical recommendations on how to start your own experiments. As part of the environment setup, this section also provides a basic introduction to the Anaconda package manager for Python.

This section comprises the following chapters:
  • Chapter 1, Overview of Neuroevolution Methods
  • Chapter 2, Python Libraries and Environment Setup

Overview of Neuroevolution Methods

The concept of artificial neural networks (ANNs) was inspired by the structure of the human brain. There was a strong belief that, if we could imitate this intricate structure closely enough, we would be able to create artificial intelligence. We are still on the road to achieving this: although we can implement narrow AI agents, we are still far from creating a general AI agent.
This chapter introduces you to the concept of ANNs and the two methods that we can use to train them (gradient descent with error backpropagation, and neuroevolution) so that they learn how to approximate the objective function. We will mainly focus on the neuroevolution-based family of algorithms. You will learn how the evolutionary process inspired by natural evolution is implemented and become familiar with the most popular neuroevolution algorithms: NEAT, HyperNEAT, and ES-HyperNEAT. We will also discuss the optimization methods that we can use to search for final solutions and compare objective-based search with the Novelty Search algorithm. By the end of this chapter, you will have a solid understanding of the internals of neuroevolution algorithms and be ready to apply this knowledge in practice.
In this chapter, we will cover the following topics:
  • Evolutionary algorithms and neuroevolution-based methods
  • NEAT algorithm overview
  • Hypercube-based NEAT
  • Evolvable-Substrate HyperNEAT
  • Novelty Search optimization method

Evolutionary algorithms and neuroevolution-based methods

The term artificial neural network refers to a graph of nodes connected by links, where each link has a particular weight. A neural node acts as a kind of threshold operator that lets a signal pass only after an activation function has been applied to the weighted sum of its inputs. This remotely resembles the way neurons in the brain are organized. Typically, the ANN training process consists of selecting appropriate weight values for all the links within the network. Under mild conditions, an ANN can approximate any continuous function and can therefore be considered a universal approximator, as established by the Universal Approximation Theorem.
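To make this concrete, here is a minimal sketch of a single neural node: a weighted sum of inputs passed through a sigmoid activation function. The weights and bias are illustrative values chosen for this example, not taken from the book:

```python
import math

def sigmoid(x):
    # A common activation function: squashes the weighted sum into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, weights, bias):
    # A single neural node: the weighted sum of its inputs only
    # "passes through" after the activation function is applied.
    weighted_sum = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(weighted_sum)

# With these illustrative weights, the node behaves like a soft OR gate:
# the output is close to 1 when either input is active.
print(forward([1.0, 0.0], [4.0, 4.0], -2.0))
```

Training, in this picture, amounts to choosing the `weights` and `bias` values for every node in the network.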
For more information on the proof of the Universal Approximation Theorem, take a look at the following papers:
  • Cybenko, G. (1989). Approximation by Superpositions of a Sigmoidal Function. Mathematics of Control, Signals, and Systems, 2(4), 303–314.
  • Leshno, M., Lin, V. Ya., Pinkus, A., & Schocken, S. (1993). Multilayer Feedforward Networks with a Nonpolynomial Activation Function Can Approximate Any Function. Neural Networks, 6(6), 861–867. doi:10.1016/S0893-6080(05)80131-5 (https://www.sciencedirect.com/science/article/abs/pii/S0893608005801315?via%3Dihub)
  • Hornik, K. (1991). Approximation Capabilities of Multilayer Feedforward Networks. Neural Networks, 4(2), 251–257. doi:10.1016/0893-6080(91)90009-T (https://www.sciencedirect.com/science/article/abs/pii/089360809190009T?via%3Dihub)
  • Hanin, B. (2018). Approximating Continuous Functions by ReLU Nets of Minimal Width. arXiv preprint arXiv:1710.11278. (https://arxiv.org/abs/1710.11278)
Over the past 70 years, many ANN training methods have been proposed. The most popular technique, which gained fame in this decade, was proposed by Geoffrey Hinton. It is based on the backpropagation of prediction error through the network, with various optimization techniques built around gradient descent of the loss function with respect to the connection weights between the network nodes. It demonstrates outstanding performance when training deep neural networks for tasks related mainly to pattern recognition. However, despite its inherent power, it has significant drawbacks. One of them is that a vast number of training samples is required to learn anything useful from a specific dataset. Another significant disadvantage is the fixed network architecture, created manually by the experimenter, which results in inefficient use of computational resources because a significant number of network nodes do not participate in the inference process. Also, backpropagation-based methods have problems with transferring acquired knowledge to other, similar domains.
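As a minimal numerical sketch of the gradient descent idea underlying backpropagation, the following toy example fits a single weight by repeatedly stepping against the gradient of a squared-error loss. The learning rate and data are illustrative values, not from the book:

```python
def gradient_descent(x, y, w=0.0, lr=0.1, steps=100):
    # Fit a single weight w so that w * x approximates y,
    # by descending the gradient of the squared-error loss.
    for _ in range(steps):
        error = w * x - y      # prediction error
        grad = 2 * error * x   # derivative of (w*x - y)**2 with respect to w
        w -= lr * grad         # step against the gradient
    return w

# With x=1.0 and y=3.0, the loss (w - 3)**2 is minimized at w = 3.0.
print(gradient_descent(1.0, 3.0))
```

Backpropagation applies this same update rule to every weight in a multi-layer network, using the chain rule to compute each gradient.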
Alongside backpropagation methods, there are promising evolutionary algorithms that can address the aforementioned problems. These bio-inspired techniques draw inspiration from Darwin's theory of evolution and use abstractions of natural evolution to create artificial neural networks. The basic idea behind neuroevolution is to produce ANNs by using stochastic, population-based search methods. The evolutionary process makes it possible to evolve optimal network architectures that accurately address specific tasks. As a result, compact and energy-efficient networks with moderate computing power requirements can be created. The evolutionary process is executed by applying genetic operators (mutation and crossover) to a population of chromosomes (genetically encoded representations of ANNs/solutions) over many generations. The central belief is that, as in biological systems, subsequent generations become better suited to withstand the selective pressure expressed by the objective function; that is, they become better approximators of it.
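The population-based search just described can be sketched in a few lines: a population of genomes is repeatedly mutated, and only the fittest individuals survive into the next generation. The fitness function, population size, and mutation scale below are illustrative assumptions, not parameters from the book:

```python
import random

random.seed(42)  # reproducible illustration

def evolve(fitness, genome_len=3, pop_size=20, generations=100, sigma=0.1):
    # Stochastic, population-based search: mutate the population,
    # evaluate fitness, and keep only the fittest individuals.
    population = [[random.uniform(-1, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Mutation: each offspring is a parent with Gaussian noise added.
        offspring = [[gene + random.gauss(0, sigma) for gene in parent]
                     for parent in population]
        # Selection: truncate the combined pool to the best pop_size genomes.
        population = sorted(population + offspring,
                            key=fitness, reverse=True)[:pop_size]
    return population[0]

# Hypothetical objective: evolve a genome toward a fixed target vector.
target = [0.5, -0.25, 0.75]

def fitness(genome):
    return -sum((a - b) ** 2 for a, b in zip(genome, target))

best = evolve(fitness)
print(best, fitness(best))
```

In real neuroevolution the genome encodes network weights (and, in NEAT, the topology as well), and fitness is measured by how well the decoded network performs on the task.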
Next, we will discuss the basic concepts of genetic algorithms, of which you will need a moderate level of understanding for the rest of this book.

Genetic operators

Genetic operators are at the very heart of every evolutionary algorithm, and the performance of any neuroevolutionary algorithm depends on them. There are two major genetic operators: mutation and crossover (recombination).
In this chapter, you will learn about the basics of genetic algorithms and how they differ from conventional approaches that use error backpropagation-based methods to train an ANN.
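As a quick sketch of the crossover (recombination) operator, single-point crossover splits two parent chromosomes at a random position and exchanges their tails. This is a common textbook form of the operator, shown here purely for illustration:

```python
import random

def single_point_crossover(parent_a, parent_b):
    # Recombination: split both chromosomes at a random point and
    # exchange the tails, producing two offspring.
    point = random.randint(1, len(parent_a) - 1)
    child_1 = parent_a[:point] + parent_b[point:]
    child_2 = parent_b[:point] + parent_a[point:]
    return child_1, child_2

offspring = single_point_crossover([0, 0, 0, 0], [1, 1, 1, 1])
print(offspring)
```

Note that crossover only recombines genes already present in the parents; it is mutation, discussed next, that introduces genuinely new genetic material.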

Mutation operator

The mutation operator serves the essential role of preserving the genetic diversity of the population during evolution, preventing the search from stalling in local minima when the chromosomes of the organisms in a population become too similar. The mutation operator alters one or more genes in a chromosome, according to the mutation probability defined by the experimenter. By introducing random changes into the solver's chromosome, mutation allows the evolutionary process to explore new areas of the search space of possible solutions and find better and better solutions over the generations.
The following diagram shows the common types of mutation operators:
Types of mutation operators
The exact type of mutation operator depends on the kind of genetic encoding used by a specific genetic algorithm. Among the various mutation types, we can distinguish the following:
  • Bit inversion: A randomly selected bit is inverted (binary encoding).
  • Order change: Two genes are randomly selected and their positions are swapped (order-based encoding).
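Both mutation types listed above can be sketched in a few lines of Python; the mutation rate below is an illustrative assumption:

```python
import random

def bit_inversion(chromosome, rate=0.1):
    # Bit inversion: flip each bit with the given mutation probability
    # (binary encoding).
    return [1 - gene if random.random() < rate else gene
            for gene in chromosome]

def order_change(chromosome):
    # Order change: pick two gene positions at random and swap them
    # (useful for order-based encodings, such as routes or schedules).
    i, j = random.sample(range(len(chromosome)), 2)
    mutated = list(chromosome)
    mutated[i], mutated[j] = mutated[j], mutated[i]
    return mutated

print(bit_inversion([0, 1, 1, 0, 1], rate=0.5))
print(order_change([1, 2, 3, 4, 5]))
```

Note that order change preserves the multiset of genes, which is exactly why it suits permutation problems where every gene must appear exactly once.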

Table of Contents

  1. Title Page
  2. Copyright and Credits
  3. Dedication
  4. About Packt
  5. Contributors
  6. Preface
  7. Section 1: Fundamentals of Evolutionary Computation Algorithms and Neuroevolution Methods
  8. Overview of Neuroevolution Methods
  9. Python Libraries and Environment Setup
  10. Section 2: Applying Neuroevolution Methods to Solve Classic Computer Science Problems
  11. Using NEAT for XOR Solver Optimization
  12. Pole-Balancing Experiments
  13. Autonomous Maze Navigation
  14. Novelty Search Optimization Method
  15. Section 3: Advanced Neuroevolution Methods
  16. Hypercube-Based NEAT for Visual Discrimination
  17. ES-HyperNEAT and the Retina Problem
  18. Co-Evolution and the SAFE Method
  19. Deep Neuroevolution
  20. Section 4: Discussion and Concluding Remarks
  21. Best Practices, Tips, and Tricks
  22. Concluding Remarks
  23. Other Books You May Enjoy