The Self-Assembling Brain

How Neural Networks Grow Smarter

Peter Robin Hiesinger

296 pages
English

About the Book

What neurobiology and artificial intelligence tell us about how the brain builds itself.

How does a neural network become a brain? While neurobiologists investigate how nature accomplishes this feat, computer scientists interested in artificial intelligence strive to achieve this through technology. The Self-Assembling Brain tells the stories of both fields, exploring the historical and modern approaches taken by the scientists pursuing answers to the quandary: What information is necessary to make an intelligent neural network?

As Peter Robin Hiesinger argues, "the information problem" underlies both fields, motivating the questions driving forward the frontiers of research. How does genetic information unfold during the years-long process of human brain development—and is there a quicker path to creating human-level artificial intelligence? Is the biological brain just messy hardware, which scientists can improve upon by running learning algorithms on computers? Can AI bypass the evolutionary programming of "grown" networks?

Through a series of fictional discussions between researchers across disciplines, complemented by in-depth seminars, Hiesinger explores these tightly linked questions, highlighting the challenges facing scientists, their different disciplinary perspectives and approaches, as well as the common ground shared by those interested in the development of biological brains and AI systems. In the end, Hiesinger contends that the information content of biological and artificial neural networks must unfold in an algorithmic process requiring time and energy. There is no genome and no blueprint that depicts the final product. The self-assembling brain knows no shortcuts.

Written for readers interested in advances in neuroscience and artificial intelligence, The Self-Assembling Brain looks at how neural networks grow smarter.

Details

Year
2021
ISBN
9780691215518

1

Algorithmic Growth

1.1

Information? What Information?

The Second Discussion: On Complexity

AKI (THE ROBOTICS ENGINEER): Okay, this was weird. Not sure it adds up. Let’s just start with one problem: the neat vs scruffy discussion was really about the culture of people working in AI in the ’70s and ’80s. It’s not a scientific concept. These words are not properly defined, and their meaning changed with time. I know them to describe Minsky vs McCarthy, the scruffy ‘hack’ versus formal logic—but both worked on symbol processing AI, not neural nets. Only later did people pitch symbol processing vs neural nets using the same words—how does that make sense? It was also too much biology for me. Half of it was the history of this guy Sperry.
ALFRED (THE NEUROSCIENTIST): Well, before ‘this guy Sperry’ people apparently thought development only produced randomly wired networks, and that all the information entered through learning. That’s huge. Sperry marks a transition, a step artificial neural networks apparently never made! I liked the idea of the shared problem. It’s interesting that the early computer people thought it just had to be random because of the information problem of real brains. And even Golgi thought there must be a whole network right from the start, the famous ‘reticular theory.’ How does a biologist or an AI programmer today think the information gets into the neural network of the butterfly?
MINDA (THE DEVELOPMENTAL GENETICIST): You need precise genetic information to build it and make it work. I think it is conceptually known how the information got into the network: it’s in the butterfly’s genes. There may be some loss of precision during development, but a neural circuit that works has to be sufficiently precise to do so.
PRAMESH (THE AI RESEARCHER): That sounds like genetic determinism to me. Of course there is precise information in the genes, but that’s not the same as the information that describes the actual network. We need to look at information at different levels. It’s similar in our AI work: we first define a precise network architecture, learning rules, and so on; there we have total control. But the network has the ability to learn—we feed it a lot of information, and we may never really get to know how it stores and computes that information. Here you have less control, and you really don’t want more. An unsupervised approach will even find things you never told it to find in the first place.1
ALFRED: I like that. I also liked that Minsky and Rosenblatt both built their networks with random connections, not precise network architecture … and those are the things that worked.…
PRAMESH: Yes, randomness and variability can make outcomes robust, but also unpredictable. In our computational evolution experiments, random processes are required to make evolution flexible and robust, but we can never predict the solution the simulation will find. It’s the same with how an artificial neural network learns. It’s a question of levels: you can have control at a lower level without having control over how the system operates at a higher level. This can actually be true even if a system is completely deterministic, without randomness or variability. Maybe that’s how we should think about genes.
MINDA: As I said before, the genes contain the information to develop the network. In your AI work, you feed additional information for the network to learn. That’s a key difference. A developing biological neural network forms precise connectivity based on nothing but the genetic program. If there is no environmental contribution, how can there be anything in the network that was not previously in the genes?
PRAMESH: Can you predict how a mutation in a gene will play out during development?
MINDA: Of course. We know hundreds, probably thousands, of developmental disease mutations where we know the outcome.
AKI: That’s cheating! You only know the outcome because you’ve seen it. Could you predict the outcome if you had never seen it before?
MINDA: Let me see. It is true that the outcomes of de novo mutations are not as easy to predict, but we can usually gain insight based on previous knowledge. If we know the gene’s function or other mutations, that of course helps to make predictions. We can then test those predictions.
ALFRED: … but then you are again predicting based on comparison with previous outcomes! How much could you really predict if you had nothing but the genome and zero knowledge about outcomes from before? It’s a cool question.
MINDA: No previous data is a bit hypothetical. It would of course be more difficult to predict the effect of a given mutation. The more information I gather, the better I can predict the outcome.
PRAMESH: So, once you have seen an outcome for a given mutation, you can predict it—sometimes with 100% probability. But the point is: a mutation may play out unpredictably during development even if it always leads to the same outcome—with 100% probability.
AKI: Not clear.
PRAMESH: Well, if you run an algorithm or a machine with the same input twice and it produces the exact same output twice, then this is what we call deterministic. But even a deterministic system can function unpredictably; that’s the story of deterministic chaos. It just means there is no way to find out what a given code produces at some later point in time other than running the full simulation; there is no shortcut to prediction.2
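To make Pramesh’s point concrete, here is a minimal sketch, not an example from the book: the logistic map (with the illustrative choice r = 4.0) stands in for “a given code.” The rule is fully deterministic, so the same input always yields the same output, yet two nearly identical starting values end up in very different states, and the only way to learn the state after 100 steps is to run all 100 steps.

# Illustrative sketch (assumed example, not from the book): the logistic map
# x -> r * x * (1 - x) with r = 4.0 is deterministic but chaotic.

def logistic_trajectory(x0, steps, r=4.0):
    """Iterate the deterministic update rule 'steps' times starting from x0."""
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

# Same input twice gives exactly the same output: the system is deterministic.
assert logistic_trajectory(0.2, 100) == logistic_trajectory(0.2, 100)

# Two almost identical starting points diverge after enough iterations:
# deterministic, yet unpredictable without running the full simulation.
print(logistic_trajectory(0.200000000, 100))
print(logistic_trajectory(0.200000001, 100))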
ALFRED: I remember the deterministic chaos stuff from the ’80s and ’90s, … did that ever go anywhere? It always sounded like giving a catchy, fancy name to something that nobody really understood.
AKI: Yeah, as Wonko the Sane said: if we scientists find something we can’t understand we like to call it something nobody else can understand.…3
MINDA: I don’t see where this is supposed to go. It seems to me that if a code always produces precisely the same outcome, then it is also predictable.
PRAMESH: Well, it’s not. Deterministic systems can be what is called mathematically ‘undecidable.’ Do you know cellular automata? People still study them a lot in artificial life research. Conway’s Game of Life is quite famous. Think of a very simple rule set to create a pattern. It turns out that after a few hundred iterations, new patterns emerge that could not have been predicted based on the simpl...
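The kind of cellular automaton Pramesh describes can be written down in a few lines. The dialogue cites Conway’s Game of Life; the one-dimensional Rule 110 automaton below is a simpler stand-in chosen here for illustration, not the book’s own example. Each cell updates from nothing but itself and its two neighbours, yet the long-run pattern can, in general, only be discovered by running every iteration.

# Illustrative sketch: elementary cellular automaton Rule 110 as a stand-in
# for the simple-rules-to-complex-patterns idea discussed above.

RULE = 110  # the 8-bit lookup table, encoded as an integer

def step(cells):
    """Apply the three-cell update rule once, with wrap-around boundaries."""
    n = len(cells)
    return [
        (RULE >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and simply run it; there is no shortcut.
cells = [0] * 63 + [1]
for _ in range(30):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)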
