CHAPTER 1
THE FUNDAMENTAL DIFFERENCES BETWEEN ARTIFICIAL & HUMAN INTELLIGENCE
FEDERICO FAGGIN
SUMMARY
The tripling of the world population over the past century has made us increasingly reliant on machines to sustain life more efficiently than humans could manage alone. Rapid growth in Artificial Intelligence (the branch of computer science that aims to build machines that understand and judge like humans) means that dirty, dangerous and dull routines can be carried out by sophisticated powered systems called robots. They are, however, only as good as the people who produce and program them, so we need to give urgent attention to the moral and ethical issues involved, to avoid possible dangers and abuse of such complex inventions. This book introduction considers the basic differences between Artificial and Human Intelligence to begin an important discussion. Radical rethinking of our present way of educating citizens is urgently required, to prepare them for this brave new world, so they remain masters and not servants of these powerful tools.
INTRODUCTION
There is much speculation today about a possible future where mankind will be surpassed and perhaps even destroyed by machines. We hear of self-driving cars, Big Data, the resurgence of Artificial Intelligence (AI) and even of transhumanism – the idea that it may be possible to upload our experience and consciousness into a computer and live forever. Major warnings about the dangers of robotics and AI have been given by public figures like Bill Gates and Elon Musk, as well as the late Cambridge astrophysicist, Professor Stephen Hawking, in the Expedition New Earth documentary. What is TRUE and what is FICTION in this picture? This chapter aims to provide some answers to this important question and to present the basis of a radical re-think of traditional education.
In all these projections, it is assumed that in the not too distant future it will be possible to make truly intelligent and autonomous machines that are at least as good as, if not better than, we are. Is this assumption correct? I will argue that real intelligence requires consciousness, and that consciousness is something our machines do not have and most likely will never be able to acquire; yet it is the prerequisite for responding flexibly to unpredictable circumstances.
Today, most scientists suggest that we are just machines: sophisticated information-processing systems based on wetware*. That is why they think it will be possible to build machines that surpass human beings. These experts believe that consciousness is an epiphenomenon of brain operation, produced by something similar to the software that runs in our computers. Therefore, with more sophisticated software, our robots will eventually be conscious. But is this really possible?
DEFINING CONSCIOUSNESS
Let us start by defining what is meant by consciousness. I know within myself that I exist, but how do I know? I am sure I exist because I feel so, confirmed by the interactive responses of others in my environment. It is the feeling that carries the knowing, built on our sensory channels and inherited genetic information; the capacity to feel is the essential property here. When I smell a rose, I feel and am aware of its perfume – but be careful! The feeling is not the set of electrical signals produced by the olfactory receptors inside my nose. Those signals carry objective information, but this is translated within my consciousness into a subjective feeling: what the smell of that rose feels like to me.
We could build a robot capable of detecting the particular molecules that carry the rose smell and of correctly identifying the flower by its distinctive scent. However, the robot would have no feeling whatsoever: it would not be aware of the smell as a particular sensation. To be aware one must feel, but the robot stops at the electrical signals, and from these it can only generate other signals to cause a response and action. We do much more than that, because we actually feel the smell of the rose and through that special feeling we connect with the flower in a special way. We can also make a free-will decision informed by that feeling: for example, we love the scent so much that we decide to buy this type of rose for ourselves and for others, to spread the pleasure!
Consciousness could be defined simply as the capacity to feel. Feeling, however, implies the existence of a subject that feels – a self. Therefore, consciousness is inextricably linked to a self: it is the inherent capacity of a self to perceive and know through feelings, in a sentient experience. It is a defining property of a self. Feelings, moreover, are clearly a different category of phenomena from electrical signals, and incommensurable with them. Philosophers have coined the word ‘quale’ to indicate what something feels like, and explaining ‘qualia’ is called the hard problem of consciousness, because nobody has yet explained how subjective experience, with its dynamic quality and the constant interaction of many personal attributes, could arise from physical processes. In the rest of this discussion, I will use the word ‘qualia’ to refer to four different classes of feelings: physical sensations and feelings, emotions, thoughts and spiritual feelings. The last is the most difficult to define, but refers to interests of a deep kind that contribute to our awareness and sense of purpose within existence.
Electrical signals, be they in a computer or a brain, do not produce qualia. In fact, there is nothing in the Laws of Physics that tells us how to translate electrical signals into qualia. How is it possible then to have qualia-perceptions? Having studied the problem for over 20 years, I have come to the conclusion that consciousness may be an irreducible aspect of nature, an inherent property of the energy out of which space, time and matter emerged in the Big Bang.
In this view, far from being an epiphenomenon, consciousness is real: it is the capacity to sense what is happening, to understand it, and to respond in an alert, aware manner. In other words, the stuff out of which everything is made is cognitive, and the highest material expression of consciousness is what we call life. Consciousness is not an emergent property of a complex system, but the other way around: a complex system is an emergent property of the conscious energy out of which everything physical is made. Therefore, consciousness cannot magically emerge from algorithms; its seeds are already present in creation. In this view, consciousness and complex physical systems co-evolve.
There is no space here to explore this subject in more depth, because I want to make a convincing case that consciousness is indispensable for making truly intelligent, autonomous machines, and that it is not a property that will emerge from computers. Some might insist that computers could perform better than humans even without consciousness; this is discussed next. I aim to show that comprehension is a fundamental property of consciousness, even more important than qualia-perception, and that comprehension is a defining property of intelligence. Therefore, if there is no consciousness there is no comprehension, and without comprehension there is no intelligence, so the system cannot remain autonomous for long.
MAKING DECISIONS
Let us consider how human beings make decisions. Our sensory system converts various forms of energy in our environment into electrical signals, which are then sent to the brain for processing. The result of this activity is another set of electrical signals representing multisensory information: visual, auditory, tactile and so on. At the end of this process we have a certain amount of objective information about the world. Computers can get this far. This information is then somehow converted within our consciousness into semantic data: an integrated multisensory qualia-display of the state of the world that includes both our inner world (thoughts and ideas represented in words and images in the mind) and the outer world (input from our environment). In fact, it may be even more accurate to say that the outer world has been brought inside us, in a representation that integrates both for a holistic interpretation.
This is what I call qualia-perception, but it is only the raw semantic data out of which comprehension is achieved, through an additional process even more mysterious than the one that produced qualia-perception. Comprehension is what allows us to understand the current situation within the context of our past experience, our existing perceptions and our present set of desires, aspirations, intentions and goals.
Understanding, resulting from internal and external, verbal and non-verbal processing, is the next necessary step before an intelligent choice can be made. It is understanding that allows us to decide if action is needed and, if so, which one is optimal in the present circumstances. The degree to which consciousness is involved in deciding what action to take has a huge range, going from no involvement whatsoever all the way to protracted conscious reflection and pondering that may take days or weeks.
When the situation is judged to be similar to others in which a certain action produced good results, that same action can be chosen subconsciously, producing something akin to a conditioned response. On the other hand, there are situations unlike anything encountered before, in which case the various choices based on our prior experience are likely to be inadequate. Here is where our consciousness gets deeply involved, allowing us to come up with a creative solution. We find the cutting edge of human consciousness, where it is indispensable, not in solving trivial problems but in those requiring deep thought and higher-level cognition and communication. Therefore, real intelligence is the ability to correctly judge a situation and find a creative solution. It requires comprehension which, in human beings, is normally both a psycho-linguistic and a non-linguistic process, assembling information from many sources into understanding.
Now, to have true autonomy, a robot needs to be able to operate in unconstrained environments, successfully handling the huge variability and unpredictability of real-life situations. It must also deal with hostile environments, where there is deception, aggression and conflict. It is the near-infinite variability of these situations that makes comprehension necessary, for only comprehension can reduce or remove the ambiguity present in the objective data. Handwriting recognition and language translation are examples of this problem: the form, content and/or use of the information is ambiguous, so there is not enough data, at that level, to solve the problem.
Autonomous robots are therefore only possible in situations where the environment is either artificially controlled or its expected variability is relatively small. If qualia-perception is the hard problem of consciousness, comprehension is the hardest one. This is where the difference between a machine and a human being cannot be bridged. Comprehension and its expression constitute the most complex, holistic activity of human beings; its dynamism and unpredictability require continual, creative, original responses.
HOLISTIC SYSTEMS
All the machines we build (computers included) are made by assembling a number of separate parts. Therefore, we can, at least in principle, disassemble a machine into all its separate components and reassemble them so that it functions once more. However, we cannot disassemble a living cell into its atomic and molecular components and then reassemble them, hoping that it will work again. The living cell is a dynamic system of a different kind from our machines: it uses quantum components that have no definable boundaries.
We study cells reductively, as we would a machine, but cells work as holistic systems. A cell is also an open system, because it constantly exchanges energy and matter with the environment in which it exists. Thus, the physical structure of the cell is dynamic; it is recreated from moment to moment, with parts constantly flowing in and out of it, even if it seems to us that it stays the same. Therefore, a cell cannot be separated from the environment with which it is in symbiosis without losing something. A computer, by contrast, has the same atoms and molecules it had when it was first constructed, for as long as it works. Nothing changes in its hardware, and in that sense it is a static system, which can only function in prescribed circumstances.
The kind of information processing done in a cell is completely different to that going on in our computers. In a computer, the transistors are connected together in a fixed pattern; in a cell, the parts interact freely with each other, processing information in ways we do not yet fully comprehend. As long as we study cells as reductive biochemical systems rather than quantum information-processing systems, we will n...