Artificial Intelligence and Digital Systems Engineering

Adedeji B. Badiru

About This Book

The resurgence of artificial intelligence has been fueled by the availability of the present generation of high-performance computational tools and techniques. This book is designed to provide introductory guidance to artificial intelligence, particularly from the perspective of digital systems engineering.

Artificial Intelligence and Digital Systems Engineering provides a general introduction to the origin of AI and covers its wide application areas and software and hardware interfaces. It will prove instrumental in helping new users expand their knowledge of the growing market of AI tools, and it shows how AI applies to the development of games, simulation, and consumer products, particularly through artificial neural networks.

This book is for the general reader, university students, and instructors of industrial, production, civil, mechanical, and manufacturing engineering. It will also be of interest to managers of technology, projects, business, plants, and operations.

Information

Publisher: CRC Press
Year: 2021
ISBN: 9781000472530

1 Understanding AI

DOI: 10.1201/9781003089643-1

Introduction

Artificial intelligence (AI) is not just one single thing. It is a conglomerate of various elements, involving software, hardware, data platforms, policies, procedures, specifications, rules, and human intuition. How we leverage such a multifaceted system to do seemingly intelligent things, typical of how humans think and work, is a matter of systems implementation. This is why the premise of this book centers on a systems methodology. In spite of the recent boost in the visibility and hype of artificial intelligence, AI has actually been around, and toyed with, for decades. What has brought AI more to the forefront nowadays is the availability and prevalence of high-powered computing tools that can handle the data-intensive processing required by AI systems. The resurgence of AI has been driven by the following developments:
  • Emergence of new computational techniques and more powerful computers
  • Machine learning techniques
  • Autonomous systems
  • New/innovative applications
  • Specialized techniques: Intelligent Computational Search Technique Using Cantor Set Sectioning
  • Human-in-the-loop requirements
  • Systems integration aspects
As far back as the mid-1980s, the author led research and development projects that embedded AI software and hardware into conventional human decision processes. AI has revolutionized, and will continue to revolutionize, many of the things we see and use around us, so we need to pay attention to emerging developments.

Historical Background

The background of AI has been characterized by controversial opinions and diverse approaches. The controversies have ranged from the basic definition of intelligence to questions about the moral and ethical aspects of pursuing AI. However, despite the unsettled controversies, the technology continues to generate practical results. With increasing efforts in AI research, many of the prevailing arguments are being resolved with proven technical approaches. The expert system, the main subject of this book, is the most promising branch of AI.
“Artificial intelligence” is a controversial name for a technology that promises much potential for improving human productivity. The phrase seems to challenge human pride in being the sole creation capable of possessing real intelligence. All kinds of anecdotal jokes about AI have been offered by casual observers. A speaker once recounted his wife’s response when he told her that he was venturing into the new technology of AI. “Thank God, you are finally realizing how dumb I have been saying you were all these years,” was alleged to have been the wife’s words of encouragement. One whimsical definition of AI refers to it as the “Artificial Insemination of knowledge into a machine.” Despite the deriding remarks, serious embracers of AI may yet have the last laugh. It is being shown again and again that AI may hold the key to improving operational effectiveness in many areas of application. Some observers have suggested changing the term “Artificial Intelligence” to a less controversial one such as “Intelligent Applications (IA).” This refers more to the way that computers and software are used innovatively to solve complex decision problems.
Natural Intelligence involves the capability of humans to acquire knowledge, reason with the knowledge, and use it to solve problems effectively. It also refers to the ability to develop new knowledge based on existing knowledge. By contrast, Artificial Intelligence is defined as the ability of a machine to use simulated knowledge in solving problems.

Origin of Artificial Intelligence

The definition of intelligence has been sought by many philosophers and mathematicians through the ages, including Aristotle, Plato, Copernicus, and Galileo. These great thinkers attempted to explain the process of thought and understanding. The real spark that started the quest for the simulation of intelligence did not come, however, until the English philosopher Thomas Hobbes put forth an interesting concept in the 1650s. Hobbes believed that thinking consists of symbolic operations and that everything in life can be represented mathematically. These beliefs led directly to the notion that a machine capable of carrying out mathematical operations on symbols could imitate human thinking. This is the basic driving force behind the AI effort. For that reason, Hobbes is sometimes referred to as the grandfather of AI.
While the term “Artificial Intelligence” was coined by John McCarthy in 1956, the idea had been considered centuries before. As early as 1637, Rene Descartes was conceptually exploring the ability of a machine to have intelligence when he said:
For we can well imagine a machine so made that it utters words and even, in a few cases, words pertaining specifically to some actions that affect it physically. However, no such machine could ever arrange its words in various different ways so as to respond to the sense of whatever is said in its presence—as even the dullest people can do.
Descartes believed that the mind and the physical world are on parallel planes that cannot be equated. They are of different substances, follow entirely different rules, and thus cannot be successfully compared. The physical world (i.e., machines) cannot imitate the mind because there is no common reference point.
Hobbes proposed the idea that thinking could be reduced to mathematical operations. Descartes, on the other hand, had insight into the functions that machines might someday be able to perform, but he had reservations about the notion that thinking could simply be a mathematical process.
The 1800s was an era that saw some advancement in the conceptualization of the computer. Charles Babbage, a British mathematician, laid the foundation for the construction of the computer, a machine defined as being capable of performing mathematical computations. In 1833, Babbage introduced his Analytical Engine. This computational machine incorporated two unprecedented ideas that were to become crucial elements of the modern computer. First, its operations were fully programmable, and second, the engine could contain conditional branches. Without these two abilities, the power of today’s computers would be inconceivable. Babbage was never able to realize his dream of building the Analytical Engine due to a lack of financial support. However, his dream was revived through the efforts of later researchers, and Babbage’s basic concepts can be observed in the way most computers operate today.
Another British mathematician, George Boole, worked on issues that were to become equally important. Boole formulated the “laws of thought” that set up rules of logic for representing thought. The rules contained only two-valued variables. By this, any variable in a logical operation could be in one of only two states: yes or no, true or false, all or nothing, 0 or 1, on or off, and so on. This was the birth of digital logic, a key component of the AI effort.
In the early 1900s, Alfred North Whitehead and Bertrand Russell extended Boole’s logic to include mathematical operations. This not only led toward the formulation of digital computers but also made possible one of the first ties between computers and the thought process.
However, there was still no acceptable way to construct such a computer. In 1938, Claude Shannon published “A Symbolic Analysis of Relay and Switching Circuits.” This work demonstrated that Boolean logic, in which each variable takes only one of two states (e.g., the on–off switching of circuits), can be used to perform logic operations. Based on this premise, the ENIAC (Electronic Numerical Integrator and Computer) was built in 1946 at the University of Pennsylvania. The ENIAC was a large-scale, fully operational electronic computer that signaled the beginning of the first generation of computers. It could perform calculations 1,000 times faster than its electromechanical predecessors. It weighed 30 tons, stood two stories high, and occupied 1,500 square feet of floor space. Unlike today’s computers, which operate in binary code (0s and 1s), the ENIAC operated in decimal (0, 1, 2, …, 9), and it required ten vacuum tubes to represent one decimal digit. With over 18,000 vacuum tubes, the ENIAC needed so much electrical power that it was said to dim the lights in Philadelphia whenever it operated.

Human Intelligence versus Machine Intelligence

Two of the leading mathematicians and computer enthusiasts during the 1900–1950 time frame were Alan Turing and John Von Neumann. In 1945, Von Neumann insisted that computers should not be built as glorified adding machines, with all their operations specified in advance. Rather, he suggested, computers should be built as general-purpose logic machines capable of executing a wide variety of programs. Such machines, Von Neumann proclaimed, would be highly flexible and capable of being readily shifted from one task to another. They could react intelligently to the results of their calculations, could choose among alternatives, and could even play checkers or chess. This represented something unheard of at that time: a machine with built-in intelligence, able to operate on internal instructions.
Prior to Von Neumann’s concept, even the most complex mechanical devices had always been controlled from the outside, for example, by setting dials and knobs. Von Neumann did not invent the computer, but what he introduced was equally significant: computing by use of computer programs, the way it is done today. His work paved the way for what would later be called AI in computers.
Alan Turing also made major contributions to the conceptualization of a machine that can be universally used for all problems based only on variable instructions fed into it. Turing’s universal machine concept, along with Von Neumann’s concept of a storage area containing multiple instructions that can be accessed in any sequence, solidified the ideas needed to develop the programmable computer. Thus, a machine was developed that could perform logical operations and could do them in varying orders by changing the set of instructions that were executed.
Because operational machines were now being realized, questions about the “intelligence” of the machines began to surface. Turing’s other contribution to the world of AI came in the area of defining what constitutes intelligence. In 1950, he designed the Turing test for determining the intelligence of a system. The test utilizes conversational interaction among three players to try to verify computer intelligence.
The test is conducted by having a person (the interrogator) in a room that contains only a computer terminal. In an adjoining room, hidden from view, a man (Person A) and a woman (Person B) are located with another computer terminal. The interrogator communicates with the couple in the other room by typing questions on the keyboard. The questions appear on the couple’s computer screen, and they respond by typing on their own keyboard. The interrogator can direct questions to either Person A or Person B, but without knowing which is the man and which is the woman.
The purpose of the test is to distinguish between the man and the woman merely by analyzing their responses. In the test, only one of the people is obligated to give truthful responses. The other person deliberately attempts to fool and confuse the interrogator by giving responses that may lead to an incorrect guess. The second stage of the test is to substitute a computer for one of the two persons in the other room. Now, the human is obligated to give truthful responses to the interrogator while the computer tries to fool the interrogator into thinking that it is human. Turing’s contention is that if the interrogator’s success rate in the human/computer version of the game is not better than his success rate in the man/woman version, then the computer can be said to be “thinking.” That is, the computer possesses “intelligence.” Turing’s test has served as a classical example for AI proponents for many years.
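As a toy illustration of this statistical criterion, the following Python sketch (my own illustration, not part of the book or Turing’s specification) simulates the imitation game with hypothetical respondent and interrogator functions; when the machine’s answers are indistinguishable from the human’s, the interrogator’s success rate hovers around chance.

```python
import random

def imitation_game(interrogator, truthful, deceiver, questions):
    """One round of a toy imitation game.

    `truthful` and `deceiver` map a question to a reply; the interrogator
    sees only the labeled transcript and guesses which label hides the
    deceiver. Returns True if the guess is correct.
    """
    # Randomly hide the players behind the labels "A" and "B".
    labels = {"A": truthful, "B": deceiver}
    if random.random() < 0.5:
        labels = {"A": deceiver, "B": truthful}
    transcript = [(q, labels["A"](q), labels["B"](q)) for q in questions]
    guess = interrogator(transcript)
    actual = "A" if labels["A"] is deceiver else "B"
    return guess == actual

# Hypothetical players: the deceiver mimics the truthful player's style
# exactly, so the interrogator can do no better than guessing.
truthful = lambda q: "I would say yes."
deceiver = lambda q: "I would say yes."
interrogator = lambda transcript: random.choice(["A", "B"])

rounds = [imitation_game(interrogator, truthful, deceiver, ["Do you dream?"])
          for _ in range(1000)]
print(sum(rounds) / len(rounds))  # close to 0.5, i.e., no better than chance
```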
By 1952, computer hardware had advanced far enough that actual experiments in writing programs to imitate thought processes could be conducted. The team of Herbert Simon, Allen Newell, and Cliff Shaw organized to conduct such an experiment. They set out to establish what kinds of problems a computer could solve with the right programming. Proving theorems in symbolic logic such as those set forth by Whitehead and Russell in the early 1900s fit the concept of what they felt an intelligent computer should be able to handle.
It quickly became apparent that a new, higher-level computer language was needed beyond what was then available. First, they needed a language that was more user-friendly and could take program instructions easily understood by a human programmer and automatically convert them into the machine language understood by the computer. Second, they needed a programming language that changed the way computer memory was allocated. All previous languages preassigned memory at the start of a program, and the team found that the kinds of programs they were writing would require large amounts of memory and would use it unpredictably.
To solve the problem, they developed a list processing language. This type of language labels each area of memory and maintains a list of all available memory. As memory is freed, the list is updated, and when more memory is needed, the necessary amount is allocated from the list. This type of programming also allowed programmers to structure their data so that any information needed for a particular problem could be easily accessed. A minimal sketch of the free-list idea appears below.
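The following Python sketch illustrates that free-list idea under my own simplifying assumptions (a fixed pool of cells with a single free list); it is not the historical list processing language itself.

```python
class ListMemory:
    """Toy memory pool managed through a free list, as described above."""

    def __init__(self, size):
        self.cells = [None] * size          # labeled areas of memory
        self.free = list(range(size))       # list of all available cells

    def allocate(self, value):
        """Take a cell off the free list only when it is actually needed."""
        if not self.free:
            raise MemoryError("no free cells left")
        index = self.free.pop()
        self.cells[index] = value
        return index

    def release(self, index):
        """Return a cell to the free list so later allocations can reuse it."""
        self.cells[index] = None
        self.free.append(index)

# Usage: nothing is preassigned at program start; cells are claimed on demand.
mem = ListMemory(4)
a = mem.allocate("axiom-1")
b = mem.allocate("axiom-2")
mem.release(a)                  # freed cell goes back on the list
c = mem.allocate("lemma-1")     # reuses the released cell
```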
The end result of their effort was a program called Logic Theorist (LT). The program had rules consisting of axioms already proved. When it was given a new logical expression, it would search through the possible operations in an effort to discover a proof of the new expression. Instead of a brute-force search method, they pioneered the use of heuristics to guide the search.
The LT that they developed in 1955 was capable of proving 38 of the 52 theorems that Whitehead and Russell had devised. It was not only capable of the proofs but did them very quickly. What took LT a matter of minutes to prove would have taken years if done by simple brute force on a computer. By comparing the steps it went through to arrive at a proof with those that human subjects went through, it was also found that LT was a remarkable imitation of the human thought process. The contrast between brute-force and heuristic search is sketched below.
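The following Python sketch contrasts the two search styles on a toy rewrite problem of my own (single-character substitutions toward a goal string), not on Logic Theorist’s actual axioms; it simply shows how a heuristic reduces the number of expressions examined.

```python
import heapq
from collections import deque

def neighbors(word, alphabet="abc"):
    """All expressions reachable by one single-character substitution."""
    for i, current in enumerate(word):
        for ch in alphabet:
            if ch != current:
                yield word[:i] + ch + word[i + 1:]

def brute_force(start, goal):
    """Breadth-first search: examines expressions in order of discovery."""
    queue, seen, examined = deque([start]), {start}, 0
    while queue:
        word = queue.popleft()
        examined += 1
        if word == goal:
            return examined
        for nxt in neighbors(word):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)

def heuristic_search(start, goal):
    """Best-first search guided by a mismatch-count heuristic."""
    h = lambda word: sum(a != b for a, b in zip(word, goal))
    frontier, seen, examined = [(h(start), start)], {start}, 0
    while frontier:
        _, word = heapq.heappop(frontier)
        examined += 1
        if word == goal:
            return examined
        for nxt in neighbors(word):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (h(nxt), nxt))

# The heuristic version reaches the goal after examining far fewer expressions.
print(brute_force("aaaa", "cbca"), heuristic_search("aaaa", "cbca"))
```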

Natural Language Dichotomies

Despite the various successful experiments, many observers still believe that AI does not have much potential for practical applications. There is a popular joke in the AI community that points out the deficiency of AI in natural language applications. It is said that a computer was asked to translate the following English statement into Russian and back to English: “The spirit is willing but the flesh is weak.” The reverse translation from Russian to English yielded: “The vodka is good but the meat is rotten.”
From the author’s own perspective, AI systems are not capable of thinking in the human sense. They are great at mimicking, based on the massive amounts of data, structures, and linkages available to them. For example, consider the natural language interpretations of the following ordinary statements (a small parsing sketch after the examples makes the ambiguity concrete):
  • “No salt is sodium free.” A human being can quickly infer the correct interpretation and meaning based on the prevailing context of the conversation. However, an “intelligent” machine may see the same statement in different ways, as enumerated below:
  • “No (salt) is sodium free,” which negates the property of the object, salt. This means that there is no type of salt that is sodium free. In other words, all salts contain sodium.
Alternatively, the statement can be read as follows:
  • “(No-salt) is sodium free,” which is a popular advertising slogan for the commercial kitchen ingredient named No-salt. In this case, the interpretation is that this product, named No-salt, does not contain sodium.
Here is another one:
  • “No news is good news.” This is a common saying that humans can easily understand regardless of the context. In AI reasoning, it could be subject to the following interpretations:
  • “(No news) is good news,” which agrees with the normal understanding that the state of having no new implies the absence of bad news, which is good (i.e., desirable). In this case, (No-news), as a compound word, is the object.
Or, an AI system could see it as:
  • “No (news) is good news,” which is a contradiction of the normal interpretation. In this case, the AI system could interpret it as a case where all pieces of news are bad (i.e., not good). This implies that the object is the (news).
Here is another one from the political arena:
  • “The Britis...
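The bracketing ambiguity in the “No salt” and “No news” examples above can be made concrete with a small Python sketch of my own (not an actual natural-language system): the same word string yields opposite readings depending on whether a compound such as “no-salt” or “no-news” is recognized as a single object.

```python
def tokenize(sentence, compounds=frozenset()):
    """Greedy toy tokenizer: joins an adjacent word pair listed in `compounds`."""
    words = sentence.lower().split()
    tokens, i = [], 0
    while i < len(words):
        pair = " ".join(words[i:i + 2])
        if pair in compounds:
            tokens.append(pair.replace(" ", "-"))   # treat the pair as one object
            i += 2
        else:
            tokens.append(words[i])
            i += 1
    return tokens

# Same strings, two vocabularies, two opposite readings.
print(tokenize("No salt is sodium free"))               # ['no', 'salt', ...] -> all salt contains sodium
print(tokenize("No salt is sodium free", {"no salt"}))  # ['no-salt', ...]    -> the product contains no sodium
print(tokenize("No news is good news", {"no news"}))    # ['no-news', ...]    -> absence of news is good
```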
