AI is Everywhere
The moral of this story [1] is simple. When it comes to any form of new technology, the destination you arrive at may not be the one you were planning for. As a society, we are embarking on arguably the most exciting technological journey yet: the quest to improve the way we live and, especially, the way we work through the practical use of Artificial Intelligence (AI). Reading the daily media reports and the recent AI Index report [2], many would argue that we are already using AI extensively in our personal and professional lives.
One reason that AI is seen as ubiquitous is that there are many high-profile and successful applications that do have a major impact on our daily lives. Many of these applications are driven by large web companies such as Google, Amazon and Facebook and enable impressive applications such as search, question answering, and image classification. In our online shopping, we experience AI-enabled profiling and targeted advertising continually and can be forgiven for thinking that the AI knows us better than we know ourselves. We no longer need to learn a foreign language as online translation tools allow us to read foreign language text or order a drink anywhere in the world. When we call our bank or book a holiday, we can count on dealing with virtual agents that seem to understand our conversation as if they were human. So, what's the problem?
Enterprise Applications
Enterprise applications are different from these popular consumer web applications, and they really do run the world in both the commercial and public sectors. In the commercial sector, enterprise applications are used in retail sales, insurance, finance, telecommunications, transportation, manufacturing, and many other industries. In the public sector, government agencies deploy applications to support areas including law enforcement, social security administration, internal revenue, health services and national defense. With so much AI already available and working for our benefit, you may be surprised to know that not only are actual enterprise applications of AI still in their infancy, but also that the majority of enterprise AI projects fail. As evidence, recent analyst reports [3, 4, 5, 6] estimate a success rate of at most 20–40% for the adoption of AI to create business value. This supports the assertion that moving AI from a proof-of-concept to a real business solution is not a trivial exercise.
While the consumer web applications mentioned above are extensive in their reach and impact, they represent just a tiny fraction of real Information Technology (IT) applications around the world. Hidden under the covers of the organisations that we rely on for our day-to-day lives are tens of thousands of applications. Individually, these applications may appear much smaller in reach and impact than web search or online shopping, but, in reality, they are critical to our lives. These applications perform all the essential functions of the modern world, from managing our prescriptions to evaluating life insurance risks, controlling city traffic, managing bank accounts and scheduling the maintenance of trains and buses. There is a vast number of such enterprise applications, and many could benefit from the application of AI. However, many well-intended AI projects underestimate the extra complexity of delivery in an enterprise setting and, often even after stellar early success, fail to deliver actual business benefit. In creating these enterprise applications, we need to recognise that they are very different from consumer web applications and that delivering AI in the enterprise is different from delivering AI in a web company.
Consider, for example, a consumer web application such as one of the personal assistants that we all now have in our homes and use in our everyday activities. These assistants are designed to answer the most frequently asked questions such as, "is it going to rain today?" To answer these questions, the developers provide specific services and then capture data from millions of users to discover all the possible ways the question could be asked.
But what about answering general knowledge questions? While we all love to be impressed by the power of our online assistants, answering general knowledge questions is considerably easier than answering enterprise questions. "Who was the British Prime Minister during the Suez Crisis?" requires a factual response. To find the answer to such a question, the technology can exploit the massive levels of information redundancy on the internet. There are tens, if not hundreds, of thousands of documents online about the Suez Crisis. This information redundancy means that it is possible to use simple statistical algorithms to identify the correlation between the terms "Suez Crisis", "Prime Minister" and "Anthony Eden". All the web AI has to do is take the statistically strongest answer. The internet also includes many trusted sources of data, including news agencies, educational organisations and encyclopaedic sites that aim to provide validated and trustworthy information. In tuning their algorithms, the web companies can use the feedback from millions of users to spot and correct errors.
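To make the redundancy idea concrete, here is a minimal sketch of this kind of co-occurrence counting in Python. It is purely illustrative, not any vendor's actual algorithm: candidate answers are scored by how often they appear in documents that also mention the query terms, and the statistically strongest candidate wins. The toy corpus and function name are invented for the example.

```python
from collections import Counter


def answer_by_redundancy(documents, query_terms, candidates):
    """Score each candidate answer by how often it co-occurs with the
    query terms across a (redundant) document collection."""
    scores = Counter()
    for doc in documents:
        text = doc.lower()
        # Only documents mentioning every query term count as evidence.
        if all(term.lower() in text for term in query_terms):
            for cand in candidates:
                if cand.lower() in text:
                    scores[cand] += 1
    # Take the statistically strongest answer, if any.
    return scores.most_common(1)[0][0] if scores else None


# A toy corpus standing in for thousands of redundant web pages.
docs = [
    "Anthony Eden was Prime Minister during the Suez Crisis of 1956.",
    "The Suez Crisis ended the career of Prime Minister Anthony Eden.",
    "Harold Macmillan became Prime Minister after the Suez Crisis.",
]
print(answer_by_redundancy(docs, ["Suez Crisis", "Prime Minister"],
                           ["Anthony Eden", "Harold Macmillan"]))
# → Anthony Eden
```

The point of the sketch is that no understanding is required: with enough redundant documents, naive counting surfaces the right answer. With the handful of contradictory, low-volume documents typical of an enterprise, the same trick fails.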
In other words, consumer web applications tend to focus on quite simple, common tasks that can be performed with huge volumes of information, from many (often trusted) sources, while using feedback from tens of millions of users to tune the algorithms. In the enterprise, the questions asked are rarely that simple and the volumes of information are much, much lower. Often information is contradictory and may take a skilled user to assess and really understand. Alternatively, the required information may not exist at all. As for the number of people involved, in an enterprise, far fewer people ask far more complex questions, and the differences between questions may be subtle but important. Our ability to capture high-volume feedback is limited.
Finally, there is one further and very significant difference between applying AI in an enterprise setting and applying AI for consumer web applications. We are much more forgiving of errors in web AI; it is mostly of low consequence if a web AI brings back a wrong answer. If we ask, "Alexa, play my favorite tune" and the response is, "THE MONTH OF JUNE HAS 30 DAYS", it's another thing to laugh about at dinner parties. However, we will be far less forgiving if a police application leads to the arrest of the wrong person or a medical AI leads to a misdiagnosis.
The excitement generated by AI web applications does still add value to enterprise applications. First, it changes the way we might think about enterprise applications by placing greater emphasis on ease of access and simplicity. Graduates joining modern enterprises expect web technology and web style user experiences in their life at work. Second, it drives innovation and helps push forward the business cases for enterprise applications of AI. Of course, such an endeavour will succeed only if the AI can deliver on the expectations assumed in the business case. In this respect, we must recognise that the domain of AI has a track record of failed delivery.
AI Winters
There have been two periods in the history of AI where dashed expectations have led the industry to lose all confidence in AI. The perception of AI was so poor that these periods were called "AI Winters". During these periods, funding for new AI endeavours all but disappeared and there was widespread disillusionment, some would say cynicism, about AI.
Origins of Artificial Intelligence Research
The Beginning
The modern formulation of AI started with Alan Turing's famous paper, "Computing Machinery and Intelligence" [1], where he discussed the concept of computing machines emulating human intelligence in some context, giving rise to the term "Turing Test" for machines. The next big event was the 1956 Dartmouth Summer Workshop [2] organised by John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon, when the term "Artificial Intelligence" was introduced as an interdisciplinary area of research. Two key people who could not attend the event were Alan Turing, who had died in 1954, and the computing visionary John von Neumann, who was already seriously ill. Even though the workshop did not produce anything specific, it gave the participants the motivation to approach AI from their different perspectives over the decades that followed.
What Is AI?
John McCarthy defined [3] AI as, "the science and engineering of making intelligent machines" and defined "intelligence" as "the computational part of the ability to achieve goals in the world." Marvin Minsky offered [4] a similar definition, "the science of making machines do things that would require intelligence if done by men". The Encyclopedia Britannica currently defines AI as, "the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings".
These definitions all share a common theme in that they refer to performing tasks that would normally require or be associated with human intelligence. There is clearly a paradox in this definition. Pamela McCorduck [5] describes this: "Practical AI successes, computational programs that actually achieved intelligent behavior, were soon assimilated into whatever application domain they were found to be useful, and became silent partners alongside other problem-solving approaches, which left AI researchers to deal only with the 'failures,' the tough nuts that couldn't yet be cracked." There is actually a name for this! It is called the "AI Effect" and is summarised in Larry Tesler's theorem [6], "Intelligence is whatever machines haven't done yet". This is because society wants to associate intelligence only with humans and does not want to admit that human tasks can indeed be performed by machines!
REFERENCES
- A.M. Turing, "Computing machinery and intelligence," Mind, New Series, 59(236), pp. 433–460 (October 1950).
- âDartmouth summer research project on Artificial Intelligence,â http://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf.
- J. McCarthy, "What is AI," http://jmc.stanford.edu/articles/whatisai/whatisai.pdf.
- M. Minsky, Semantic Information Processing, (Cambridge: MIT Press, 2003).
- P. McCorduck, Machines Who Think, (London, UK: Routledge, 2004).
- D. Hofstadter, who quoted Larry Tesler differently as "AI is whatever hasn't been done yet," in Gödel, Escher, Bach: An Eternal Golden Braid (1980).
The first AI Winter occurred between 1974 and 1980, following the publication of a report by Sir James Lighthill [7] criticising the failure of AI research to meet its objectives and challenging the ability of many algorithms to work on real problems. Funding for AI research was cut across UK universities and across the world.
In fact, AI's fortunes seemed to be on the up again in the mid-1980s, when investment banks found that neural networks and genetic algorithms seemed to be able to predict stock prices better than humans. A stampede of activity took place as banks competed to get the upper hand with better, more sophisticated automated trading algorithms. What could possibly go wrong? In the rush to get rich, IT architects failed to acknowledge the critical weakness of neural nets: their predictions rely on historical precedent in their training data, and they can be unpredictable when applied to previously unseen situations. The stock price boom, made possible by AI, was (you guessed it) unprecedented. By this point everybody trusted the algorithms: when they said "Sell!" the bank sold; when they said "Buy!" it bought, no questions asked. The global financial crash of 1987, also known as "Black Monday", was enabled by a chain reaction of AI trading algorithms going off the rails [8].
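The weakness described above, that a model trained purely on precedent answers confidently even in situations unlike anything it has seen, can be illustrated with a deliberately simple sketch. This is not a neural network or any bank's actual system; a nearest-neighbour lookup over invented price data is used here because it makes the failure mode obvious.

```python
def nearest_neighbour_predictor(history):
    """'Learn' purely from precedent: predict the outcome recorded for
    the most similar previously seen situation."""
    def predict(x):
        # Find the historical input closest to x and return its outcome.
        closest_input, outcome = min(history, key=lambda pair: abs(pair[0] - x))
        return outcome
    return predict


# Hypothetical training data: prices observed in a calm, narrow range.
history = [(100, 101), (101, 102), (102, 101), (103, 102)]
predict = nearest_neighbour_predictor(history)

print(predict(101.5))  # in-range input: a plausible answer
print(predict(180))    # unprecedented input: an equally confident answer
```

For the in-range query the prediction is reasonable; for a price far outside anything in the training data, the model still returns an answer with no signal that it is extrapolating blindly. Scaled up to 1987's automated trading, that silent confidence is the problem the text describes.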
Unsurprisingly, this was a major fac...