If we teach today's students as we taught yesterday's, we rob them of tomorrow.
John Dewey
THE ROBOTS ARE COMING! In fact, robots have been coming for a long time now. Over the past sixty years Robbie, HAL, R2-D2 and Wall-E have been established as mainstays of popular culture, while their 'real-life' equivalents regularly make headline news. Many people still recall chess grandmaster Garry Kasparov being defeated by IBM's Deep Blue in 1997, while Hanson Robotics' 'Sophia' gained notoriety in 2017 as the first humanoid robot to be granted national citizenship. Regardless of their context, people certainly take notice of robots. There is something about these machines that seems to provoke strong reactions and soul-searching about what it means to be human.
Beyond being a regular feature of news reports and science fiction, the primary practical significance of robots relates to the changing nature of contemporary work. A wide range of jobs and professions face the prospect of increased high-tech automation. Industries such as circuit-board manufacturing, underground mining, and fruit-picking now rely on automated, mechanized robots. Elsewhere, intelligent systems are expected soon to take the place of human doctors, lawyers and accountants. High-tech automation is seen as a genuine proposition across many sectors of work and employment.
A notable exception to this trend is education. Despite occasional speculation over 'computer tutors' and 'robot teachers', it has been generally assumed that education is one area of work destined to remain the preserve of humans. Most people intuitively feel that education is an essentially human undertaking. While disagreeing on many other points, education experts broadly concur that learning is a social process dependent on interactions with more knowledgeable others. All told, the belief persists that learning is something best guided by expert human teachers in socially rich settings.
Such thinking is certainly reinforced by the continued dominance of mass schooling and lecture-based university degrees. Yet the past two decades have seen significant technological advances in areas of artificial intelligence (AI) such as robotics and machine learning. These technologies are increasingly social in nature, and able to operate at speeds and scales that far outstrip the capacity of any human. This is beginning to fuel demands from outside the education sector to reconsider the 'cookie-cutter' model of a single teacher presiding over twenty students. Instead, it is claimed that AI technologies are now capable of supporting superior forms of education that do not entail the central involvement of a human teacher. Given this, we need to seriously consider the implications of robotics, artificial intelligence and the digital automation of teaching work.
Robots and artificial intelligence
We first need to establish some basic terms of reference. What concepts and ideas underpin the question of 'robots' replacing 'teachers'? In terms of the technological aspect of these discussions, it helps to move quickly on from imagining robots in the guise of R2-D2 or Wall-E. Of course, 'physical robots' are being used in education, and certainly raise a host of interesting issues (these will be tackled in Chapter 2). Yet robotics is just one area involving the application of artificial intelligence. In this sense, our interests lie primarily with the broad field of AI and associated advances in machine learning and big data.
The field of artificial intelligence emerged in the 1950s as computer scientists became interested in developing machines capable of thinking intelligently. Up until the 2010s, AI work focused mainly on the challenge of adding 'thinking-like' features to computerized technology. This involved a number of different ingredients: providing the computer with an expert knowledge base, along with the codified reasoning and logics required to make decisions. One important aspect of this work is based around the concept of machine learning. This is the process of algorithms being 'trained' to parse large amounts of data in order to learn how to make informed decisions and perform tasks. Adrian Mackenzie describes this as using data to lend a degree of 'computability', predictability and control to real-life phenomena.1 Until recently, these forms of machine learning tended to focus on relatively specific tasks, with any AI system requiring the guidance of programmers to steer it toward correct calibrations. However, the 2010s saw machine learning take on a more powerful guise: what is termed 'deep learning'.
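To make this idea of 'training' concrete, consider a deliberately minimal sketch. The data and numbers below are invented purely for illustration: an algorithm starts with arbitrary internal parameters and repeatedly adjusts them to better fit example data, until its decisions (here, predictions) reflect the pattern in that data.

```python
# A toy illustration of machine learning: an algorithm is 'trained' on
# example data, nudging its internal parameters until its predictions
# fit the evidence. All data and figures here are invented.

def train(data, lr=0.01, epochs=1000):
    """Fit y = w*x + b to (x, y) pairs by simple gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            error = (w * x + b) - y   # how wrong is the current guess?
            w -= lr * error * x       # nudge parameters to reduce the error
            b -= lr * error
    return w, b

# Training data that happens to follow the pattern y = 2x + 1
data = [(0, 1), (1, 3), (2, 5), (3, 7)]
w, b = train(data)
print(w, b)  # the model has 'learned' values close to 2 and 1
```

The point is not the mathematics but the principle: no one tells the program the answer is 'y = 2x + 1'; the regularity is extracted from the data itself, which is the sense in which the machine 'learns'.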
There has been much recent excitement over deep learning as the key to developing forms of AI with the potential to radically transform areas of society such as education. One of the central characteristics of deep learning is the application of machine learning techniques to artificial neural networks. These are networks modelled on the complex layered structure of biological brains. Deep learning involves sets of training data being continually dis-assembled and re-assembled through the layers of an artificial neural network, with each network node continually assigning different weightings to a specific data point. Crucially, a deep learning system is able to train itself to refine these weightings until its algorithms are capable of reaching accurate conclusions. This capacity to learn autonomously using the operating principles of neural networks is seen to offer the possibility of achieving powerful levels of 'human-like' reasoning and language skills: what some commentators see as 'the holy grail' of 'generalized intelligence'.2
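The 'layered' structure described above can be sketched in a few lines. The weights below are arbitrary, chosen only for illustration (a real network would learn them from training data): an input is passed through successive layers of nodes, each node applying its own weightings, so the data is repeatedly re-assembled on its way to an output.

```python
# A sketch of the layered structure behind deep learning: each layer's
# nodes take a weighted sum of the previous layer's outputs and squash
# it into a value between 0 and 1. Weights here are arbitrary, purely
# for illustration; a real network learns them from training data.

import math

def layer(inputs, weights, biases):
    """One layer: per node, a weighted sum of inputs passed through a sigmoid."""
    return [
        1 / (1 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
        for ws, b in zip(weights, biases)
    ]

x = [0.5, -1.2]                                        # an input data point
h = layer(x, [[0.8, -0.4], [0.3, 0.9]], [0.1, -0.2])   # hidden layer (2 nodes)
y = layer(h, [[1.5, -1.1]], [0.05])                    # output layer (1 node)
print(y)  # a single value between 0 and 1
```

A deep network simply stacks many such layers; 'training itself' means repeatedly adjusting all the weights and biases until the final output matches the desired answers, with no programmer specifying each weighting by hand.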
An early 'proof of concept' for deep learning came from a team of Google engineers led by Andrew Ng in 2012, training huge neural networks on data from 10 million YouTube videos. This breakthrough took advantage of three technological developments of the time: the reduced cost of graphics processing units, the vast storage capacity of cloud computing, and the growing availability of massive data sets. In particular, the early 2010s marked a tipping point in 'big data', with terabytes of digital content being generated every hour through digital sensors, social media and other commonplace technologies. This massive volume of available data is seen to have transformed the potential of machine learning. As Andrew Ng has since reflected: 'The analogy to deep learning is that the rocket engine is the deep learning models and the fuel is the huge amounts of data we can feed to these algorithms.'3
These processes now underpin various types of AI application. For example, machine learning is a key element of the image processing capabilities that underpin the operation of self-driving tractors and autonomous drone weaponry. Elsewhere, big data processing is used to identify people and locations at increased risk of crime (so-called 'predictive policing'), and to configure forms of customized healthcare through the analysis of population-wide genomic data (so-called 'precision medicine'). Many of these advances are driven by the expansion of types of digitized data. For example, the emerging field of 'affective computing' seeks to detect and recognize human emotions from a range of data relating to facial detection, body gestures, galvanic skin response and other physiological measurements. To paraphrase a long-standing business mantra, there is now a growing belief that 'if you can't measure it, you can't improve it'.
Despite such enthusiasms, the fast-expanding scope of these applications is proving contentious. For every optimistic declaration of 'better living through AI' there are concerns over inaccuracies, misrecognitions and faulty decision-making. The growing hype around big data and machine learning has highlighted the fact that AI systems are only as good as the logics they are programmed with and the data sets they are trained on. While cases of computer vision processing failing to distinguish between sheep and grass are understandably embarrassing for the developers involved, failures of AI applications to recognize African-American faces as human are clearly discriminatory.4 Automated image misrecognition can lead to a cat being mistaken for a dog, or an Afghan wedding party mistaken for a military convoy. Algorithmic sorting has already led to unjust decisions in criminal sentencing and social welfare payments.5 AI has quickly become an area of computer science associated with profound social consequences.
Many AI developers and vendors acknowledge these shortcomings but see them as teething troubles that will eventually be overcome. These systems are designed to become more accurate and efficient with increased use over time. As a result, many proponents of AI reason that any short-term limitations should be seen in light of the potential for longer-term transformations on an unprecedented scale. Some commentators anticipate the development of 'a distributed planetary computer of enormous power'6 involving billions of connected processors working with a continuous supply of data from millions of data sources. Others see the potential realization of the 'technological singularity', where artificial superintelligence suddenly begins to outstrip human intelligence and prompt a new evolutionary phase. It is generally hoped that such scenarios will be life-enhancing rather than life-threatening. As Garry Kasparov has argued in his latter-day role as an ambassador for 'responsible robotics': 'new forms of AI will surpass us in new and surprising ways ... Humans, meanwhile, will continue up the ladder ... We're not being replaced by AI. We're being promoted.'7