AI for Learning
eBook - ePub

Carmel Kent, Benedict du Boulay

  1. 104 pages
  2. English
  3. ePUB (available on the app)
  4. Available on iOS and Android

About the Book

What is artificial intelligence (AI)? How can AI help a learner, a teacher or a system designer? What are the positive impacts of AI on human learning?

AI for Learning examines how artificial intelligence can, and should, positively impact human learning, whether it be in formal or informal educational and training contexts. The notion of 'can' is bound up with ongoing technological developments. The notion of 'should' is bound up with an ethical stance that recognises the complementary capabilities of human and artificial intelligence, as well as the objectives of doing good, not doing harm, increasing justice and maintaining fairness. The book considers the different supporting roles that can help a learner – from AI as a tutor and learning aid to AI as a classroom moderator, among others – and examines both the opportunities and risks associated with each.

Book Information

Publisher: CRC Press
Year: 2022
ISBN: 9781000538595

1 How Do I Tell the Difference between Good AI and Bad? Or – About Our Five-Step Evaluation of Cake Mixes

DOI: 10.1201/9781003194545-1
Synopsis: This chapter describes a simple five-step process for evaluating AI-based technologies for education. If you are a parent or an educator, following these five steps should assist you in evaluating the use of a new AI-based technology in your classroom or in learning at home. It will help you cut through the sales talk to see which AI systems are useful for you, which have the potential to harm and which are simply not suitable. If you are an EdTech developer or designer and you have an idea about how to use AI in learning or teaching, following these five steps will help you make design choices aimed at developing an AI-based technology with real educational value. This way, your product would not join the shelf of AI-based white elephants that no learner, teacher or parent would actually use effectively.
Think about the framework as evaluating the quality of a marriage or a partnership: a partnership between humans and machines. This partnership would look completely different in different educational settings or contexts. As always with education – everything is in the context, right? For example, evaluating the use of an AI system to tutor preschool children’s reading would look completely different (and would take completely different considerations into account) than evaluating an AI system that trains teachers. The outline of the five steps is the same, but it is totally up to you to apply your own context on top of it.
This chapter will walk you through these five steps, and the rest of the chapters will provide you with examples of how to apply the five-step process in specific educational contexts, so that you can go on and ‘write your own chapter’: fill it with the details about the learners, the domain knowledge and the pedagogy you know best, and evaluate how AI should – or should not – be used (Figure 1.1).
Figure 1.1 How do I tell the difference between good AI and bad? Illustrated by Yael Toiber Kent.
Our evaluation process (summarised in Figure 1.2) consists of four main steps, while the fifth is about going back and iterating through the whole process when needed. In the first step, we consider what each partner does best. In other words, we try to map what we (human beings) are good at and what AI systems are good at, in the context of each specific educational challenge. With this distinction in mind, in the second step we examine what this partnership looks like (or rather should look like). We choose the way in which human intelligence and artificial intelligence might work together, again to meet the specific educational challenge in mind. When the collaboration architecture is in place, the third step is to list the ways in which this partnership might open up opportunities for, or impose constraints on, the learning process we have in mind. The fourth step is to specify how to use the evaluated AI technology as a learning or teaching activity. Lastly, the fifth step focuses on reflecting on how well the technology works within the chosen learning context, which aspects need to be rethought, and how to redesign and re-evaluate a newer, better, evidence-based version.
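As a rough aid, the shape of this loop can be captured in a short sketch. The Python below is purely illustrative: the field names and step labels are paraphrases of the description above, not the authors' notation or any tool that accompanies the book.

```python
# A minimal, illustrative sketch of the five-step evaluation loop described
# above. Field names and step labels are paraphrases of the text, not the
# authors' own notation or tooling.
from dataclasses import dataclass, field


@dataclass
class PartnershipEvaluation:
    context: str                                          # e.g. "AI tutor for preschool reading"
    human_strengths: list = field(default_factory=list)   # Step 1: what humans do best here
    ai_strengths: list = field(default_factory=list)      # Step 1: what the AI does best here
    partnership_model: str = ""                           # Step 2: how human and AI work together
    opportunities: list = field(default_factory=list)     # Step 3: what the partnership opens up
    constraints: list = field(default_factory=list)       # Step 3: what it closes down
    learning_activity: str = ""                           # Step 4: how it is used to learn or teach
    needs_rework: bool = True                             # Step 5: reflect, then iterate if needed


def evaluate(e: PartnershipEvaluation, max_iterations: int = 3) -> PartnershipEvaluation:
    """Step 5: reflect on how well the design works in its context and iterate."""
    for _ in range(max_iterations):
        if not e.needs_rework:
            break
        # Gather evidence from learners and teachers, then revise Steps 1-4.
        # (Placeholder: a real evaluation would update the fields above.)
        e.needs_rework = False
    return e
```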

On Wheels, Cake Mixes and Clippy

A century ago, flour companies invented cake mixes. American housewives were promised that they would get a desirable product at an affordable price, as well as being freed from laborious hours in the kitchen. They were tempted by the dream of using their time more efficiently, instead of ‘wasting’ it on searching for a recipe (remember, Google was not around back then!), bothering to find the individual fresh ingredients, measuring, mixing and worrying about countless matters that could go wrong in the process. With the new cake mixes – even non-expert bakers could achieve an expert result! Could anyone in their right mind refuse such an offer? Yes, as it turns out. How surprised the cake-mix manufacturers were to realise that their target audience – housewives – still preferred to search, mix, clean and stand for hours on their tired feet. Why would they? What went wrong?
Ernest Dichter, a psychologist and a marketing consultant hired to solve the mystery, used focus groups to transform a whole industry in the 50s, with one simple magic ingredient: a fresh egg.
In fact, the magic ingredient was not the egg itself but rather the exclusion of the egg powder from the cake mix and its reinstatement into the hands of the home baker as a fresh egg. This way, home baking would not feel ‘fully automated’. In other words, Dichter had restored the human-in-the-loop. Years later, this transformational pivot is still being taught in marketing classes as the perfect example of how irrational we humans are. The problem with the cake mix, marketing students are being taught, is that home bakers didn’t feel emotionally invested enough by just adding water [1]. Baking had been presented as ‘an emotional activity’, and the ‘repositioning’ of the egg had been presented as a foxy deception, which made the ‘irrational’ bakers regain their emotional investment and feeling of control.
But were those home bakers really irrational? And how did a baking story find its way into an education book on AI? Well, for one thing – baking, much like educating, is a science and an art. Following a good recipe and using a mix of ingredients is a must in both professions. However, the ‘glue’ cannot be synthesised, whether it is an egg or personal human feedback. A factory-made batch of identical cakes might not be exactly right every day for every person. It takes an experienced and perceptive baker to finesse the final touch, take into account the room temperature and consider the various preferences and potential allergies of the cake’s potential eaters.
Every amateur baker knows that baking is an act of love, cherishing and intent. Just because something can be taken out of human hands, it does not mean it should.
Cake mixes, the wheel, the printing press, the computer and even technologies which are almost taken for granted today, such as the personal calculator, were designed to augment human abilities and achieve our goals by complementing human deficiencies (such as memory decay or distortion) with technology’s unfair advantage. Technology’s unfair advantage is the capacity that it undoubtedly and uniquely has over us humans (such as the ability to store a large amount of data and process it quickly).
Evidently, when the cake mix was taken completely out of the human’s hands, things did not go well. Similarly, Microsoft’s Word designers in the 1990s assumed that people would interact with an automatic assistant. As it turns out, the designed digital assistant (called Clippy, for those of us too young to have met it) did not adhere to many of the social norms people expect from ‘an assistant’, such as getting to know the person it was trying to help [2]. Eventually Clippy, introduced with Office 97, did not endure beyond the early 2000s upgrades of Office.
So how do we – educators, parents and EdTech designers – ensure that a technology effectively supports the needs of learners, as well as ourselves, without penalising our or their sense of being able, sense of trust, and our fundamental right to develop our own human qualities and excel in what we are good at? How do we make this human–machine partnership work properly?
In this book, we suggest that a robust design of a human–machine partnership can be achieved only when we (educational experts or designers) gain a profound understanding of what our (human) unfair advantages are and what the augmenting technology’s unfair advantages are. These unfair advantages should be sensitively considered in the context of a particular task or need (whether it is being able to bake a cake happily or, as we will examine in this book, an educational need). Unlike a typical human–human collaboration, in a human–machine collaboration we are dealing with two parties who are very far apart in terms of their skill sets. It is only when we have figured out what each party is best at, and where they fall short, that we can get on to designing AI-based systems.
In the rest of this chapter, we will detail the five steps of our framework to design and evaluate AI systems in education.

Step 1: Map Out Our/Their Unfair Advantages

AI systems are good at picking out patterns from large datasets and reapplying them (potentially repeatedly and fast) to similar datasets. For example, a machine learning (ML) model trained on historical data from a specific population to predict the efficacy of a specific drug is likely to predict the efficacy of that drug accurately for a new population – provided that it is the same drug and that the new population is very similar to the one the model was trained on.
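To make the ‘very similar population’ caveat concrete, here is a minimal, hypothetical sketch using scikit-learn and synthetic data. The ‘drug efficacy’ framing, the features and the numbers are all invented for illustration and do not come from the book: a model fitted on one population scores well on a statistically similar one and close to chance on a shifted one.

```python
# Hypothetical sketch with synthetic data: a model trained on one population
# generalises to a very similar population, but not to a shifted one.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_population(shift: float, n: int = 2000):
    """Two made-up patient features; whether the 'drug worked' depends on them,
    and the relationship itself changes as the population shifts."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] > 1.5 * shift).astype(int)
    return X, y

X_train, y_train = make_population(shift=0.0)      # population the model is trained on
X_similar, y_similar = make_population(shift=0.1)  # very similar population
X_shifted, y_shifted = make_population(shift=3.0)  # substantially different population

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy on similar population:", model.score(X_similar, y_similar))  # high
print("accuracy on shifted population:", model.score(X_shifted, y_shifted))  # near chance
```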
Furthermore, algorithms, and in particular AI algorithms, are heavily reliant on hardware. Advancements in storage capabilities, computing and parallelism have enabled AI to increase its performance and ultimately its popularity over recent years. The ‘memory device’ called the Internet and the immense acceleration in computing power have armed AI applications with an unfair advantage over humans in terms of speed and memory capacity.
One of humans’ unfair advantages is our ability to comprehend worlds, scenes or situations we might not have seen before and that are not rigidly predictable, well defined and certain; AI falls short in this ability. AI is at its finest when recalling, searching and recognising well-described, narrow worlds. Playing Go or Chess, image classification, speech recognition, handwriting transcription, digital assistants (such as Siri and Alexa) and even autonomous driving – to some extent – are all challenges that are often well defined, narrow and relatively static in their scope of logic. These are challenges that often require the ability to quickly and accurately process large datasets. These are all examples of AI’s unfair advantage over us. Even as knowledgeable, trained and seasoned humans, we will always possess a limited short-term memory that falls short when attempting to do as well as machines in these kinds of task.
Human beings are born with a limited memory and cognitive processing ability. We typically compensate for this shortage with far-from-ideal mental shortcuts, such as heuristics, schemas and scripts. For example, we might use rules of thumb, educated guesses and stereotypes to get us through everyday decisions such as whether to cross the road, how to approach a person we do not know or when to take the bus in order to get to our meeting on time. Obviously, in addition to allowing us to make effective and quick decisions [3], such mental shortcuts can also lead us to make mistakes and introduce biases in our decision-making (such as confirmation bias, anchoring bias, group-favouritism, limited attention span and the framing effect).
AI systems, despite having their own blind spots, don’t need to employ such shortcuts, as their processing capacity is far less limited. Consequently, AI algorithms are well-positioned to assist us in identifying our own human biases and making better-informed decisions.
Humanity’s unfair advantage consists of our ability to transfer skills between differ...

Table of Contents

  1. Cover
  2. Half-Title
  3. Series
  4. Title
  5. Copyright
  6. Contents
  7. Foreword: The power of learning, the power of AI
  8. Introduction: What Is All This about AI in Education?
  9. 1 How Do I Tell the Difference between Good AI and Bad?: Or – About Our Five-Step Evaluation of Cake Mixes
  10. 2 AI as a Learner: How Can AI Help Me Learn Things That I Did Not Understand Before?
  11. 3 AI as a Tutor: How Can AI Tutor Me about Stuff?
  12. 4 AI as a Classroom Moderator: How Can AI Give Me Eyes in the Back of My Head?
  13. 5 Conclusion: So, Are We Friends Now?
  14. Glossary
  15. Index
Citation styles for AI for Learning

APA 6 Citation

Kent, C., & du Boulay, B. (2022). AI for Learning (1st ed.). CRC Press. Retrieved from https://www.perlego.com/book/3127234/ai-for-learning-pdf

Chicago Citation

Kent, Carmel, and Benedict du Boulay. 2022. AI for Learning. 1st ed. CRC Press. https://www.perlego.com/book/3127234/ai-for-learning-pdf.

Harvard Citation

Kent, C. and du Boulay, B. (2022) AI for Learning. 1st edn. CRC Press. Available at: https://www.perlego.com/book/3127234/ai-for-learning-pdf (Accessed: 15 October 2022).

MLA 7 Citation

Kent, Carmel, and Benedict du Boulay. AI for Learning. 1st ed. CRC Press, 2022. Web. 15 Oct. 2022.