Chapter 1
Testing and Revising: The Evidence
There are no magic potions to reach for when exam season approaches. There is no Asterix and Obelix 'Getanexamfix' druid. Unfortunately, as far as I know, there are no magic exam beans either. The next new initiative might not be a panacea but, in fact, another way to foster an atmosphere of pernicious accountability and 'support' a teacher out of the profession.
Nor are there any silver bullets to ensuring student academic success. Sometimes, though, the talk of exam success and students getting excellent grades can conjure up images of exam factories – huge, cold, industrial complexes where students are drilled in Victorian-style classrooms, writing out arithmetic on slate grey iPads.
When I started teaching I had no real understanding of how the memory works and even less of a clue about cognitive science. I thought that pace in the classroom was key (partly through received wisdom and partly through my own vanity: 'If you want to see great pace in lessons then go to Jake's classroom!').
This was both comical and sad, as I really did think that doing things quickly would impress observers and keep the students engaged. It did impress observers, but I don't know if it actually helped to engage the students. I fear it didn't because when I started working at a new school, I began teaching lessons at such a brisk pace that the students complained they couldn't understand as I was speaking and doing things too quickly. Fears of accountability fuelled my hyperactivity and led to little or no time for the students to understand the material or process it properly.
Pace became a 'tick-box' item in lesson observations, added to the list of 'things we must see in a lesson observation', such as differentiation. This sometimes led to three different sets of worksheets for clearly identifiable groups of students who, no matter how much stealth you put into surreptitiously organising the class into 'higher ability', 'middle ability' and 'lower ability', would always know. In the end, both the students and I became embarrassed by the whole thing. I now know that my own understanding of differentiation was rather ill-founded and not based on 'responsive teaching'.
I also conducted mini-plenaries (perhaps it's just the terminology that's a problem, since if they were considered as 'retrieval practice' then mini-plenaries might be thought of more positively) and peer assessment without any awareness of the potential for the Dunning–Kruger effect – that is, the cognitive bias in which individuals who are unskilled at a task mistakenly believe themselves to possess greater ability than they do. An alternative, perhaps somewhat cruder, definition is that you're too incompetent to know that you are incompetent.
I'm not necessarily saying that pace was, and is, a bad thing; just that because I had picked up that it impressed people, it became one of the things I would do when being observed, and also something to look out for when I was required to do lesson observations. Seeking to confirm a prejudiced view was a bias that I never even knew I had.
It felt strange, nonetheless, that in my observed lessons where I limited teacher-talk time and ensured my pace was good, I was graded mostly outstanding; yet I always felt that the students learned more from me standing at the front and teaching in a slower and more didactic manner, followed up by some guided practice. This was the style I reverted to when teaching sans observer, especially when the exam season loomed large.
Giving students summative tasks to improve a summative outcome was also something I believed would help them to learn better over time: if I test them on the big thing in the way they are tested in exams, they will definitely get better at that big thing. This approach influenced the thinking behind a card sort I devised which involved matching up examiner reports and mark schemes.
As a language teacher, I also used listening tasks from textbooks and past papers to try to improve students' listening skills on a later summative listening test. It felt like I was doing my job, primarily because that was how I understood it should work from my teacher training. The fact that students' working memories were being overloaded because the listening exercises were too complex and the skill had not been broken down did not occur to me. (One of the advantages of deliberate practice – where a skill is broken down into smaller tasks – is that there is less of a load on working memory.)
By designing writing tasks which were summative assessments and then expecting students to improve on their next summative assessment, I was confusing learning and performance. Daisy Christodoulou (@daisychristo) notes that learning is about storing detailed knowledge in long-term memory whereas performance is about using that learning in a specific situation. The two have very different purposes.
In a blog post on enhancing students' chances at succeeding at listening, Gianfranco Conti (@gianfrancocont9) raises the following issues:
Rather than focusing on breaking down the skill of listening to ensure the students had mastered bottom-up processing skills, I instead played them extract after extract of a listening comprehension from a textbook. I wasn't aware that breaking down the skill would have been effective in building the students' listening skills because the practice looks different from the final skill. It's similar to using past papers to improve students' grades – it doesn't necessarily work.
Maths teacher David Thomas (@dmthomas90) describes how over-focusing on exams can take the joy out of learning in the classroom. He observes that were it possible to teach assessment objectives directly then it would make sense for every piece of work to be a 'mini-GCSE exam', but this isn't possible as they are focused on generic skills, and these skills 'can only be acquired indirectly: by learning the component parts that build up to make the whole such as pieces of contextual knowledge, rules of grammar, or fluency in procedures'. Furthermore, 'these components look very different to the skill being sought – just as doing drills in football practice looks very different to playing a football match, and playing scales on a violin looks very different to giving a recital'.
The idea of being 'exam literate' might sound superficial (e.g. knowing key parts of the mark scheme or building up a store of key points from the examiner report), but in fact it is about spending time adopting some of the tenets of deliberate practice and building up mental models in each domain.
Just as adopting a deliberate practice model does not look like the final task, so exam literacy does not look like the final exam. I remember thinking that I was quite clever to come up with a homework task early on in a Year 12 Spanish course which got the students to design their own exam papers, and another time when I designed practice tasks which mirrored the exact style of the questions the students would face in their writing exam (even mimicking the dotted style of the lines on which students would write their answers!). I mistakenly thought that if they were familiar with the format of the paper then there would be no surprises in the exam.
The relative merits of different approaches have been a common topic of debate on Twitter and in the edublogosphere over the last few years. For example, there is a great chapter by Olivia Dyer (@oliviaparisdyer) on drill and didactic teaching in Katharine Birbalsingh's Battle Hymn of the Tiger Teachers, and plenty of wonderful blog posts setting out commonsense approaches combined with aspects of cognitive science, as well as how best to plan a curriculum. A great place to start might be to have a look at any one of the Learning Scientists' blog posts.
The education debate seems to have been shifting towards questioning wha...