While there is evidence that TAs have a positive impact on teachers’ workload and stress levels (Blatchford et al. 2012b), until the DISS project there was next to no empirical research on the impact of TAs on pupils over sustained periods (e.g. a school year) and under everyday classroom conditions. Much of the evidence that does exist comes from small-scale studies of TA-led curriculum interventions. There are some important points to make about the body of research on TAs, which are helpful for contextualising the case made and guidance presented in this book.
When it comes to the research on the direct impact of TAs on learning outcomes, we can separate most of it into two broad categories: (i) studies measuring the effects of curriculum interventions and ‘catch-up’ programmes delivered by TAs; and (ii) studies focusing on the impact of other forms of TA deployment. The first group tends to concern studies involving specific subjects – typically numeracy and aspects of literacy (reading, spelling, writing and phonics) – and pupils in certain year groups. These intervention programmes are very often delivered outside the classroom. The second group concerns research on how TAs are used inside the classroom in everyday conditions.
Research on TA-led interventions
Simply put, there is good evidence pupils make progress in literacy and numeracy as a result of structured curriculum interventions delivered by TAs – but only when TAs have been properly trained to deliver those programmes (see Alborz et al. 2009).
The positive results from research on TA-led interventions are frequently offered up as conclusive proof of TA impact (Ward 2014), but the overall evidence base is surprisingly thin. The majority of this research comes from international studies conducted on a small scale, typically involving samples of 30 to 200 pupils (Sharples et al. 2015). These limitations have implications for the generalisability of the results: how confident can we be that a particular intervention will produce positive outcomes for every pupil in every setting, every time it is used?
The body of research on TAs delivering interventions may be small, but it is growing. Results from randomised controlled trials (RCTs) funded by the Education Endowment Foundation in the UK are among the most recent research in this area, and emerging findings from evaluations are consistent with the international picture. RCTs allow researchers to compare results from a group of pupils who received support from TAs trained in an intervention programme with results from control groups who did not receive the programme (but may at a later date) or who received an alternative form of support. Results from well-designed RCTs do not guarantee success when applied to your own setting, but they can improve our confidence when making decisions about using a particular intervention and the conditions under which it works best.
It is easy to get seduced by the results of impact assessments. Evaluations of intervention programmes appearing to show, for example, 12 months of progress in reading in just three months of delivery are bound to appeal to school leaders. But there are important caveats to add regarding how the study or RCT was designed, and the way the intervention was delivered, which can affect the outcomes. These caveats are worth discussing in brief, as they can help us be more mindful when thinking about the implications of results from such studies, as well as improve decision-making about which programmes to buy and use.
Firstly, impacts on pupil progress tend to be measured only in relation to the intervention programme itself. Most ‘off the shelf’ intervention packages come with a tool for taking baseline and progress measures, but these relate only to the content and coverage of the programme. Furthermore, effects cannot be extrapolated with complete reliability because of the restricted conditions under which they are measured. Results from a specific programme delivered to pupils in a specific year group (possibly in a specific school in a specific area) tell us little about how effective it is outside these parameters. In other words, the intervention might be successful for some pupils, but not others.
Secondly, only a few studies of curriculum interventions, including some RCTs, separate the effects of TA support from the intervention itself. So we cannot always be sure how much pupil progress is down to the programme and how much to TA support. Thirdly, many of these studies fail to ask whether the impact would have been greater if the programme had been delivered by a teacher, rather than a TA. Indeed, there are studies that show experienced and specifically trained teachers get better results than TAs when delivering the same programme (Higgins et al. 2013; Slavin et al. 2009).
Finally, there is the effect of what is called ‘fidelity to the programme’. This describes how faithful the delivery of the programme is to the protocols and instructions that come with it. Interventions will have been tested and refined before being made available to schools; this is especially the case for commercial programmes. Careful in-house testing and evaluations by independent assessors will have been conducted on the basis that the intervention was delivered as its creators intended. For example, an intervention might state it should be delivered to groups of three pupils, three times a week, for 20 minutes. So if schools want to achieve similar results to those reported in tests and evaluations, it is essential they deliver the programme in exactly the same way, and do not tinker with these essential factors by, for example, delivering it to groups of six pupils, twice a week, for 40 minutes. If changes are made to any part of the programme, the programme itself changes, and the chances of success diminish.
While these factors can water down the effects of a programme, using properly trained TAs to deliver curriculum interventions generally has a positive effect on learning outcomes. There are some additional points the research on interventions raises about the use of such programmes, which we address in Chapter 4.
It is important to bear in mind that completing the delivery of an intervention can take anywhere from several weeks to several terms. It is also worth noting that research has yet to shed light on how immediate improvements via interventions translate into long-term learning and performance on national tests. This is particularly relevant given that pupils’ learning in interventions is not routinely connected to the wider curriculum and learning in the classroom, as we shall see. What is more, studies of interventions are restricted in being able to tell us anything about the ...