Failure-Free Education?

The Past, Present and Future of School Effectiveness and School Improvement

eBook - ePub

  1. 256 pages
  2. English
  3. ePUB (mobile friendly)

About This Book

David Reynolds is recognised internationally as one of the leaders of the school effectiveness and school improvement movement, and Failure-Free Education? brings together for the first time many of his most influential and provocative pieces. Drawing on the author's work over more than three decades, these extracts from his seminal books, chapters, papers and articles combine to give a unique overview of how the movement developed, the problems involved in applying its knowledge, and the discipline's potentially glittering future.

The book also covers the issues raised by, and lessons learned from, his close involvement with English government educational policymaking from the mid 1990s to date.

This book is essential reading for those who seek to understand how we can make every school a good school, and what the obstacles may be to achieving that goal.

Information

Publisher: Routledge
Year: 2010
ISBN: 9781134208845
Edition: 1

1
Introduction

School effectiveness and school improvement in retrospect, 1971–2010, and prospect, 2010 onwards

Introduction

No discipline in the history of modern educational research has risen, and yet subsequently fallen from view, with greater speed than School Effectiveness and School Improvement (SESI). From a position of total marginality in the 1970s and 1980s in the educational research communities of only a couple of countries, such as the UK and USA, SESI came in the 1990s to be a worldwide phenomenon, acquiring a huge intellectual reach and close links with the policy-making and, to an extent, practitioner communities, particularly in the UK. Put simply, there were perhaps half a dozen British research articles on SESI-related topics in the mid 1970s, and maybe a dozen from the United States. By the time we came to review the world literature for The International Handbook of School Effectiveness Research (Teddlie and Reynolds, 2000), there were over 2,000 references. Now, at the time of writing in 2009, there are probably 3,000 or more, from 30 or more countries, in a still rapidly growing field.
But the rapid ascendancy of the discipline, its policy impact in some countries and the publicity that the work generated set up deep oppositional forces that conspired to make the discipline insecure, professionally doubted and frequently criticised. As fast as doors had opened to it in the 1990s, they closed to it in the 2000s. Understanding why something so apparently useful in its endeavours – a simple desire to find out what makes a ‘good’ school and how to make all schools ‘good’, so that all schools could be helped – evoked such hostility is the first task of this Introduction.

The medical foundations of the discipline

It is not generally acknowledged nowadays, but SESI in the UK had its origins in medical research. The very first findings on school differences came from a team headed by Power (1967, 1972), who were actually working in a Medical Research Council Social Medicine Unit. My own early work came at a time when I was a member of the scientific staff of the Medical Research Council Epidemiology Unit in Cardiff (Reynolds and Murgatroyd, 1974), then headed by Archie Cochrane, who was subsequently lionised for his contribution to evaluating medical practice. Indeed, he had tried to apply the same medical research technique – the randomised controlled trial – to education, in the world’s first randomised trial of the effects of corporal punishment upon pupils’ misbehaviour, showing a substantial negative effect! I had joined the Unit in 1971, and my most substantial publication from this time is reproduced as Chapter 2 of this collection.
Work also came from a child psychiatrist, Dennis Gath (1972), who looked at variations in child guidance rates between schools, and of course from Michael Rutter, the child psychiatrist who co-authored the famous ‘Rutter Report’, Fifteen Thousand Hours (Rutter et al., 1979). Peter Mortimore, who made a major contribution with his ‘Rutter for the primary sector’ in the book School Matters (Mortimore et al., 1988), was a member of the Rutter team. Louise Stoll and Pam Sammons, who both worked with Peter Mortimore, were themselves to make major contributions to the fields of school improvement and school effectiveness respectively.
This medical influence was compounded by the fact that the Association for Child Psychology and Psychiatry funded, for over a decade, the termly meetings of the School Differences Research Group, which met in London from the mid 1970s and had a mailing list of up to 50 or so SESI researchers, its reach spreading increasingly to researchers outside the UK through the 1980s.

The criticisms

The medical background was responsible for a number of the characteristics of the discipline, which proved to be both its strength and its weakness. The strong quantitative, positivistic orientation endeared it to politicians and policy-makers, who appreciated the apparent certainties rather than the multiple perspectives beloved of some. But this orientation had been rendered unfashionable by the rise, from the early 1970s, of interpretive, naturalistic and qualitative perspectives that focused more on school processes than on school organisation, and on the ‘culture’ of schools more than on their structure.
The core value beliefs of SESI – that more children gaining more conventionally defined academic achievement was a ‘good’ thing that would be associated with societal progress – did not appeal to those who wished to ‘problematise’ school outcomes, and who argued that other, less conventional outcomes were also important for the system to aim at.
The close association between SESI and ‘New Labour’ (as it was subsequently to be described) from the mid 1990s until the early 2000s may have been something that many of us within SESI were proud of, since it perhaps meant that more pupils would do better, but the link generated suspicion amongst many academics in other areas of educational research. It is possible that these close links generated jealousy too. Frequent criticisms were also made that SESI was managerialist in its orientation, allied to a ‘technocratic’ paradigm and ‘singing the policy-makers’ tune’.
Nevertheless, SESI people did sit at the same table as government in a manner that horrified many – Michael Barber headed the Standards and Effectiveness Unit, and was followed by David Hopkins. I chaired the Numeracy Task Force, was on the Literacy Task Force, sat on the boards of government agencies and was a part-time adviser to the DfES (as it was then called). This closeness between the leaders of an academic discipline and educational policy-making was unusual, and was argued by some to restrict the conventional role of the academic: to critically evaluate ideas from wherever they come, including from government.
There was also a further criticism often made of SESI: that the discipline was inherently a conservative one, inasmuch as celebrating and publicising the schools that did relatively well within the existing range of variation was a conservative act, one that did not explore the possibility of alternative, non-current provision being useful or ‘excellent’. There was, with SESI, no possibilitarianism, as it were.
A final factor predisposing to academic criticism was that SESI ‘cut against the grain’, as it were, of the emotional, professional and political characteristics of worldwide educational research communities, particularly that of the UK. Alexander (1996) expressed this view beautifully when he, with rancid intellectual snobbery, talked of some of us in SESI as ‘the Essex men and women’ of educational research. Alexander’s (2000) later tirade against SESI was made in a book which became the AERA Outstanding Book of the Year, suggesting that rancid intellectual snobbery might have become worryingly widespread across the planet!
Added to this was the deep academic distrust of the ‘applied’ in educational research, and the British historical elevation of the pure or ‘blue skies’ approach: SESI was – well – just not British in many academic eyes! That not one single scholar from the SESI community, the educational administration community or the educational management discipline has ever been on any of the Research Assessment Exercise panels since the inception of the assessment process in 1992 – in marked contrast to the positive, inclusive treatment of the largely defunct, irrelevant and small-scale History of Education community – tells its own sad tale of British academic snobbery.

The SESI community and its characteristics

The reaction to SESI from others did not really worry the SESI community. We were convinced that our commitments on intellectual and policy matters were helpful to society, teachers and children, and that our close influence on policy was nothing to be ashamed of. Indeed, we were proud of it in the 1990s.
First, we believed that school effectiveness research had convincingly helped to destroy the belief that schools could do nothing to change the society around them, and also helped to destroy the myth that the influence of family background upon children’s development was so strong that they were unable to be affected by their schools. In the 1960s there had been a widespread belief that ‘schools make no difference’ (Bernstein, 1968), which reflected the results of American research (e.g. Coleman et al., 1966; Jencks et al., 1972), and the disappointed hopes that followed from the perceived failure of systemic reform, enhanced expenditure and the other policies of social engineering that constituted the liberal dream of the 1960s (Reynolds and Sullivan, 1981). We believed we were helping to banish this.
The second positive effect of SESI, we believed, was that, in addition to destroying assumptions of the impotence of education, SESI took school and pupil outcomes as its defining variables, from which it ‘back-mapped’ to identify the processes that appeared to be related to positive outcomes.
SESI – and this is very different from many educational research specialities – did not celebrate new policies because they were new or because practitioners liked them, or opposed new policies because they potentially damaged the interests of educational producers. For SESI, the ‘touchstone criteria’ to be applied to all educational matters concerned whether children learned more or less because of the policy or practice. Fads, fallacies and policy and practice fantasies largely passed SESI by because we tried to form our views of the educational world on a scientific, rigorous basis.
Third, SESI showed teachers to be important determinants of children’s educational and social attainments, and by emphasising this we believed we had helped to build professional self-esteem. It was always unclear why teachers accepted responsibility for their individual impact upon individual children and individual classes, but would not accept the importance of their impact, as groups of teachers, upon groups of children in schools.
Fourth, and this was the last of our SESI positive contributions, we began the creation of a ‘known to be valid’ knowledge base which we believed could act as a foundation for training (see the early reviews in Gray, 1990; Mortimore, 1991; Reynolds and Cuttance, 1992; Rutter, 1983; and Scheerens, 1992). With knowledge of school and of teacher effectiveness, the latter of which had unfortunately to be imported from North America until recently because of the historic antipathy towards research in this area in the United Kingdom (see Creemers, 1994), we could avoid the necessity of the endless reinvention of the ‘teaching wheel’ and could move teachers to an advanced level conceptually and practically, or so we believed!
There was, though, one most unfortunate ‘downside’, or negative feature, associated with the popularity of school effectiveness research: we were instrumental in creating a quite widespread, popular view that schools did not just make a difference, they made all the difference. School effectiveness researchers in the UK usually actively sought public attention for their research papers and their books; journal editors and publishers were keen to oblige, for their own material reasons. The result was that school quality, the variation in that quality, and the remedies for that variation became more extensively discussed topics in the United Kingdom than in most other societies. Politicians of a right-wing persuasion were able to use the climate of opinion that SESI had partially created both to attack school standards generally and to propose the improvement of those standards by clearly non-rational methods, their argument being that the situation was so dire and perilous that urgent action was called for. Indeed, for a long time in the UK the performance measures used in the national performance tables implicitly attributed all variation between schools to the schools themselves, since no non-school background factors were measured.

The UK knowledge base

SESI’s rise in the UK was rapid, in terms of both the quantity and the quality of the work produced. Key studies in the 1980s involved:
• ‘value-added’ comparisons of educational authorities on their academic outcomes (Department of Education and Science, 1983, 1984; Gray et al., 1984; Gray and Jesson, 1987; Willms, 1987; Woodhouse and Goldstein, 1988);
• comparisons of ‘selective’ school systems with comprehensive or ‘all-ability’ systems (Gray et al., 1983; Reynolds et al., 1987; Steedman, 1980, 1983);
• work into the scientific properties of school effects, such as their size (Gray, 1981, 1982; Gray et al., 1986), the differential effectiveness of different academic sub-units or departments (Fitz-Gibbon, 1985; Fitz-Gibbon et al., 1989; Willms and Cuttance, 1985), contextual or ‘balance’ effects (Willms, 1985, 1986, 1987) and the differential effectiveness of schools upon pupils of different background characteristics (Aitkin and Longford, 1986; Nuttall et al., 1989).
Towards the end of the 1980s, two landmark studies appeared concerning school effectiveness in primary schools (Mortimore et al., 1988) and in secondary schools (Smith and Tomlinson, 1989). The Mortimore study was notable for the very wide range of outcomes on which schools were assessed (including mathematics, reading, writing, attendance, behaviour and attitudes to school), for the collection of a wide range of data upon school processes and, for the first time in British school effectiveness research, a focus upon teaching and classroom processes.
The Smith and Tomlinson (1989) study was notable for the large differences shown in academic effectiveness between schools and, for certain groups of pupils, for the substantial variation in examination results between similar individuals in different subjects, reflecting the influence of different school departments – out of 18 schools, the school that was positioned ‘first’ on value-added mathematics attainment, for example, was ‘fifteenth’ in English achievement (after allowance had been made for intake quality).
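To make concrete what ‘value-added after allowance for intake quality’ means here, a minimal illustrative sketch of the underlying idea (not the precise specification fitted by Smith and Tomlinson, whose analysis used fuller multilevel models and pupil background measures) is the two-level model

y_{ij} = \beta_0 + \beta_1 x_{ij} + u_j + e_{ij}

where y_{ij} is the examination outcome of pupil i in school j, x_{ij} is that pupil’s measured attainment at intake, u_j is the school-level residual and e_{ij} is the pupil-level residual. The estimated school residual \hat{u}_j – the gap between a school’s actual results and those predicted from its intake – is the ‘value added’, and because it can be estimated separately for each subject, a school can stand ‘first’ in mathematics yet ‘fifteenth’ in English.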
From 1990 onwards, work in the United Kingdom was even more productive, notably in the areas of:
• stability over time of the effects, positive or negative, of schools (Goldstein et al., 1993; Gray et al., 1995);
• consistency of the effects of schools upon different outcomes – for example, in terms of different subjects or different outcome domains such as cognitive/affective (Goldstein et al., 1993; Sammons et al., 1993);
• differential effects of schools for different groups of students (for example, of different ethnic or socio-economic backgrounds or with different levels of prior attainment) (Jesson and Gray, 1991; Goldstein et al., 1993; Sammons et al., 1993);
• the relative continuity of the effects of different school sectors over time (Goldstein, 1995; Sammons et al., 1995);
• the existence or size of school effects (Daly, 1991; Gray et al., 1990), where there were strong suggestions that primary school effects were greater than those of secondary schools (Sammons et al., 1993, 1995);
• departmental differences in educational effectiveness (Fitz-Gibbon, 1991, 1992; Sammons et al., 1997).
This, then, was the British knowledge base by the mid 1990s. Overall, it had four positive features:
1 High levels of methodological sophistication, in which the use of a cohort design, matched data on individuals at intake and outcome, and multiple-level methodologies was widely agreed to be axiomatic. The UK was also at the forefront of the development of multilevel sta...

Table of contents

  1. CONTEXTS OF LEARNING
  2. Contents
  3. Figures and tables
  4. Acknowledgements
  5. 1 Introduction
  6. 2 The delinquent school
  7. 3 The study and remediation of ineffective schools: Some further reflections
  8. 4 The truth, the whole-class truth
  9. 5 Creating world-class schools: What have we learned?
  10. 6 Teacher effectiveness: Better teachers, better schools
  11. 7 School effectiveness and teacher effectiveness in mathematics
  12. 8 The High Reliability Schools Project
  13. 9 The remit and methods of the Numeracy Task Force
  14. 10 School improvement for schools facing challenging circumstances
  15. 11 What leaders need to know about teacher effectiveness
  16. 12 Schools learning from their best
  17. 13 The future agenda for school effectiveness research
  18. 14 How can recent research in school effectiveness and school improvement inform our thinking about educational policies?
  19. 15 What do we want our educational system of the future to look like? What do we need to do to make it happen?
  20. References
  21. Index