Part 1
Evaluation: Nature, politics and tensions
Evaluation is not a universal entity: conceptions of evaluation are socially constructed and thus shaped by a range of economic, political and cultural influences. This part of the book seeks to promote a critical understanding of evaluation through an exploration of its dynamic and evolving nature. The dominant discourse of evaluation is examined and challenged in relation to its use in supporting those engaged in evaluating youth and community work. Whilst much of the content of this part of the book may well be new to many youth and community workers, developing a critical understanding of evaluation is essential to making the case for a more diverse approach to evaluating youth and community work practice.
Chapter 1 ('What is evaluation?') focuses on the evolving nature of evaluation and provides underpinning knowledge of the origins, purpose and functions of evaluation. It charts the developments in evaluation thinking over the past 50 years, drawing on a broad range of research (in the UK and internationally), and acknowledges that these developments are influenced by cultural and social forces. The 'Theory of Change' approach is introduced and examined. The chapter concludes with a critical appraisal of evaluation underpinned by the experimental paradigm, which is now dominant in many contexts.
Evaluation is inherently political and thus Chapter 2 ('The politics of evaluation') focuses on the impact of neo-liberalism and managerialism on the processes and practices of evaluation in youth and community work. The chapter critically examines the concepts of impact, impact measurement and shared measurement, before examining the 'politics of evidence' through a questioning of what counts as good or credible evidence. Three sets of standards (the 'What Works' Standards, the Nesta Standards and the Bond Principles) are critically reviewed in relation to their applicability in the context of youth and community work evaluation.
The final chapter in this part of the book, Chapter 3 ('Practitioners' tensions and dilemmas'), draws on empirical research with youth workers in England. It seeks to illuminate the challenges they experienced as a consequence of using an accountability-focused evaluation. In particular, this chapter examines the value dilemmas they encountered, and the impact they felt the process had on their daily practice.
1
What is Evaluation?
This chapter begins by unpacking 'evaluation' in order to consider some of the associated terms such as merit and worth, quality, value and significance. These are explored as a means of establishing a critical awareness of their subjective nature. Paradigmatic approaches influence the way in which we conceive of and practise evaluation; two approaches (positivism and interpretivism) are explored. The dynamic and evolving nature of evaluation in professional practice is then examined in detail. 'Theory of Change' as a basis for the evaluation of social programmes has grown in popularity over the past decade. Its origins and developments are examined and its use in relation to youth and community work is explored. Evaluation can be experienced in a variety of ways depending on how it is conceptualised and enacted, and the chapter concludes by offering a critical overview of these responses.
Unpacking 'evaluation'
At a basic level, evaluation can be understood as an everyday activity; we constantly evaluate our experiences, make judgements about the 'value' of the products we use, the relationships we form and the usefulness of our actions. We do this consciously and, more often, subconsciously. Our judgements are subjective, influenced by our values, expectations and our lived experience. However, evaluation, as a professional activity or discipline, is more than this in that while it involves these everyday judgements about value and worth, it entails a much more systematic and rigorous approach. Evaluation in professional contexts is defined in the Encyclopaedia of Evaluation as an applied inquiry process for generating and analysing evidence in order to draw conclusions about the value, merit, worth, significance or quality of a programme, product, policy or plan (Fournier cited in Mathison 2005). These terms, 'value', 'merit', 'worth', 'significance' and 'quality', warrant some discussion.
Lincoln and Guba (1986, cited in Mark, Greene and Shaw 2006) make the distinction between merit and worth based on context. They define 'merit' as the intrinsic, context-free qualities of the subject being evaluated. 'Worth' is defined as the context-determined value of the subject being evaluated; in other words, the value of a particular intervention in a particular setting. 'Quality', on the other hand, is an elusive concept (Harvey and Green 1993). Dahlberg, Moss and Pence (2007: 4) problematise the objectification of quality; they argue that 'the concept itself has achieved such dominance that it is hardly questioned. For the most part it is taken for granted that there is something – objective, real, knowable – called quality'. Bush and Phillips (1996) support the view that quality needs to be recognised as a social construction, as the way in which quality is understood will vary across the world according to the particular stakeholder, their socioeconomic status and culture. They view quality as both a dynamic and a relative concept that is subject to change as a variety of factors evolve.
Kelemen (2003) offers two opposing perspectives on quality: the managerial perspective and the critical perspective. The managerial perspective views quality as a technical, operational achievement, seeing it as a self-contained entity that can be planned and controlled with technical and managerial knowledge. This perspective assumes quality can be assessed in a neutral, value-free way through an objective lens. In contrast, the critical perspective views quality as a political, cultural and social concept. This perspective regards quality as a complex and contested social and political phenomenon, which acquires its meaning through processes of communication in which organisational and societal power play a substantial role. The critical perspective argues that quality cannot be studied in a neutral, value-free way through an objective lens.
Conceptions of 'value' and 'significance' are equally elusive, and raise questions as to why greater value or significance is given to one form of outcome as opposed to another. It is necessary to recognise that judgements of value will be informed by how we understand the concept of 'value' and to accept that this understanding is contingent and variable. In other words, judgements of 'quality' will be based on subjective conceptualisations of what 'good' practice is and what we believe constitutes 'success' (Mark et al. 2006). What this means, then, is that different people will have different ways of conceptualising good practice; for example, a funder, a senior officer, a practitioner or a service user may all hold different ideas about what quality means.
Weiss (1998: 4) offers a definition of evaluation that takes us forward as it introduces the idea of a set of standards to inform our judgement-making. She states:
Evaluation is the systematic assessment of the operation and/or the outcomes of a programme or policy, compared to a set of explicit or implicit standards as a means of contributing to the improvement of the policy or programme.
(Original emphasis)
It is important to note that in this definition, the standards against which judgements are made can be explicit or implicit. In other words, we must recognise that in the absence of an explicit set of standards, our judgement-making will still be informed by a range of 'markers'. Our 'implicit' standards will be informed by our subjective understanding of 'value' and will be available only to us, though not always consciously. By trying to make these 'markers' explicit through dialogue, we can at least surface some of the subjectivities. This is not an easy process, as articulating our subjective understanding to others is challenging. People are generally able to recognise quality when they see it, but will struggle to actually define it (Stephenson 2003 cited in McMillan and Parker 2005: 153). However, having an explicit set of standards does not fully address the issue either, as these will be subject to interpretation and are likely to be interpreted differently by different people as they draw on their own set of contending influences informed by cultural and personal circumstances (Orr 2008).
What is evaluation for?
There is no single definitive purpose for evaluation, but there are some commonalities. Chelimsky (1997) proposes that evaluation has three purposes:
- accountability, which responds to the demands of funders and stakeholders to meet contractual agreements;
- programme development, which focuses on improving the quality of the programme;
- generating knowledge, which aims to develop understanding about what forms of practice are successful.
Everitt and Hardiker (1996) assert that the purpose of evaluation in the context of social welfare should be to promote âgoodâ practice and firmly situate evaluation in democratic processes. For them, the twin purposes of evaluation involve the generation of evidence about an activity, policy, programme or project and the process of making judgements about its value (Everitt and Hardiker 1996).
Kushner (2000) argues for a different purpose: he makes the connection between evaluation and social justice. For him, evaluation can be a form of political action, as it can be used to expose political and intellectual authority. 'Evaluation as political action does not confront society's projects, but it does confront the political infrastructure within which they are designed and operated' (Kushner 2000: 38). Stake (2004) recognises the accountability function of evaluation, but offers a typology of six other functions which he sees as dominant in the context of educational and social programmes:
- assessing goal attainment;
- aiding organisational development;
- assessing contextual quality;
- studying policy aiding social action;
- legitimating the program; and
- deflecting criticism. (Stake 2004: 32)
Taking a critical perspective, there are those who suggest that evaluation can be a technology of control. For example, Trinder's (2000) feminist critique argues that evaluation processes in the context of evidence-based practice can be seen as a covert method of rationing resources and constraining professional autonomy. Davies (2003) supports this view, arguing that evaluation can be regarded both as a product of managerialism and a means of implementing managerialist agendas. Both of these examples relate to forms of evaluation which privilege the accountability function, and Issitt and Spence (2005) argue that when evaluation loses its ability to support programme development or generate knowledge, it has a detrimental effect on practice. Dahlberg et al. (2007) argue that evaluation can take on a policing function, positioning it as an integral part of a control system. It is clear, then, that while the purpose of any particular evaluation will shape the evaluation approach, it is also important to critically examine that purpose by raising fundamental questions: What is it we want to know? Why do we want to know this? Who has defined 'quality' and 'value'?
Different paradigmatic approaches
Evaluation is a process of inquiry: it seeks to generate knowledge, and thus evaluation is shaped by the way in which knowledge and knowledge creation are understood. These understandings form our inquiry 'paradigm', that is, the basic set of beliefs that guide our actions. Denzin and Lincoln (2005) provide a thorough account of the various paradigms; here we will briefly consider two: positivism and interpretivism.
Positivism is based on the belief in the ability of scientific knowledge to solve major practical problems (Carr and Kemmis 1986) and is underpinned by an assumption that scientific knowledge is both accurate and certain (Crotty 1998). The positivist paradigm views knowledge as objective and value-free, and therefore generalisable and replicable (Wellington 2000). Evaluation approaches shaped by a positivist paradigm aim to provide explanation and often, although not always, will utilise quantitative methods. Evaluation informed by positivism is generally termed 'experimental' or 'quasi-experimental'. Positivism has its critics; for example, some consider it inappropriate for researching complex social issues (Pring 2000; Whitehead and McNiff 2006), while others, for example Siraj-Blatchford (1994) and O'Donoghue (2007), highlight the lack of recognition and consideration of values as the main problem.
Interpretivism emerged as a reaction to positivism. The interpretive paradigm aims to understand rather than to explain the meaning behind something. It seeks to explore multiple pe...