Participatory Evaluation in Youth and Community Work

Theory and Practice

About This Book

Evaluation is an essential element of professional practice. However, there is little in the literature that is designed to help students involve and support young people in evaluating the impact of youth work activities. This comprehensive book explores current thinking about evaluation in the context of youth work and community work and offers both theoretical understanding and practical guidance for students, practitioners, organisational leaders and commissioners.

Part 1 provides underpinning knowledge of the origins, purpose and functions of evaluation. It charts the developments in evaluation thinking over the past 50 years, and includes an exploration of 'theory of change'. Concepts such as impact, impact measurement and shared measurement are critically examined to illustrate the political nature of evaluation. Findings from empirical research are used to illuminate the challenges of applying a quasi-experimental paradigm of evaluation to youth and community work.

Part 2 introduces the reader to participatory evaluation and presents an overview of its histories, rationale and underpinning principles. Empowerment evaluation, collaborative evaluation and democratic evaluation are examined in detail, including practice examples. Transformative Evaluation, an approach specifically designed for youth and community work, is presented.

Part 3 focuses on the 'doing' of participatory evaluation and offers guidance to those new to participatory evaluation in youth and community work, and a helpful check for those already engaged in it. It provides valuable information on planning, methods, data and data analysis, and processes for sharing knowledge.

This essential text will enable the reader to reconstruct evaluation as a tool for learning as well as a tool for judging value. It provides a comprehensive reference, drawing on a wide range of literature and practice examples to support those involved in youth and community work to develop and implement participatory approaches to evaluating and communicating the meaning and value of youth and community work to a wider audience.


Information

Author: Susan Cooper
Publisher: Routledge
Year: 2017
ISBN: 9781317291428
Edition: 1
Length: 178 pages
Language: English
Format: ePub

Part 1
Evaluation: Nature, politics and tensions

Evaluation is not a universal entity: conceptions of evaluation are socially constructed and thus shaped by a range of economic, political and cultural influences. This part of the book seeks to promote a critical understanding of evaluation through an exploration of its dynamic and evolving nature. The dominant discourse of evaluation is examined and challenged in relation to its use in supporting those engaged in evaluating youth and community work. Whilst much of the content of this part of the book may well be new to many youth and community workers, developing a critical understanding of evaluation is essential to making the case for a more diverse approach to evaluating youth and community work practice.
Chapter 1 (‘What is evaluation?’) focuses on the evolving nature of evaluation and provides underpinning knowledge of the origins, purpose and functions of evaluation. It charts the developments in evaluation thinking over the past 50 years, drawing on a broad range of research (in the UK and internationally), and acknowledges that these developments are influenced by cultural and social forces. The ‘Theory of Change’ approach is introduced and examined. The chapter concludes with a critical appraisal of evaluation underpinned by the experimental paradigm, which is now dominant in many contexts.
Evaluation is inherently political and thus Chapter 2 (‘The politics of evaluation’) focuses on the impact of neo-liberalism and managerialism on the processes and practices of evaluation in youth and community work. The chapter critically examines the concepts of impact, impact measurement and shared measurement, before examining the ‘politics of evidence’ through a questioning of what counts as good or credible evidence. Three sets of standards (the ‘What Works’ Standards, the Nesta Standards and the Bond Principles) are critically reviewed in relation to their applicability in the context of youth and community work evaluation.
The final chapter in this part of the book, Chapter 3 (‘Practitioners’ tensions and dilemmas’), draws on empirical research with youth workers in England. It seeks to illuminate the challenges they experienced as a consequence of using an accountability-focused approach to evaluation. In particular, this chapter examines the value dilemmas they encountered, and the impact they felt the process had on their daily practice.

1
What is Evaluation?

This chapter begins by unpacking ‘evaluation’ in order to consider some of the associated terms such as merit and worth, quality, value and significance. These are explored as a means of establishing a critical awareness of their subjective nature. Paradigmatic approaches influence the way in which we conceive of and practise evaluation; two approaches (positivism and interpretivism) are explored. The dynamic and evolving nature of evaluation in professional practice is then examined in detail. ‘Theory of Change’ as a basis for evaluation of social programmes has grown in popularity over the past decade. Its origins and developments are examined and its use in relation to youth and community work is explored. Evaluation can be experienced in a variety of ways depending on how it is conceptualised and enacted, and the chapter concludes by offering a critical overview of these responses.

Unpacking ‘evaluation’

At a basic level, evaluation can be understood as an everyday activity; we constantly evaluate our experiences, make judgements about the ‘value’ of the products we use, the relationships we form and the usefulness of our actions. We do this consciously and, more often, subconsciously. Our judgements are subjective, influenced by our values, expectations and our lived experience. However, evaluation as a professional activity or discipline is more than this: while it involves these everyday judgements about value and worth, it entails a much more systematic and rigorous approach. Evaluation in professional contexts is defined in the Encyclopaedia of Evaluation as an applied inquiry process for generating and analysing evidence in order to draw conclusions about the value, merit, worth, significance or quality of a programme, product, policy or plan (Fournier cited in Mathison 2005). These terms (‘value’, ‘merit’, ‘worth’, ‘significance’ and ‘quality’) warrant some discussion.
Lincoln and Guba (1986 cited in Mark, Greene and Shaw 2006) make the distinction between merit and worth based on context. They define ‘merit’ as the intrinsic, context-free qualities of the subject being evaluated. ‘Worth’ is defined as the context-determined value of the subject being evaluated, in other words, the value of a particular intervention in a particular setting. ‘Quality’, on the other hand, is an elusive concept (Harvey and Green 1993). Dahlberg, Moss and Pence (2007: 4) problematise the objectification of quality; they argue that ‘the concept itself has achieved such dominance that it is hardly questioned. For most part it is taken for granted that there is something – objective, real, knowable – called quality’. Bush and Phillips (1996) support the view that quality needs to be recognised as a social construction as the way in which quality is understood will vary across the world according to the particular stakeholder, their socioeconomic status and culture. They view quality as both a dynamic and relative concept that is subject to change as a variety of factors evolve.
Kelemen (2003) offers two opposing perspectives of quality; the managerial perspective and the critical perspective. The managerial perspective views quality as a technical, operational achievement, seeing it as a self-contained entity that can be planned and controlled with technical and managerial knowledge. This perspective assumes quality can be assessed in a neutral, value-free way through an objective lens. In contrast, the critical perspective views quality as a political, cultural and social concept. This perspective regards quality as a complex and contested social and political phenomenon, which acquires its meaning through processes of communication in which organisational and societal power play a substantial role. The critical perspective argues that quality cannot be studied in a neutral, value-free way through an objective lens.
Conceptions of ‘value’ and ‘significance’ are equally evasive, and raise questions as to why greater value or significance is given to one form of outcome as opposed to another. It is necessary to recognise that judgements of value will be informed by how we understand the concept of ‘value’ and to accept that understanding is contingent and variable. In other words, judgements of ‘quality’ will be based on subjective conceptualisations of what ‘good’ practice is and what we believe constitutes ‘success’ (Mark et al. 2006). What this means then is that different people will have different ways of conceptualising good practice, for example a funder, a senior officer, a practitioner or a service user may all hold different ideas about what quality means.
Weiss (1998: 4) offers a definition of evaluation that takes us forward, as it introduces the idea of a set of standards to inform our judgement-making. She states:
Evaluation is the systematic assessment of the operation and/or the outcomes of a programme or policy, compared to a set of explicit or implicit standards as a means of contributing to the improvement of the policy or programme.
(Original emphasis)
It is important to note that in this definition, the standards against which judgements are made can be explicit or implicit. In other words, we must recognise that in the absence of an explicit set of standards, our judgement-making will still be informed by a range of ‘markers’. These ‘implicit’ standards will be informed by our subjective understanding of ‘value’ and are available only to us, though not always consciously. By trying to make these ‘markers’ explicit through dialogue, we can at least surface some of the subjectivities. This is not an easy process, as articulating our subjective understanding to others is challenging. People are generally able to recognise quality when they see it, but will struggle to actually define it (Stephenson 2003 cited in McMillan and Parker 2005: 153). However, having an explicit set of standards does not fully address the issue either, as these will be subject to interpretation and are likely to be interpreted differently by different people as they draw on their own set of contending influences that are informed by cultural and personal circumstances (Orr 2008).

What is evaluation for?

There is not a single definitive purpose for evaluation, but there are some commonalities. Chelimsky (1997) proposes that evaluation has three purposes:
  • accountability, which responds to the demands of funders and stakeholders to meet contractual agreements;
  • programme development, which focuses on improving the quality of the programme;
  • generating knowledge, which aims to develop understanding about what forms of practice are successful.
Everitt and Hardiker (1996) assert that the purpose of evaluation in the context of social welfare should be to promote ‘good’ practice and firmly situate evaluation in democratic processes. For them, the twin purposes of evaluation involve the generation of evidence about an activity, policy, programme or project and the process of making judgements about its value (Everitt and Hardiker 1996).
Kushner (2000) argues for a different purpose: he makes the connection between evaluation and social justice. For him, evaluation can be a form of political action as it can be used to expose political and intellectual authority. ‘Evaluation as political action does not confront society’s projects, but it does confront the political infrastructure within which they are designed and operated’ (Kushner 2000: 38). Stake (2004) recognises the accountability function of evaluation, but offers a typology of six other functions which he sees as dominant in the context of educational and social programmes:
  • assessing goal attainment;
  • aiding organisational development;
  • assessing contextual quality;
  • studying policy aiding social action;
  • legitimating the program; and
  • deflecting criticism. (Stake 2004: 32)
Taking a critical perspective, there are those who suggest that evaluation can be a technology of control. For example, Trinder’s (2000) feminist critique argues that evaluation processes in the context of evidence-based practice can be seen as a covert method of rationing resources and constraining professional autonomy. Davies (2003) supports this view, arguing that evaluation can be regarded both as a product of managerialism and a means of implementing managerialist agendas. Both of these examples relate to forms of evaluation which privilege the accountability function; Issitt and Spence (2005) argue that when evaluation loses its ability to support programme development or generate knowledge, it has a detrimental effect on practice. Dahlberg et al. (2007) argue that evaluation can take on a policing function, positioning it as an integral part of a control system. It is clear, then, that while the purpose of any particular evaluation will shape the evaluation approach, it is also important to critically examine the purpose by raising fundamental questions: What is it we want to know? Why do we want to know this? Who has defined ‘quality’ and ‘value’?

Different paradigmatic approaches

Evaluation is a process of inquiry; it seeks to generate knowledge, and thus evaluation is shaped by the way in which knowledge and knowledge creation are understood. These understandings form our inquiry ‘paradigm’, that is, the basic set of beliefs that guide our actions. Denzin and Lincoln (2005) provide a thorough account of the various paradigms; here we will briefly consider two: positivism and interpretivism.
Positivism is based on the belief in the ability of scientific knowledge to solve major practical problems (Carr and Kemmis 1986) and is underpinned by an assumption that scientific knowledge is both accurate and certain (Crotty 1998). The positivist paradigm views knowledge as objective and value-free, and therefore generalisable and replicable (Wellington 2000). Evaluation approaches shaped by a positivist paradigm aim to provide explanation and often, although not always, will utilise quantitative methods. Evaluation informed by positivism is generally termed ‘experimental’ or ‘quasi-experimental’. Positivism has its critics; for example, some consider it inappropriate for researching complex social issues (Pring 2000; Whitehead and McNiff 2006), while others, such as Siraj-Blatchford (1994) and O’Donoghue (2007), highlight the lack of recognition and consideration of values as the main problem.
Interpretivism emerged as a reaction to positivism. The interpretive paradigm aims to understand rather than to explain the meaning behind something. It seeks to explore multiple pe...

Table of contents

  1. Cover
  2. Title
  3. Copyright
  4. CONTENTS
  5. List of figures
  6. List of tables
  7. Acknowledgements
  8. Introduction
  9. PART 1 Evaluation: Nature, politics and tensions
  10. PART 2 Participatory evaluation
  11. PART 3 Participatory evaluation in practice
  12. Conclusion
  13. Index