Practical Program Evaluation

Theory-Driven Evaluation and the Integrated Evaluation Perspective

Huey T. (Tsyh) Chen

eBook - ePub

464 pages
English

About This Book

The Second Edition of Practical Program Evaluation shows readers how to systematically identify stakeholders' needs in order to select the evaluation options best suited to meet those needs. Within his discussion of the various evaluation types, Huey T. Chen details a range of evaluation approaches suitable for use across a program's life cycle. At the core of program evaluation is its body of concepts, theories, and methods. This revised edition provides an overview of these, and includes expanded coverage of both introductory and more cutting-edge techniques within six new chapters. Illustrated throughout with real-world examples that bring the material to life, the Second Edition provides many new tools to enrich the evaluator's toolbox.


Information

Year
2014
ISBN
9781483315652

Part I Introduction

The first three chapters of this book, which comprise Part I, provide general information about the theoretical foundations and applications of program evaluation principles. Basic ideas are introduced, and a conceptual framework is presented. The first chapter explains the purpose of the book and discusses the nature, characteristics, and strategies of program evaluation. In Chapter 2, program evaluators will find a systematic typology of the various evaluation approaches one can choose among when faced with particular evaluation needs. Chapter 3 introduces the concepts of logic models and program theory, which underlie many of the guidelines found throughout the book.

Chapter 1 Fundamentals of Program Evaluation

The programs that evaluators can expect to assess go by different names, such as treatment program, action program, or intervention program. These programs come from different substantive areas, such as health promotion and care, education, criminal justice, welfare, job training, community development, and poverty relief. Nevertheless, all are organized efforts to enhance human well-being, whether by preventing disease, reducing poverty, reducing crime, or teaching knowledge and skills. For convenience, programs and policies of any type are referred to in this book as “intervention programs” or simply “programs.” An intervention program aims to change individuals’ or groups’ knowledge, attitudes, or behaviors in a community or society. Sometimes an intervention program aims at changing the entire population of a community; this kind of program is called a population-based intervention program.

The Nature of Intervention Programs and Evaluation: A Systems View

The terminology of systems theory (see, e.g., Bertalanffy, 1968; Ryan & Bohman, 1998) provides a useful means of illustrating how an intervention program works as an open system, as well as how program evaluation serves the program. As an open system, an intervention program consists of five components (inputs, transformation, outputs, environment, and feedback), as illustrated in Figure 1.1.
Figure 1.1 A Systems View of a Program

Inputs.

Inputs are resources the program takes in from the environment. They may include funding, technology, equipment, facilities, personnel, and clients. Inputs form and sustain a program, but they cannot work effectively without systematic organization. Usually, a program requires an implementing organization that can secure and manage its inputs.

Transformation.

A program converts inputs into outputs through transformation. This process, which begins with the initial implementation of the treatment/intervention prescribed by a program, can be described as the stage during which implementers provide services to clients. For example, the implementation of a new curriculum in a school may mean the process of teachers teaching students new subject material in accordance with existing instructional rules and administrative guidelines. Transformation also includes those sequential events necessary to achieve desirable outputs. For example, to increase students’ math and reading scores, an education program may need to first boost students’ motivation to learn.

Outputs.

These are the results of transformation. One crucial output is the attainment of the program’s goals, which justifies the existence of the program. For example, an output of a treatment program for individuals who engage in spousal abuse is the cessation of that abuse.

Environment.

The environment consists of any factors that, despite lying outside a program’s boundaries, can nevertheless either foster or constrain that program’s implementation. Such factors may include social norms, political structures, the economy, funding agencies, interest groups, and concerned citizens. Because an intervention program is an open system, it depends on the environment for its inputs: clients, personnel, money, and so on. Furthermore, the continuation of a program often depends on how the general environment reacts to program outputs. Are the outputs valuable? Are they acceptable? For example, if the staff of a day care program is suspected of abusing children, the environment would find that output unacceptable. Parents would immediately remove their children from the program, law enforcement might press criminal charges, and the community might boycott the day care center. Finally, the effectiveness of an open system, such as an intervention program, is influenced by external factors such as cultural norms and economic, social, and political conditions. A contrasting system may be illustrative: In a biological system, the use of a medicine to cure an illness is unlikely to be directly influenced by external factors such as race, culture, social norms, or poverty.

Feedback.

So that decision makers can maintain success and correct any problems, an open system requires information about inputs and outputs, transformation, and the environment’s responses to these components. This feedback is the basis of program evaluation. Decision makers need information to gauge whether inputs are adequate and organized, interventions are implemented appropriately, target groups are being reached, and clients are receiving quality services. Feedback is also critical to evaluating whether outputs are in alignment with the program’s goals and are meeting the expectations of stakeholders. Stakeholders are people who have a vested interest in a program and are likely to be affected by evaluation results; they include funding agencies, decision makers, clients, program managers, and staff. Without feedback, a system is bound to deteriorate and eventually die. Insightful program evaluation helps to both sustain a program and prevent it from failing. The action of feedback within the system is indicated by the dotted lines in Figure 1.1.
To survive and thrive within an open system, a program must perform at least two major functions. First, internally, it must ensure the smooth transformation of inputs into desirable outcomes. For example, an education program would experience negative side effects if faced with disruptions like high staff turnover, excessive student absenteeism, or insufficient textbooks. Second, externally, a program must continuously interact with its environment in order to obtain the resources and support necessary for its survival. That same education program would become quite vulnerable if support from parents and school administrators disappeared.
Thus, because programs are subject to the influence of their environment, every program is an open system. The characteristics of an open system can also be identified in any given policy, which is a concept closely related to that of a program. Although policies may seem grander than programs—in terms of the envisioned magnitude of an intervention, the number of people affected, and the legislative process—the principles and issues this book addresses are relevant to both. Throughout the rest of the book, the word program may be understood to mean program or policy.
Based upon the above discussion, this book defines program evaluation as the process of systematically gathering empirical data and contextual information about an intervention program—specifically answers to what, who, how, whether, and why questions that will assist in assessing a program’s planning, implementation, and/or effectiveness. This definition suggests many potential questions for evaluators to ask during an evaluation: The “what” questions include those such as, what are the intervention, outcomes, and other major components? The “who” questions might be, who are the implementers and who are the target clients? The “how” questions might include, how is the program implemented? The “whether” questions might ask whether the program plan is sound, the implementation adequate, and the intervention effective. And the “why” questions could be, why does the program work or not work? One of the essential tasks for evaluators is to figure out which questions are important and interesting to stakeholders and which evaluation approaches are available for evaluators to use in answering the questions. These topics will be systematically discussed in Chapter 2. The purpose of program evaluation is to make the program accountable to its funding agencies, decision makers, or other stakeholders and to enable program management and implementers to improve the program’s delivery of acceptable outcomes.

Classic Evaluation Concepts, Theories, and Methodologies: Contributions and Beyond

Program evaluation is a young applied science; it began developing as a discipline only in the 1960s. Its basic concepts, theories, and methodologies have been developed by a number of pioneers (Alkin, 2013; Shadish, Cook, & Leviton, 1991). Their ideas, which are foundational knowledge for evaluators, guide the design and conduct of evaluations. These concepts are commonly introduced to readers in two ways. The conventional way is to introduce classic concepts, theories, and methodologies exactly as proposed by these pioneers. Most major evaluation textbooks use this popular approach.
This book, however, not only introduces these classic concepts, theories, and methodologies but also demonstrates how to use them as a foundation for formulating additional evaluation approaches. Readers can not only learn from evaluation pioneers’ contributions but also expand or extend their work, informed by lessons learned from experience or new developments in program evaluation. However, there is a potential drawback to taking this path. It requires discussing the strengths and limitations of the work of the field’s pioneers. Such critiques may be regarded as intended to diminish or discredit this earlier work. It is important to note that the author has greatly benefited from the classic works in the field’s literature and is very grateful for the contributions of those who developed program evaluation as a discipline. Moreover, the author believes that these pioneers would be delighted to see future evaluators follow in their footsteps and use their accomplishments as a basis for exploring new territory. In fact, the seminal authors in the field would be very upset if they saw future evaluators still working with the same ideas, without making progress. It is in this spirit that the author critiques the literature of the field, hoping to inspire future evaluators to further advance program evaluation.
Indeed, the extension or expansion of understanding is essential for advancing program evaluation. Readers will be stimulated to become independent thinkers and feel challenged to creatively apply evaluation knowledge in their work. Students and practitioners who read this book will gain insights from the discussions of different options, formulate their own views of the relative worth of these options, and perform better work as they go forward in their careers.

Evaluation Typologies

Stakeholders need two kinds of feedback from evaluation. The first kind is information they can use to improve a program. Evaluations can function as improvement-oriented assessments that help stakeholders understand whether a program is running smoothly, whether there are problems that need to be fixed, and how to make the program more efficient or more effective. The second kind of feedback evaluations can provide is an accountability-oriented assessment of whether or not a program has worked. This information is essential for program managers and staff to fulfill their obligation to be accountable to various stakeholders.
Different styles of evaluation have been developed to serve these two types of feedback. This section will first discuss Scriven’s (1967) classic distinction between formative and summative evaluation and then introduce a broader evaluation typology.

The Distinction Between Formative and Summative Evaluation

Scriven (1967) made a crucial contribution to evaluation by introducing the distinction between formative and summative evaluation. According to Scriven, formative evaluation fosters improvement of ongoing activities. Summative evaluation, on the other hand, assesses whether results have met the stated goals. Summative evaluation informs the go or no-go decision, that is, whether or not to continue or repeat a program. Scriven initially developed this distinction from his experience with curriculum assessment. He saw formative evaluation as serving the ongoing improvement of a curriculum, and summative evaluation as serving administrators by assessing the entire finished curriculum. Scriven (1991a) provided more elaborated descriptions of the distinction. He defined formative evaluation as “evaluation designed, done, and intended to support the process of improvement, and normally commissioned or done, and delivered to someone who can make improvement” (p. 20). In the same article, he defined summative evaluation as “the rest of evaluation; in terms of intentions, it is evaluation done for, or by, any observers or decision makers (by contrast with developers) who need valuative conclusions for any other reasons besides development.” The distinct purposes of these two kinds of evaluation have played an important role in how evaluators communicate evaluation results to stakeholders.
Scriven (1991a) indicated that the best illustration of the distinction between formative and summative evaluation is the analogy given by Robert Stake: “When the cook tastes the soup, that’s formative evaluation; when the guest tastes it, that’s summative evaluation” (Scriven, p. 19). The cook tastes the soup while it is cooking in case, for example, it needs more salt. Hence, formative evaluation happens in the early stages of a program so the program can be improved as needed. On the other hand, the...
