Leg III
Program evaluation
This final leg of the book is intended not only for P/CVE program evaluators, but for program designers, program managers, their frontline staff, and others intimately involved in trying to demonstrate the effects of a given program. By reading it, regardless of one's job title, one can expect to become better equipped to capture, analyze, and communicate important information regarding the effects of a program. Additionally, the following section will make clearer the importance not only of evaluation itself, but also of involving evaluation specialists early in a program's design phase.
A misconception
It would be a misconception to believe that evaluation is necessarily a lost cause if it's brought to bear on a program that has already been designed and is up and running. If that were so, we wouldn't have quality evaluations of the US public-school system, given that the school system was in effect before formal evaluations of it were made. On the contrary, one of our greatest evaluation methodologists, Thomas Cook, co-author of the venerable "Experimental and Quasi-experimental Designs for Generalized Causal Inference," often made public education his cause célÚbre (Shadish, Cook, & Campbell, 2002).
However, it is true that data collection opportunities are, de facto, more limited the later data capture begins. Furthermore, some (though not all) very strong research designs require data to be collected at an early "pre-implementation" phase. So, methodological options narrow the later that evaluation design is brought to bear relative to program implementation. Regardless, whether one is planning for measurement and evaluation relatively early or late with respect to a program's implementation, know that rigorous research and evaluation options remain available; though, to underscore, it is typically advantageous to design evaluations prior to program implementation.
A word to evaluation funders/commissioners
The upcoming chapters can serve myriad evaluation-involved stakeholders, including those who commission evaluations. For example, Chapter 13 includes links to an innovative, freely available online tool, known as "GeneraToR," which is designed to help commissioners develop Terms of Reference (ToR; aka Request for Proposals [RFP]; aka Notice of Funding Opportunity [NOFO]). However, this leg of the text is unlikely to satisfy those who wish to have guidelines for the bureaucratic machinations of commissioning an evaluation (e.g., forming an evaluation steering group, and deciding and codifying who will be responsible for various bureaucratic decisions). For such advice, see the guide featured in this endnote.1
Instead, the upcoming leg of our journey is about bringing an evaluation to life as though you're in the evaluator's driver's seat: as though you're designing and conducting an evaluation first-hand. If you're an evaluation commissioner, this section will likely increase your understanding of what to expect from top-flight evaluations: not only what to expect of the results, but what to expect of the processes of evaluation, and of standards for scientific reporting. Let this section of the book assist you in planning, overseeing, and demanding evaluation excellence in the field of P/CVE.
A word to evaluators and would-be evaluators alike
This leg of the book assumes some training in science; however, one needn't hold a science degree to gain from it. Nevertheless, if the evaluation plan involves surveys, interviews, or any other data collection from human participants (and what P/CVE evaluation doesn't?), resolve to recruit a social scientist trained in such methods to the team, if nothing else as a consultant. Otherwise, conducting an evaluation without doctoral-level expertise in research design, data collection, and data analysis is tantamount to "practicing without a license." In principle, anyone can evaluate, just as anyone can represent themself in court. However, if there isn't a well-trained social scientist on the team, then even if the team is incredibly smart, it simply doesn't know what it doesn't know; or, as it's said, "one who represents themself in court has a fool for a client." Save yourself time, trouble, and the possible faux pas of acting on faulty intelligence (i.e., the ostensible results of a homespun evaluation) by teaming with a competent doctoral-level social scientist (more on team selection in Chapter 13). In modern societies, we recognize that it is unethical, indeed illegal, to practice medicine without a license. The stakes are simply too high for it to be otherwise. In the realm of P/CVE, another potentially life-and-death enterprise, should we expect anything less than to be assisted by qualified doctors?
Orientation
Though this final section contains a great deal of information regarding evaluation and its methods in general, it will, of course, focus on evaluation and its methods as they pertain to P/CVE specifically. If you enjoy science, you're going to enjoy the journey ahead: plenty of logic in motion, an engineer's perspective on assessing slices of the human condition. If you're not so inclined toward science, don't be daunted. Although any field of scientific inquiry is infinite, the general processes of evaluation science can be, at least conceptually, compartmentalized.
12 Defining the problem and identifying goals
As described in the previous leg of this text, those who design P/CVE programs must identify both the need(s) that a program intends to fulfill and the program's operational goals. So, too, must those attending to P/CVE program evaluations identify both the need(s) that an evaluation intends to fulfill and its operational goals. Consequently, goals for the evaluation must be articulated and prioritized. This chapter describes the various types of evaluations (e.g., impact evaluation, developmental evaluation, process evaluation) and their uses. Additionally, it will discuss, from the perspective of program evaluation (vs. program design), approaches to developing logic models and articulating a program's theory of change.
Learning objectives
- Understand "the problem" to be addressed by "utilization-focused evaluation," and how the latter addresses the former.
- Be able to describe the major types of evaluations.
- Understand why every good evaluation (i.e., one that is scientifically grounded) must include at least some aspect(s) of a process evaluation.
- Be able to discuss the importance of informational priority-setting for evaluation.
- Understand why evaluators should articulate a program's theory(ies) of change, even if one already has been developed for the program.
- Understand the functions that logic models serve for evaluators.
The "problem" to be addressed
As mentioned, just as P/CVE programs must identify both the needs they intend to fulfill and their operational goals, so, too, must P/CVE program evaluations be tailored to fulfill identified needs. In the case of evaluation, that need is information. Whose needs? The answer to that is simple: the primary intended users of the evaluation. Though that answer is easy to articulate, evaluators must take pains to understand clearly the informational needs of the persons they serve.
The point is that evaluations are useless unless they provide accurate, actionable information to those who could benefit from it (e.g., programmatic decision makers or public policymakers). This is at the heart of so-called "utilization-focused evaluation": a watershed movement in evaluation pioneered several decades ago by Michael Patton (see Patton, 2008). Evaluation is not just basic research. It's applied research; it serves the actionable informational needs of predefined others. In short, all types of evaluation should be developed in the spirit of utilization-focused evaluation.
REALITY CHECK
Bear in mind that the audience of researchers and practitioners in the field of P/CVE can be considered among the legitimate primary intended users of a given P/CVE evaluation. Therefore, evaluators ought to give substantial consideration to how a given evaluation can satisfy not only the perhaps narrow informational needs of, for example, program staff and evaluation funders, but also those of the theory and practice of P/CVE more broadly. In other words, program evaluations can be vehicles both for theoretical developments and for the codification of evidence-based practices relevant to P/CVE, assuming that doing so does not place undue burdens on program staff or program participants (Williams & Kleinman, 2013).
Evaluation needn't be either applied or basic research; it should be both. That dual function is the brass ring of evaluation. To miss an opportunity to make both practical and theoretical contributions to the field is to do a disservice to the important mission of P/CVE itself.
As mentioned, the evaluation "problem" to be addressed is a lack of information in some regard. However, the needed information is not necessarily about whether a given program "works" (i.e., an impact evaluation). Instead, to find out what needs to be known, once again, evaluators must consult the primary intended users. For example, primary intended users might be interested primarily, or additionally, in whether a program is being executed as planned (i.e., a process evaluation), or in how they can develop a new intervention for a given population/clientele (i.e., a developmental evaluation). Furthermore, the evaluand might not be a P/CVE program per se, but a P/CVE-related policy, strategy, or the behavior of a network, etc. (see "Clarify what will be evaluated," n.d.).
Identify primary intended users
In some (arguably most) cases, an evaluation may have several uses (see "Identify who are the primary intended users of the evaluation and what will they use it for," n.d.). By identifying primary intended users, one may subsequently query them to learn of their informational needs, so that those needs can be met by the evaluation (ibid.). Primary intended users are not all of those who have a stake in the evaluation, but those who have the capacity to effect change informed by the evaluation (ibid.). These parties are in a privileged position "to do things differently" (e.g., to change tactics, strategies, or policies) because of their proximity to the evaluation and/or the program itself (ibid.). Therefore, the informational needs of these parties are privileged over others' (for example, over those of a general audience who might be curious about an evaluation's results; ibid.). The following endnote provides a link to a guide from the International Development Research Centre that includes questions intended to guide the identification of primary intended user(s).2
Determining the intended use(s) of an evaluation
Though it might seem obvious, the intended use(s) of an evaluation define its purpose(s), and from a utilization-focused perspective, the intended uses are sacrosanct. It is not enough simply to assert that an evaluation will be used for "accountability" or for "learning" (see "Decide purpose," n.d.). Those should go without saying. The aforementioned guide from the International Development Research Centre contains questions intended to guide evaluators in their discussions with primary intended users, to ascertain how they seek to use information from a prospective evaluation. In sum, the central question to ask of primary intended users is "What do you want to know about the program?" The primary job of evaluators is to translate the answer(s) to that question into a suitable evaluation/res...