Evaluation of newly designed health interventions is a necessary step preceding their implementation in practice. Evaluation consists of a systematic process for determining the merit, worth, or value of health interventions. The value of interventions is indicated by their appropriateness, effectiveness, safety, and efficiency in addressing clients' experience of the health problem and in promoting clients' health (Chapter 1).
Traditionally, evaluation research has been concerned with demonstrating the effectiveness of interventions on the ultimate outcomes. To this end, several studies are conducted, with the expectation that convergence of the studies' findings provides the evidence supporting the effectiveness of an intervention. Accumulating evidence, however, shows limited replicability of the results of studies that evaluate the same health intervention (Woodman, 2014). Limited replicability is indicated by mixed findings, with some supporting and others not supporting the effectiveness of the intervention. The literature is replete with examples of studies revealing mixed results and of systematic reviews and meta-analyses reporting heterogeneity in the primary studies' findings; heterogeneity precludes the synthesis of empirical evidence on the effectiveness of a health intervention. Similarly, the results of implementation studies indicate failure of evidence-based interventions to show benefits in practice (Amrhein et al., 2017; Heneghan et al., 2017). For example, Crawford et al. (2016) reviewed the results of trials that evaluated the efficacy (under controlled research conditions) and effectiveness (under real-world practice conditions) of the same simple or complex interventions. They found that, although the interventions were initially (in early efficacy studies) reported to be efficacious, 58% of simple interventions and 71% of complex ones returned negative results in subsequent effectiveness trials; that is, the interventions were no longer found effective in later studies.
Several factors have been suggested as contributing to the limited replicability of findings on the intervention's effectiveness. The factors are related to weaknesses in the conceptualization of evaluation studies and in the research methodology used in these studies (Crawford et al., 2016). Conceptually, the evaluation studies were informed by a notion of causality that focuses on the direct impact of the intervention on the ultimate outcomes, and does not attend to other factors with the potential to contribute to the outcomes. The factors are inherent in the context in which the intervention is delivered and are associated with: the characteristics of the setting or environment, the interventionist, and the clients; the fidelity with which the intervention is implemented; the clients' perceptions of the treatments included in the evaluation study; and the capacity of the intervention to initiate the mechanism of action. The focus on the direct causal effects of the intervention on the ultimate outcomes results in an emphasis on internal validity, at the expense of other types of validity, and consequently on valuing the experimental design or randomized controlled trial as the most robust in generating evidence of effectiveness.
With this limited attention to context, the findings of intervention evaluation studies provide answers to the question: Does the intervention work? They fall short of addressing questions of relevance to practice and of importance in guiding treatment decisions: What clients, presenting with which characteristics, benefit from which intervention, given in what mode, at what dose, and in what context? And how does the intervention produce its beneficial effects?
This state of the science has generated some shifts in perspectives underlying intervention evaluation research, accompanied by the acceptance of various designs and methods as appropriate and useful for determining the effectiveness of health interventions in research and practice. The shifts in perspectives are represented in the adoption of the notion of multi-causality, the emphasis on enhancing all types of validity (discussed in this chapter), and the delineation of what to evaluate and in what sequence. The shifts translated into recommendations for evaluating clients' perceptions of health interventions (Chapter 11); the feasibility of interventions (Chapter 12); the contextual factors and the processes contributing to the implementation and effectiveness of interventions (Chapter 13); and a range of research designs (Chapter 14) and methods (Chapter 15) for examining the effects of interventions on a range of outcomes.
In this chapter, the conventional perspectives on causality, validity, and the sequential phases for evaluating health interventions are briefly reviewed. Advances in the field of intervention evaluation are discussed.
10.1 NOTION OF CAUSALITY
Underlying the systematic process for determining the effectiveness of health interventions is the notion of causality. Causality implies that the changes in the outcomes, observed following delivery of an intervention, are attributable to, or represent the impact of, the intervention. The notion of causality is evolving from the traditional perspective of single causality to the more recent view of multiple or multi-causality.
10.1.1 Traditional Perspective
Demonstrating the effectiveness of health interventions involves the generation of evidence indicating that the intervention causes the ultimate outcomes. A cause is something that creates an effect or produces a change in a state or condition that would not happen without it (Powell, 2019). Causality refers to a structural relationship that underlies the dependence among phenomena or events (Stanford Encyclopedia of Philosophy, 2008), whereby the occurrence of one phenomenon or event is contingent on the occurrence of another. As applied to intervention evaluation, causality implies an association between the intervention (i.e. cause) and the outcome (i.e. effect). The association is characterized by the dependence of the changes in the outcome on the receipt of the intervention. In other words, the changes in the outcome take place in the presence of (i.e. upon exposure to, or receipt of) the intervention and do not occur in its absence. This association enables the attribution of the outcomes solely and uniquely to the intervention.
This notion of causality focuses on the single, deterministic, and direct association between the intervention and the ultimate outcome. It rests on the counterfactual claim that if an intervention occurs, then the effect occurs; conversely, if the intervention does not occur, then the effect does not occur (Cook et al., 2010). This notion of causality and the way in which it is represented in an evaluation study have been criticized on theoretical and empirical grounds. The traditional perspective on causality is considered simplistic, ignoring the potential direct and indirect influence of a range of factors on the delivery, mechanism of action, and outcomes of health interventions (e.g. Greenhalgh et al., 2015; Wong et al., 2012).
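The counterfactual claim can be stated compactly in potential-outcomes notation. The sketch below is an illustrative formalization only, not one drawn from the sources cited above; the symbols for the potential outcomes and the average treatment effect are labels introduced here for clarity.

```latex
% Illustrative potential-outcomes sketch of the counterfactual claim.
% Y_i(1): outcome for client i if the intervention is received;
% Y_i(0): outcome for the same client if it is not.
\tau_i = Y_i(1) - Y_i(0)
% individual causal effect

\mathrm{ATE} = \mathbb{E}\big[\, Y(1) - Y(0) \,\big]
% average treatment effect across clients
```

Because only one of the two potential outcomes is ever observed for a given client, evaluation designs approximate the unobserved counterfactual with a comparison (control) group; randomization is valued precisely because it makes this comparison unbiased, which underlies the emphasis on the randomized controlled trial noted earlier.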
10.1.2 Recent Perspective
The recent perspective has extended the notion of causality to encompass chains of structural relationships among phenomena or events. The shift was engendered by the widening recognition that multiple factors, in combination with the intervention, contribute to changes in the outcomes (Chapter 5). The factors are experienced in various domains of health (e.g. physical, psychological, social) and at different levels (e.g. client, community, society). The factors, independently and collectively, predict the health problem or other outcomes; they may also interact with the intervention in shaping clients' perceptions of, and responses to, the health intervention, as well as improvement in the immediate and intermediate outcomes that mediate the effects of the intervention on the ultimate outcomes.
The recent notion is that of multi-causality. It acknowledges the interdependence among phenomena or events in that they are posited to influence each other, forming a complex system of causal relationships. The application of the notion of multi-causality to intervention evaluation research translates into three propositions, illustrated in the sketch that follows. The first is that a set of contextual factors directly influences the delivery of the intervention by interventionists, the implementation of treatment recommendations by clients, the initiation of the intervention's mechanism of action, and the outcomes. The second suggests that contextual factors moderate the causal effects of the intervention on the outcomes. The third proposition indicates that the effects of the intervention on the ultimate outcomes are indirect, mediated by the immediate and intermediate outcomes that operationalize the hypothesized mechanism of action. The direct and indirect relationships are tested empirically to determine what exactly causes the beneficial effects of health interventions on the ultimate outcomes.
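As a concrete illustration, the three propositions can be encoded as regression models. The following is a minimal sketch on simulated data, assuming hypothetical variables T (intervention), C (a contextual factor), M (an intermediate outcome), and Y (the ultimate outcome); none of the variable names or effect sizes come from the evaluation literature cited in this chapter.

```python
# Illustrative sketch: testing direct, moderated, and mediated effects.
# Proposition 1: context C directly influences the outcome Y.
# Proposition 2: C moderates the intervention effect (T x C interaction).
# Proposition 3: T's effect on Y is mediated by the intermediate outcome M.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
T = rng.integers(0, 2, n)   # intervention (1) vs. control (0)
C = rng.normal(size=n)      # a contextual factor (hypothetical)

# Intermediate outcome: initiated by the intervention (mechanism of action).
M = 0.8 * T + 0.3 * C + rng.normal(size=n)

# Ultimate outcome: direct context effect, moderated intervention effect,
# and an indirect (mediated) path through M.
Y = 0.4 * C + 0.2 * T + 0.5 * T * C + 0.6 * M + rng.normal(size=n)

df = pd.DataFrame({"T": T, "C": C, "M": M, "Y": Y})

# Moderation: a non-zero T:C coefficient supports proposition 2.
moderation = smf.ols("Y ~ T * C", data=df).fit()

# Mediation (Baron & Kenny-style decomposition): path a (T -> M) and
# path b (M -> Y, controlling for T); their product estimates the
# indirect effect (proposition 3).
path_a = smf.ols("M ~ T + C", data=df).fit()
path_b = smf.ols("Y ~ M + T + C", data=df).fit()
indirect = path_a.params["T"] * path_b.params["M"]

print(moderation.params[["T", "C", "T:C"]])
print(f"estimated indirect (mediated) effect: {indirect:.2f}")
```

In a real evaluation study, more rigorous procedures (e.g. bootstrapped confidence intervals for the indirect effect, or structural equation models testing all three propositions simultaneously) would typically replace this simple product-of-coefficients approach; the sketch is meant only to show how the propositions translate into testable statistical relationships.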