Evaluating the Impact of Implementing Evidence-Based Practice

About This Book

The Evidence-Based Nursing Series is co-published with Sigma Theta Tau International (STTI). The series focuses on implementing evidence-based practice in nursing and midwifery and mirrors the remit of Worldviews on Evidence-Based Nursing, encompassing clinical practice, administration, research and public policy. Evaluating the Impact of Implementing Evidence-Based Practice considers the importance of approaches to evaluate the implementation of evidence-based practice.
Outcomes of evidence-based practice can be wide ranging and sometimes unexpected. It is therefore important to evaluate the success of any implementation in terms of clinical outcomes, influence on health status, service users and health policy and long-term sustainability, as well as economic impacts.
This, the third and final book in the series, looks at how best to identify, evaluate and assess the outcomes of implementation, reflecting a wide range of issues to consider and address when planning and measuring outcomes.

  • An informative, practical resource for an international readership
  • Provides critical evaluation of models and approaches to measuring outcomes
  • Explores the importance of measuring successful implementation
  • Examines outcomes in terms of long-term sustainability
  • Addresses economic impacts and influence on health policy
  • Provides practice-based examples
  • Written by a team of internationally respected authors

Evaluating the Impact of Implementing Evidence-Based Practice, edited by Debra Bick and Ian D. Graham, is available in PDF and ePUB format (Medicine & Nursing).

Information

Year: 2013
ISBN: 9781118702338
Edition: 1
Subtopic: Nursing

Chapter 1

The importance of addressing outcomes of evidence-based practice

Debra Bick and Ian D. Graham
Key learning points
  • Most of us reside in countries where healthcare resources are finite but healthcare costs and demands are increasing.
  • Health service funders, providers, and policy makers need to ensure interventions associated with evidence of shorter- and longer-term clinical and cost-effectiveness are implemented. This is often hindered by a lack of information on what comprises a “good” or a “bad” outcome from the perspectives of relevant stakeholders.
  • Use of evidence-based practice (EBP) is assumed to lead to better health outcomes; however, it is clear that use of tools such as guidelines, protocols, and pathways may not lead to anticipated benefits if all relevant outcomes, including process outcomes, are not considered from the outset.
  • Despite the development of models and theoretical frameworks to support EBP, implementation remains a complex undertaking. Interventions to support the use of EBP need to reflect context, culture, and facilitation.
  • Approaches to derive and evaluate outcomes need to be undertaken with the same level of rigor as other interventions and procedures that the EBP movement focuses on.

Introduction

In this chapter, the background to the development of the book is outlined as are some of the reasons why we felt it was timely and appropriate to bring together a text which focuses on the outcomes of implementation of evidence-based practice (EBP). Experts in the field of knowledge translation and EBP invited to contribute chapters to the book were asked to consider how to determine if outcomes of EBP in their areas of expertise were efficacious, how efficacy could be measured, and how to ascertain if the outcomes of interest were the most important from the perspectives of relevant stakeholders. As described by Ian Graham and colleagues in Chapter 2, outcomes of EBP could include change in behavior demonstrating use of evidence in practice and impact of use on outcomes such as better health and more effective use of healthcare resources.

Why are outcomes of EBP important?

We hope that, by reading the chapters and the perspectives presented by the authors, it will become apparent that the outcomes of implementation deserve the same priority as every other step taken to support the use of research in practice. Most of us live in countries where healthcare resources are finite, an issue whether our healthcare is largely funded through taxes or through private insurance schemes. Some readers will reside in countries whose healthcare systems face an unprecedented increase in the burden of ill health arising from chronic, non-communicable diseases, for example as a consequence of the obesity epidemic or an aging population. Others will reside in countries facing epidemics of disease, including TB and HIV/AIDS, persistently high maternal and infant mortality and morbidity, or where poor or fractured infrastructure cannot support an effective healthcare system. For those living in developed countries, while there have been unprecedented advances in healthcare technology and year-on-year increases in government healthcare funding, the increase in resources has not been matched by improvements in health. This is most evident in the US, where healthcare costs for 2009 were estimated at $2.7 trillion, the highest level of healthcare spending anywhere in the world, yet life expectancy is lower than in many other developed and middle-income countries, indicating large discrepancies between healthcare costs and outcomes (Institute of Medicine 2009). We also have healthcare systems where, despite a plethora of technology, gaps remain in the quality of data available to accurately inform and compare the outcomes of care. In the UK, efforts to gauge whether investment in healthcare following the election of a Labour government in 1997 had resulted in improved health outcomes were hampered by constraints in measures of quality and the need for better measures of output and outcome extending beyond hospital episode data (Lakhani et al. 2005).

The development of EBP

For the last two decades, in response to some of the reasons outlined above, greater emphasis has been placed on the need to provide healthcare informed by evidence of effectiveness, the premise being that use of evidence will optimize health outcomes for the service user and maximize the use of finite healthcare resources. The main drivers for EBP have come from political and policy initiatives, which also instigated the establishment of organizations to develop guidance to inform healthcare, such as the National Institute for Health and Clinical Excellence (NICE) in England and Wales, the Scottish Intercollegiate Guidelines Network (SIGN), and the US Agency for Healthcare Research and Quality. The remit of a national body such as NICE is to make recommendations for care based on the best evidence of clinical and cost-effectiveness. NICE has developed and published suites of guidelines covering a range of acute and chronic physical and psychological health conditions, together with appraisals of innovations in technology and pharmacology, which aim to standardize patient care, reduce variation in health outcomes, discourage use of interventions with no proven efficacy, and encourage systematic assessment of patient outcomes. The National Institute for Health Research, which funds research to inform National Health Service (NHS) care in England, requires studies funded across all of its programs to provide evidence of clinical and cost-effectiveness.
The role of NICE in the synthesis and dissemination of evidence to prioritize healthcare interventions has generated criticism that it promotes rationing in healthcare (Maynard et al. 2004), an issue with implications for determining how outcome measures are derived to elicit benefit, and from whose perspective. As Maynard and colleagues (2004) write, “…rationing is the inevitable corollary of prioritization, and NICE must fully inform rationing in the NHS,” the issue being not whether but how to ration (p. 227). In the UK, publication of NICE guidance which does not support the use of a particular drug or therapy, because the evidence reviewed did not indicate clinical or cost-effectiveness, has frequently been challenged by industry (Maynard & Bloor 2009), by service user charities, and in media reports of an individual’s experience of being refused a treatment which did not comply with NICE recommendations. Recent NICE recommendations which generated criticism about its role include restrictions on the use of drugs for people with early-stage Alzheimer’s disease, restrictions on fertility treatments, and on the use of drugs to treat kidney cancer. In some instances, the Department of Health was forced to reverse the original NICE recommendation to deflect public criticism, for example over the use of Herceptin for women with early-stage breast cancer (Lancet 2005). Nevertheless, this is an interesting juxtaposition: whose outcomes should receive the highest priority when decisions about healthcare interventions and optimal use of finite resources are made? That certain treatments may make a difference to someone’s quality of life will not influence a recommendation for use across the NHS if the evidence assessed does not demonstrate clinical or cost-effectiveness at the thresholds set by NICE.
The recent introduction of “top up” fees to enable patients to bypass NICE recommendations and purchase drugs not recommended for NHS use reflects the power of today’s informed healthcare consumer (Gubb 2008). Although only likely to be utilized by a small group of people, as Maynard and Bloor (2009) propose, this raises issues about the role of NICE and regulation of the pharmaceutical industry; how drug prices should be determined; and how, if at all, to deal differently with rare or end-of-life conditions when making resource allocation decisions in healthcare. It also introduces the issue of consumers opting to purchase interventions which they view as likely to provide a better outcome which could include aspects of physical and/or psychological health and/or well-being.
The development of strategies to encourage the use of evidence to inform decisions about healthcare was stimulated initially by what has been referred to as a “movement” for evidence-based medicine (EBM). One of the first people to propose that medical care should be informed by evidence of effectiveness was Archie Cochrane, whose book Effectiveness and Efficiency: Random Reflections on Health Services was published in 1972. Cochrane also advocated that this approach should be applied to education, social work, criminology, and social policy (Cochrane 1972). The work of Archie Cochrane triggered groups such as those led by Gordon Guyatt and David Sackett to develop methods to synthesize and critique evidence to support decisions in clinical practice. In the late 1970s and 1980s, Iain Chalmers at the National Perinatal Epidemiology Unit in Oxford pioneered the methodology to systematically review the evidence relating to effective care in pregnancy and childbirth. Building on this work, the Cochrane Centre was established in 1992 and was crucial for the spread of EBM, which in turn stimulated revisions to healthcare education and training, policy development, publication of new journals, and the establishment of academic centers. Principles of EBM have subsequently been applied to support the commissioning of healthcare services and recommendations for pharmacological treatments, surgical interventions, diagnostic tests, and medical devices. Of note is that although attention has been paid to the use of measures of “outcome,” limited attention has been paid to the definition or consequences of a “good” or “poor” outcome. Reviewers for the Cochrane Pregnancy and Childbirth Group define an outcome as an “adverse health event” (Hofmeyr et al. 2008). In a Cochrane review, data from meta-analyses of relevant trials are presented in a forest plot, with a beneficial effect of an intervention appearing to the left of the “no effect” line and a harmful effect to the right of the line.
This is an extremely useful way to present outcomes of pooled data, but it is one part of the picture if we are to ensure that outcomes are the most relevant for all concerned. Further exploration of outcomes is required in order that consequences beyond implementation can be considered from a range of perspectives, an important stage in the continuum of research use.
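The position of a pooled estimate relative to the “no effect” line can be made concrete with a small sketch. The trial counts below are invented purely for illustration; a fixed-effect, inverse-variance meta-analysis of relative risks is one standard way such pooled estimates are produced:

```python
import math

# Hypothetical dichotomous outcomes from three small trials (made-up numbers,
# for illustration only): (events_treat, n_treat, events_ctrl, n_ctrl)
trials = [(8, 100, 15, 100), (5, 80, 9, 80), (12, 150, 20, 150)]

weights, log_rrs = [], []
for et, nt, ec, nc in trials:
    log_rr = math.log((et / nt) / (ec / nc))   # log relative risk per trial
    var = 1/et - 1/nt + 1/ec - 1/nc            # large-sample variance of log RR
    log_rrs.append(log_rr)
    weights.append(1 / var)                    # inverse-variance weight

# Weighted average on the log scale, then back-transform
pooled = math.exp(sum(w * l for w, l in zip(weights, log_rrs)) / sum(weights))
side = "left of" if pooled < 1 else "right of" if pooled > 1 else "on"
print(f"pooled RR = {pooled:.2f} ({side} the no-effect line at RR = 1)")
```

With a relative risk, the “no effect” line sits at 1: a pooled RR below 1 plots to the left (benefit for an adverse outcome), above 1 to the right (harm).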

What is evidence?

There is ongoing debate as to the definition of “evidence” and what counts as evidence, although it seems consensus has been reached that evidence can come from a number of sources and not just the findings of randomized controlled trials (RCTs). A recent position paper from Sigma Theta Tau describes research evidence as:
methodologically sound, clinically relevant research about the effectiveness and safety of interventions, the accuracy and precision of assessment measures, the power of prognostic markers, the strength of causal relationships, the cost-effectiveness of nursing interventions, and the meaning of illness or patient experiences.
(Sigma Theta Tau International 2005–2007, Research and Scholarship Advisory Committee Position Statement 2008, p. 57)
In a 1996 commentary in the British Medical Journal, Sackett et al. (1996) defined EBM as “the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients,” and stressed the need for the clinician to use evidence along with their expertise and judgment to make decisions which also reflected the choice of the individual patient. A later British Medical Journal commentary reiterated that evidence alone should not be the main driver to change practice and that preferences and values needed to be explicit in clinical decision making (Guyatt et al. 2004). Of note is that the authors highlighted that the biggest future challenge for EBM was knowledge translation (Guyatt et al. 2004). The need to synthesize evidence for use by busy clinicians, to place evidence in a “hierarchy” with the most robust evidence at the top of the hierarchy, and to acknowledge that evidence can come from a number of external sources continues to be emphasized (Bellomo & Bagshaw 2006).
When reading any literature which refers to use of evidence, it is apparent that a number of terms have been used to describe the process which include EBM, EBP, evidence-based clinical decision making and evidence-informed practice. The term evidence-based practice is more commonly used to describe evidence use by nurses, midwives, and members of the allied health professions (Sigma Theta Tau International Position Statement 2008).
Throughout this book, we refer to EBP in line with the following definition:
the process of shared decision making between practitioner, patient, and others significant to them based on research evidence, the patient’s experiences and preferences, clinical expertise or know-how, and other available robust sources of information.
(Rycroft-Malone et al. 2004)
As we have already indicated, an outcome could reflect behavior change at the individual, team, or organizational level, an improvement in individual health status, or better use of healthcare resources. The increase in access to electronic bibliographic databases, such as the Cochrane Library of Systematic Reviews, and the dissemination strategies originally adopted by groups such as NICE, professional organizations, and healthcare providers were viewed as ways to increase clinician awareness of research, with an assumption that the use of research evidence would spontaneously occur and that improved patient health outcomes would follow. Studies of dissemination and implementation strategies found that few were effective (Grimshaw et al. 2004). Grimshaw and colleagues (2004) undertook a systematic review of the effectiveness and efficiency of guideline dissemination and implementation strategies. Studies were selected for inclusion if they were RCTs, controlled clinical trials, controlled before-and-after studies, or interrupted time series. A total of 235 studies, which looked at 309 comparisons, met the inclusion criteria. Overall study quality was poor. Multifaceted interventions were addressed in 73% of the comparisons. The majority of comparisons which reported dichotomous outcome data (87%) found some differences in outcomes, with considerable variation in observed effects both within and across interventions. Single interventions which were commonly evaluated included reminders, dissemination of educational materials, and audit and feedback. The majority of studies reported only the costs of treatment; only 25 studies reported the costs of guideline development, dissemination, or implementation, and the data presented were in most cases of low quality and not suitable for extraction for the review.
In conclusion, the authors recommended that decision makers need to use considerable judgment when deciding how best to use limited resources to maximize population health.

Models and frameworks to support research use

A number of models and theoretical frameworks to support research use in practice have been developed, for example the Iowa Model (Titler et al. 2001), the PARiHS framework (Kitson et al. 1998), and the Ottawa Model (Graham & Logan 2004); these are described further in Chapter 3 of this book and are the focus of Book 1 of this series (Rycroft-Malone & Bucknall 2010). It is now appreciated that implementation is complex, multifaceted, and multilayered, and that interventions need to reflect and take account of context, culture, and facilitation to support and sustain research use. Despite the development of frameworks and models, however, as Helfrich and colleagues (2009) highlight with respect to PARiHS, there is as yet no pool of validated measures to operationalize the constructs defined in the framework. Work in this area is ongoing, as is other work to support research use, including tools to assess the extent to which an organization is ready to adopt change. An example is the Organizational Readiness to Change Assessment (ORCA) instrument developed by the Veterans Health Administration (VHA) Quality Enhancement Research Initiative for Ischemic Heart Disease (Helfrich et al. 2009). Although still in the developmental stage, this could be a useful approach for future implementation strategies.

Why is it important to measure/evaluate the impact of EBP?

As the following chapters illustrate, with examples ranging from the evaluation of outcomes of wound care and cardiac care interventions to the perspectives of service users, evaluating the outcomes of evidence use is essential. It is also apparent that the evaluation of outcomes needs to be subjected to the same level of rigor as the other interventions and procedures that the EBP movement focuses on.
There are many examples in clinical practice of interventions introduced on assumption of benefit rather than evaluation of impact on a range of outcomes from the perspectives of the relevant stakeholders. In maternity care, universal roll-out of interventions such as routine perineal shaving and enemas at the onset of labor, separation of mothers and babies after birth to prevent infection, and routine use of episiotomy occurred with no supporting evidence that immediate or longer-term outcomes were better—it was assumed that they would be. When these interventions were eventually subjected to rigorous evaluation more often than not there were no differences in outcomes or indications of potential harm (Basevi & Lavender 2008; Carroli & Mignini 2008; Reveiz et al. 2007; Widstrom et al. 1990). The Term Breech Trial (Hannah et al. 2000) provides a useful example of why longer-term outcomes from different stakeholders’ perspectives need to be considered and evaluated before universal change in practice takes place.
A small proportion of women (around 2–3%) will have a baby which presents at term in a breech presentation and studies which had previously considered which mode of birth was optimal for the baby and for the woman had been inconclusive due to methodological issues and small sample sizes. In certain cases, for example if it was a footling breech or if the baby was large, planned cesarean section (CS) had been considered safer than planned vaginal birth. The Term Breech Trial was designed to provide the ultimate answer to the mode of birth debate, with the proviso that study centers would have clinicians with the expertise to support vaginal breech births. The trial took place in 121 centers in 26 countries and recruited over 2,000 women. Women and their babies were initially followed up to 6 weeks post-birth. Primary study outcomes included perinatal and neonatal mortality or serious neonatal morbidity and maternal mortality or serious maternal morbidity. At 6 weeks, perinatal and neonatal mortality and morbidity were significantly lower among the planned CS group (17 of 1039 [1.6%] versus 52 of 1039 [5.0%]; relative risk 0.33 [95% CI 0.19–0.56]; p < 0.0001). There were no differences in any of the maternal outcomes. The trial was stopped early due to a higher event rate than expected. The authors concluded that planned CS was better than planned vaginal birth. Trial results were fast-tracked for publication by The Lancet (Hannah et al. 2000) despite need for caution raised by one peer reviewer because of concerns about the impact on practice of differential findings and implications this could have for maternity care in both developed and developing countries (Bewley & Shennan 2007).
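The headline figures above follow directly from the raw counts. As a check, a short sketch computing the relative risk and its 95% confidence interval with the standard large-sample log-scale formula:

```python
import math

# Term Breech Trial primary outcome at 6 weeks (Hannah et al. 2000):
# perinatal/neonatal death or serious neonatal morbidity
events_cs, n_cs = 17, 1039    # planned caesarean section group
events_vb, n_vb = 52, 1039    # planned vaginal birth group

risk_cs = events_cs / n_cs
risk_vb = events_vb / n_vb
rr = risk_cs / risk_vb        # relative risk of the adverse outcome

# 95% CI on the log scale, then back-transformed
se_log_rr = math.sqrt(1/events_cs - 1/n_cs + 1/events_vb - 1/n_vb)
ci_low = math.exp(math.log(rr) - 1.96 * se_log_rr)
ci_high = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"risk (CS) = {risk_cs:.1%}, risk (VB) = {risk_vb:.1%}")
print(f"RR = {rr:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")
# → risk (CS) = 1.6%, risk (VB) = 5.0%
# → RR = 0.33, 95% CI 0.19-0.56
```

This reproduces the 1.6% versus 5.0% event rates and the relative risk of 0.33 (95% CI 0.19–0.56) reported in the trial.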
Contrary to the usually slow uptake of research findings, in this case, the trial rapidly changed practice in many countries, with planned CS rates rising steeply following publication of the trial (Alexandersson et al. 2005; Carayol et al. 2007; Molkenboer et al. 2003). In England, planned elective CS is now the preferred mode of birth for women with a diagnosed breech baby at term (Department of Health 2008). Debate about the findings of the Term Breech Trial has continued particularly following publication of a two-year planned follow-up of women and babies, which showed no differences in outcomes between the study groups (Whyte et al. 2004). Criticisms of the original trial included lack of adherence to the study protocol, variation in standards of care between trial centers, inadequate methods of fetal assessment, and recruitment of women during active labor when they may not have had a chance to properly consider participation (Glezerman 2006). That women were not supported to birth in upright positions which could have increased the likelihood of a vaginal birth was also criticized (Gyte & Frolich 2001). Criticisms have been refuted by the trial team who defended their position that this was a peer reviewed trial evaluated in a number of countries and that criticisms in the main reflected the prior beliefs of clinicians (Ross & Hannah 2006).
The worldwide impact of study findings and rapid implementation of its findings into practi...

Table of contents

  1. Cover
  2. Contents
  3. Title Page
  4. Copyright
  5. Contributors’ information
  6. Series
  7. Preface
  8. Foreword
  9. Chapter 1 The importance of addressing outcomes of evidence-based practice
  10. Chapter 2 Measuring outcomes of evidence-based practice: Distinguishing between knowledge use and its impact
  11. Chapter 3 Models and approaches to inform the impacts of implementation of evidence-based practice
  12. Chapter 4 An outcomes framework for knowledge translation
  13. Chapter 5 Outcomes of evidence-based practice: practice to policy perspectives
  14. Chapter 6 Implementing and sustaining evidence in nursing care of cardiovascular disease
  15. Chapter 7 Outcomes of implementation that matter to health service users
  16. Chapter 8 Evaluating the impact of implementation on economic outcomes
  17. Chapter 9 Sustaining evidence-based practice systems and measuring the impacts
  18. Chapter 10 A review of the use of outcome measures of evidence-based practice in guideline implementation studies in Nursing, Allied Health Professions, and Medicine
  19. Index