Introduction
Let's start with what's the matter with training evaluation. The vast majority of training evaluation that takes place focuses on learning outcomes rather than performance ones. Few organizations evaluate beyond level 1 or 2 of Kirkpatrick's (1959) model, that is, measuring reactions to training and learning acquisition (Brown, 2005). It is assumed, incorrectly, that if you got a positive reaction to training, say via a satisfaction survey, or you can show, perhaps through a test, that people have learnt something, then that training was effective. Counter-intuitively, the opposite might be true, for reasons discussed later in this book. Focusing on reactions is a very narrow view of training: a bit like judging a cake by the quality of the ingredients before they go in the bowl and oven. While the ingredients might tell us something about how good the cake could be, the proof will only come later.
REFLECTION 1.1: WHAT EVALUATION ACTIVITY DOES YOUR ORGANIZATION CARRY OUT?
Think about the resources invested in workforce education and training in your organization: money, materials and time. How much of this activity is evaluated and what is the nature of the evaluation carried out? If it were necessary, for example, to review the relative effectiveness of different programmes, would the information be available to allow decision-making? If your organization's senior management asked for evidence of the return on training, would you be able to show them? Have you ever ended a programme because of the results of evaluation?
In response to global competition and technological change, the last three decades have seen substantial and enduring changes in the organization and nature of work, as we will discuss in Chapter 3. One consequence of this has been a focus on the importance of organizational resources and capabilities, including employee learning and development, as a means of securing competitive advantage. However, while the strategic importance of training is recognized not just by companies but also by governments across the world as a key driver of competitiveness (and much more), few organizations actually evaluate the impact of their investment in formal workforce learning. Even fewer consider the effect of informal learning. In Chapter 16 we will consider the effect that fostering a positive learning culture can have on performance. You might be surprised at quite how important it is.
It is not, though, just the way people work that has undergone a fundamental transformation in the last few decades; so has the way we learn at work. To take one example: around 40 per cent of learning is now delivered via virtual learning environments (Chapter 14 looks at evaluating e-learning).
A WORD ABOUT TERMS AND REFERENCES
Terms such as 'education', 'training', 'workplace learning' and 'development' have distinct, although sometimes contested, meanings in the academic literature. There are also many ways that different types of workplace learning can be classified, such as formal/informal, general/specific, deliberate/unintentional or expansive/restrictive (for some reason they always come in pairs). In this book, terms such as 'training' and 'workplace learning' are used for any event, experience or activity associated with the workplace that seeks to achieve a change in an employee's knowledge, technical or soft skills and/or attitudes, regardless of duration, location, trigger or whether it results in a formal qualification. I frequently use 'learning', 'education' and 'training' interchangeably.
This is not an academic book. It is aimed firmly at practitioners. The ideas and insights in it are however based on research, as well as on my own experiences evaluating training. I have tried not to clutter the text with too many references but have included them when particularly important or controversial points are made or of course when I am summarizing the work of others. References and suggestions for further reading are included at the end of the book.
The core argument of this book is that current evaluation practices, probably the ones that you are currently using, are based on an outdated concept of workplace learning, one more suited to the era of scientific management than to the reality of 21st-century work. As the nature of work has changed, so has workplace learning. Training no longer means employees attending a standalone course in order to learn to perform frequently standardized and physical tasks, often under supervision. Workplace learning now addresses notions such as 'emotions', 'resilience', 'change', 'attitudes' and 'empathy' through a wide range of learning approaches, from classroom teaching to e-learning, the use of social media and the encouragement of knowledge exchange. The locations where learning is possible have also changed: we can learn as much on the move as in a classroom. The focus is as much on the learning organization as on organizational learning. The problem is that the approaches and practices used to evaluate this rapidly changing field of learning have not kept up.
What I want to argue is that we need ways of evaluating training that are 'scientifically robust but practitioner friendly'. What does this mean? In my view, 'scientifically robust' methods of evaluation:
• are based on research;
• allow outcomes to be attributed to training;
• provide results that are reliable and consistent;
• produce results able to support decision-making;
• adopt a 'whole systems' approach;
• address all ethical issues.
While 'practitioner-friendly' approaches to evaluation are:
• operational within available financial and time resources;
• results-orientated and responsive to stakeholder needs;
• technically feasible;
• credible;
• simple to operate and understand;
• flexible.
Moving ahead: complete evaluation
Most organizations want to evaluate the impact of their investments in training but very few, probably fewer than 5 per cent, actually do. There are a number of reasons for this, as we will see in Chapter 5; three of these I think are particularly important:
1 Training evaluation is believed to be technically challenging.
2 Training evaluation is seen to be resource-intensive.
3 Current methods appear not to work because they are based on an outdated conception of workplace learning.
Plenty of people have argued that training evaluation is just too hard. Berge (2008: 393) has described it as 'complex, difficult', Gibb (2002: 107) as 'difficult and challenging', Reid and Barrington (1999: 342) as 'one of the more difficult of training officers' tasks' and Sims (1993: 591) as 'difficult, tedious and time-consuming'. It is perhaps not surprising that no less an august body than the ASTD (American Society for Training and Development) has said that training professionals have a 'love/hate' relationship with evaluation: they know it's necessary but they struggle to do it. In fact, our growing understanding of workplace learning means that there has never been a better time to think about evaluation.
This is where this book comes in. It draws not only on the latest research but also on my own practical experience of carrying out evaluations, often with limited resources. It aims to provide practitioners with accessible 'how-to' knowledge and the tools to undertake evaluations, including impact evaluation. It challenges the idea that training evaluation is too complex and difficult. I do not think it is tedious either!
In the following pages I argue that complete training evaluation practice needs to embrace the complex reality of the workplace in order to move from its current restrictive approaches, largely based on assessing the acquisition of learning and narrow reactions to training, to expansive approaches that can address issues such as: the nature of the organization's learning culture; whether trainees are motivated to learn; the barriers to the transfer of learning; the unintended consequences that arise from learning; the impact managers have on the effectiveness of training; and the long-term effects of training. This is the scientifically robust bit.
This book sets out a range of approaches that can be adapted depending on the learning in question, the audience for the evaluation, and your capacity and capability. It aims to be comprehensive in setting out the methods that you can use to evaluate. It assumes no prior knowledge of evaluation. Fundamentally, it aims to be practical. For brevity's sake, I describe this approach as Complete Evaluation.