Business

Training Effectiveness

Training effectiveness refers to the extent to which a training program achieves its intended outcomes and objectives. It is measured by assessing the impact of training on employee performance, skills development, and overall organizational goals. Evaluating training effectiveness helps businesses determine the return on investment and make informed decisions about future training initiatives.

Written by Perlego with AI-assistance

7 Key excerpts on "Training Effectiveness"

  • Instructional Design for Action Learning
    • Geri McArdle(Author)
    • 2010(Publication Date)
    • AMACOM
      (Publisher)
    Let’s begin by reviewing the definitions of some key terms and processes that pertain specifically to evaluation. You should have become familiar with these because of their use in earlier chapters of this book, but they particularly pertain here:
    Training Effectiveness refers to the benefits that the company and the trainees experience as a result of training. Benefits for the trainees include acquiring new knowledge, learning new skills, and adopting new behaviors. Potential benefits for the company include increased sales, improved quality, and more satisfied customers.
    Training outcomes or criteria refer to measures that the trainer and the company use to evaluate the training.
    Training evaluation refers to the collection of data pertaining to training outcomes, needed to determine if training objectives were met.
    Evaluation design refers to the plan that specifies from whom, what, when, and how information is collected in order to determine the effectiveness of the training.
    Because companies have made large dollar investments in training and education, and they view training as a strategy to be more successful, they expect the outcomes or benefits of the training to be measurable. Additionally, training evaluation provides a way of understanding the return that the investment in training produces and supplies the information necessary to improve training. Figure 6-1 shows the interlocking nature of training evaluation.
    Figure 6-1. Interlocking nature of training evaluation.

    The Evaluation of Training

    We end this book as we began, discussing the success of a training program. From the start, all efforts to define a need for training, through design and implementation, were aimed at achieving measurable success. It’s time to review the process for determining that success. Ultimately, the evaluation leads to improvements in the program (content, instructional strategies, pace, or sequencing).
    Thus, evaluation is a significant part of the design and development process. While we usually think of evaluation as something that takes place during or after the training, it really is critical to the entire process. And the heart of evaluation is the assignment of value and the making of critical judgments. The evaluation process measures what changes have resulted from the training, how much change has resulted, and how much value can be assigned to these changes.
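    The 'value' assigned to training changes is often expressed as a return on investment. A minimal sketch of the arithmetic in Python, with all figures hypothetical:

```python
def training_roi(benefit: float, cost: float) -> float:
    """Return on investment as a percentage: net benefit over cost."""
    return (benefit - cost) / cost * 100

# Hypothetical programme: $40,000 in measured benefits
# (increased sales, reduced errors) against $25,000 in delivery costs.
roi = training_roi(benefit=40_000, cost=25_000)
print(f"ROI: {roi:.0f}%")
```

    A real evaluation must, of course, first isolate which benefits are genuinely attributable to the training rather than to other changes in the business.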
  • Understanding Occupational & Organizational Psychology
    Tamkin, Yarnall and Kerrin (2002) describe training evaluation as ‘a bit like eating five portions of fruit and vegetables a day’. That is, ‘everyone knows they are supposed to be doing it, everyone says they are planning to do it better in the future and few people admit to having got it right’ (p. 1). Clearly, training is supposed to make a difference, to bring about changes to both individuals and organizations. Yet, very little training evaluation is actually carried out, and when it is, it will usually be limited to the convenient but largely inconsequential assessment of trainee reactions (Schott, Grzondziel, & Hillebrandt, 2001) – despite the availability of many well-established evaluation methods (Breakwell & Millward, 1995). A lot of training is therefore conducted on ‘faith’ rather than clear analysis. This is particularly surprising given today’s economic preoccupation with accountability and bottom-line ‘results’.

    Training Effectiveness

    There have been seven noteworthy systematic reviews of the organizational training literature dating from 1971 to 2003: Campbell (1971); Goldstein (1980); Wexley (1984); Latham (1988); Tannenbaum and Yukl (1992); Salas and Cannon-Bowers (2001); Arthur, Bennett, Edens and Bell (2003) and one pertaining to management training in particular: Burke and Day (1986). With only two exceptions (Arthur et al., 2003; Burke & Day, 1986), these reviews are based on a narrative rather than statistical synthesis of relevant material.
    Arthur et al. (2003) reported a sample-weighted effect size of 0.60 to 0.63 (depending on the criterion) for mainly the off-site training of individuals (rather than teams), which they interpreted as a medium to large effect. Reviews consistently suggest that effectiveness depends on the criterion against which it is measured (for example, learning or behaviour criteria). In particular, Arthur et al. found that effect size decreased the more distal the criterion was from the training event. Thus, reactions are the most proximal criteria, followed by ‘learning’, while the ‘behavioural’ or performance criteria and especially the organizational criteria denote increasingly distal indicators of Training Effectiveness. This makes sense given the now renowned problems of transferring training to the work environment. The more distal the criterion (behaviour, results), the more likely it is that ‘other’ factors, including performance opportunity and the transfer climate, moderate the relationship between the training event and its impact on outcomes (Colquitt, LePine, & Noe, 2000).
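    The 'sample-weighted effect size' that meta-analyses such as Arthur et al. report is simply a mean of the individual studies' effect sizes, weighted by each study's sample size. A minimal sketch, with entirely invented study data:

```python
def sample_weighted_effect_size(effects, ns):
    """Mean of study effect sizes (e.g. Cohen's d), weighted by sample size."""
    total_n = sum(ns)
    return sum(d * n for d, n in zip(effects, ns)) / total_n

# Three hypothetical studies: effect sizes and their sample sizes.
effects = [0.45, 0.70, 0.62]
ns = [120, 80, 200]
print(round(sample_weighted_effect_size(effects, ns), 2))
```

    Weighting by sample size means that large studies pull the pooled estimate towards their own result, on the reasoning that larger samples give more precise effect estimates.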
  • Skill Acquisition and Training

    Skill Acquisition and Training

    Achieving Expertise in Simple and Complex Tasks

    • Addie Johnson, Robert W. Proctor(Authors)
    • 2016(Publication Date)
    • Routledge
      (Publisher)
    10  Designing Effective Training Systems

    Without a knowledgeable and skillful workforce, organizations are likely to suffer. With that in mind, training in organizations should be of the utmost importance. E. Salas, K. A. Wilson, H. A. Priest, & J. W. Guthrie (2006, p. 503)
    Throughout this book we have described techniques for training different types and components of skill. We have discussed factors influencing skill acquisition and have discussed the major models of how skill is acquired. In this chapter we argue that there is more to training than the application of techniques for acquiring skill. Effective training programs are built on an understanding of the task being trained for, the people being trained, and the environment in which the trained task will be performed. Not only will the quality of the design and delivery of the program influence training success, but so will the attitudes and the motivation of the trainees and the organizational context (Noe, 1986). The success of any training program will thus depend on gaining the support of trainees and the organization and on removing any barriers to implementing trained knowledge or skills. Given that motivation, ability, and organizational factors all play a role in the success of training programs, it is not surprising that training is an interdisciplinary research area, with contributions from the fields of cognitive and industrial/organizational psychology, human factors and ergonomics, instructional design, and human resource management.

    Assessing Training Requirements

    According to common wisdom, the first phase of training program development for specific skills should be needs assessment (I. L. Goldstein & Ford, 2002). Needs assessment involves determining what knowledge or skills are prerequisite to training activities and what the goals of the training should be. It should be viewed as an ongoing process in which the organization acts proactively, anticipating training needs and any barriers to implementing training. Needs assessment focuses on identifying specific problem areas in the organization and determining whether training is an appropriate solution to the problems.
  • Open and Flexible Learning in Vocational Education and Training
    • Judith Calder, Ann McCollum(Authors)
    • 2013(Publication Date)
    • Routledge
      (Publisher)
    [Learning effectiveness]. Well who for? Is it effective for me? My employer? Whoever pays the fees? Is the outcome the college results? For me personally, [it is] effectiveness in practice. People may come in with preconceived ideas… I question and question the nature [of people’s strong opinions] and see that they have moved…
    [Learning effectiveness] comes from the very start, making sure that the right people get on to the right course. Not setting people up to fail or sending them in the wrong direction. You have to see that the modules are appropriate, that content is set and at an appropriate level for understanding but stretches them and covers exact objectives (FE College tutors).
    Source: Calder et al. (1995)
    Company trainers
    Company trainers also showed a sophisticated appreciation of the problems inherent in trying to define and to measure the learning effectiveness of a specific mode of learning or approach to teaching. The sorts of example which were given included changes in work patterns, work process improvements and acquiring or improving competences in the use of new technology tools. In other words, quite specific changes might be expected from a course which focused on particular job skills.
    The main focus of the training is to help people to do their current job and to integrate them into the department they are working in. Training is provided so that people can do their jobs correctly and with confidence. Training should provide clear guidelines to trainees along with the appropriate skills and knowledge and confidence to do the job (In-company trainers).
    Source: Calder et al. (1995)
    Differences in the purpose of the training did affect aims and, therefore, how effectiveness was perceived. Trainers engaged in induction training argued that if trainees could carry out the job for which they had been trained correctly and confidently then the training was effective. This view of the ‘whole’ job was in turn subsumed by the view that effective training meant that the newly trained person should fit into the department, and should be easily integrated into the workplace. In other words, they should not be seen as a separate and independent employee but as a member of a team, with the success of the team and the contribution of that person to the team being an important aim of the training. Thus, they saw the effectiveness of the training in terms of the improvement in:
  • Business Development in Licensed Retailing
    • Guy Lincoln, Conrad Lashley(Authors)
    • 2012(Publication Date)
    • Routledge
      (Publisher)
    Overlapping with (but distinct from) functional flexibility, investment in training can produce benefits regarding the extent to which employees are willing to accept change. Investment in training employees with a wide range of generic skills, or in the specific needs for change, is likely to reduce resistance to change and to aid transition from one form to another. For example, studies of empowerment initiatives show that the defining feature of a successful initiative is the extent to which employees are prepared for new responsibilities and feel empowered.
    In summary, training is undertaken to achieve business objectives, and it is important that you monitor the effectiveness of training to measure the extent that it has achieved the objectives set, whether this is to:
      improve performance
      increase productivity
      increase sales per transaction
      reduce staff turnover
      reduce staff absenteeism
      reduce wastage and equipment damage
      improve service quality
      improve customer satisfaction
      improve employee satisfaction
      increase staff flexibility
      improve the ability to accept change.
    Box 6.1 provides an example.

    Box 6.1   J.D. Wetherspoons

    The approach to crew training by J.D. Wetherspoons has several features that exemplify best practice in the licensed retail sector:
      all staff are trained, including full- and part-time employees
      training is competence-based
      ultimately, training aims to develop a flexible work force capable of undertaking all jobs
      much training involves learning to do the job in ‘the one best way’
      competencies are defined for each task
      completed training is rewarded through a pay increase.
    Case study evidence from executives confirms that unit managers have a high impact on training levels. Despite a uniform approach and a strong cultural commitment to training at senior levels, some unit managers give training a lower priority, and as a result their units generally have:
  • Training Design in Aviation
    • Norman MacLeod(Author)
    • 2017(Publication Date)
    • Routledge
      (Publisher)
    7 Measuring the Effectiveness of Training

    Introduction

    Now that we have designed our course, we need to consider if it works - or not. In this chapter we will consider the methods by which we attempt to evaluate the effectiveness of our training. You might be forgiven for thinking that we need to actually run the course first before we can say whether it is successful or not. However, at the planning stage, it is worth spending some time thinking about the measures of performance you will need to have in place if you want a convincing answer to the question of effectiveness. In general terms we are interested in the concept of validation - the extent to which training meets declared goals - and evaluation - the extent to which transfer occurs to the workplace. By the end of this chapter you will have:
    • Considered the need for the measurement of effectiveness.
    • Examined methods of internal course validation.
    • Examined methods of evaluating training transfer.
    • Considered the broader implications of change within the organisation.

    Motives for Measuring

    There are many reasons why we might want to try to assess how well our course works. First on the list, and probably the most obvious, is simple interest in whether training interventions do what they are supposed to do. We all know people who have failed a training course and, certainly in the UK, many families spend their summer holiday in the shadow of the impending annual announcement of school exam results. However, was that poor performance on the course, or those disappointing exam results, the fault of the individual or of the course they were given? In the field of CRM training there is a group of pilots, the 'boomerangs' or 'hard-nails', for whom CRM has had zero or even a negative effect. Ignoring any discussion about personality and attitude, the fact remains that the training event did not achieve its goal. Some flight simulators have been found to cause decay in aircraft handling skills rather than an augmentation. Negative training effects are very real and can only be detected through evaluation.
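    A negative training effect of the kind described here can only show up in the data if performance is measured both before and after the training. A minimal sketch of such a check, with invented handling-skill scores:

```python
def mean_change(pre, post):
    """Mean per-trainee change in score; a negative value indicates skill decay."""
    return sum(b - a for a, b in zip(pre, post)) / len(pre)

# Hypothetical scores before and after a block of simulator sessions.
pre = [72, 68, 75, 80, 66]
post = [70, 65, 74, 78, 64]
change = mean_change(pre, post)
if change < 0:
    print(f"Possible negative training effect: mean change {change:.1f}")
```

    In practice a pre/post difference alone is weak evidence; a control group and a test of statistical significance are needed before concluding that the training itself caused the decay.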
  • Complete Training Evaluation

    Complete Training Evaluation

    The Comprehensive Guide to Measuring Return on Investment

    • Richard Griffin(Author)
    • 2014(Publication Date)
    • Kogan Page
      (Publisher)
    Consider how training activity is seen in your organization. If you asked senior colleagues from outside L&D and HR what they thought of training, which of the following words do you think would apply and why:
    •  productivity
    •  cost
    •  soft
    •  performance
    •  must-do
    •  improvement lever
    •  innovation
    •  worthy
    •  strategic
    •  uncertain
    •  low status?
    Ultimately it is in no one’s interest to run training programmes that are not effective. The time and money could be better spent elsewhere. Evaluation is the only way to discover whether training is effective or not.
    COMPLETE EVALUATION: FIVE KEY OUTPUTS
    1   To gain insights.
    2   To improve.
    3   To assess effects.
    4   To engage participants and stakeholders.
    5   To understand wider context.
    Mind the gap – why training evaluation is not working
    In recent years, as I mentioned in the opening chapter, some commentators have gone as far as describing training evaluation as being in ‘crisis’. I think this is a fair comment. We know that almost all organizations want to measure the impact of their investments in training but very few do. Hopefully by now you have been convinced of the need to carry out evaluations but why are so few conducted? Chapter 5 will discuss some of the practical barriers that practitioners face (such as resource constraints and the perception that evaluation is too time-consuming) and how they can be overcome. What I want to discuss here is the issue, or more accurately the ‘gap’, that lies, in my view, at the heart of the current evaluation crisis.
    In 2007 two leading workplace-learning researchers, Holly Hutchins and Lisa Burke, sent a survey to members of the American Society of Training and Development (ASTD). The survey contained 32 statements about training based on the latest research; some stated the opposite of what the research had actually found. Hutchins and Burke asked ASTD members to say whether they thought each statement was true or false. Some of the statements are reproduced in the box below if you would like a go.