Measurement Madness

Recognizing and Avoiding the Pitfalls of Performance Measurement

About This Book

A clearer, more accurate performance management strategy

Over the past two decades, performance measurement has profoundly changed societies, organizations and the way we live and work. We can now access incredible quantities of data, display, review and report complex information in real time, and monitor employees and processes in detail. But have all these investments in collecting, analysing and reporting data helped companies, governments and people perform better?

Measurement Madness is an engaging read, full of anecdotes so peculiar you'll hardly believe them. Each one highlights a performance measurement initiative that went wrong, explains why and – most importantly – shows you how to avoid making the same mistake yourself.

The dangers of poorly designed performance measurement are numerous, and even the best how-to guides don't explain how to avoid them. Measurement Madness fills in the gap, showing how to ensure you're measuring the right things, rewarding the behaviours that deserve rewarding, and interpreting results in a way that will improve things rather than complicate them. This book will help you to recognize, correct and even avoid common performance measurement problems, including:

  • Measuring for the sake of measuring
  • Assuming that measurement is an instant fix for performance issues
  • Comparing sets of data that have nothing in common and hoping to learn something
  • Using targets and rewards to promote certain behaviours, and achieving exactly the opposite ones

Reading Measurement Madness will enable you to design a simple, effective performance measurement system, which will have the intended result of creating value in your organization.

Information

Authors: Dina Gray, Pietro Micheli and Andrey Pavlov
Publisher: Wiley
Year: 2014
ISBN: 9781119960515
Edition: 1

Part I
Introduction

Chapter 1
The Road to Insanity

Performance measurement is everywhere: when we look at companies' financial statements, read reports on average waiting times in hospitals, carry out performance appraisals at work, or look at schools' records when deciding where to educate our children. The practice of collecting, analyzing and reporting information on the performance of individuals, groups, organizations, sectors and nations has been around for a long time – but what difference is it making?
You may be intrigued by, enthusiastic about or frustrated with measurement, the behaviours it engenders and the results it leads to. Perhaps you are a proponent of the old adage that “what gets measured gets done”, or you may take the alternative view that measurement only leads to dysfunction. Maybe you have used a performance report to make an important decision only to be let down by the result, or perhaps you have been pleasantly surprised when customer ratings of restaurants have helped you have an extraordinary dining experience whilst on holiday. Whatever your starting position, this book will not only present real-life stories of madness brought about by measurement, but will also discuss ways to navigate around the pitfalls so that you may, once again, use measurement as a means to improve your organization's performance.
To set the scene and outline many of the issues covered in this book, we would like to describe a fictional scenario, based on the many real events we have observed in practice, about a manager who is tasked with implementing a strategic performance measurement system without fully understanding the consequences of such a programme. Let us introduce you to Mike, his hopes, ambitions and challenges.
My name is Mike, and I am a senior manager in a relatively large organization. Today is an important day for me as we are having our annual managers' meeting and I am leading it. The theme for this year's conference is “Performance Excellence”. All of the directors and the top managers are here, and we are going to have a series of short presentations on the work undertaken over the past year.
But firstly, let me tell you how all of this has come about. Just over 18 months ago the Board recognized that, as competition was becoming more intense and regulations were becoming ever tighter, we had to improve our performance in all areas of the business. The company therefore commissioned an external firm to carry out a review and they concluded that we were lacking a “comprehensive approach to measuring and managing performance”. Essentially, we did not really understand how efficient and productive we were; different units seemed to be run independently from each other; and employees were not clear about the organization's goals.
Shortly after the report was released I was tasked to lead the “Performance Excellence” project, with the aim of introducing a performance measurement and management system throughout the entire organization. It was hoped that the project would completely change the way in which we set and communicate our objectives; how we measure and report performance; and how we appraise and reward people. Today, after a year's work, it is time to check what progress we have made, both in terms of our achievements and in relation to the implementation of the system itself.
At this point in time, before the conference kicks off, I am feeling a little restless, but quietly confident. The lead up to today has been somewhat stressful, and, although I've spoken to most of the people who are attending today, I am not entirely sure what each speaker will say. The organization has always promoted a culture of openness and constructive criticism and I am looking forward to hearing my colleagues' presentations.
I kick off with a short ice-breaker to set the scene, explain what has been done over the past 12 months, and outline future steps. I see a lot of people nodding, which is encouraging as this project has been a priority within the company, and everyone is aware of what is going on. Then I hand over to our CEO. She seems positive about our results, but states that we have to do more as other companies are catching us up and we can't afford to be complacent. Referring to the Performance Excellence project, she says that we have made progress but that she is aware of some “question marks” that we should openly address today. I wonder what those “question marks” could be …?
The CEO concludes and it is now the turn of the Chief Financial Officer. Our financial results appear to be in line with forecast and it seems that we have even had a few unexpected successes. Referring to the Performance Excellence project, he reports that most people tend to regard indicators as relevant or highly relevant, which is music to my ears, but, somewhat unexpectedly, although the number of indicators has increased, the extent to which information is used to make decisions appears unchanged. He continues by saying that despite our immense efforts to provide a balanced view of the company through the introduction of more customer- and process-related indicators, financial indicators are still considered to be the most important ones. This is rather disappointing, even though he concludes by adding that it is just a matter of time before we see more profound changes.
I have to say I feel a bit of relief on hearing his conclusions, but my relief is short lived when, from the floor, one of our regional directors stands up and addresses the executives: “When the Performance Excellence project began we were promised that little effort would be required on our side. Instead, my people have ended up spending a lot of time collecting all sorts of data, yet nobody in headquarters seems to give two hoots about it. I presented our data at the meeting in June which resulted in us spending half an hour arguing over two irrelevant figures, then about the reliability of the data themselves, and we finally took a decision that bore no relation to the data. What was the point of spending so much time collecting and analyzing it?” Before the CFO can utter a response I intervene, pointing out that this should not be happening and that things are changing. From the look on his face I don't seem to have persuaded the regional director, but at least we can now move on to the next presenter.
The Supply Chain Director is up next, and he is renowned in the organization for his obsession with maximizing efficiency. Trained in Six Sigma and brought up on lean thinking, he has been one of the strongest supporters of the Performance Excellence project. His presentation focuses on operational improvements made in warehousing. After a short introduction he goes through a sequence of histograms and graphs reporting the comparative performance of our warehouses. One after the other, in his monotone voice, he presents figures on the number of cases picked per labour hour, inventory accuracy, warehouse order cycle time and finally concludes with a ranking of all of the warehouses. The league table suddenly wakes everyone up. I was not aware of this ranking, but what harm can it do? If anything, I think to myself, it should spur on a bit of competition among the regional and site directors. However, as he continues on down the list, an ever-increasing hum emanates from the audience. Some people are muttering that it is not clear how the comparisons were made and others question how he calculated the final scores. For me, this is a typical reaction of people who don't want to hear that they are doing worse than others. However, as I consider what he is saying, something starts to bug me: actually not all of the warehouses can be considered to be the same, because we are not using them in the same way. Some of them are inefficient, but that is because they have to handle peak demands; they have to work below capacity for a reason. I make a note that I will have to speak to the Supply Chain Director.
After a short break the conference resumes and the R&D director makes his way to the podium. I should point out that before the Performance Excellence project the organization had unsuccessfully tried to get a grip on this unit, but all attempts to monitor their performance had failed miserably. We had developed a number of measures, such as the number of patents filed, but we had never felt that this was a good indication of the unit's output. What about the R&D workers' productivity? What about forcing them to meet stricter deadlines? Or calculating the unit's financial contribution to the company? I am in no doubt that if we put more effort into more sophisticated measures we will certainly have a more accurate picture and should see an improvement in their performance.
As a bit of background, the R&D director was appointed nine months ago; he was previously in sales, where he achieved great results by introducing individual performance targets and regular, in-depth, performance reviews. Last week when I spoke to him he told me that a few of his R&D people were upset with senior management, although he did not elaborate on why, but he was hoping to resolve the issues very soon. So I am hoping for a critical, but positive presentation. What I get instead is a long list of complaints. He tells everyone that, shortly after he was appointed, he implemented a similar system to the one he had used in the sales department. However, while some of his new team showed their support, the majority of them demonstrated resistance; one could even say sheer defiance. A typical response was “Our performance cannot be measured, we are not sales people!” While he goes through his slides I can't help thinking that people in R&D have had it too good for far too long, and that this research-oriented culture places too low an importance on business performance. To my dismay the presentation ends on a fairly disappointing note with the R&D director suggesting that perhaps the performance measurement system should be designed and implemented differently depending on a unit's tasks and its culture. I am flabbergasted: that's madness, we would end up with dozens of different systems.
It is now the turn of the Sales Director for Western Europe. She says she will be brief: one slide and only a five-minute talk, because “the numbers are the numbers, and they speak for themselves”. After a short preamble, she puts up one chart: her team's improvement is incredible! Sales in her region appear to have doubled over the past year. Can this be right? I look frantically through the draft report I received from Finance two days ago and I can only make out a 5% increase in total company sales over the past three quarters. I don't have a breakdown for each of the geographical areas, but I can't believe we achieved such a positive result in the saturated market that is Western Europe. While the Sales Director performs an act of self-congratulation, thanking all her team as if they were in the room, I peer at the graph more carefully. It occurs to me that the y-axis doesn't start at zero; so I squint a bit more and can now see some of the blurred numbers: the real increase is less than 10%! This is really annoying, as we had said that we would be using this forum to openly discuss failures and successes, and then people turn up with lame graphs just to show off. I will have to talk to her too.
The podium is now being prepared for our Chief Operating Officer. At the beginning of this year we made the news for a major fault in one of our products: after several customers reported having the same problem, we were forced to undertake the largest recall in the company's history. Not a happy time. Since we couldn't really blame the suppliers, this hiccup triggered a series of internal arguments between our design and production units. Eventually, the incumbent COO was replaced by his deputy. Before leaving, the old COO wrote an angry letter in which he accused the Board of introducing a perverse system of targets and rewards, which he labelled “bribes” and which had completely corrupted the ethics of his team. According to him, people were aiming to achieve tighter deadlines and reduce lead times, but only because they had been promised a short-term financial benefit. This, rather than incompetence or flaws in the processes, was the main reason for the product recall. To me that letter just felt like he was making excuses, but quite a few people at the senior level of the company showed support for his sentiments; so much so that I thought the whole Performance Excellence project was going to be canned. Thankfully, the new COO appears to have opted for a less controversial theme for his presentation and is showing some general trend data, without mentioning performance targets. Phew! There are still strong feelings in the company about what happened and I wouldn't want to have a heated debate right here and now.
The last presentation before my final wrap up is delivered by the CEO of our IT subsidiary. Five years ago she set up an IT firm to provide customized software to companies in our sector. This firm proved so successful that, two years ago, we decided to acquire it. Because of differences in our two histories, tasks and supposed company culture, they have always been given lots of freedom. At the beginning of the Performance Excellence project we discussed whether to introduce elements of the measurement system there, but in the end we decided to run a little experiment instead. In one half of the company things remained the same; in the other half, people capable of achieving “stretch targets” were offered a substantial bonus, up to 30% of their base pay, if they met those targets. In the beginning, a few of the employees seemed unhappy about the introduction of targets; however, since then, sentiment appears to have changed.
Somewhat surprisingly, the presentation starts on a positive note about the Performance Excellence project. Comparative data, recorded in the first two quarters, suggest that during the first six months the people who were striving for a bonus achieved higher levels of performance in terms of both efficiency and quality. This is great; finally we can see that when people are measured, managed and rewarded appropriately they do a better job...

Table of contents

  1. Cover
  2. Title Page
  3. Copyright
  4. From the Authors
  5. Part I: Introduction
  6. Part II: Performance Measurement
  7. Part III: Performance Management
  8. Part IV: Conclusions
  9. References
  10. End User License Agreement