Performance Measurement and Theory

About This Book

In this volume, first published in 1983, the editors aim to achieve an understanding of performance from a variety of theoretical perspectives. The papers in this volume will not only spur further research, but will also provide an opportunity for some careful consideration of how performance is measured in various applied settings. The book is divided into four major areas: intraindividual issues, interindividual/organizational dynamics, methodology, and philosophies. This title will be of interest to students of business studies, psychology, and human resource management.


Information

Editors: Frank Landy, Sheldon Zedeck, Jeanette Cleveland
Publisher: Routledge
Year: 2017
ISBN: 9781351814195
Edition: 1
Length: 408 pages
Language: English

1. Introduction

Frank Landy
Sheldon Zedeck
Performance and its measurement are playing increasingly greater roles in the conduct of human affairs. There are as many kinds of performance as there are occasions for performance. The occasion might be a classroom and the performance that of a teacher or student. The occasion might be a satellite launching and the performance might be the transmission of pictures back to earth. The occasion might be a session of lovemaking and the performance might be sexual gymnastics. The occasion might be a work setting and the performance might be the productivity of a single worker. Each of these is an instance of a situationally defined expectation of “superior” performance.
Although we might have no problem counting units of production or quality of picture transmission, it might be somewhat more difficult to agree on the definition and measurement of sexual or classroom performance. Thus, in the latter two instances we are faced with a need to interpret what may be ambiguous information. We may even be placed in a position of negotiating a definition of performance. In either case, measuring the performance in question presents some problems.
This book deals with problems in performance measurement. You may have encountered some of these problems before—problems like bias in instruments, bias in evaluators, unreliability, changing definitions of “success.” You may never have considered some of these problems before—problems like the role of capitalism in performance measurement, the effect of ignoring poor performance on the gross national product, and the difficulty of predicting performance levels in situations that have never occurred, and that one hopes will never occur.
You are probably familiar with the traditional reasons for measuring performance. The measurement of individual performance allows for rational administrative decisions at the individual employee level. It also provides the raw data for the evaluation of the effectiveness of such personnel-system components and processes as recruiting policies, training programs, selection rules, promotional strategies, and reward allocations. Finally, it provides the foundation for behaviorally based employee counseling. In this counseling setting, performance information provides the vehicle for increasing the satisfaction, commitment, and motivation of the individual. Performance measurement allows the organization to tell employees something about their rates of growth, their competencies, and their potentials. There is little disagreement that, if done well, performance measurement and feedback can play a valuable role in effecting the grand compromise between the needs of the individual and the needs of the organization. The key phrase here is “if done well.” To do it “well,” we need to understand what “it” is. It is this issue—the understanding—that we addressed in the conference “Performance Measurement: Directions for the Future,” held from November 6 to 8, 1981, in Dallas, Texas, and that we address in this book. We identify some of the boundary conditions for performance measurement. And we point out some of the obstacles to doing it well.

A LITTLE HISTORY

The measurement of performance has occupied the attention of applied psychologists for several decades. The graphic rating scale was introduced in 1920 as an attempt to capture something of the more “impressionistic” characteristics of energy expenditure and its effectiveness. Frederick W. Taylor was more than complete in defining what performance was for Schmidt, the pig-iron handler. The Gilbreths were quite eloquent in their systematic articulation of the language and units of work measurement.
In the 1930s and 1940s, a good deal of research was devoted to the format, method, and physical characteristics of performance-measurement systems. Questions regarding the nature of anchors, the definitions of areas of performance, and the physical layout of rating scales were all very popular. Even at that “early” stage, the issue was not whether performance would be measured; the only question was how.
During this period, there were excursions into psychometric theory, but these were usually only indirectly related to the more applied aspects of performance measurement: They generally were concerned with developing new statistical techniques, uncovering interesting characteristics of transformed distributions, or looking at some basic characteristics of sensation or perception. Variations on Thurstone's Law of Comparative Judgment accomplished all of these purposes. But these studies dealt only peripherally with the issue of measuring human performance in work settings.
In 1952, the late Robert Wherry produced some reports for the Army that directly addressed the rating process. He deduced certain logical relationships among various components of the rating sequence. He suggested that these components included observation, storage, retrieval, and judgment. Further, he stated a number of specific corollaries to the basic propositions. These corollaries spelled out the anticipated effects of altering various aspects of the rating process. They were based to some extent on logical relationships, to some extent on previous psychometric research on these components, and to some extent on classical test theory. To our misfortune, the reports virtually disappeared. The only people aware of them were those who had professional contact with Wherry, either through collaborative research or as Ohio State graduate students.
For the next 20 years or so, research on performance assessment continued to wallow in the quagmire of methodology: scales that went up rather than down; scales with the high end on the left rather than on the right; scales with letters rather than numbers. In a sense, if one examined the research conducted in performance evaluation, it seemed that it was just an engineering problem: It would only be a matter of time before a better mousetrap was found. Just as spirits were flagging, the Behaviorally Anchored Rating Scale (BARS) arrived to save the day. For more than a decade, researchers luxuriated in developing BARS systems for measuring the performance of everyone from firefighters and police officers to grocery-store packers, providing yet another example of the mission-oriented research so common in applied experimental psychology. These scales were devoted to satisfying a definite need—the measurement of performance in a particular occupational area. Unfortunately, our knowledge of rating and raters did not develop appreciably during this period.
Partly as a result of changes within psychology as a discipline, and partly as a result of disenchantment with BARS approaches to performance measurement, researchers began examining variables of a more dynamic nature in the hope of understanding the performance-assessment process. This involved reexamining the traditional notions of error; exploring the relevance of information-processing research, models of person perception, attribution, and implicit personality theory; studying nontraditional individual-difference variables (such as cognitive complexity, perseveration, etc.); and considering a host of other macro and micro variables that might conceivably help explain the rating process. In the last several years, a number of review papers have called for a broader consideration of the phenomena that constitute interpersonal evaluation. These reviews have suggested borrowing from other subdisciplines and other allied fields of behavioral science. There does seem to be some agreement that a moratorium on rating-scale development research might be in order. It was from this historical context that the structure for the “Performance Measurement” conference and the present text emerged.

LOGICAL CONSIDERATIONS IN THE CONFERENCE STRUCTURE

As a result of independent reviews completed by Landy and Zedeck, two of the editors of this volume, several areas had been identified as “underrepresented” in the research literature. Among these areas were the effect of organizational and suborganizational variables on performance definition and assessment, the effect of performance-measurement systems and definitions on organizational health and well-being, the role of motivation in the behavior of evaluators, and the effect of values and attitudes on the nature of measurement and definitional systems. It was decided that the conference would be used to sensitize researchers to areas as yet unexamined rather than to showcase what was already known about performance measurement. In a sense, the conference was to be an attempt to draft a research agenda for the next decade.
Having identified the thrust of the conference, it was now necessary to determine exactly how time would be allocated and topics developed. Preliminary discussions had been held with the Office of Naval Research (ONR) and the Office of Personnel Management (OPM) regarding financial and logistic support for the conference. Both of these agencies were very interested in a conference of this sort for many reasons. The Office of Personnel Management was charged with the responsibility of determining personnel procedures for the massive work force of federal employees. Further, the Civil Service Reform Act had heightened the pressure to improve performance-measurement techniques for the federal work force. The Office of Naval Research was, by definition, concerned with the concept of performance readiness in the military context. But in a more general sense, both of these agencies had a long and commendable record of supporting basic research in psychological processes, including performance measurement.
In the early stages of conference planning, a meeting was held to draft a preliminary set of topics for the conference. This meeting was attended by Bert King from ONR; Jeff Kane, Magda Colberg, Marianne Nester, and Frank Schmidt from OPM; C. J. Bartlett; and Landy and Zedeck, two editors of this volume. At that meeting, a preliminary list was developed for topical consideration. Subsequently, the editors discussed topics among themselves and with Jeff Kane from OPM. The result of these discussions was a set of twelve broad areas of concentration for the conference. These areas are as follows:
1. Political and philosophical considerations in performance assessment.
2. Power distribution within organizations.
3. The effect of organizational characteristics on performance-measurement systems.
4. The effect of performance-measurement systems on organizational characteristics.
5. The effect of individual performance on organizational structure and process.
6. A cognitive view of performance measurement.
7. A social personality view of performance measurement.
8. The supervisor/subordinate dyad.
9. Performance evaluation as a motivated event.
10. Performance evaluation and definition in military settings.
11. Objective versus subjective performance measurement.
12. Conceptualizing performance through modeling.
As can be easily seen, this was a list with awe-inspiring scope. It would be virtually impossible to satisfactorily cover even one of these topics in a several-day conference, let alone all of them! It was decided that these headings would be used as labels for conceptual categories. This would allow each contributor to determine how to flesh out the concept. In a sense, the contributors were asked to give papers that were examples of the salient issues in particular conceptual categories. The papers that appear in this volume are these examples. The contributions are grouped into four broad categories: organizational issues, individual-difference issues, methodological issues, and sociopolitical issues.
The contributors were chosen carefully, and they all have certain characteristics in common. First, they are scholars. They have demonstrated through their research and writing that they have the capacity to expand areas for consideration, not simply to fill in holes. In addition, for the most part these contributors had not devoted their major research attention to the issue of performance definition and assessment, at least not in the broad context of suggesting research priorities. Finally, each of the contributors had expressed a willingness to listen and discuss what he or she heard, a rather demanding task for individuals who are more often the subjects than the objects in the communication paradigm.
The contributors were also dissimilar in many respects. In fact, few of them agreed with each other about anything. Each of them brought a unique research history to the task. They represented many different disciplines including anthropology, sociology, law, and psychology. Within the psychology group, there were representatives of various subdisciplines including industrial/organizational, social, cognitive, differential, and personality. It was this broad and heterogeneous mixture that the organizers hoped would produce the bridges that were so obviously absent from much of the earlier research in the area of performance assessment.

STRUCTURE OF THE CONFERENCE AND THE PRESENTATIONS

Because we felt that a structured response to each of the twelve contributions might add considerably to the value of that contribution, and because this effort seemed much too intimidating for us, we identified a discussant for each major contribution. These discussants were chosen on the same basis as were the major contributors. They came from varying disciplines, liked to argue, and were recognized scholars. As a matter of fact, several discussants were originally approached as contributors and declined because of the time commitment required to produce a major theoretical statement in an area with which they were only peripherally involved. Because they expressed an interest, optimism, and curiosity, they were natural choices for the somewhat different role of discussant.
The discussants were given several alternative courses of action to pursue. If they liked, they could present a “minority report” on the topic. In other words, they might agree with what the major contributor said, but still feel that certain other things, which required some discussion, were left unsaid, or they might feel that the contribution was not necessarily “wrong,” just irrelevant to the topic at hand. A second possibility was to disagree with the substance of the major contribution and point out weaknesses while suggesting alternative considerations. A final alternative was to take the major contribution as a point of departure and simply extend the comments of the contributor. Examples of all three approaches appear in this text. We leave it to you to determine which discussion represents which alternative. It is not always obvious.
The process of paper and discussion production was rather simple. Major authors forwarded drafts of their papers to discussants prior to the conference. This provided the discussants with some opportunity to prepare formal comments. In some instances, discussants and contributors had interactions of substance about their respective contributions prior to the conference.
After the conference, final drafts were submitted to the volume editors. The papers that appear in this volume are those final postconference drafts. Major presenters were given the option of formally replying in print to discussants if they so wished. Some contributors took advantage of this opportunity and others felt no need to make additional comments. The “Reply” sections of the chapters represent the formal written replies rather than comments made at the conference immediately following the discussants' presentations. Although it must have been tempting for several major contributors to modify their last drafts in an effort to mitigate the effects of their discussants, no one took that unfair advantage.
During the course of the conference presentations, there was a good deal of spontaneous discussion about the various topics. This discussion involved not only presenters and discussants but also members of a motivated, well-prepared, and critical (in the nicest sense of the word) audience. There were approximately 80 “observers” who attended the 2½-day conference. These observers came fro...

Table of contents

  1. Cover
  2. Half Title
  3. Title Page
  4. Copyright Page
  5. Table of Contents
  6. Preface
  7. 1. Introduction
  8. PART I: ORGANIZATIONAL CONSIDERATIONS
  9. PART II: INDIVIDUAL CONSIDERATIONS
  10. PART III: METHODOLOGICAL AND MEASUREMENT CONSIDERATIONS
  11. PART IV: SOCIOPOLITICAL CONSIDERATIONS
  12. Author Index
  13. Subject Index