Handbook on Measurement, Assessment, and Evaluation in Higher Education
About This Book
In this valuable resource, well-known scholars present a detailed understanding of contemporary theories and practices in the fields of measurement, assessment, and evaluation, with guidance on how to apply these ideas for the benefit of students and institutions. Bringing together terminology, analytical perspectives, and methodological advances, this second edition facilitates informed decision-making while connecting the latest thinking in these methodological areas with actual practice in higher education. This research handbook provides higher education administrators, student affairs personnel, institutional researchers, and faculty with an integrated volume of theory, method, and application.
Frequently asked questions
At the moment all of our mobile-responsive ePub books are available to download via the app. Most of our PDFs are also available to download and we're working on making the final remaining ones downloadable now. Learn more here.
Both plans give you full access to the library and all of Perlego’s features. The only differences are the price and subscription period: With the annual plan you’ll save around 30% compared to 12 months on the monthly plan.
We are an online textbook subscription service, where you can get access to an entire online library for less than the price of a single book per month. With over 1 million books across 1000+ topics, we’ve got you covered! Learn more here.
Look out for the read-aloud symbol on your next book to see if you can listen to it. The read-aloud tool reads text aloud for you, highlighting the text as it is being read. You can pause it, speed it up and slow it down. Learn more here.
Edited by Charles Secolsky and D. Brian Denison
PART I
Measurement, Assessment, and Evaluation in Higher Education
Past, Present, and Future
Introduction
In this opening section of the handbook, there are just two chapters: one authored by Michael Scriven and the other by Robert Mislevy, both highly regarded individuals who have made major contributions to their respective fields of evaluation and measurement. By offering insights into the precedents and resulting developments of the measurement and evaluation fields in higher education, their chapters together provide an important context and set the stage for the topics presented in the remainder of the book. As editors, we feel fortunate to have received their contributions and perspectives.
Scriven opens Part I with Chapter 1, “The Failure of Higher Education to Follow the Standards It Has Established in Methodology and Evaluation.” Using a wealth of personal and historical observation, he argues that the tools of measurement, assessment, and evaluation, developed over the years by those working in colleges and universities, have been neither sufficiently nor effectively utilized by those very institutions. He then identifies courses of action that define a blueprint for progress in higher education.
In Chapter 2, “On Measurement in Educational Assessment,” Mislevy traces the roots of measurement from the early physical sciences through successive stages of development in educational contexts to future uses in what he and others refer to as “situative learning assessment.” By extending the complexity of psychometric models, Mislevy’s chapter addresses what is in store for educators of the future, and how developments in psychometrics can respond to growing needs to assess learning on new types of tasks, so as to reflect the moment-to-moment demands of the learning environment.
1
THE FAILURE OF HIGHER EDUCATION TO FOLLOW THE STANDARDS IT HAS ESTABLISHED IN METHODOLOGY AND EVALUATION
Michael Scriven
Overview
The current situation in higher education gives new meaning to the phrase “the treason of the intellectuals.” Julien Benda, in his book by that title (1927), introduced it to refer to what he saw as a failure of leading humanists of his day to control their crude biases, including nationalism and racism. Here we apply quality standards to both humanists and scientists, by expecting scholars to avoid the old (and some subtler) fallacies committed by their colleagues, particularly those that they themselves commit frequently and persistently.
To some extent, this condition of indefensible “self-protection by shared fallacies” is due to academic isolation through excessive specialization, which makes it hard for insights in, for example, experimental design, to percolate into business management, or those in philosophy to influence cosmology or experimental design. A second face of this cause may be academic territoriality—the higher education version of snobbery—which acts as if only those with a PhD in economics can uncover flaws in “rational choice theory,” or as if only those with law degrees can justify claims of judicial bias.
This chapter, like many others in this anthology, examines a handful of the most important large-scale cases of these failures, and their consequences, within and outside the academy, and makes some suggestions for improving the situation. My credentials as a polymath, essential but perhaps not enough for this task, are 66 years (and counting) of teaching, research, and over 460 published papers or books in departments, or centers, or research areas in mathematics, philosophy, psychology, cosmology, history, computer science, history and philosophy of science, law, education, research, evaluation, critical thinking, informal logic, and ethics. I have also headed centers, programs, and a consulting business, founded and edited journals, presided over large professional associations, served as a special assistant to a university president (University of California at Berkeley), and so on.
The Great Scandal
The scandal is that the research work and results from the field of study targeted by this volume––the foundations of a university’s business as an educational agency––have been massively ignored by the universities.
The whole business of education centers on the process of teaching, and the whole business of teaching is no more and no less than the instructional facilitation of learning. So it absolutely depends on the ability to determine changes in learning (i.e., educational testing) of those it serves. Equally, since changes in learning have to be brought about by some agent––the teacher––the enterprise is absolutely dependent on the selection, retention, and facilitation of teachers, and of those who select and surround them and their students in this endeavor: the administrative and support staff. Those are the simple logical truths which should guide the design and operation of education, and in particular, higher education, which is traditionally and still typically the place where a society solves the research problems associated with teaching, notably the creation and evaluation of learning––its quality as well as its quantity––and of teaching and the management of both.

But what do we find in the practical picture? We find huge organizations of teachers, administrators, and alumni, all religiously devoted to the governing principle of specialization in the professional search for knowledge, that are also religiously devoted to amateurism in their supposedly primary job of teaching. They are sometimes committed to a research role as well, where their specialization makes more sense, but that specialized knowledge does not get them beyond amateur status as teachers. And in teaching they assume, and will defend if challenged, the idea that years of experience and deep knowledge of the subject matter are the crucial criteria of merit, a view they would consider laughable if applied to coaching football, selling cars, or any other of the many skills where success is outcome-dependent.
It would be unreasonable to expect all teaching faculty to be professionally competent in management, assessment, and evaluation at the advanced research level exhibited in this volume, just as it would be unreasonable to expect all family doctors to be familiar with medical research on its myriad fronts. But there’s a second level of professional competence in medicine which we do rightly expect our GPs to master and maintain: namely, familiarity with the listings by the National Institutes of Health and the Food and Drug Administration of approved medications and procedures for conditions commonly encountered and dealt with in general practice. In higher education, this corresponds to, for example, knowledge of how to avoid the usual errors in the construction of multiple-choice and essay tests, and of their range of expectable test-retest and interjudge reliabilities in different subjects. In addition, they should all be familiar with the results of the critically important, competently evaluated, major alternative approaches to higher education instruction: for example, recent developments in math-science education at Harvard, Georgia Tech, and elsewhere. College teachers should also have a good general knowledge of the results from roughly 1,000 meta-studies of pedagogical options at the school level, as presented in John Hattie’s Visible Learning for Teachers (2015), since many of the successes and failures covered there have analogs in the postsecondary sphere. The simple truth is that we now know a huge amount about how to teach and how to tell whether we do it well, but very few of those doing it know this. And the best example of the extent of that ignorance about basic teaching knowledge and skills seems to be the extremely unattractive combination of inertia, laziness, elitism, and incompetence of those who have the power to change colleges; that is, the administration, faculty, alumni, and politicians.
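As a minimal illustration of the second-level competence described above: test-retest reliability is standardly estimated as the correlation between scores from the same examinees on two administrations of the same test. The sketch below computes a Pearson correlation from scratch; the five student scores are invented for the example and stand in for no real data.

```python
from statistics import mean, stdev

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    assert len(xs) == len(ys) and len(xs) > 1
    mx, my = mean(xs), mean(ys)
    # Sample covariance divided by the product of sample standard deviations.
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Hypothetical scores for the same five students on two administrations
# of the same multiple-choice test (a test-retest design).
first_sitting = [72, 85, 90, 64, 78]
second_sitting = [70, 88, 91, 60, 80]

print(round(pearson_r(first_sitting, second_sitting), 3))  # → 0.992
```

A value this high would indicate very stable scores; in practice, the "expectable range" Scriven refers to varies by subject and item format, which is precisely the background knowledge he argues faculty should possess.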
It would be common today to blame government as well, but the fact is that government, especially through the National Science Foundation, has funded much of the best research, including the highly interactive approach referred to earlier. The students, against opposition from all the other parties, have also made at least one major contribution––the student ratings of courses and instruction.1
Looking in slightly more detail at the etiology of the present deplorable state of affairs, it is worth noting an intermediate defensive mechanism that protects the status quo––the ignorance of faculty about pedagogy and about how to evaluate their students and themselves––from demonstrably needed change. We can call this procedure “ghettoization.” Most faculty will recognize it as part of the common culture of the academy: it is the process of consigning matters concerning the discipline of education to the “school of education,” which is regarded as a low-grade division of the academy. “Educational psychology,” in this view, is a dummy’s version of psychology; “educational measurement” the same. Defenders of this view often seek to justify it by pointing at, for example, the Graduate Record Examination (GRE) scores of students entering educational psychology, which allegedly peak at about the minimal requirement for acceptance into the mainstream psychology department. Even if true, the mere use of this example of cognitive bias illustrates the point here: the dismissal of the study of education, the very work on which the college depends for its legitimacy when that legitimacy is under fire. Relativity theory sprang from the ashes of careless thinking about the foundations of physics, and the next revolution in online or for-profit colleges threatens to reduce the present system to ashes if it continues to resist the need to rethink the outdated assumptions on which it is built. However low the entry level of students into education, many of them graduate knowing more about teaching than those who give them grades.
A Blueprint for Progress
It may help in understanding the extreme seriousness of this problem if we contrast the current attitude of intellectual superiority towards educational research and development with what might be considered a defensible response. This could be put under four headings.
1 Faculty Evaluation. Select, retain, and promote on the basis of explicit weighted criteria, whose use is supervised and enforced by deans with the requisite skills for doing so. There can be some variation in the weighting between different colleges, and even between faculty members in a college or department––a rubric for a “research professor” might weight teaching versus research as 1:2, whereas the norm in his or her department might be 1:1 (some state universities), or 2:1 or 3:1 (junior colleges), where the research performance in 3:1 might only require the publication of one or two reviews per annum, and attendance at in-service workshops. Acceptable performance on teaching would include regular experimentation with variations in pedagogy, textbooks, and class activities; and good student ratings (but only on a validated questionnaire with an open-ended response analysis, and only if an 85% return rate is demonstrated). The course outline, calendar of topic coverage, all tests, responses, and grades for them must also be submitted, and will be sampled and occasionally subject to examination and review by senior scholars and validated teachers in the field. Assembling this portfolio is a major chore, because teaching is a major part of the workload.
2 Other parts of the workload, with weights set or approved by senior managers, include the part “service” (to department, college, profession, and community). The portfolio must also contain an essay covering the included research projects, and the relevant or recent evaluation research for this individual’s teaching of this subject, that the candidate has undertaken in the review period, and all class notes/text materials. The substantial load involved in reviewing these portfolios, and the time for study of new appointments and reviews for promotion and termination (and retentions), will be recognized in the “service to the department” category of faculty who do it, when they are reviewed, and may require the appointment of an assistant dean for its skilled supervision in larger colleges.
3 Deans will, in general, not be appointed without several years of experience with this “assistant dean” role in a college run with serious faculty evaluation as described here. Presidents will essentially never be appointed...
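The explicit weighted criteria described in items 1 and 2 above can be sketched as a simple calculation. Everything below is a hypothetical illustration, not taken from the chapter: the component names, the 0–5 rating scale, and the individual ratings are invented; only the teaching-heavy 2:1-style ratio echoes Scriven’s examples.

```python
def weighted_score(ratings, weights):
    """Combine component ratings using normalized weights.

    `ratings` and `weights` are dicts keyed by component name.
    Weights are normalized internally, so only their ratios matter:
    a teaching:research weighting of 2:1 behaves identically to 4:2.
    """
    total = sum(weights.values())
    return sum(ratings[name] * w / total for name, w in weights.items())

# Hypothetical review-cycle ratings (0-5 scale) at a teaching-oriented
# college, with teaching weighted twice as heavily as research, plus a
# "service" component as in item 2.
ratings = {"teaching": 4.5, "research": 3.0, "service": 4.0}
weights = {"teaching": 2, "research": 1, "service": 1}

print(round(weighted_score(ratings, weights), 2))  # → 4.0
```

The point of making the weights explicit, as Scriven urges, is that the rubric becomes inspectable and enforceable: changing a department’s teaching-to-research ratio changes the outcome in a documented, reviewable way rather than through unstated preference.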
Table of contents
- Cover Page
- Handbook on Measurement, Assessment, and Evaluation in Higher Education
- Title
- Copyright
- Dedication
- Contents
- List of Figures
- List of Tables
- Foreword
- Preface: Improving Institutional Decision-Making through Educational Measurement, Assessment, and Evaluation
- Acknowledgments
- List of Acronyms
- Part I Measurement, Assessment, and Evaluation in Higher Education: Past, Present, and Future
- Part II Assessment and Evaluation in Higher Education
- Part III Theoretical Foundations of Educational Measurement
- Part IV Testing and Assessment: Implications for Decision-Making
- Part V Test Construction and Development
- Part VI Statistical Approaches in Higher Education Measurement, Assessment, and Evaluation
- Part VII Approaches to Evaluation in Higher Education
- Part VIII Approaches to Assessment in Higher Education
- Part IX Issues in Assessment and Evaluation in Higher Education
- Notes on Contributors
- Index