Impact Evaluation of Quality Management in Higher Education

About This Book

This volume works towards overcoming the lack of systematic impact evaluation in higher education, particularly the lack of analyses that are not restricted to data gathered ex post and to expert assessments. Since (higher) education is more important than ever in knowledge societies, high priority should be ascribed to quality management (QM) in higher education institutions (HEIs). Consequently, impact evaluation of QM effectiveness is indispensable because it generates the knowledge required for quality (management) improvement.

The introductory chapter elucidates the motivation and objective of impact analyses of QM in HEIs and provides an overview of the volume's other contributions. One chapter reflects on success factors and intended and unintended effects of QM, while another analyses more discursive ways of evidence-informed guidance of QM policies that are complementary to rigorous impact studies. Five chapters investigate QM effectiveness in HEIs by ex-post and simultaneous impact evaluation in European case studies, including assessments by students, teachers, quality managers, and institutional leadership. The case studies comprise universities from Germany, Spain, Finland, and Romania. The final chapter reports a SWOT analysis of impact evaluation of QM in HEIs, which is suggested as a tool for bridging the notorious gap between the demanding methodology of impact evaluation and its proper implementation.

This book was originally published as a special issue of the European Journal of Higher Education.


Information

Publisher: Routledge
Year: 2020
ISBN: 9781000021226
Edition: 1
Subtopic: Operations

Impact evaluation of quality management in higher education: a contribution to sustainable quality development in knowledge societies

Theodor Leiber

ABSTRACT
Since (higher) education is more important than ever in knowledge societies, high priority should be ascribed to quality management in higher education institutions and its effectiveness. However, there is still a lack of systematic evaluation of the latter, particularly of analyses that are not restricted to data gathered ex post and to expert assessments. The articles in this special issue contribute to overcoming these shortcomings in several ways: one article reflects on success factors and intended and unintended effects of quality management; another analyses more discursive ways of evidence-informed guidance of quality management policies that are complementary to rigorous impact studies. Five articles investigate quality management effectiveness by ex-post and simultaneous impact evaluation in European case studies, including assessments by students, teachers, quality managers and leadership. Finally, a SWOT analysis of impact evaluation of quality management in higher education institutions is carried out and suggested as a tool for bridging the notorious gap between methodology and implementation.

The importance of quality higher education for the future of knowledge societies

One cannot help but notice that we are living in an era of upheavals and crises that affect all areas and forms of both human life and extra-human nature. We experience, often painfully and in different parts of the globe for varying economic, political, cultural, and religious reasons, tensions that threaten social security and justice, sustainable economies, democracy and human rights, the political world order and even the survival of nature. However, the globe and the people living on it are complex and diversified enough that the above statement does not logically contradict the observation that, at the same time, we are undeniably living in times of permanent technological innovation and knowledge enhancement.
In any case, it is hard to imagine how these intricate and serious ambivalences and problems could be successfully approached, let alone solved, without what has been called ‘enlightened Enlightenment’, e.g. in the sense of Ulrich Beck’s ‘reflexive modernization’, ‘second modernity’, and ‘risk society’ (Beck 1992). This presupposes that education and the development of knowledge societies play a core role, the former being ‘one of the essential pillars’ (Afgan and Carvalho 2010, 41) of the latter. In fact, education must be argued for in an even broader sense, since it is the universal human right to reflected self-determination and development of any individual, including her pursuit of happiness (UNGA 1948, Art. 26, paragraph 1). Consequently, an extensification and intensification of broadly construed education and vocational training is needed which is driven by the developmental goals and demands of the Knowledge Society of the twenty-first century. Among these are Education for All, knowledge-based employability, the increasing complexity of educational programmes, and the knowledge-based social legitimation of political decisions, to name but a few (cf. Innerarity 2012; Marginson 2006; VÀlimaa and Hoffman 2008; van Weert 2006). Against this backdrop, higher education institutions (HEIs) have three main purposes: they shall serve the maintenance, deepening and spreading of freedom and democracy; knowledge and innovation; and the sustainable behaviour of humankind and conservation of the environment (cf. Giroux 2002; Hamlyn 1996; Harkavy 2006; Husén 1991). In order to accomplish these tasks, HEIs must be institutions of higher learning and teaching that shift the frontiers of knowledge on the basis of an adequate range of study and research subjects and convey the knowledge of democratic values and goals. In doing so, they must always strive for high-quality outcomes in research, teaching and learning, and transdisciplinary knowledge transfer.

The need for impact evaluation of quality management in higher education

Deduced from this fundamental role of HEIs is the relevance of quality management (QM) and its impact evaluation in higher education (Leiber, Stensaker, and Harvey 2015). Such knowledge is needed for a variety of reasons adding to those mentioned above. For example, massification, the introduction of economy measures and systemic public underfunding, national and global competition, as well as (experienced) accreditation and evaluation overload may sometimes lead HEIs, education ministries and governments to express a need for particularly effective and efficient quality assurance (QA) procedures. The need for impact evaluation of QM in HEIs is further demonstrated, and especially stressed, by severe organizational, epistemological and ethical challenges which contemporary (public) HEIs face in the areas of quality enhancement of their core tasks: management of their scarce resources, creativity in knowledge generation and dissemination, and improvement of educational models. It can hardly be imagined how these challenges could be approached, and possibly solved, without some sort of systematic QM, including various forms of external and internal evaluation.
However, the call for a better understanding of the effects, benefits and costs of QM in higher education has so far been served by relatively few QM impact analyses. Moreover, the great majority (if not the totality) of the investigations to date focus exclusively on impact analyses carried out ex post factum of the QA procedures, often at a large temporal distance and with corresponding error-proneness and recall problems. Furthermore, up to now students’, teachers’ and QA staff’s assessments have been widely neglected in impact analyses of QA; instead, the focus has been on the opinions and assessments of institutional leadership and evaluation peers. Over and above these desiderata, QA agencies and HEIs are increasingly seeking professionalization and competence enhancement in impact analysis and meta-evaluation. In doing so, they want to learn more about the effectiveness, efficiency and options for the strategy and innovation of QM.
Against this background, there is a need for more theoretical reflection and empirical research on impact evaluation of QM in HEIs, including the identification and comparison of relevant impact evaluation designs and reliable methodologies. To meet that need, this special issue contains three articles (Beerkens 2018; Brennan 2018; Leiber, Stensaker, and Harvey 2018), in addition to this introductory article, which focus on theoretical and methodological considerations, and five articles with a focus on empirical case studies of impact evaluation of QA carried out in HEIs from Finland, Germany, Romania and Spain. Four of the latter are based on a before-after comparison approach (Bejan et al. 2018; Jurvelin, Kajaste, and Malinen 2018; Leiber, Moutafidou, and Welker 2018; Leiber, Prades, and Álvarez 2018), which is briefly characterized below, while one study concentrates on a regression model to explain QA effectiveness through structural variables as perceived by quality managers (Seyfried and Pohlenz 2018).

Methodological framework of impact evaluation of quality management in higher education institutions: some basic considerations

To facilitate the discussion and understanding of the studies presented in this special issue, and to clarify their connections, similarities and commonalities, a methodological framework of impact evaluation of QM in HEIs is outlined in some detail. The framework consists of key guidance issues for impact evaluation and the characteristics of the before-after comparison approach. In addition, the so-called attribution problem is contextualized and the types of available data are described.

Key guidance issues for impact evaluation

A few years ago, Frans Leeuw and Jos Vaessen provided evaluation theorists as well as practitioners with nine methodological and management-related key guidance issues for conceptualizing, designing, and implementing impact evaluations:
(1) ‘Identify the type and scope of the intervention.’
(2) ‘Agree on what is valuated.’
(3) ‘Carefully articulate the theories linking interventions to outcomes.’
(4) ‘Address the attribution problem.’
(5) ‘Use a mixed-method approach.’
(6) ‘Build on existing knowledge relevant to the impact of interventions.’
(7) ‘Determine if an impact evaluation is feasible and worth the cost.’
(8) ‘Start collecting data early.’
(9) ‘Front-end planning is important.’ (Leeuw and Vaessen 2009, x).
In the five empirical case studies presented in this special issue, the above key guidance issues were used as a guideline.
Issue (1) was fulfilled because the QA interventions were well-established QA procedures, such as programme accreditation, institutional accreditation and programme evaluation. Issue (2) was observed in the sense that it was clarified what the intended QA effects were. Issue (6) was also realized as far as possible, since the available knowledge about the QA procedures and their potential efficacy, as well as knowledge about other potentially efficacious activities and interventions, was used. Furthermore, issue (8) was met in the sense that data were collected in a timely manner in all case studies; in particular, in the before-after comparison approaches the time schedules of the baseline, midline and endline surveys were adjusted to the schedules of the QA procedures (Table 2).
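To illustrate how such baseline-versus-endline survey comparisons might be computed in practice, the following minimal sketch is offered. It is not taken from the case studies; the file name, column names and the Likert-type indicator are hypothetical assumptions.

```python
# Minimal sketch of a before-after comparison of a perceived-quality indicator.
# Hypothetical input: one row per respondent per survey wave, with columns
# "wave" ("baseline", "midline", "endline") and "perceived_quality" (1-5 Likert scale).
import pandas as pd
from scipy import stats

df = pd.read_csv("qa_survey_waves.csv")  # hypothetical file name

baseline = df.loc[df["wave"] == "baseline", "perceived_quality"].dropna()
endline = df.loc[df["wave"] == "endline", "perceived_quality"].dropna()

# Difference of wave means; Welch's t-test because the respondent groups
# (and their variances) may differ between the survey waves.
mean_change = endline.mean() - baseline.mean()
t_stat, p_value = stats.ttest_ind(endline, baseline, equal_var=False)

print(f"Mean change (endline - baseline): {mean_change:.2f}")
print(f"Welch t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```

Such a simple comparison only quantifies observed change between survey waves; it does not by itself solve the attribution problem discussed below.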
Issue (9) was fulfilled since the empirical case studies followed ex-ante plans comprising internal meetings, public conferences, publications and workshops, based on work plans with milestones and a division of work tasks and responsibilities. Independence, or impartiality, of the evaluating teams vis-à-vis the stakeholders with whom they were collaborating was also a sub-theme of issue (9): in the projects, a dialogue-based critical-friends approach was utilized, i.e. the impact research groups transparently discussed and decided all methodological and managerial aspects of their projects independently, without interference from their home institutions, formal superiors, or national or regional governments. Furthermore, concerning independence it seems appropriate to state that the impact analyses presented in this special issue implemented participatory approaches, because stakeholders – namely students from participating programmes, teaching staff, leadership and QA staff from participating universities, and QA agencies’ staff – were involved in the determination of evaluation objectives and relevant indicators as well as in data analysis and the publishing of results (Bejan et al. 2015; 2018; Damian, Grifoll, and Rigbers 2015; ICP 2016; Jurvelin, Kajaste, and Malinen 2018; Kajaste, Prades, and Scheuthle 2015; Leiber, Moutafidou, and Welker 2018; Leiber, Prades, and Álvarez 2018; Leiber, Stensaker, and Harvey 2015).
The attribution problem, issue (4), was also addressed (see the analysis below). Issue (5), i.e. mixed methods in a strict sense, could only be applied in one of the five empirical case studies (Seyfried and Pohlenz 2018), while it was not applicable in the other four case studies simply because quantitative data were not available (see the discussion below).
The importance and meaning of issue (3) resides in the fact that it is an ingredient of plausible explanations for the changes observed, ideally identifying causal mechanisms connecting interventions with the consequences generated by the interventions. Therefore, it is assumed that the intervention design is based on a ‘theory’. The required articulation of such ‘theories’, particularly when they are implicit or rather weakly developed,
can use one or more pieces of evidence – ranging from the intervention’s existing logical framework, to insights and expectations of […] [well-informed] stakeholders [and experts] on the expected way target groups are affected, to theoretical and empirical research on processes of change or past experiences of similar interventions. (Leeuw and Vaessen 2009, xii)
Such assumptions on the effects of an intervention have to be tested, either ‘by carefully constructing the causal “story” about the way the intervention has produced results (as by using “causal contribution analysis” [and process tracing]) or by formally testing the causal assumptions using appropriate methods’ (Leeuw and Vaessen 2009, xii; see also Befani and Mayne 2014). In the four case studies based on before-after comparison (Bejan et al. 2018; Jurvelin, Kajaste, and Malinen 2018; Leiber, Moutafidou, and Welker 2018; Leiber, Prades, and Álvarez 2018), issue (3) was fulfilled theoretically (see Leiber, Stensaker, and Harvey 2015, Table 1, Figure 2, Figure 3), while more detailed empirical hypotheses would have been desirable but were not developed because of a lack of resources. Due to a rather specific QA procedure, however, one of these case studies achieved a little more in this regard by developing some structural mechanism hypotheses based on assumptions about the intended aims of the QA procedure (Leiber, Moutafidou, and Welker 2018). The fifth case study approaches issue (3) with an ordinary least squares regression model to explain perceived QA effectiveness through structural variables and certain QA-related activities of quality managers (Seyfried and Pohlenz 2018).
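As a purely illustrative sketch of the kind of OLS regression such a study might run – with made-up variable names, not the operationalization actually used by Seyfried and Pohlenz (2018) – one could fit a model along the following lines:

```python
# Illustrative OLS model relating perceived QA effectiveness to structural
# variables; the file name and all column names are hypothetical and only
# mimic the general idea of explaining perceived effectiveness.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("quality_manager_survey.csv")  # hypothetical file name

model = smf.ols(
    "perceived_effectiveness ~ institution_size + qm_unit_age "
    "+ leadership_support + teaching_related_activities",
    data=df,
).fit()

# Coefficients indicate which structural variables are associated with
# perceived QA effectiveness in the (hypothetical) survey data.
print(model.summary())
```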

Table of contents

  1. Cover
  2. Half Title
  3. Title Page
  4. Copyright Page
  5. Table of Contents
  6. Citation Information
  7. Notes on Contributors
  8. 1 Impact evaluation of quality management in higher education: a contribution to sustainable quality development in knowledge societies
  9. 2 Success factors of quality management in higher education: intended and unintended impacts
  10. 3 Assessing quality assurance in higher education: quality managers’ perceptions of effectiveness
  11. 4 Evidence-based policy and higher education quality assurance: progress, pitfalls and promise
  12. 5 Impact evaluation of programme accreditation at Autonomous University of Barcelona (Spain)
  13. 6 Impact evaluation of EUR-ACE programme accreditation at JyvÀskylÀ University of Applied Sciences (Finland)
  14. 7 Impact evaluation of institutional evaluation and programme accreditation at Technical University of Civil Engineering Bucharest (Romania)
  15. 8 Impact evaluation of programme review at University of Stuttgart (Germany)
  16. 9 Bridging theory and practice of impact evaluation of quality management in higher education institutions: a SWOT analysis
  17. Index