I have visited the People's Republic of China.
All my expectations were exceeded by reality. Jobs are moving around the planet, and the big challenge is that we have to be better at creating new jobs in Denmark to make up for the jobs we lose… In Denmark we are concerned about "picking the winner," but we will get a much larger return by being focused rather than spreading the butter thin.
Prime Minister Anders Fogh Rasmussen, August 2004 1
The Prime Minister Goes to China
In 2004, during an official visit to China, Danish Prime Minister Anders Fogh Rasmussen came to realize the serious challenges that globalization posed to Danish society. He returned to Denmark convinced that one response to these challenges should be to strengthen Danish research. Two months later, in his opening speech to the Danish Parliament, he stated the goal clearly: the public sector and private industry should increase spending on research and development so that it would reach the equivalent of more than three per cent of GNP in 2010. 2
A year later, in a white paper prepared by the government, the goal was repeated and a strategy for making Danish research "world-class" was formulated. The very first component of the strategy, competition for funding based on the quality of research, took as its starting point the fact that the existing distribution of resources was based on historical circumstances which did not take the current quality of research into consideration. In future, the white paper stated, funding (basismidler) should be performance-based in order to ensure that the best universities received more resources. The evaluation of quality was to be based on an international and independent panel of experts (Regeringen 2005, 5ff.).
The white paper initiated a two-year-long debate among the major stakeholders. Relatively early on, the idea of an international panel was abandoned, not least because of opposition from Danish Universities, the professional body representing the Danish universities. 3 In the meantime, the Danish Research Agency (Forskningsstyrelsen) sought to develop a "bibliometric quality indicator"; in 2008, however, the agency gave up, and from then on it merely promoted a "bibliometric research indicator" (Forsknings- og Innovationsstyrelsen 2008). In other words, the agency started out with an indicator whose purpose was to measure quality (closely reflecting the wishes of the government) and ended up with an indicator whose purpose was to promote quality.
The result of these deliberations was the Bibliometric Research Indicator (BRI), introduced in 2008 as a system in which scholars and universities were obligated to record their research publications. The indicator took effect as a component of the formula for allocating funds to universities from fiscal year 2010.
The International Context
The BRI did not come out of the blue. First, it is an example of one of the instruments promoted by New Public Management: Performance Management (PM). 4 The basic claim is that public organizations traditionally perform poorly because they are constrained by rules and regulations, do not have explicit performance standards and are not held accountable for goal attainment. The argument is that performance can be improved by shifting focus towards results rather than inputs or procedures and, with it, increasing autonomy and flexibility at decentralized levels by replacing direct control of work with appraisals of its outcomes (Moynihan 2006, 2008). More precisely, PM is associated with setting clear organizational goals, operationalizing these goals into targets on relevant indicators, evaluating goal attainment on the basis of these indicators and taking corrective actions based on performance information when required (Walker et al. 2010, p. 26). Performance measurement is a central part of this. The most debated form of managerial action in relation to performance monitoring is the provision of incentives to promote results: rewarding goal attainment and/or applying sanctions when targets are not met (Boyne 2010; Swiss 2005). Consequently, as the autonomy over work processes is increased, so too are efforts to discipline the use of this autonomy by structuring incentives and enhancing pressures to perform (Soss et al. 2011). The assumption underlying the use of incentives is that the agents (individual or organizational) behave as rational and economically motivated actors with a fixed set of preferences that they seek to maximize on the basis of strategic calculations.
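The incentive rule at the heart of this logic can be stated in a few lines. The sketch below is a deliberately minimal illustration, not a description of any system discussed in this chapter; the function name, the symmetric bonus/sanction rates and all numbers are hypothetical.

    # A minimal, hypothetical illustration of the PM incentive rule:
    # compare an ex post result against a preset target and let the
    # budget respond to goal attainment. All names and numbers are invented.

    def adjust_funding(baseline: float, result: float, target: float,
                       bonus_rate: float = 0.05, sanction_rate: float = 0.05) -> float:
        """Reward goal attainment; sanction a missed target."""
        if result >= target:
            return baseline * (1 + bonus_rate)   # target met: budget increased
        return baseline * (1 - sanction_rate)    # target missed: budget reduced

    # A unit with a baseline budget of 100 and a publication target of 120:
    print(adjust_funding(100.0, result=130, target=120))  # 105.0
    print(adjust_funding(100.0, result=90, target=120))   # 95.0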
Universities were one of the last bastions of the old order. Around the turn of the century, the Ministry of Science introduced a performance component into university funding, according to which a marginal and fluctuating part of the annual appropriations was a function of the number of students, external grants and doctoral degrees, with weights of 50, 40 and 10%, respectively (Aagaard 2011, pp. 277, 286). The overwhelming proportion of the appropriations, however, reflected numerous isolated decisions made in the preceding decades: what was referred to above as "the existing distribution." The 50-40-10 system led a life of its own behind the scenes and was generally unknown to individual faculty members and probably also to many department chairs.
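The source gives only the weights, not the precise functional form. One plausible reading, offered here strictly as an illustrative assumption, is that each university $i$ received a share $A_i$ of the performance-based pool $M$ in proportion to its relative shares of students ($S$), external grants ($G$) and doctoral degrees ($D$):

\[
A_i \;=\; M\left(0.5\,\frac{S_i}{\sum_j S_j} \;+\; 0.4\,\frac{G_i}{\sum_j G_j} \;+\; 0.1\,\frac{D_i}{\sum_j D_j}\right)
\]

Under this reading, a university producing 10% of the students, 5% of the external grant income and 20% of the doctorates would receive 0.5 × 0.10 + 0.4 × 0.05 + 0.1 × 0.20 = 0.09, i.e. 9% of the pool.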
With the BRI, Denmark joined in earnest a growing number of countries applying performance-based university research funding systems (PRFS), making government funding of universities dependent on ex post evaluations of research output (Geuna and Martin 2003; Hicks 2012; Tahar and Boutellier 2013; Sivertsen 2017). A recent survey reveals that PRFSs have been introduced in 15 of the 28 member states of the European Union (Jonkers and Zacharewicz 2016). We also find PRFSs outside Europe, notably in Australia and New Zealand (Hicks 2012), while a number of dominant research nations like the United States, Canada and Switzerland use other distributive mechanisms (Aagaard et al. 2014).
Hicks (2012, p. 252) lists five defining criteria for PRFSs, which can be summarized as follows (a schematic restatement is given after the list):
1. Research must be evaluated. Evaluations of the quality of degree programmes and teaching are excluded.
2. Research evaluation must be ex post. Evaluations of research proposals for project or programme funding are ex ante evaluations and are excluded.
3. Research output must be evaluated. Systems that allocate funding based solely on PhD student numbers and external research funding are excluded.
4. Government distribution of research funding must depend, or must soon come to depend, on the results of the evaluation. Ex post evaluations of university research performance used only to provide feedback to universities or to the government are excluded.
5. It must be a national system. University evaluations of their own research standing, even if used to inform internal funding distribution, are excluded.
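The criteria are conjunctive: a system must satisfy all five to qualify. The following restatement as a simple checklist predicate is this author's illustrative rendering, with invented class and field names; it is not drawn from Hicks's text.

    # Hicks's (2012) five PRFS criteria restated as a checklist predicate.
    from dataclasses import dataclass

    @dataclass
    class FundingSystem:
        evaluates_research: bool        # 1: research, not degree programmes or teaching
        evaluation_is_ex_post: bool     # 2: completed work, not proposals (ex ante)
        evaluates_output: bool          # 3: output, not merely PhD counts or grant income
        funding_follows_results: bool   # 4: government funding depends on the evaluation
        is_national: bool               # 5: a national system, not internal review

    def is_prfs(system: FundingSystem) -> bool:
        # A system counts as a PRFS only if all five criteria hold.
        return all([
            system.evaluates_research,
            system.evaluation_is_ex_post,
            system.evaluates_output,
            system.funding_follows_results,
            system.is_national,
        ])

    # The Danish BRI, as described in this chapter, satisfies all five.
    assert is_prfs(FundingSystem(True, True, True, True, True))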
PRFSs are, in other words, performance-based university budget systems that connect university funding to some form of ex post evaluation of the universities' research efforts. However, the research evaluation systems falling within the above definition vary considerably between countries in design, analytical level, time interval, measurement method and so on. Generally speaking, one can distinguish between three types of PRFS models: (1) panel-based models, (2) publication-based models and (3) citation-based models (Aagaard et al. 2014).
Panel-based models rely on peer review, or on peer review supported by various bibliometric measures. The British Research Assessment Exercise (RAE) is probably the best-known example of a panel-based model, and indeed the first example of a PRFS at all. The RAE system was based on expert panels' evaluations of the quality of research at all institutions within a given research area at intervals of several years. Each research institution was given a quality rating, which was then used in the allocation of (future) research funds (Barker 2007). Such research assessments were carried out for the first time in 1986 and subsequently in 1989, 1992, 1996, 2001 and 2008. From 2014, the RAE was replaced by the so-called Research Excellence Framework (REF), which also, among other things, emphasizes resear...