Introduction
Much is made, particularly by academics, of the contribution academic research makes to innovation and the quality of organisational decision-making in the private and public sectors. The intensity of such self-justification has been fuelled in the UK by what is often termed the ‘impact agenda’; public agencies funding research now expect researchers to demonstrate their worth by showing how their work has improved the lives or practices of others. For research councils considering funding proposals, this involves evaluating the proposed ‘pathways to impact’ or the actions to be taken to maximise the effect of grant funding on non-academic audiences (http://www.rcuk.ac.uk/innovation/impacts/) as well as the novelty, significance and sophistication of the planned research itself. For national audits of research quality, such as the Research Excellence Framework (REF) in the UK, evaluation is via impact case studies (www.ref.ac.uk/). Such concerns are not, however, confined to that country (see, for example, Fraser and Taylor 2016; Hicks 2012). Indeed, the importance of impact is such that the Australian government declared in 2015 that it was considering removing the evaluation of staff research publications as a means of allocating research grants and, instead, using measures related to impact and engagement (Watts 2017: 16).
Subject associations whose mission is to promote the interests of academics in their discipline or field now routinely profess the practical value of their members’ academic work (see, for example: Academy of Social Sciences (https://acss.org.uk/publication-category/making-the-case/); The Chartered Association of Business Schools (CABS) 2015; Gerrard 2015). Publishers have also sought simple metrics for demonstrating impact in response to the shifting sands of research policy. Perhaps the most prominent system to have emerged thus far is ‘altmetrics’, which combines the number of actors who have mentioned the research with the media through which they did so to calculate an ‘attention’ score (http://www.altmetric.com/publishers.php).
The rise of performance-based university research funding systems internationally and the associated concerns with ‘impact’ have consequences for university research managers and for academic researchers. For the former, the status of their institution has become connected with their ability to perform well in research evaluation exercises. As a result, universities that operate within such systems engage in strategic management (or game playing) to maximise their ‘performance’ and position in global league tables (Yudkevich et al. 2016). For the latter, promotion prospects are becoming influenced not only by an individual’s ability to publish and to attract research grants but also by their capacity to generate impact (Bastow et al. 2014).
Although some researchers create arbitrary subject-based rankings (e.g. Park et al. 2011; Severt et al. 2009), performance-based research funding systems tend not to differentiate their approach to evaluation between disciplines or fields of enquiry (Hicks 2012). This is, perhaps, unsatisfactory when considering non-academic impact because there is evidence of significant variation between disciplines (Bastow et al. 2014). Moreover, while much energy has been expended studying the non-academic impact of universities with contrasting missions (e.g. Hewitt-Dundas 2012) and their relations with knowledge-intensive industries (e.g. Banal-Estanol et al. 2015; Bozeman et al. 2013, 2015), fewer researchers have examined non-knowledge-intensive sectors with the same degree of rigour (Thomas and Ormerod 2017).
The premise of this book is that greater scrutiny of the dynamics of research impact in different contexts is required. It is underpinned by a concern that is recognised by many but articulated by too few: a fear that academic researchers are becoming too embroiled in public relations strategies about their work (Marchant 2017) at the expense of their independence and criticality (Watermeyer 2016). It takes the form of an examination of tourism and related research, which is argued to be a marginal interdisciplinary field of academic enquiry often related to a sector of the economy that is itself marginal, at least politically. Although the empirical work presented in this book is confined largely to the UK, the implications of the analysis presented extend to other countries operating performance-based systems, notably Australia, New Zealand, and (several countries within) the European Union.
Influencing Others as a Research Policy Goal
There is a substantial normative literature that promotes engagement and influence. This often reflects the value systems of the academics concerned; those interested in promoting sustainability or corporate social responsibility, for example, are centrally concerned with finding ways of influencing individual, institutional or business practices for what they argue is the common good (e.g. Bramwell et al. 2016). Identifying behavioural or technical mechanisms for improving the sustainability of food systems and promoting their use (Marsden and Morley 2014), or using research to empower the disadvantaged (Badgett 2015), are obvious examples. Presumably, for these academics, an agenda that values their aspiration to effect change may represent an additional bureaucratic hurdle when seeking research funding, but it is not morally or professionally disturbing.
Some proponents of the impact agenda suggest that practitioners will be better placed to take decisions that achieve their goals if they utilise the cutting-edge insights into the physical world (science) and the social world (social science) provided by academic researchers. ‘Practitioners’ is used here to encompass policy-makers. By way of illustration, publicly funded research on genetics has opened up new avenues for exploring treatment possibilities. The commercial exploitation of this knowledge has resulted in improved idiosyncratic pharmacological interventions and, in turn, contributed to a thriving pharmaceutical sector. The most successful enterprises provide significant tax revenues and generate ‘good quality’ (well paid) employment. Publicly funded social science research that examines the corporate behaviour of commercially driven medical enterprises may lead to improved regulation, changes to fiscal policy and, perhaps, more effective laws to protect intellectual property. Simultaneously, it may find ways of remodelling the management and enhancing the organisational performance of the pharmaceutical firms.
This reasoning has great intuitive appeal. Who, after all, would want to conceal from practitioners, or public gaze, the results of publicly funded research of this kind? For many, this knowledge constitutes what economists call a ‘public good’, i.e. its availability to others is not diminished because of its consumption (or use) by one actor and no one is excluded from access to the knowledge. That some are able to ‘exploit’ it more than ...