Professor Don Nutbeam, who recently joined us as Senior Advisor, Analysis and Innovation, says it’s time to get on board with new approaches to measuring research impact and shares his recent experiences from the UK.
It has been wonderfully refreshing to return to Australia following a six-year period as Vice-Chancellor of the University of Southampton in the UK. I am enjoying a return to “normal” academic life at the University of Sydney, and am energised by the opportunity to contribute to the work of the Sax Institute in knowledge development and exchange.
It is a rare privilege to be Vice-Chancellor of a university, and my time in the UK was marked by the most radical changes in the funding and assessment of higher education in a generation. These changes included deregulation of university student recruitment and a trebling of the fees paid by domestic students. Despite the pressures of “austerity” Britain, research funding in research-intensive universities such as Southampton has been impressively protected by a government that recognises the great potential of university research to benefit society and drive innovation in the economy.
However, there has been a major change in the way in which the government assesses research quality. During my period as Vice-Chancellor, I lived through a full five-year cycle of the government’s Research Excellence Framework (REF) assessment. The 2009‒14 cycle was marked by a major change in the rules: for the first time it included an assessment of research impact, counting for 20% of the overall assessment.
There was predictable huffing and puffing from the academic community in immediate response to this, but once the government had made its position clear, universities set about systematically understanding and describing the impact of their research on society and the economy, and working with outside partners to provide the evidence required to validate that impact.
Unlike the Australian equivalent, Excellence in Research for Australia (ERA), this is not just about bragging rights. The results of the UK REF drive the distribution of £1.6 billion (A$3.2 billion) in research quality allocations to the UK’s 130 universities over the five-year cycle. In my university, £45 million (A$87 million) of funding was at stake.
The tangible benefits of national research investment
The outcomes were astonishing.
At Southampton, and other universities across the UK, well-written case studies of social, health and economic impact emerged ‒ 7000 in all. Except for a few that are commercially sensitive, these are all now accessible and searchable online.
Universities ‒ and their researchers ‒ can point to the extraordinary and pervasive impact they have in their local communities, nationally and internationally.
No other country in the world has so much accessible information on the tangible benefits of national research investment. With such a powerful advocacy tool it is unsurprising that the most recent UK government Public Spending Review resulted in continuing protection of national research spending at a time when almost all other government budgets were subject to significant reductions.
While there are refinements that can be made to the process of assessment, no-one in the UK doubts that impact assessment is here to stay, and most believe that it will count for more than 20% of the assessment of university research performance in the future.
Reflecting on academic merit
This innovation in research assessment is occurring at a time when the academic community is (appropriately) having a moment of reflection about the usefulness of current measures of academic merit.
In a recent article in The Conversation, Simon Chapman¹ draws attention to the remarkably low citation rates achieved by most publications, highlighting as evidence a UK Medical Research Council report that found MRC-funded research publications were cited an average of 2.08 times between 2006 and 2013 ‒ twice the world average. Twenty-one per cent of the world’s papers published in this period were never cited, and fewer than 5% were cited more than eight times ‒ the MRC definition of “very highly cited”. A large proportion of published work therefore has little or no impact, even within its own academic community.
What does this mean for Australian universities, and ultimately for their researchers, as they consider how to respond to the Turnbull government’s Innovation Statement released in December? This was designed to drive an “ideas boom”, revolving around strengthening ties between the business community, universities and scientific institutions. The focus has been on creating a “modern, dynamic 21st century economy”.
Australian universities have a reasonable but patchy track record in knowledge and technology transfer. The private sector has not been as successful as it would wish in finding the right way to work productively with them. Governments have had an occasionally ambivalent relationship with the use of evidence in policy-making.
The Australian experience
All this has to change if we want to optimise the impact and utility of our research spend in Australia. The government’s innovation strategy is rightly forward-looking, but public policy and business decisions are being made today, frequently with inadequate knowledge of available evidence.
The concept of assessing the impact of research on the basis of case studies was originally developed in Australia in the mid-2000s, but was abandoned in the face of criticism from the academic community and a lack of political will. It is time for us to reconsider our position.
Colleagues from the University of Exeter in the UK² have made the case for impact assessment on the basis of the four A’s ‒ advocacy, accountability, analysis and allocation.
The experience from the UK has demonstrated how each of these can be achieved through a university sector that is willing to embrace the need to demonstrate the impact of its research, is given time to engage in knowledge transfer, and takes pride in the social, cultural and economic impact of its research.
Organisations like the Sax Institute have a vital role to play in the acceleration of knowledge exchange and the delivery of the positive vision of Australia as an innovation nation.
1. Are citation rates the best way to assess the impact of research? Simon Chapman, The Conversation, 12 February 2016
2. Morgan Jones, M. & J. Grant. 2013. ‘Making the Grade: Methodologies for Assessing and Evidencing Research Impact.’ In 7 Essays on Impact, edited by Dean et al., 25‒43. DESCRIBE Project Report for Jisc. Exeter: University of Exeter.
Find out more
- Read our profile article about Professor Nutbeam: International public health expert joins Sax team
- An edited version of this article was published in The Australian