News & Views item - January 2009
Nature and Science Report and Comment on the UK's 2008 Research Assessment Exercise. (January 3, 2009)
The results of Britain's Research Assessment Exercise (RAE 2008) were published just a fortnight ago, and to date comment has focused as much on its promised successor as on the RAE 2008 results themselves.
First a few statistics:
159 higher education institutions submitted publications produced by 52,400 of their staff.
The cost of the exercise was £10 million (A$19.9 million).
Some 1000 scientists spent up to a year on peer-review panels.
U.K. universities and specialty research institutions obtain about 50% of their budgets from competitive grants and programs, industry, and educational charities.
The remainder, some £1.5 billion (A$2.99 billion), is in annual governmental block grants based on the RAE.
How the RAE data are translated into specific funding will be announced in March.
The 2001 RAE was one factor that led to the closure of many UK science departments.
Because of the expenditure of money, effort and time, the UK government has decided that the methodology must be abandoned in favour of a cheaper assessment procedure, dubbed the Research Excellence Framework (REF); block funding is tentatively scheduled to be based on the REF beginning in 2014. The REF is to rest primarily on quantitative measures (metrics) such as competitive grants obtained, Ph.D.s granted, and citations received for papers. The details have yet to be worked out.
As Science's John Travis points out: "Many questions remain about whether this [new] approach can truly evaluate research quality across disciplines that include music, midwifery, economics, philosophy, and all the traditional sciences."
Nature's Natasha Gilbert makes the telling point that "traditionally... the same 25 or so institutions win around 80% of the available funding, a situation that is unlikely to change this time round, according to one former vice-chancellor of a research-intensive university". She then follows up with: "the latest RAE results show that highly rated research is spread much more widely than that core of 25. Forty-nine universities had at least some 4-star research in their submissions, and at least half of the submissions from 118 universities were rated 4-star or 3-star".
While Australia's Minister for Innovation, Industry, Science and Research, Kim Carr, continues his obsession with the awkwardly named Excellence in Research for Australia (ERA), which is likewise scheduled to make use of metrics, though with some reference to peer review, Nature's January 1, 2009 editorial leads with: "There are good reasons to be suspicious of metric-based research assessment". In a comment bordering on sneering the editorialist writes: "The proposed... REF is opaque. Little is known about how it will work other than a central principle: it will assess research quality using metrics, including publication citations. It may also take into account the number of postgraduates completing their studies and the amount of research income won by universities. There will be a smattering of 'light-touch expert review', although the exact form that this will take is not yet clear — it might simply be used to interpret the metrics results", and it sums up simply: "taken alone, publication citations have repeatedly been shown to be a poor measure of research quality."
The editorial's comment could just as well have been about Senator Carr's ERA.
One particularly worrying aspect of metric-based versus peer assessment of publications is the wide discrepancy that can, and does, occur between the two approaches even in the "hard sciences", let alone the humanities and "softer" sciences. Nature's editorial concludes: "Expert review is far from a problem-free method of assessment, but policy-makers have no option but to recognize its indispensable and central role."
And with all of this, no one has mounted a convincing case that a nation's research quality per se has benefited from the use of the RAE or RAE-like assessments.
Is there a positive correlation between the UK's use of the RAE and improvement in the quality of British research? Yes there is. But there is also a positive correlation between the increased funding of British research and its improvement, so the RAE's causal contribution remains unproven.
So why not keep funding at current levels, eliminate the RAE (along with its cost and its claim on 1,000 peer reviewers, who could use the time for research), improve funding of competitive grants at the partial expense of block grants, and see what happens?
On the other hand, such assessment exercises, by whatever name, fit well with governments' propensity to micromanage, whether or not doing so is counterproductive from the viewpoint of the public good.