Abbott, Alison. "Funding Cuts Put Pressure on Peer Review." Nature 383 (17 October 1996), p. 567.

Capri, Italy. Cuts in research funding, and the resulting increase in competition for funds between
scientists, have exposed the shortcomings of the peer review system. But it remains the best
system in operation, according to an international 'consensus' conference on research assessment
held in Italy last week.

The conference, on the island of Capri, near Naples, was held under the auspices of an informal
group of representatives from G7 countries. The scientists and policymakers from three
continents who attended agreed that the peer review system needs to be improved to fit new
circumstances.

They also agreed that objective criteria, such as citation data, can be helpful in assessing research
departments, but should only be used with great care when assessing individuals. Such caution is
particularly relevant to the academic community of the host country, Italy, which is struggling to
institute wide-ranging reforms but has little confidence in its current procedures for reviewing
grant applications and for career promotion.

Much of the current pressure on peer review results from oversubscription to research
programmes, with the result that the success rates of applications have fallen to levels widely
considered unhealthy. "This not only strains the time of reviewers, it also strains the concept that
the system is operating fairly," said Susan Cozzens, a policy officer at the US National Science
Foundation.

Cozzens said low success rates tend to favour solid, but low-risk, applications. Others argued that
this situation also undermines the value of peer review, as the difficulty of choosing objectively
between closely-ranked top-quality applications can make the selection of successful projects
little more than a lottery.

Several participants pointed out that grant agencies are acutely aware of the strain on reviewers.
Many leading scientists are unhappy with the drudgery of wading through stacks of low-quality
grant applications, and frustrated that applications they rate highly remain unfunded for lack of
money.

Some funding agencies are experimenting with pre-screening applications to weed out
'no-hopers' at an early stage. But pre-screening, with its spectre of 'assessment by
administrators', is unpopular in much of the scientific community, and can add to a general
perception of unfairness.

The conference agreed that confidence in the fairness of a peer-review system can be increased
by the appropriate use of bibliometric data, such as publication rates and 'impact factors', which
measure citation rate. But participants acknowledged that such data should be used carefully,
particularly in assessing individuals, and should never replace human judgement entirely.
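
By way of background (this is the standard two-year definition associated with the Institute for Scientific Information, not a formula set out at the meeting), a journal's impact factor for a year Y can be written as

    IF(Y) = C(Y) / [ N(Y-1) + N(Y-2) ]

where C(Y) is the number of citations received in year Y by items the journal published in the two preceding years, and N(Y-1) and N(Y-2) are the numbers of citable items it published in those years. A journal that published 200 citable items over 1994 and 1995, and whose 1994-95 items were cited 500 times in 1996, would therefore have a 1996 impact factor of 500/200 = 2.5.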

It is a sensitive issue. Many researchers feel that the mathematical detachment of citation data
implies a certainty that is attractive, but is not always warranted. The conference agreed that such
data can be misleading, for example on multi-author papers, as a laboratory chief or supplier of
biological material may routinely add their name to papers to which they have made no
scientific contribution.

Indeed, the conference was told by Eugene Garfield that citation indices had never been intended
for evaluation. Garfield, who developed the concept of impact factors as a measure of a journal's
influence, is founder, and now emeritus chairman, of the Institute for Scientific Information in
Philadelphia.

Garfield said that he had been widely depicted as "a Frankenstein" whose monster - the impact
factor of an individual paper - had an enormous potential to be misused "in the hands of
uninformed users".

Pressure to rely heavily on publication-based indicators is particularly strong in Italy, where many
grant-giving agencies and charities distribute small amounts of money evenly, to avoid upsetting
unsuccessful applicants, and the system of academic promotion through national competitions, or
concorsi, is widely criticized.

Many Italian scientists feel that the lack of formal selection criteria has allowed personal contacts
to play too important a role in the allocation of professorships and assistant professorships. Those
at the conference described how the academic community is split over how much weight should
be given to citation data in assessing an individual's research performance.

Some feel that such data should at least be used to eliminate individuals with low citation ratings
at an early stage. Complex formulae for weighting the value of each publication have been
suggested as ways of compensating for the weaknesses of citation data - for example, scaling
down the impact factor of a multi-author paper or of a review, since reviews normally attract
higher impact factors than research papers.
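
As a purely illustrative sketch of the kind of weighting formula being described (the per-author scaling and the 0.5 discount for reviews below are invented assumptions, not values proposed at the conference), such a scheme might look like this in Python:

    # Hypothetical weighting of a single publication's impact factor.
    # The per-author scaling and 0.5 review discount are illustrative
    # assumptions only, not figures discussed at the Capri meeting.
    def weighted_score(impact_factor: float, n_authors: int, is_review: bool) -> float:
        score = impact_factor / max(n_authors, 1)   # share credit among co-authors
        if is_review:
            score *= 0.5                            # reviews usually attract more citations
        return score

    # Example: a ten-author review in a journal with impact factor 8.0 scores 0.4
    print(weighted_score(8.0, 10, True))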

Proponents argue that such formulae would allow citation data to be used fairly. But others fear
that excessive reliance on such data could allow the merits of a first-class scientist, who has
published little, to be overlooked.

The conference agreed in one of a series of formal statements that citation indices should only be
used as one of a number of different performance indicators. It said that an assessment committee
should be able to give a high rating to a 'low impact' individual - and vice versa - provided that it
is able to give an open justification of its decision.

The conference also agreed that citation indices are too crude to distinguish between
closely-ranked grant applications. As a result, complex formulae intended to quantify the merit of
individual publications were judged unlikely to help in separating such applications.