Altman, Lawrence K. "When Peer Review Yields Unsound Science," New York Times, 11 June
2002, p. >>>>.

Medical journals are the prime source of information about scientific advances that can change
how doctors treat patients in offices and in hospitals. And to ensure the quality of what journals
publish, editors have, over the past 200 years, increasingly called on scientific peers to review
new findings from research in test tubes and on animals and humans.

The system, known as peer review, is now considered a linchpin of science. Editors of the
journals and many scientists consider the system's cost in money and time worthwhile in the
belief that it weeds out shoddy work and methodological errors and blunts possible biases by
scientific investigators. Another main aim is to prevent authors from making claims that cannot
be supported by the evidence they report.

Yet for all its acclaim, the system has long been controversial. Despite its checks and balances,
errors, plagiarism and even outright fraud have slipped through it. At the
same time, the system has created a kind of Good Housekeeping Seal of Approval that gets
stamped on research published in journals. Although most research is solid, and in some cases
groundbreaking, problems have persisted.

A particular concern is that because editors and reviewers examine only what authors summarize,
not raw data, the system can provide false reassurance that what is published is scientifically
sound.

After a series of problems, about 20 years ago, journal editors came under pressure to better
document claims for the system's merits. To do that, a number of editors began their own
primary research into the way the peer review system worked, what was wrong and how it could
be fixed.

A leader has been The Journal of the American Medical Association, which has held four
meetings on research on peer review since 1989 under the direction of Dr. Drummond Rennie, a
deputy editor. Last week, the journal published 34 articles from the latest meeting. And the news
was grim.

Researchers reported considerable evidence that many statistical and methodological errors were
common in published papers and that authors often failed to discuss the limitations of their
findings. Even the press releases that journals issue to steer journalists toward peer-reviewed
papers often exaggerate the perceived importance of findings and fail to highlight important
caveats and conflicts of interest.

"Once again," Dr. Rennie wrote in an editorial summarizing the findings, "we publish studies
that fail to show any dramatic effect, let alone improvement, brought about by editorial peer
review."

Under the system, authors submit manuscripts to journals whose editors send the most promising
ones to other experts (peers) in academic medicine to solicit their unpaid advice. The peers check
for obvious errors, internal inconsistencies, logic, statistical legitimacy, reasonableness of
conclusions and many other factors, and make suggestions that editors use in deciding whether to
ask for revisions, publish the paper or return it marked "rejected."

There is general agreement that an overwhelming majority of "weak" papers will survive initial
rejections to find acceptance somewhere among the thousands of medical and scientific journals
that each year publish an estimated two million new research articles, mostly paid for by the
public through government grants. Despite improvements in peer review, "there still is a massive
amount of rubbish" in the journals, Dr. Rennie said.

In recent years, editors have used the importance of peer review to justify imposing punitive
restrictions on authors who disclose information to the press before the paper's publication in
their journals. By linking peer review and publication date, critics say, editors have increased the
news value of their journals, a step that has slowed the free flow of information and helped some
journals raise subscriptions, advertisement rates and profits.

While many editors and others have defended the system, they acknowledge that it is unlikely to
detect fraud and is prone to abuse.

One reason is that the secrecy involved in the system can be unfair to authors. While the names
of authors are generally known to reviewers, the reviewers' names are not disclosed to the
authors. Because the anonymous peers chosen to review manuscripts are often the authors'
scientific competitors, jealousies and competitive advantage can become factors in the reviews.

Occasionally, reviewers have been caught publishing information they lifted from other
researchers' manuscripts. Further, little is known about the quality of the reviewers or what
training they need to do a good job.

"The available evidence," wrote Fiona Godlee of BioMed Central in London, "gives no
indication that anonymous peer review achieves better scientific results than open review."

Apparently, few journals have adopted the open system. Ms. Godlee said she looked forward to
the day when signed reviews were posted on the Internet along with published articles.

The peer review system also tends to set a very high barrier for authors to publish truly novel
findings.

In 1796, a peer reviewed journal in England rejected Dr. Edward Jenner's report of his
development of the world's first vaccine, against smallpox. The vaccine was used to eradicate the
viral disease nearly two centuries later, and it may be needed again if bioterrorists release
smallpox virus in an attack.

In recent decades, at least two Nobel Prizes were awarded to scientists who received rejection
slips from one journal before another published their papers.

One paper concerned what turned out to be the hepatitis B virus. The other concerned a
radio-immunoassay technique that can detect trace amounts of substances in the body and that is
now used every day throughout the world.

Another recent problem, critics say, is that many editors have not moved quickly enough to use
newer methods to judge the merits of manuscripts.

One example is the growing importance of statistics to measure the safety and effectiveness of
new therapies and to compare them with older ones. Yet research on peer review has found that
many studies are conducted without the benefit of adequate consultation with statisticians,
sometimes because none were available.

Reasons for errors also include the practice of consulting statisticians after a research project
has been completed, not at the most critical time, when the study is being designed.

"Expert analysis cannot salvage poorly designed research," wrote Dr. Douglas G. Altman of the
Center for Statistics in Medicine in Oxford, England.

Once statistical errors are published, it is hard to stop them from spreading and being cited
uncritically by others.

Dr. Richard Horton, editor of The Lancet, found in his own study that reviewers often did not
detect important limitations to research findings that authors left out of their papers. The
omission, Dr. Horton said, must be judged a failure of peer review.

Yet Dr. Rennie remained optimistic, noting that earlier research has led to some improvements.
In an interview, he said that findings from research on peer review had led some medical
journals to adopt more standardized systems for reporting results from the clinical trials used to
determine the safety and effectiveness of new drugs and other therapies.

Also, because reviewers are selected for expertise the editors do not have, reviewers often rescue
a paper from rejection.

For example, Dr. Rennie cited a scientist who found scientific merit in an author's manuscript
that Dr. Rennie had sent him. The reviewer "completely rewrote" the manuscript that he "would
have rejected otherwise," Dr. Rennie said.

No one knows how often such "rescues" occur, Dr. Rennie said, in part because the needed
studies have not been done. Such studies would require funds that have been hard to obtain, as
well as the cooperation of reviewers, editors and authors of papers that were accepted and
rejected, and many of them might not participate because of their set views.

In 1986, Dr. Stephen Lock, then editor of The British Medical Journal, wrote, "Editors, the
arbiters of rigor, quality and innovativeness in publishing scientific work, do not apply to their
own work the standards they apply to judging the work of others."

Defenders of the system still face Dr. Lock's challenge.