Is Michael Eisen right? Are there *really* problems with peer review?

Posted by Rob Walsh, community karma 1466

Team Scholastica has spent the last year gathering thoughts from journal editors, authors, and reviewers. Most of them have been very passionate about the problems they see in the academic peer review process. Of course, some journal editors are completely satisfied with the process as it stands and can't imagine any sort of innovation that would improve it.

Michael Eisen, an evolutionary biologist at UC Berkeley, has a post titled 'Peer review is f***ed up – let's fix it,' in which he outlines the following problems:

1. The process takes a really long time.
- "In my experience, the first round of reviews rarely takes less than a month, and often take a lot longer, with papers sitting on reviewers’ desks the primary rate-limiting step. But even more time consuming is what happens after the initial round of review, when papers have to be rewritten, often with new data collected and analyses done. For typical papers from my lab it takes 6 to 9 months from initial submission to publication."

2. The system is not very good at what it purports to do.
- "The values that people primarily ascribe to peer review are maintaining the integrity of the scientific literature by preventing the publication of flawed science; filtering of the mass of papers to identify those one should read; and providing a system for evaluating the contribution of individual scientists for hiring, funding and promotion. But it doesn’t actually do any of these things effectively."

My questions to the group are: is he right? Do these problems truly exist in your experience? If so, to what extent? Furthermore, are they limited to the sciences, or do they apply to peer review in other disciplines as well? What do you think of Scholastica's efforts to fix these problems?

2 Comments

Brian Cody, community karma 170792

I was interested in the second argument of Eisen's that you cite: "2. The system is not very good at what it purports to do," specifically the sub-point about "filtering of the mass of papers to identify those one should read."

One of the measures that has received increased interest in addressing how effective peer review is at filtering knowledge is inter-rater reliability (IRR), which is a measure of how consistent reviewers' scores are for any given manuscript. Based on IRR, I think that the literature would support Eisen's argument.
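For readers unfamiliar with IRR, a common way to quantify it for two reviewers is Cohen's kappa, which measures how often the reviewers agree beyond what chance alone would produce (kappa = 1 means perfect agreement, 0 means agreement no better than chance). Here is a minimal sketch; the reviewer recommendations are hypothetical, invented purely for illustration:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    # Observed agreement: fraction of items both raters scored identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected chance agreement, from each rater's marginal category frequencies.
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical accept/reject recommendations from two reviewers on 10 papers.
rev1 = ["accept", "reject", "accept", "accept", "reject",
        "accept", "reject", "accept", "accept", "reject"]
rev2 = ["accept", "accept", "accept", "reject", "reject",
        "accept", "reject", "reject", "accept", "accept"]
print(round(cohens_kappa(rev1, rev2), 2))  # → 0.17
```

Here the reviewers agree on 6 of 10 papers (60%), but because chance alone would produce 52% agreement given their accept/reject rates, the chance-corrected kappa is only about 0.17 — the kind of "barely beyond chance" agreement the studies below report.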

One JAMA meta-analysis (a meta-analysis being a study that aggregates previous findings and tests the extent to which results are consistent across them) of 19 previous studies found very inconclusive results: "Editorial peer review, although widely used, is largely untested and its effects are uncertain" (Jefferson, et al., 2002).

A more recent meta-analysis of IRR that reviewed 48 previous studies found that "According to our meta-analysis the IRR of peer assessments is quite limited and needs improvement (e.g., reader system)" (Bornmann L, Mutz R, Daniel H-D, 2010).

Even within a single journal, there seems to be little evidence that reviewers are, on average, agreeing about which articles are high-quality and which are not worth publishing. Looking at the 2,264 articles submitted over four years, a study of the Journal of General Internal Medicine (JGIM) found that "Reviewers at JGIM agreed on recommendations to reject vs. accept/revise at levels barely beyond chance, yet editors placed considerable weight on reviewers' recommendations." (Kravitz RL, Franks P, Feldman MD, Gerrity M, Byrne C, et al., 2010).

Based on these three studies, I think the literature would support Eisen's argument that peer review as a system, as it exists today, is not effectively filtering knowledge on a consistent, meritocratic basis.

I think that matching reviewers more effectively to the content they want to review (versus materials they feel obliged to review due to social connections to the journal's editorial board), and increasing the number of reviewers for each manuscript, could improve the process. As Scholastica improves matching of reviewers to content and makes it easier to add more reviewers per article, we should see IRR increase.


almost 13 years ago
Sheldon Bernard Lyke, community karma 4071

I think that Eisen is correct in some ways. Peer review does take a long time, but I am not sure what would constitute a useful alternative. Legal scholarship uses a very different process of review, in which student editors select scholarship for publication. This process is much quicker, and the turnaround from submission to acceptance is usually less than two months. The problem is: do you want students, who are at the beginning of their careers, to be the arbiters of scholarly potential?

 

My biggest concern about peer review is the fallacy that it is blind review. In our current world, very few people submit work that they have not presented or vetted in another forum (e.g., a conference, panel, or workshop). Presentations are usually advertised on the Internet, which leaves traces that can later be searched. All a reviewer has to do is Google a title to find the name and affiliation of an author.



almost 13 years ago