Is Peer Review Broken?
In A Jury of our Peers, a post on the parallel blog Nucleus Ambiguous, I introduce two web sites that highlight scientific fraud and error: Retraction Watch and Science Fraud (update: Science Fraud has since closed under legal pressure). One interpretation of the high number of entries posted on these blogs is that the current system of quality control, relying as it does on the process of peer review, is in need of serious reform. Not coincidentally, these sites also hint at the possibility of using new media and crowdsourcing to increase transparency and accountability in scientific research.
One element of the peer review process that is often criticized is its opacity, centering as it does on the presumed anonymity of the reviewers. The Frontiers family of journals has provided an existence proof that a more open evaluation system can produce high-quality, high-impact research. One recent paper in Frontiers in Computational Neuroscience suggests that, at least among those committed to basic reform, a rough consensus is emerging about the features of the new approach. The key elements are transparency, identity-verified reviewers, and integration of Web 2.0 elements like user-defined ranking systems. Even the journal Nature conducted an open review trial in 2006, but the editors found the reception from authors lukewarm. Of course, scientists who are even in the running for publication in that prestigious journal may have less incentive to mess with the process.
Personally, my publications have all been reviewed through the traditional anonymous process, and I've certainly had at least one reviewer who, I felt, would have been compelled to provide a more thoughtful response if he or she had been identified. I do worry somewhat about exposing my manuscripts to public review, not because I have anything to hide, but because constructive peer review has often improved my work considerably, so I'd prefer not to have the less compelling drafts shadowing the final version (whatever that comes to mean in science 2.0 terms).
Aside from showing how the sausage is made, a much more fundamental question about the effects of open evaluation is whether identifying reviewers would discourage candid responses. This is not just a question of reducing the quality of individual papers; it could also lead to cliquishness and groupthink as each review is incorporated into larger social networks.