Peer review: open to enquiry

The House of Commons Select Committee on Science and Technology has announced an enquiry into peer review.

This seems timely: more and more, I’m hearing people wonder whether peer review as we know (and love?) it is threatened, challenged, or becoming redundant in the web 2.0 environment. That said, I think this enquiry is more probably a response to concerns that the current system may be insufficiently robust: the second of the review’s terms of reference mentions “strengthening” peer review.

Would we like it “strengthened”, if this means giving reviewers more power and responsibility? Or is there a case for making it more open – even for publishing research without traditional peer review and letting the academic community give its verdict on the value and significance of the work? (There’s an article on this by Axel Boldt in Journal of Scholarly Publishing, 2011, 42/2, 238-242, DOI 10.3138/jsp.42.2.238 – but unfortunately it’s not open access … There’s also a provocative blog post by Paul Fyfe at Florida State University.)

Such a significant change in the way scholarly communications are validated won’t happen overnight. But is it a desirable development?

Quality Assurance

On Monday I attended an event at the Royal College of Physicians in London, put on by the Research Information Network – part of their series on Research Information in Transition. This one was titled Quality Assurance – responding to a changing information world.

The first speaker, Richard Grant (his notes here), focused on “motivation”, bringing up the issue of the “effort to reward ratio” and the idea of “impact credits”. These themes have come up again and again in the conversations I have had with people, and it seems clear there needs to be a change in the reward system before people will fully buy into a change in the peer-review – or, more generally, scholarly communication – system.

Theo Bloom then discussed some of the other issues we face when discussing quality assessment in the world of web 2.0. She noted that although people don’t often comment in the spaces provided on journal homepages, they do comment elsewhere, and there is a need to tie existing comments (from blogs, etc) back to articles.

Stephen Curry was up next and he focused mostly on social media and its place (or does it have a place?) in academics’ lives. I wasn’t sure how this related directly to the idea of quality assurance. He may have been getting at the idea that these tools could be used for quality assurance – or perhaps that these tools need some form of quality assurance…(but I may have come to those conclusions on my own).

Tristram Hooley wrapped things up by further discussing social media – and clearly stated that he doubted the usefulness of the idea of “quality”, and noted that quality means different things to different people. He also described the importance of filtering (and the role of networks and folksonomies in this process) and how this can help to lead to alternative ways of identifying value and quality. (Aside: Cameron Neylon has a blog post and has shared a presentation along the same line).

Of course this event was a space for discussion rather than provision of answers, and many questions still remain.

  • Is quality assurance needed? For academic articles, people still seem to feel the answer is yes – though the current peer-review system could change. For other forms of academic communication, perhaps quality assurance of another kind is needed?
  • How do we get people to participate and assess the quality of things, whether through traditional peer review or through social media? (A different reward system may be the answer?)
  • With these new forms of quality assurance (blog posts, comments, networks, folksonomies), there needs to be a way to connect them back to the item they are evaluating or rating. (I’m not sure about this one – any ideas??)


Check out the “Authoring and Publishing” section on the Quality page of the Scholarly Communications Action Handbook for more thoughts on this.

Image credit: Kevin (KB35)

Too Much Information?

Book sculpture

I read another really interesting blog post the other day. Henry Bauer, in his post Scientific publications are vanity publications, describes how Universities have changed from the business of educating to the business of well…making money, which has subsequently resulted in individuals needing to fund their own research and graduate students, and grant funding being tied directly to “success” and career advancement.

Bauer then discusses the problem of “vanity publishing” in which academics pay to get published. He gives examples of page charges, processing fees, open access journal charges, and rapid review charges, concluding that “scientific publication is increasingly a matter of having the wherewithal to support vanity publishing”.

In a not-so-recent post on the Scholarly Kitchen blog, Kent Anderson commented on a Research Information Network-funded project that evaluated researchers’ e-journal use and information-seeking behaviour. One thing Anderson pulled from the report and mentioned briefly was that

researchers are dealing with too much information, and feel there’s “too much literature being produced.”

Some of the largest journals are publishing more than 5,000 articles a year, and I am sure many people have had the thought that journals such as PLoS One, which choose to “publish all papers that are judged to be technically sound”, are increasing the amount of literature out there – though the argument is that they are accelerating the publication process, which is a good thing, right?? I remember reading that the average number of times an article is reviewed before it gets published is 2.5 (Houghton et al.). That’s not really that high, and with an average that low we can only assume that if you submit to enough journals you eventually WILL get published (and this is unrelated to the idea of paying to publish). This brings up questions about the current value of peer review as well.
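The “submit to enough journals” intuition is just geometric probability, and a quick sketch makes it concrete. The numbers here are my own illustrative assumptions, not figures from the Houghton et al. report: if 2.5 submissions on average lead to acceptance, that loosely suggests a per-submission acceptance chance of about 1/2.5 = 0.4.

```python
# Illustrative sketch (assumed numbers, not data from the report):
# if each submission has an independent acceptance probability p,
# the chance of at least one acceptance in n attempts is 1 - (1 - p)^n.

def p_accepted(p: float, n: int) -> float:
    """Probability of at least one acceptance in n independent submissions."""
    return 1 - (1 - p) ** n

# With an assumed 40% per-submission acceptance rate, persistence
# pays off quickly:
for n in (1, 3, 5, 8):
    print(n, round(p_accepted(0.4, n), 3))
```

Under these (hypothetical) odds, eight submissions already push the chance of eventual publication above 98% – which is the arithmetic behind the worry that patience alone can get almost anything published.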

So can anyone with money get published…or can anyone with enough patience get published? Are there too many publications? Is the quality suffering? Should peer-review be stricter? Or is the abundance of literature a good thing? And what does this abundance mean for academic institutions, society, and the furthering of knowledge?

Image credit: Thomas Guignard

Peering into the PEER report

Looking at the recently released PEER baseline report on authors and users regarding journals and repositories, from Loughborough University.

This is packed full of very useful and very dense information and analysis which will repay many close readings.

For advocates of open access, and in particular institutional repositories, one immediately interesting question is Q 22: What reservations do you have about placing your peer-reviewed journal articles in publicly available repositories?

Responses to this are quite fascinating. As a (very) rough analysis, putting the figures together seems to show that the most significant concern is a reluctance to put research publications in a repository where other materials have not been peer-reviewed, with nearly 50% considering this either very important or important.

Following close behind are concerns about infringing copyright and infringing embargo periods; concern about the paper not having been “properly edited by the publisher”; not knowing of a suitable repository; concern about plagiarism or unknown reuse; then not knowing how to deposit material in a repository and not knowing what a repository was.  Other concerns are then a step change down from these.

If, as advocates, we want to get more material into repositories, these might well be the key questions for advocacy to address.  Interestingly, none of these are unanswerable or require policy change or mandates: they revolve around a simple lack of knowledge.

For instance, the top concern – sharing server space with pre-prints – really revolves around a lack of knowledge of how the open access repository system works.  I doubt that academics really object to their words being held on adjacent tracks of a hard disk to non-peer-reviewed material.  I suspect the worry is that a user accessing the material will be presented with their hard-earned peer-reviewed material “displayed” alongside non-peer-reviewed material.

In other words, it is the difference between storage and access.  Material can be deposited and stored in a repository, but users will access the material in a separate fashion and be able to separate it out by subject, peer-review status, etc.  If this distinction is not appreciated by an author, they may well see the repository as both storage and access mechanism, whereas for almost all users the actual repository – and its accompanying content – will be reduced in use to a single cover sheet on the article that they actually want.

The concerns about copyright and embargoes, again, are really a matter of the author being given the right information at the right time.  Repository managers commonly use RoMEO to find this information: there is a strong case for arguing that RoMEO’s API should be used more widely to embed the information directly into the deposit process.  Or, at least, authors should be told that copyright and embargo information is readily available and that this need not be an issue for them.
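To give a flavour of what “embedding RoMEO into the deposit process” might look like, here is a minimal sketch. The endpoint URL and the XML field names below are assumptions for illustration (modelled loosely on the legacy SHERPA/RoMEO lookup service), so check the current SHERPA documentation before relying on them; the parsing step works against a canned example response rather than a live call.

```python
# A hedged sketch: look up a journal's self-archiving policy by ISSN
# so a deposit form can warn the author before upload. The endpoint
# and XML element names are ASSUMPTIONS for illustration only.

from urllib.parse import urlencode
from xml.etree import ElementTree

ROMEO_API = "http://www.sherpa.ac.uk/romeo/api29.php"  # assumed legacy endpoint

def romeo_query_url(issn: str) -> str:
    """Build a policy-lookup URL for a journal, keyed by ISSN."""
    return f"{ROMEO_API}?{urlencode({'issn': issn})}"

def parse_archiving_policy(xml_text: str) -> dict:
    """Pull pre-print / post-print archiving flags out of a RoMEO-style response."""
    root = ElementTree.fromstring(xml_text)
    return {
        "preprint": root.findtext(".//prearchiving"),
        "postprint": root.findtext(".//postarchiving"),
    }

# Canned, illustrative response (not real API output):
sample = """<romeoapi>
  <publisher>
    <prearchiving>can</prearchiving>
    <postarchiving>can</postarchiving>
  </publisher>
</romeoapi>"""

print(romeo_query_url("1234-5678"))
print(parse_archiving_policy(sample))
```

The point of the sketch is the workflow, not the specific calls: a deposit form that fetches and surfaces this information at submission time answers the copyright and embargo worries at exactly the moment the author has them.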

Concerns about plagiarism and how the material will be used can also be addressed.  Far from being an invitation to plagiarism, making materials openly accessible simply increases the chance that the plagiarist will be detected.

For those concerned about depositing materials that have not been “properly edited” by the publisher, again the answer is information about how the system works – allowing, in most cases, the deposit of the author’s final version, after peer-review changes.

The other three highest concerns again revolve around a lack of information as to how the system works: not knowing of a suitable repository, not knowing how to deposit, and not knowing what an open access repository is.

Although this question reveals a range of strongly felt concerns which stop authors using repositories, it is nonetheless reassuring to note that none of them need be showstoppers: it’s just an argument for continued, repetitive, hard-slog advocacy of the basics.