Mendeley in WIRED

There is an interesting article on the innovative and rapidly growing Mendeley system in the latest (June 2011) issue of WIRED, which gives some background to the hopes and vision of the senior Mendeley team.

Principal investor Stefan Glaenzer: “We are aiming to make Mendeley the biggest knowledge database on the planet [...] In 19 months we have collected over 67 million articles. It took Thomson Reuters 49 years to come up with 40 million.”

Victor Henning, cofounder and CEO, is noted as explaining that the productivity/collaborative component of Mendeley will be monetised, the unique data aggregation will be monetised, Mendeley will be turned into a content distribution platform and targeted advertising will be introduced for Mendeley’s users.

They seem to have established the user base to support this: a claimed 800,000 users uploading seven million research articles (presumably full text, as against the quoted 67 million articles, which are presumably bibliographic records).

What is less clear is what monetisation routes may be built, or indeed recognised, for the producers and copyright holders of the content that is to be distributed, or whether the service itself is repayment enough for the value-added exploitation. Previously, academic authors, and by extension their employing institutions and the funders of their research, have been content to allow commercial exploitation of research articles by publishers for no direct return. This realisation has helped to bolster arguments for open access, so will future commercial exploitation systems find it as easy to be accepted?

One of the key issues of course, is that traditional publishers have sought to exclusively exploit the material – the basis of subscription-model journals – while Mendeley and others are only using what has been given to them on a freely-reusable basis. This means that they are free to re-use it as they will, make money or not – and if anyone else comes up with a compelling service, then they can get hold of the information too and good luck to them.

Interestingly, as we know from the traditional model, once research dissemination habits have been formed, they tend to become embedded and resistant to change. In this situation, the first to establish a widely used and valued system built on top of freely reusable articles might establish a firm position. Might this happen with Mendeley? Could it be that Mendeley has been in the right place at the right time – as well as giving a service that academics truly value – to become a future dominant underpinning service for research dissemination and re-use?

Bill

RCS reports now online

One of the things we do in the RCS project is to write regular reports on issues in research communication – largely about open access, but including thoughts on other developments like social networking/bibliographic management websites. These reports are now available on our website – just click the “reports” button above.

There are four reports – and we have also produced two-page (and glossier) discussion papers that summarise the main points of each.

The first summary report talks about “the power of open access” to free and potentially transform scholarly communication.

The second is about open access and institutional benefit.

The third looks at ways in which open access can add value to scholarly communication.

The fourth is about some of the issues raised by sharing research on social networking sites.

Please read, comment and use the material as you wish.

Links between open data

In another move towards more open exploration of university data, Southampton University have recently released a site which allows experimentation and mashups with some of their administrative data. This follows Tim Berners-Lee’s ideas on Linked Data and presents RDF-structured data. There is an interesting piece from The Register IT blog on the initiative which links the approach to work with the Ordnance Survey.
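As a toy illustration of what RDF-structured data looks like in practice (the URIs and property names below are invented for the example, not Southampton’s actual vocabulary), triples can be parsed and pattern-matched with nothing more than the Python standard library:

```python
import re

# A few illustrative statements in N-Triples form: each line is one
# (subject, predicate, object) triple. The URIs here are made up.
NTRIPLES = """\
<http://example.org/building/32> <http://example.org/vocab/name> "Zepler Building" .
<http://example.org/building/32> <http://example.org/vocab/hasUnit> <http://example.org/unit/ecs> .
<http://example.org/unit/ecs> <http://example.org/vocab/name> "Electronics and Computer Science" .
"""

# Matches "<uri> <uri> <uri> ." or "<uri> <uri> "literal" ."
TRIPLE = re.compile(r'<([^>]*)>\s+<([^>]*)>\s+(?:<([^>]*)>|"([^"]*)")\s*\.')

def parse(ntriples):
    """Parse N-Triples lines into (subject, predicate, object) tuples."""
    triples = []
    for line in ntriples.splitlines():
        m = TRIPLE.match(line.strip())
        if m:
            subj, pred, obj_uri, obj_lit = m.groups()
            triples.append((subj, pred, obj_uri if obj_uri is not None else obj_lit))
    return triples

def objects(triples, subject, predicate):
    """Simple pattern query: every object for a given subject and predicate."""
    return [o for s, p, o in triples if s == subject and p == predicate]

triples = parse(NTRIPLES)
print(objects(triples, "http://example.org/building/32", "http://example.org/vocab/name"))
# -> ['Zepler Building']
```

The point of the format is exactly this kind of mashup-friendliness: once data is expressed as triples, anyone’s tools can query and combine it without knowing the publisher’s database schema.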

This takes its place in a range of current experiments and acts as one pole of an approach using structured data. The other pole is exemplified by the previously reported competition to use the heterogeneous collection of information available through Mendeley. The tension between using large sets of information with basic (if any) metadata and far smaller, restricted sets whose structure allows wider experimentation is a basic question in information management. The debate will doubtless continue.

Bill

Data-mining and repositories

There has been discussion for a while about the limits of using material from typical repositories. In the absence of formal user licences, there is an implicit permission to read material – but what about data-mining?

A formal, cautious response is to adopt a lowest-common-denominator approach, where the rights to re-use material in a repository are taken to be those of the most restricted piece of content.
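As a rough sketch of that rule (the item names, rights labels and `effective_rights` helper are all invented for illustration, not taken from any repository platform), the lowest common denominator amounts to intersecting rights sets, with missing metadata treated as the most restrictive case – read-only:

```python
# Implicit permission to read, and nothing more, is assumed for any item
# that carries no rights metadata at all.
MOST_RESTRICTIVE = {"read"}

# Hypothetical repository contents with per-item rights (None = no metadata).
items = {
    "oa-article.pdf": {"read", "redistribute", "data-mine"},  # true OA, e.g. CC BY
    "publisher-pdf.pdf": {"read", "redistribute"},            # nominated rights only
    "old-preprint.pdf": None,                                 # no rights recorded
}

def effective_rights(items):
    """Rights usable across the whole collection: the intersection of every
    item's rights, with missing metadata treated as read-only."""
    rights = None
    for granted in items.values():
        granted = granted if granted is not None else MOST_RESTRICTIVE
        rights = granted if rights is None else rights & granted
    return rights or set()

print(effective_rights(items))  # -> {'read'}
```

One unlabelled item is enough to drag the whole collection down to read-only – which is precisely why per-item rights metadata matters so much for data-mining.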

Material in a repository is normally pretty heterogeneous with respect to re-use rights. And where individual pieces do not have their rights associated with them, liberal-rights pieces cannot be told apart from restrictive-rights pieces. Material that is truly Open Access is in a minority and will generally result from archiving true Open Access published materials – from BMC, for example – where limited, named rights are explicitly given to the publisher. Many more OA-published materials are actually restricted in re-use: many OA publishers are anything but true OA. The corollary of true OA publishing is that all rights not granted to the publisher are retained by the author and then, presumably, licensed to the repository.

The majority of content is in a different situation from true OA material. It will be there as a result of copyright being transferred, or exclusively licensed, to a publisher, who has then granted back, or allowed retention of, nominated rights. In such circumstances the author (and by extension the repository – see above) has only these certain, nominated rights, and if data-mining or other forms of re-use are not mentioned explicitly then, strictly, no such right exists. Some publishers explicitly exclude the right to data-mine the article, and so, without being able to identify these, the lowest-common-denominator approach kicks in.

The easiest solution for data-mining (and, it could be argued, for open access in general) is for funders to retain blanket data-mining rights – or for publicly funded research to be placed in the public domain as regards copyright, as is done in the States.

All this concern, of course, prevents the full potential of open access from being realised: what assumptions can or should be made, and what liability, if any, should be risked in order to get at this potential?

Mendeley have one solution:  do it and see!  They have just announced a competition to mine the articles that authors have put on their Mendeley accounts.  It will be interesting to see how the rights issue will be handled: it may prove to be a model for others to follow.

Bill

Peer review: open to enquiry

The House of Commons Select Committee on Science and Technology has announced an enquiry into peer review.

This seems timely: I’m hearing more and more people wondering if peer review as we know (and love?) it is threatened/challenged/becoming redundant in the web 2.0 environment. I think, though, that this enquiry is more probably a response to concerns that the current system may be insufficiently robust: the second of the enquiry’s terms of reference mentions “strengthening” peer review.

Would we like it “strengthened”, if this means giving reviewers more power and responsibility? Or is there a case for making it more open – even for publishing research without traditional peer review and letting the academic community give its verdict on the value and significance of the work? (There’s an article on this by Axel Boldt in Journal of Scholarly Publishing, 2011, 42/2, 238-242, DOI 10.3138/jsp.42.2.238 – but unfortunately it’s not open access … Also a provocative blog by Paul Fyfe at Florida State University)

Such a significant change in the way scholarly communications are validated won’t happen overnight. But is it a desirable development?

Open access and innovation in scholarly communication

We have published our third report on trends and issues in scholarly communication. Its theme is the scope of current open access practice and the opportunities it offers for innovation in scholarly communication methods.

Some people think that the battle for open access has been won. The number of repositories is growing; funders and (increasingly) HE institutions are mandating researchers to make their work openly available; open access journals are becoming mainstream (a recent blogpost by Heather Morrison asks if PLoS ONE has become the world’s largest journal). Yet it is also true to say that there is still resistance to open access in most areas of the academic community. Not all mandates are complied with; not all researchers believe that publishing in online journals carries as much prestige as publishing by traditional methods.

What might influence authors to change their minds about open access? Perhaps showing them that open access is not just about repositories or OA versions of traditional journals. In all sorts of ways OA can add value to research output. It adds value in an institutional context when the repository becomes part of an integrated system of research management. It adds value to arts and humanities research when it allows non-text research outputs such as music, images and video to be made available alongside text. It adds value to scientific practice when it contributes to initiatives in open science and open data.

Meanwhile tools such as Mendeley that combine bibliographic management with social networking appear to be increasingly attractive to researchers. Maybe OA as it has evolved in recent years, modelled on the traditional publication system, is already outdated, overtaken by Web 2.0 services more responsive to the needs of the academic community. Our report suggests, however, that there are questions to be asked about the sustainability and independence of these services in the light of their need to respond to commercial pressures.

If you are interested in any of the issues raised here, please read the full report. Your comments will be welcome.

Quality Assurance

On Monday I attended an event at the Royal College of Physicians in London, put on by the Research Information Network – part of their series on Research Information in Transition. This one was titled Quality Assurance – responding to a changing information world.

The first speaker, Richard Grant (his notes here), focused on “motivation”, and brought up the issue of the “effort to reward ratio” and the idea of “impact credits”. These themes have come up again and again (in the conversations I have had with people), and it seems clear there needs to be a change in the reward system before people will fully buy into a change in the peer-review – or, more generally, scholarly communication – system.

Theo Bloom then discussed some of the other issues we face when discussing quality assessment in the world of web 2.0. She noted that although people don’t often comment in the spaces provided on journal homepages, they do comment elsewhere, and there is a need to tie existing comments (from blogs, etc) back to articles.

Stephen Curry was up next and he focused mostly on social media and its place (or does it have a place?) in academics’ lives. I wasn’t sure how this related directly to the idea of quality assurance. He may have been getting at the idea that these tools could be used for quality assurance – or perhaps that these tools need some form of quality assurance…(but I may have come to those conclusions on my own).

Tristram Hooley wrapped things up by further discussing social media – and clearly stated that he doubted the usefulness of the idea of “quality”, and noted that quality means different things to different people. He also described the importance of filtering (and the role of networks and folksonomies in this process) and how this can help to lead to alternative ways of identifying value and quality. (Aside: Cameron Neylon has a blog post and has shared a presentation along the same line).

Of course this event was a space for discussion rather than provision of answers, and many questions still remain.

  • Is quality assurance needed? For academic articles, people still seem to feel the answer is yes – though the current peer-review system could change. For other forms of academic communication, perhaps quality assurance of another kind?
  • How do we get people to participate and assess the quality of things, whether in the form of traditional peer review or in the form of social media? (A different reward system may be the answer?)
  • With these new forms of quality assurance (blog posts, comments, networks, folksonomies) there needs to be a way to connect them back to the item they are evaluating or rating. (Not sure about this one – any ideas??)
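On that last point, one plausible linking key is the DOI: if a blog post or comment mentions an article’s DOI, the evaluation can be tied back to the article. A hedged sketch (the regex is a common simplification of DOI syntax, not the full grammar, and the sample text is invented) of pulling DOIs out of free text:

```python
import re

# Simplified DOI pattern: "10.", a registrant prefix, a slash, then a suffix.
# Real DOI suffixes can contain almost anything, so we trim trailing
# punctuation that is likely sentence punctuation rather than part of the DOI.
DOI = re.compile(r'\b10\.\d{4,9}/[^\s"<>]+', re.IGNORECASE)

def dois_in(text):
    """Return DOIs mentioned in a piece of free text, such as a blog post."""
    return [m.group(0).rstrip('.,;)') for m in DOI.finditer(text)]

post = ("Interesting critique of peer review in 10.3138/jsp.42.2.238, "
        "which echoes points made in doi:10.1371/journal.pone.0000001.")

print(dois_in(post))
# -> ['10.3138/jsp.42.2.238', '10.1371/journal.pone.0000001']
```

A service doing this at scale could then aggregate blog posts and comments per DOI – one concrete route to the “tying comments back to articles” need Theo Bloom described.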

 

Check out the “Authoring and Publishing” section on the Quality page of the Scholarly Communications Action Handbook for more thoughts on this.

Image credit: Kevin (KB35)