Mendeley in WIRED

There is an interesting article on the innovative and rapidly growing Mendeley system in the latest (June 2011) issue of WIRED, which gives some background to the hopes and vision of the senior Mendeley team.

Principal investor Stefan Glaenzer: “We are aiming to make Mendeley the biggest knowledge database on the planet [. . . ] In 19 months we have collected over 67 million articles. It took Thomson Reuters 49 years to come up with 40 million.”

Victor Henning, cofounder and CEO, is noted as explaining that the productivity/collaborative component of Mendeley will be monetised, the unique data aggregation will be monetised, Mendeley will be turned into a content distribution platform and targeted advertising will be introduced for Mendeley’s users.

They seem to have established the user base to support this: a claimed 800,000 users uploading seven million research articles (presumably full text, as against the quoted 67 million articles, which are presumably bibliographic records).

What is less clear is what monetisation routes may be built, or indeed recognised, for the producers and copyright holders of the content which is to be distributed, or whether the service itself is repayment enough for the value-added exploitation. Previously, academic authors – and by extension their employing institutions and the funders of their research – have been content to allow commercial exploitation of research articles by publishers. That realisation has helped to bolster arguments for open access, so will future commercial exploitation systems find it as easy to be accepted?

One of the key issues of course, is that traditional publishers have sought to exclusively exploit the material – the basis of subscription-model journals – while Mendeley and others are only using what has been given to them on a freely-reusable basis. This means that they are free to re-use it as they will, make money or not – and if anyone else comes up with a compelling service, then they can get hold of the information too and good luck to them.

Interestingly, as we know from the traditional model, once research dissemination habits have been formed, they tend to become embedded and resistant to change. In this situation, the first to establish a widely used and valued system built on top of freely reusable articles might establish a firm position. Might this happen with Mendeley? Could it be that Mendeley has been in the right place at the right time – as well as giving a service that academics truly value – to become a future dominant underpinning service for research dissemination and re-use?

Bill


Innovation Takeaway – Lessons from the Information Environment

On Thursday of last week I was at the JISC Information Environment 2009-11 Programme Meeting at Conference Aston in Birmingham. Links to relevant resources for the day can be found here with extensive notes from the day (I think mostly written by Andy McGregor) here.

A review of the programme in the form of a list of questions was also created: “27 questions the work of the IE programme can answer.”

It was an interesting day filled with review of some of the INF11 projects, but it also included a few more general talks about things within this area of work. Interesting bits from my perspective:

David Millard’s talk on Managing Learning Resources was quite interesting – he spoke of managing teaching and learning resources, and of course I couldn’t help but draw parallels with publication repositories. He described how at Southampton they looked to YouTube and Flickr for inspiration, and tried to see the learning resources repository more as hosting than archiving. This tactic should lead to greater use – though I don’t remember him reporting actual usage statistics. I do think part of the reason take-up of institutional publication repositories has been so low is that academics do not see them as adding much value – if they want to keep a copy of their work, they do – and publishing already provides them with an outlet to share. So how can we make depositing in a repository useful on an everyday level? Many repositories that have had success can populate individuals’ institutional homepages – something many academics may find useful. Integration with other systems within the institution also seems to support use. Still, there is more that can be done in this area – we need to think from the academic’s perspective as opposed to the repository’s or the library’s.

Joss Winn started off an interesting session on “Benefiting from Local Innovation”. His notes are on his blog here. They give an idea of some of the cool things they are doing at Lincoln. I think most of us that attended the session were wishing we had a similar group working at our institution.

I also attended a session on “Benefiting from Open” which had four speakers covering Open Data, Open Education Resources, Open Access and Open Source. Key things that came up in discussion included the need for embedding within institutions, licensing, and the need for cultural change before this “openness” is widely adopted.

Do take a look at the notes and the JISC INF11 webpage if you are interested in learning more about this programme – and what the future could potentially hold for it.

Industrial taskforce urges opening access

A major report by the Council for Industry and Higher Education (CIHE) is urging universities to open access to their knowledge and intellectual property to support and boost UK manufacturing capacity.

The report assesses the UK’s current position in manufacturing – Britain is still the sixth largest manufacturer in the world by output, with manufacturing contributing £131 billion to GDP (13.5%), 75% of business research and development (R&D), 50% of UK exports and 10% of total employment.

Given the conventional wisdom that the eighties finished off UK manufacturing, this is cheering to read. However, the UK currently ranks only 17th in competitiveness and is forecast to slide. The report identifies greater access to innovative IP and cutting edge research as essential to halt this decline.

From their release: Simon Bradley, vice-president of EADS, said that gaining greater access to universities’ knowledge, ideas and creativity was vital for manufacturing: “Our Taskforce has found that the simple act of universities opening their vast knowledge banks and providing free access to their intellectual property would have the single biggest impact on accelerating the capability and growth of smart manufacturing in the country.”

This is where open access to articles and data cuts into the “real world” and benefits can be seen outside the research community.

Some sceptical publishers continue to argue against Green OA and for locking down copyright on the grounds of (unproven) economic impacts on their business. Open Access journals, while developing, are still far from the norm: “hybrid” journals continue to charge high fees on top of their continuing subscription costs. The response from much of the publishing world has been to see open access as an additional profit line, or as something to allow by exception, rather than a recognition of a different and new way of working and of OA as playing a part in a far larger working environment.

This report highlights that there is an economic world outside the publishing industry too, and one which is crying out for the benefits of OA.

Given the potential for open access to research to benefit this wider economic picture, as well as collaborative developments between research institutes and industry, restrictive arguments become increasingly untenable. If funders want OA, researchers want OA, institutions want OA and industry wants OA, why are some publishers’ contracts still stopping this from happening?

Bill


Publisher and Institutional Repository Usage Statistics (PIRUS 2)

Yesterday I attended a seminar – Counting Individual Article Usage – which reported on the results of the JISC funded PIRUS 2 Project.

It was a full day, with many interesting speakers. In the morning the talks focused on the project itself, while afternoon talks covered the bigger picture.

All About PIRUS 2

  • Hazel Woodward started things off by setting the stage and providing us with the aims and objectives of the PIRUS 2 project. These can be found here. Basically they were looking at the viability of creating a system that can bring together usage (download) data, at an article level, from publishers and repositories, in a standardised format.
  • Peter Shepherd from COUNTER then gave a review of the organisational, economic, and political issues involved with the project. Cost allocation hasn’t been explored fully, but currently publishers would be expected to bear the brunt of the costs, with repositories also contributing. Politically, there are still a lot of issues that remain (one being whether publishers and repositories are actually willing to provide their own data, and willing to pay for such a service).
  • Paul Needham then took us through the technical side, and showed us that, yes, it is technically feasible to collect, consolidate, and standardise “download event” usage data from a number of different providers.
  • Ed Pentz from CrossRef then talked about the importance (and relevance to PIRUS) of DOIs, and also described ORCID (Open Researcher and Contributor ID).
  • Paul Smith from ABCe spoke about their possible role as auditor for PIRUS.
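The core idea the morning talks described – consolidating download events from publishers and repositories into standardised per-article counts – can be sketched in a few lines. This is purely illustrative: the field names and records below are invented, and the real PIRUS 2 work defined its own exchange formats and COUNTER-style rules (e.g. filtering robot traffic), which are not reproduced here.

```python
from collections import Counter

# Hypothetical download-event records, as different providers might
# report them; real PIRUS exchange formats differ.
events = [
    {"doi": "10.1000/j.example.2011.01.001", "source": "publisher"},
    {"doi": "10.1000/j.example.2011.01.001", "source": "repository"},
    {"doi": "10.1000/j.example.2011.01.002", "source": "repository"},
]

def consolidate(events):
    """Aggregate download events into per-article (per-DOI) totals,
    so counts from different providers can be merged."""
    return dict(Counter(e["doi"] for e in events))

print(consolidate(events))
# {'10.1000/j.example.2011.01.001': 2, '10.1000/j.example.2011.01.002': 1}
```

The DOI is doing the heavy lifting here, which is why the speakers kept returning to the need for all publishers to use DOIs: without a shared article identifier, counts from different providers cannot be merged at all.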

The bigger picture

  • Mark Patterson from PLoS then gave an interesting talk, describing some of the new alternative impact metrics (some that PLoS now provides). He cited people such as Jason Priem (see alt-metrics: a manifesto) and commented that changing the focus from journal to article would change the publication process.
  • Gregg Gordon from SSRN also spoke of alternative methods to measure usage, and also noted the importance of context when thinking about usage.
  • Daniel Beuke from OA Statistik then gave a review of their project (very similar to PIRUS), based in Germany. It would be interesting to see how these two teams could work together. These projects (along with SURE) have worked together under the Knowledge Exchange’s Open Access Usage Statistics work (see here for their work on international interoperability guidelines).
  • Ross MacIntyre then spoke about the Journal Usage Statistics Portal, another JISC supported project.
  • Paul Needham then gave us a demonstration of the functioning PIRUS database and we closed the day with a panel discussion.

Unfortunately, I felt not enough emphasis was placed on demonstrating the usefulness of PIRUS 2 and the data that it could potentially generate. The political side of the discussion would also have been very interesting to delve into further.

Interesting things that kept popping up:

  • The importance of standardisation of author and research names (ORCID)
  • The need for a metadata description standard (e.g. whether the paper is peer reviewed)
  • And the need for all publishers to use DOIs

Some of the questions I’m still thinking about:

  • Are publishers really willing to share this data?
  • What can a publisher really gain from this type of collation of usage data? And a repository?
  • To make it most useful, everyone would need to contribute (and have access?). What would be the competitive advantage of having access to this data if everyone has access?
  • We now know it is technically feasible, but is it economically and politically feasible?
  • Are we ready to place value on these alternative metrics of usage (i.e. not Journal Impact Factor)? Who says we are ready? Are institutions ready? Will this usage data count as impact in the REF?
  • What about other places people put articles – personal web pages, institutional web pages, etc. – could this data be included?
  • What about including data from the downloads of briefing papers, working papers, and preprints? Doesn’t usage of these also signify impact?

Open access and innovation in scholarly communication

We have published our third report on trends and issues in scholarly communication. Its theme is the scope of current open access practice and the opportunities it offers for innovation in scholarly communication methods.

Some people think that the battle for open access has been won. The number of repositories is growing; funders and (increasingly) HE institutions are mandating researchers to make their work openly available; open access journals are becoming mainstream (a recent blogpost by Heather Morrison asks if PLoS ONE has become the world’s largest journal). Yet it is also true to say that there is still resistance to open access in most areas of the academic community. Not all mandates are complied with; not all researchers believe that publishing in online journals carries as much prestige as publishing by traditional methods.

What might influence authors to change their minds about open access? Perhaps showing them that open access is not just about repositories or OA versions of traditional journals. In all sorts of ways OA can add value to research output. It adds value in an institutional context when the repository becomes part of an integrated system of research management. It adds value to arts and humanities research when it allows non-text research outputs such as music, images and video to be made available alongside text. It adds value to scientific practice when it contributes to initiatives in open science and open data.

Meanwhile tools such as Mendeley that combine bibliographic management with social networking appear to be increasingly attractive to researchers. Maybe OA as it has evolved in recent years, modelled on the traditional publication system, is already outdated, overtaken by Web 2.0 services more responsive to the needs of the academic community. Our report suggests, however, that there are questions to be asked about the sustainability and independence of these services in the light of their need to respond to commercial pressures.

If you are interested in any of the issues raised here, please read the full report. Your comments will be welcome.

Springer’s Realtime

It seems that traditional publishers may finally be beginning to catch up with the capabilities of the internet, and actually collecting and sharing metrics for the journals and articles they publish.

Springer has recently released Realtime, which aggregates download data from Springer journal articles and book chapters and displays them with pretty pictures (graphs, tag clouds, maps, and icons).

You can look up download data by journal (I looked up Analytical and Bioanalytical Chemistry) and it will show a graph of the number of downloads from the journal over time (up to 90 days). It also lists the most downloaded articles from the journal (with number of downloads displayed), and if you sit on the page for a while (a short while in the case of this journal) you will see which article was most recently downloaded (this is the “Realtime” part). They also display tag clouds of the most frequently used keywords of the most recently downloaded articles, a feed of the latest items downloaded, and an icon display that shows downloads as they happen.

Springer states that,

[t]he goal of this service is to provide the scientific community with valuable information about how the literature is being used “right now”.

Some of this information is definitely valuable (download counts for articles), and some of it is merely fun and pretty (the icon display). The real question is whether they will provide download counts for individual articles on an ongoing, long-term basis. Currently you can only look back 90 days, and you can’t search for individual articles – you can only see download counts for your article (or a particular article) if it happens to be one of the most downloaded. So for this to actually be useful to authors and the institutions they come from, Springer will have to give us a little more, but it is a step in the right direction.
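As a hypothetical illustration of that 90-day restriction, a rolling-window count per article might look like the sketch below. The data and function are invented for illustration only – Springer has not, to my knowledge, published an API for Realtime.

```python
from datetime import date, timedelta

# Invented (doi, download_date) log entries for illustration.
log = [
    ("10.1007/example-1", date(2011, 5, 1)),
    ("10.1007/example-1", date(2011, 2, 1)),   # falls outside the window
    ("10.1007/example-2", date(2011, 5, 20)),
]

def counts_last_90_days(log, today):
    """Count downloads per DOI, keeping only events in the last 90 days."""
    cutoff = today - timedelta(days=90)
    counts = {}
    for doi, day in log:
        if day >= cutoff:
            counts[doi] = counts.get(doi, 0) + 1
    return counts

print(counts_last_90_days(log, today=date(2011, 6, 1)))
# {'10.1007/example-1': 1, '10.1007/example-2': 1}
```

The point of the sketch is what the window throws away: older downloads simply vanish from the totals, which is exactly why a 90-day view is of limited use to authors wanting a cumulative record.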

PLoS biodiversity hub: open access working well

Catching up (a little late) on the launch of the PLoS Biodiversity Hub, which looks like a great example of how open access publications can be the basis for an enriched reader experience. The hub will provide a space for researchers to exchange views and ideas – and add value to articles by, for instance, linking to data or other relevant information. Making communication easier and research more fruitful – exactly what open access does best.