Industrial taskforce urges opening access

A major report by the Council for Industry and Higher Education (CIHE) is urging universities to open access to their knowledge and intellectual property to support and boost UK manufacturing capacity.

The report assesses the UK’s current position in manufacturing – Britain is still the sixth-largest manufacturer in the world by output, with manufacturing contributing £131 billion to GDP (13.5%), 75% of business research and development (R&D), 50% of UK exports and 10% of total employment.

Given the conventional wisdom that the eighties finished off UK manufacturing, this is cheering to read. However, the UK currently ranks only 17th in competitiveness and is forecast to slide. The report identifies greater access to innovative IP and cutting-edge research as essential to halting this decline.

From their release: Simon Bradley, vice-president of EADS, said that gaining greater access to universities’ knowledge, ideas and creativity was vital for manufacturing: “Our Taskforce has found that the simple act of universities opening their vast knowledge banks and providing free access to their intellectual property would have the single biggest impact on accelerating the capability and growth of smart manufacturing in the country.”

This is where open access to articles and data cuts into the “real world” and benefits can be seen outside the research community.

Some sceptical publishers continue to argue against Green OA and for locking down copyright, on the grounds of (unproven) economic impacts on their business. Open Access journals, while developing, are still far from the norm: “hybrid” journals continue to charge high fees on top of their continuing subscription costs. The response from much of the publishing world has been to treat open access as an additional profit line, or as something to allow by exception, rather than to recognise it as a new and different way of working, one that plays a part in a far larger working environment.

This report highlights that there is an economic world outside the publishing industry too, and one which is crying out for the benefits of OA.

Given the potential for open access to research to benefit this wider economic picture, as well as collaborative developments between research institutes and industry, restrictive arguments become increasingly untenable. If funders want OA, researchers want OA, institutions want OA and industry wants OA, why are some publishers’ contracts still stopping this from happening?

Bill

 


More on Money…OA Publishing Fees and Value

I was talking to a friend this weekend (all of his recent publications are open access), and he said that he still gets emails from people requesting PDFs of his work. His question was: is it worth it, economically, for him (or his funder, or institution, or whoever is paying the OA fee) to make his work OA at the individual article level? (We are talking gold OA here, as his funder requires deposit in UKPMC.) Are the people actually making use of the free OA version numerous enough to make it worth paying $1500? Is that per-person, per-access charge reasonable? How many people would have to access the article to make it worth it? And we have to subtract out the people who would already have access because they are attached to a subscribing institution (for hybrid journals).
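
As a back-of-the-envelope illustration of that question, here is a minimal sketch of the arithmetic; the fee, download count and subscriber share below are all made-up numbers for illustration, not data from his articles.

```python
# Back-of-the-envelope cost per additional reader for a gold OA fee.
# Every number below is an illustrative assumption, not real data.

oa_fee = 1500.0           # article processing charge, in dollars
total_downloads = 400     # downloads of the free version over some period
subscriber_share = 0.6    # fraction of readers who already had access via a
                          # subscribing institution (relevant for hybrid journals)

# Only readers without subscription access are the "extra" audience the fee buys.
additional_readers = total_downloads * (1 - subscriber_share)
cost_per_additional_reader = oa_fee / additional_readers

print(f"Additional readers gained: {additional_readers:.0f}")
print(f"Cost per additional reader: ${cost_per_additional_reader:.2f}")
```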

Typically we speak about financial value at a much larger level – economic value for the institution or for the country – but what researchers may want to know is the value at their level: the individual, or even the individual article. For him, or his funder, or institution, is it economical to make every paper OA, or should he just make the best papers (the ones the most people will actually want to read) OA? Clearly the value of the research has played a role in the past – think about genome data and publications, which are much more likely to be OA (see here for the latest issue in that area – NPG making what is supposed to be OA “accidentally” hidden behind a toll).

All this talk of cost per use made me think back to the PIRUS project I heard about a couple of weeks ago; this conversation really made it relevant. With accurate usage statistics, researchers could have data on how many times an article has been downloaded and where, which may demonstrate the value of OA (of course, we would need some way to tag that the article is OA). This might help demonstrate, at the level of individuals, what OA can do for them. Add in a little data about IP addresses and you could possibly even demonstrate that the article has been downloaded at locations unconnected to subscribing institutions – which would be really interesting, and could really demonstrate the moral case to academics – and you could even calculate how much you paid in OA fees for each access.
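
To make the IP-address idea concrete, here is a minimal sketch assuming you had raw download logs and a list of subscriber institutions’ network ranges; the addresses, ranges and fee are invented examples, and real PIRUS-style usage statistics would be needed to do this properly.

```python
# Sketch: classify downloads by whether they came from a known subscriber
# network, then work out what each "extra" access cost in OA fees.
# The IP ranges, download log and fee below are invented for illustration.
from ipaddress import ip_address, ip_network

subscriber_networks = [ip_network("192.0.2.0/24"),       # hypothetical campus ranges
                       ip_network("198.51.100.0/24")]

download_ips = ["192.0.2.17", "203.0.113.5", "203.0.113.99", "198.51.100.42"]

def from_subscriber(ip: str) -> bool:
    """True if a download came from one of the registered subscriber networks."""
    addr = ip_address(ip)
    return any(addr in net for net in subscriber_networks)

non_subscriber_downloads = [ip for ip in download_ips if not from_subscriber(ip)]

oa_fee = 1500.0
print(f"{len(non_subscriber_downloads)} downloads from outside subscriber networks")
if non_subscriber_downloads:
    print(f"OA fee paid per such download: ${oa_fee / len(non_subscriber_downloads):.2f}")
```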

Of course, putting your article in a repository (for free) would get around the whole discussion of cost per OA use – but funder mandates that require deposit into UKPMC make repository use slightly irrelevant for some academics. (Though of course I think funder mandates are positive, some, although working for OA, may work against the growth of repositories – who is to say whether this is for better or worse.)

You might also say that, morally, paying $1500 to have one single person who wouldn’t otherwise have access gain access would be worth it. But unfortunately not everyone’s money-to-morals equation works the same.

Image credit: -Renegade- (very busy)

Quality Assurance

On Monday I attended an event at the Royal College of Physicians in London, put on by the Research Information Network – part of their series on Research Information in Transition. This one was titled Quality Assurance – responding to a changing information world.

The first speaker, Richard Grant (his notes here), focused on “motivation”, bringing up the issue of the “effort to reward ratio” and the idea of “impact credits”. These themes have come up again and again in the conversations I have had with people, and it seems clear there needs to be a change in the reward system before people will fully buy into a change in the peer-review system or, more generally, in scholarly communication.

Theo Bloom then discussed some of the other issues we face when considering quality assessment in the world of Web 2.0. She noted that although people don’t often comment in the spaces provided on journal homepages, they do comment elsewhere, and there is a need to tie those existing comments (from blogs, etc.) back to the articles.

Stephen Curry was up next and he focused mostly on social media and its place (or does it have a place?) in academics’ lives. I wasn’t sure how this related directly to the idea of quality assurance. He may have been getting at the idea that these tools could be used for quality assurance – or perhaps that these tools need some form of quality assurance…(but I may have come to those conclusions on my own).

Tristram Hooley wrapped things up by further discussing social media. He clearly stated that he doubted the usefulness of the idea of “quality”, noting that quality means different things to different people. He also described the importance of filtering (and the role of networks and folksonomies in that process) and how it can lead to alternative ways of identifying value and quality. (Aside: Cameron Neylon has a blog post and a shared presentation along the same lines.)

Of course this event was a space for discussion rather than provision of answers, and many questions still remain.

  • Is quality assurance needed? (For academic articles, it seems people still feel the answer is yes, but the current peer-review system could change… For other forms of academic communication, perhaps quality assurance of another kind?)
  • How do we get people to participate and assess the quality of things, whether through traditional peer review or through social media? (A different reward system may be the answer?)
  • With these new forms of quality assurance (blog posts, comments, networks, folksonomies) there needs to be a way to connect them back to the item they are evaluating or rating (not sure about this one – any ideas? One rough approach is sketched below).
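
On that last point, here is a minimal sketch of one possible approach: scan the comment or blog-post text for DOIs and link them back to the articles they identify. The regex, the example comment and the DOI are assumptions for illustration, not a complete solution.

```python
# Sketch: one crude way to tie a blog comment or post back to the article it
# is evaluating is to pull out any DOIs it mentions and resolve them.
# The comment text and DOI below are invented; a real system would also need
# to handle article titles, arXiv IDs, PubMed IDs, trackbacks and so on.
import re

DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/[-._;:A-Za-z0-9]+\b")

comment = "This builds nicely on doi:10.1234/example.5678 - the methods look solid."

for doi in DOI_PATTERN.findall(comment):
    print(f"Comment refers to article: https://doi.org/{doi}")
```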

 

Check out the “Authoring and Publishing” section on the Quality page of the Scholarly Communications Action Handbook for more thoughts on this.

Image credit: Kevin (KB35)

Springer’s Realtime

It seems that traditional publishers may finally be beginning to catch up with the capabilities of the internet by actually collecting and sharing metrics for the journals and articles they publish.

Springer has recently released Realtime, which aggregates download data from Springer journal articles and book chapters and displays them with pretty pictures (graphs, tag clouds, maps, and icons).

You can look up download data by journal (I looked up Analytical and Bioanalytical Chemistry) and it will show a graph of the number of downloads from the journal over time (up to 90 days). It also lists the most downloaded articles from the journal (with number of downloads displayed), and if you sit on the page for a while (a short while in the case of this journal) you will see which article was most recently downloaded (this is the “Realtime” part). They also display tag clouds of the most frequently used keywords of the most recently downloaded articles, a feed of the latest items downloaded, and an icon display that shows downloads as they happen.

Springer states that,

[t]he goal of this service is to provide the scientific community with valuable information about how the literature is being used “right now”.

Some of this information is definitely valuable (download counts for articles), and some of it is merely fun and pretty (the icon display). The real question is whether they will provide download counts for individual articles on an ongoing, long-term basis. Currently you can only look back 90 days, and you can’t search for individual articles… you can only see download counts for your article (or a particular article) if it happens to be one of the most downloaded. So for this to actually be useful to authors and the institutions they come from, Springer will have to give us a little more, but it is a step in the right direction.
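
As an aside on what “a little more” might enable, here is a minimal sketch of how an author could keep their own long-term record of per-article downloads beyond the 90-day window, assuming the counts could be obtained somehow (the post notes you cannot currently look up individual articles, so the data source is left abstract); the file name, storage format, identifiers and figures are all assumptions.

```python
# Sketch: accumulate periodic snapshots of per-article download counts so the
# history extends beyond Realtime's rolling 90-day window.
# How the counts are obtained is left open; the DOIs and counts below are made up.
import json
from datetime import date
from pathlib import Path

HISTORY_FILE = Path("download_history.json")

def record_snapshot(counts: dict) -> None:
    """Append today's per-article download counts to a local history file."""
    history = json.loads(HISTORY_FILE.read_text()) if HISTORY_FILE.exists() else {}
    history[date.today().isoformat()] = counts
    HISTORY_FILE.write_text(json.dumps(history, indent=2))

# Example call with invented identifiers and counts:
record_snapshot({"10.1234/abc-001": 57, "10.1234/abc-002": 12})
```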