Author Attitudes, Beliefs, Behaviours

I recently looked over another paper on author attitudes towards Open Access – InTech’s, which was published last month and is available here. From this report, the work we have done through the RCS project, discussions I have had, and other papers I have read, two things have now become clear to me (perhaps I am a little late coming to these conclusions, but I haven’t been working in this area as long as many others have).

  1. Impact Factor and its influence is not something we can ignore – for many academics the most important thing is the journal name and the impact that is associated with it. This is currently a major barrier to 1) getting academics to publish in new journals (i.e. OA journals), and 2) getting the publishing system to change (high impact journals have no need to change their business model as publishing in them is highly desired).
    • The obvious counter-argument is of course self-archiving and repositories – but we have to be aware that many high impact journals do not allow immediate self-archiving. I did a quick analysis using the top ten journals with the highest impact factor (ISI Impact Factor – from Wikipedia) and only 3/10 allowed post-print archiving (according to RoMEO). If you use the top ten journals with the highest combined impact factor (ISI impact factor and PageRank – from Wikipedia) it is a bit better, with 5/10 allowing post-print archiving. And if you use ScienceWatch’s top ten most-cited journals, 7/10 allow post-print archiving, which is actually pretty good.
    • My point is, this issue unfortunately is not instantly solved by self-archiving. Instead we may need to change how academics are evaluated, tenured, promoted, etc. My feeling is that this system is not changing anytime soon…what would it change to?
  2. Academics don’t really have a clue about what Open Access is. I have posted on this topic before here. They don’t know that there are multiple ways to make their work OA, and that OA can actually benefit them. They are also mostly unaware of funder and institutional mandates, and they often have no clue that repositories even exist at their institution, for their use.
    • How can we expect academics to make their work OA if they don’t even know what it is?
    • So, what is to be done about this? Who should be responsible for advocating and informing academics? Should this occur at the institutional level, national level, or worldwide?
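The quick journal tallies above are easy to reproduce once you have the policy data in hand; here is a minimal sketch in Python (the journal entries and archiving flags are hypothetical placeholders, not actual RoMEO records):

```python
def postprint_share(journals):
    """Fraction of journals whose policy allows post-print self-archiving."""
    allowed = sum(1 for _name, allows_postprint in journals if allows_postprint)
    return f"{allowed}/{len(journals)}"

# Hypothetical stand-ins for a RoMEO policy lookup; in the quick analysis
# above, the three top-ten lists came out at 3/10, 5/10 and 7/10.
top_by_isi = [(f"Journal {i}", i < 3) for i in range(10)]
print(postprint_share(top_by_isi))  # → 3/10
```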

For more on author attitudes, beliefs, and behaviours see the following (I have not read all of these – but they are all sitting in a stack on my desk :))

Morris, Sally & Thorn, Sue. (2009). Learned society members and open access. Learned Publishing 22 (3) p. 221-39 http://uksg.metapress.com/app/home/contribution.asp?referrer=parent&backto=issue,14,21;journal,8,71;linkingpublicationresults,1:107730,1

Kim, Jihyun. (2010). Faculty Self-Archiving: Motivations and Barriers. Journal of the American Society for Information Science and Technology. 61(9), 1909-1922. http://onlinelibrary.wiley.com/doi/10.1002/asi.21336/abstract

Stone, Graham. (2010). Report on the University Repository Survey, October-November 2010. Research report http://eprints.hud.ac.uk/9257/

Park, Ji-Hong & Qin, Jian (2007). Exploring the Willingness of Scholars to Accept Open Access: A Grounded Theory Approach. Journal of Scholarly Publishing. http://utpjournals.metapress.com/content/c97213218720314m/

Theodorou, Roxana. (2010). OA Repositories: the Researchers’ Point of View. Journal of Electronic Publishing, 13(3). http://quod.lib.umich.edu/cgi/t/text/text-idx?c=jep;view=text;rgn=main;idno=3336451.0013.304

Allen, James. (2005). Interdisciplinary differences in attitudes towards deposit in institutional repositories http://en.scientificcommons.org/2075479

Moore, Gale. (2011). Survey of University of Toronto Faculty Awareness, Attitudes and Practices Regarding Scholarly Communication: A Preliminary Report. https://tspace.library.utoronto.ca/bitstream/1807/26446/3/Preliminary_Report.pdf

Image credit: Steve Rhodes


Industrial taskforce urges opening access

A major report by the Council for Industry and Higher Education (CIHE) is urging universities to open access to their knowledge and intellectual property to support and boost UK manufacturing capacity.

The report assesses the UK’s current position in manufacturing – Britain is still the sixth largest manufacturer in the world by output, with manufacturing contributing £131 billion to GDP (13.5%), 75% of business research and development (R&D), 50% of UK exports and 10% of total employment.

Given the conventional wisdom that the eighties finished off UK manufacturing, this is cheering to read. However, the UK currently ranks only 17th in competitiveness and is forecast to slide. The report identifies greater access to innovative IP and cutting edge research as essential to halt this decline.

From their release:  Simon Bradley, vice-president of EADS, said to gain greater access to universities’ knowledge, ideas and creativity was vital for manufacturing: “Our Taskforce has found that the simple act of universities opening their vast knowledge banks and providing free access to their intellectual property would have the single biggest impact on accelerating the capability and growth of smart manufacturing in the country.”

This is where open access to articles and data cuts into the “real world” and benefits can be seen outside the research community.

Some sceptical publishers continue to argue against Green OA and for locking down copyright, on the grounds of (unproven) economic impacts on their business. Open Access journals, while developing, are still far from the norm: “hybrid” journals continue to charge high fees on top of their continuing subscription costs. Much of the publishing world has responded by treating open access as an additional profit line, or as something to allow by exception, rather than recognising it as a new way of working that plays a part in a far larger working environment.

This report highlights that there is an economic world outside the publishing industry too, and one which is crying out for the benefits of OA.

Given the potential for open access to research to benefit this wider economic picture, as well as collaborative developments between research institutes and industry, restrictive arguments become increasingly untenable. If funders want OA, researchers want OA, institutions want OA and industry wants OA, why are some publishers’ contracts still stopping this from happening?

Bill

 

Publisher and Institutional Repository Usage Statistics (PIRUS 2)

Yesterday I attended a seminar – Counting Individual Article Usage – which reported on the results of the JISC funded PIRUS 2 Project.

It was a full day, with many interesting speakers. In the morning the talks focused on the project itself, while afternoon talks covered the bigger picture.

All About PIRUS 2

  • Hazel Woodward started things off by setting the stage and providing us with the aims and objectives of the PIRUS 2 project. These can be found here. Basically they were looking at the viability of creating a system that can bring together usage (download) data, at an article level, from publishers and repositories, in a standardised format.
  • Peter Shepherd from COUNTER then gave a review of the organisational, economic, and political issues involved with the project. Cost allocation hasn’t been explored fully, but currently publishers would be expected to bear the brunt of the costs, with repositories also contributing. Politically, a lot of issues remain (one being whether publishers and repositories are actually willing to provide their own data, and willing to pay for such a service).
  • Paul Needham then took us through the technical side, and showed us that, yes, it is technically feasible to collect, consolidate, and standardise “download event” usage data from a number of different providers.
  • Ed Pentz from CrossRef then talked about the importance (and relevance to PIRUS) of DOIs, and also described ORCID (Open Researcher and Contributor ID).
  • Paul Smith from ABCe spoke about their possible role as auditor for PIRUS.

The bigger picture

  • Mark Patterson from PLoS then gave an interesting talk describing some of the new alternative impact metrics (some of which PLoS now provides). He cited people such as Jason Priem (see alt-metrics: a manifesto) and commented that changing the focus from the journal to the article would change the publication process.
  • Gregg Gordon from SSRN also spoke of alternative methods to measure usage, and also noted the importance of context when thinking about usage.
  • Daniel Beuke from OA Statistik then gave a review of their project (very similar to PIRUS) set in Germany. It would be interesting to see how these two teams could work together. These projects (along with SURE) have worked together under the Knowledge Exchange’s Open Access Usage Statistics work (see here for their work on International interoperability guidelines).
  • Ross MacIntyre then spoke about the Journal Usage Statistics Portal, another JISC-supported project.
  • Paul Needham then gave us a demonstration of the functioning PIRUS database and we closed the day with a panel discussion.
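The core technical idea – download events keyed by DOI so they can be merged across publishers and repositories – can be sketched very simply. This is only an illustration with hypothetical DOIs and field names; the actual PIRUS specification (built around COUNTER rules and OpenURL-style tracking) is considerably more involved:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class DownloadEvent:
    doi: str        # the DOI identifies the article across all providers
    source: str     # publisher platform or repository reporting the event
    timestamp: str  # ISO 8601 date of the download

def consolidate(events):
    """Total downloads per article (DOI), regardless of where they occurred."""
    return Counter(e.doi for e in events)

# Hypothetical events from two different providers for the same article
events = [
    DownloadEvent("10.1000/xyz123", "publisher-platform", "2011-03-01"),
    DownloadEvent("10.1000/xyz123", "institutional-repo", "2011-03-02"),
    DownloadEvent("10.1000/abc999", "institutional-repo", "2011-03-02"),
]
print(consolidate(events))  # xyz123 counted twice, abc999 once
```

The point of standardisation is exactly this: once every provider reports events in a common shape keyed by DOI, article-level totals fall out of a simple merge.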

Unfortunately, I felt not enough emphasis was placed on demonstrating the usefulness of PIRUS 2 and the data that it could potentially generate. The political side of the discussion would also have been very interesting to delve into further.

Interesting things that kept popping up:

  • The importance of standardisation of author and research names (ORCID)
  • The need for a metadata description standard (e.g. whether a paper is peer reviewed)
  • And the need for all publishers to use DOIs

Some of the questions I’m still thinking about:

  • Are publishers really willing to share this data?
  • What can a publisher really gain from this type of collation of usage data? And a repository?
  • To make it most useful everyone would need to contribute (and have access?). What would be the competitive advantage of having access to this data if everyone has access?
  • We now know it is technically feasible, but is it economically and politically feasible?
  • Are we ready to place value on these alternative metrics of usage (i.e. not Journal Impact Factor)? Who says we are ready? Are institutions ready? Will this usage data count as impact in the REF?
  • What about other places people put articles – personal web pages, institutional web pages, etc. – could this data be included?
  • What about including data from the downloads of briefing papers, working papers, and preprints? Doesn’t usage of these also signify impact?

Quality Assurance

On Monday I attended an event at the Royal College of Physicians in London, put on by the Research Information Network – part of their series on Research Information in Transition. This one was titled Quality Assurance – responding to a changing information world.

The first speaker, Richard Grant (his notes here), focused on “motivation”, and brought up the issue of the “effort to reward ratio” and the idea of “impact credits”. These themes have come up again and again (in the conversations I have had with people), and it seems clear that there needs to be a change in the reward system before people will fully buy into a change in the peer-review (or, more generally, scholarly communication) system.

Theo Bloom then discussed some of the other issues we face when discussing quality assessment in the world of web 2.0. She noted that although people don’t often comment in the spaces provided on journal homepages, they do comment elsewhere, and there is a need to tie existing comments (from blogs, etc) back to articles.

Stephen Curry was up next and he focused mostly on social media and its place (or does it have a place?) in academics’ lives. I wasn’t sure how this related directly to the idea of quality assurance. He may have been getting at the idea that these tools could be used for quality assurance – or perhaps that these tools need some form of quality assurance…(but I may have come to those conclusions on my own).

Tristram Hooley wrapped things up by further discussing social media – and clearly stated that he doubted the usefulness of the idea of “quality”, and noted that quality means different things to different people. He also described the importance of filtering (and the role of networks and folksonomies in this process) and how this can help to lead to alternative ways of identifying value and quality. (Aside: Cameron Neylon has a blog post and has shared a presentation along the same line).

Of course this event was a space for discussion rather than provision of answers, and many questions still remain.

  • Is quality assurance needed? (for academics articles? – seems like people still feel the answer is yes – but the current peer-review system could change…for other forms of academic communication? – perhaps quality assurance of another kind?)
  • How do we get people to participate / assess the quality of things? – whether it’s in the form of traditional peer-review or in the form of social media? (a different reward system may be the answer?)
  • With these new forms of quality assurance (blog posts, comments, network, folksonomies) there needs to be a way to connect them back to the item they are evaluating or rating (not sure about this one? Any ideas??)

 

Check out the “Authoring and Publishing” section on the Quality page of the Scholarly Communications Action Handbook for more thoughts on this.

Image credit: Kevin (KB35)

Springer’s Realtime

It seems that traditional publishers may finally be beginning to catch up with the capabilities of the internet, and actually collecting and sharing metrics for the journals and articles they publish.

Springer has recently released Realtime, which aggregates download data from Springer journal articles and book chapters and displays them with pretty pictures (graphs, tag clouds, maps, and icons).

You can look up download data by journal (I looked up Analytical and Bioanalytical Chemistry) and it will show a graph of the number of downloads from the journal over time (up to 90 days). It also lists the most downloaded articles from the journal (with number of downloads displayed), and if you sit on the page for a while (a short while in the case of this journal) you will see which article was most recently downloaded (this is the “Realtime” part). They also display tag clouds of the most frequently used keywords of the most recently downloaded articles, a feed of the latest items downloaded, and an icon display that shows downloads as they happen.

Springer states that,

[t]he goal of this service is to provide the scientific community with valuable information about how the literature is being used “right now”.

Some of this information is definitely valuable (download counts for articles), and some of it is merely fun and pretty (the icon display). The real question is whether they will provide download counts for individual articles on an ongoing, long-term basis. Currently you can only look back 90 days, and you can’t search for individual articles…you can only see download counts for your article (or a particular article) if it happens to be one of the most downloaded. So for this to be genuinely useful to authors and the institutions they come from, Springer will have to give us a little more, but it is a step in the right direction.
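That 90-day look-back is essentially a rolling-window count over download events. A minimal sketch of the idea, with hypothetical DOIs and dates (not Springer’s actual implementation):

```python
from datetime import date, timedelta

def downloads_last_90_days(events, today):
    """Count downloads per article within a rolling 90-day window,
    mirroring the limited look-back a service like Realtime offers."""
    cutoff = today - timedelta(days=90)
    counts = {}
    for doi, day in events:
        if day >= cutoff:  # events older than the window are invisible
            counts[doi] = counts.get(doi, 0) + 1
    return counts

events = [
    ("10.1000/aaa", date(2010, 12, 5)),  # falls outside the 90-day window
    ("10.1000/aaa", date(2011, 1, 5)),
    ("10.1000/aaa", date(2011, 3, 20)),
    ("10.1000/bbb", date(2011, 3, 25)),
]
print(downloads_last_90_days(events, date(2011, 4, 1)))
# aaa counted twice, bbb once; the December event has aged out
```

The sketch makes the limitation obvious: anything older than the window simply disappears, which is why long-term, per-article counts would be the genuinely useful offering.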

Glasgow launches “Easy Access IP”

The University of Glasgow has announced an innovative and exciting scheme for the free licensing of a range of the University’s research-based technical knowledge, consulting skills, ideas and patents. All it seems to want in return is acknowledgement, seeing the likely benefits of free, easy access and use of ideas and skills as outweighing traditional licensing costs. Hmm – that idea sounds familiar in another context . . .

The University of Glasgow has long been an active advocate for Open Access and a leading participant in repository development and growth, not least through a number of JISC and internally sponsored projects and initiatives. Easy Access IP seems a logical extension of this work – obvious now that someone has thought of it and launched it (development paths are always easy to see looking back!). This is an exciting idea and it will be interesting to see how it is used, the outreach, engagement and knowledge transfer benefits it brings – and which other institutions adopt a similar approach.

Easy Access IP is an interesting and innovative example of the way that the ideals of Open Access – opening the fruits of research to the world, at minimum cost – can apply in areas other than research publications. It is also a pointer to a larger trend in research towards openness and the encouragement of new ways of collaborating and communicating.

Bill

Impact of Open Access: Briefing Papers

A couple of briefing papers recently came across my desk, both on the impact of open access. Alma Swan, in Open Access impact: A Briefing Paper for Researchers, universities and funders, summarises the academic impact, impact for universities, and economic impact of open access, and includes sections on visibility, usage, citation impact, competitive profile, and knowledge transfer.

Frederick Friend, in The Impact of Open Access outside European Universities, describes the current and potential impact and benefits of open access for teaching and learning, small and medium-sized enterprises, individual taxpayers, and emerging and developing economies, and concludes with a section on yet-to-be-realised impact.

Have a read, and definitely pass them on to those who could learn more about the impact and benefits of open access.