ARMA Conference

I spent the first few days of this week in Glasgow attending the Association of Research Managers and Administrators (ARMA) UK conference. I presented a poster on some of the findings from our Chemists and Economists survey, and had a delightful time speaking with many Research Administrators and Managers, all of whom seemed quite knowledgeable about Open Access and more interested in the topic than I expected.

I attended a variety of sessions and learned quite a bit about research management and administration, gaining a new insight into this profession. Below are a few notes on what I saw as the highlights.

The opening Plenary had two speakers, Professor Anton Muscatelli, the Principal of the University of Glasgow, and Ehsan Masood, the Editor of Research Fortnight and Research Europe. Both speakers gave engaging talks, and both, of course, identified that we are in challenging times when it comes to research funding. Professor Muscatelli identified a number of things that institutions could focus on in order to meet these challenges. These were: 1) recognise the value of research (knowledge transfer, identifying and quantifying impact, etc.), 2) disseminate research imaginatively (changing approaches to IP), and 3) manage research efficiently and effectively. Mr. Masood discussed some of the other ongoing issues: funding cuts, concerns about using metrics, and using research assessment to allocate funding (which he noted encourages game-playing and concentration).

I attended a session on the REF Assessment Framework, presented by Chris Taylor, Deputy REF Manager. Although a lot of the details about the REF will not be released until later in the summer, this session did give me a good idea of what will be expected in the REF process. The conference delegates had many questions of course, and the thing that I found particularly interesting (which I hadn’t realised before) was that for the next REF, “impact” will be measured for the unit as a whole and not linked to submitting staff (this, I think, is an attempt to get away from Impact Factor measurements, which is good!).

I also attended an interesting session on choosing a Research Management system – with Jonathan Cant discussing Hull’s experience using AVEDAS CONVERIS and Jill Golightly describing Newcastle’s experience with a system built in-house. Ellie James, from Keele, did a session describing her experiences as a Research Planning and Project Manager (responsible for Keele’s REF submission) setting up a repository. It was interesting to see repositories from a different perspective – and it reminded me how important it is that institutions set goals and objectives when setting up repositories.

It was a really interesting conference – and most importantly I learned that researcher managers and administrators definitely know how to have a good time! 🙂


Innovation Takeaway – Lessons from the Information Environment

On Thursday of last week I was at the JISC Information Environment 2009-11 Programme Meeting at Conference Aston in Birmingham. Links to relevant resources for the day can be found here with extensive notes from the day (I think mostly written by Andy McGregor) here.

A review of the programme in the form of a list of questions was also created: “27 questions the work of the IE programme can answer.”

It was an interesting day filled with review of some of the INF11 projects, but it also included a few more general talks about things within this area of work. Interesting bits from my perspective:

David Millard’s talk on Managing Learning Resources was quite interesting – he spoke of managing teaching and learning resources, and of course I couldn’t help but draw parallels with publication repositories. He described how at Southampton they looked to YouTube and Flickr for inspiration, and tried to see the learning resources repository more as hosting than archiving. This tactic should lead to greater use – though I don’t remember him reporting on actual usage statistics. I do think part of the reason take-up of institutional publication repositories has been so low is that academics do not see them as adding a lot of value – if they want to keep a copy of their work, they do – and publishing already provides them with an outlet to share. So how can we make depositing in a repository useful on an everyday level? Many repositories that have had success offer the ability to populate individuals’ institutional homepages – something many academics may find useful. Integration with other systems within the institution also seems to support use. Still, there is more that can be done in this area – we need to think from the academic’s perspective as opposed to the repository’s or the library’s.

Joss Winn started off an interesting session on “Benefiting from Local Innovation”. His notes are on his blog here. They give an idea of some of the cool things they are doing at Lincoln. I think most of us that attended the session were wishing we had a similar group working at our institution.

I also attended a session on “Benefiting from Open” which had four speakers covering Open Data, Open Education Resources, Open Access and Open Source. Key things that came up in discussion included the need for embedding within institutions, licensing, and the need for cultural change before this “openness” is widely adopted.

Do take a look at the notes and the JISC INF11 webpage if you are interested in learning more about this programme – and what the future could potentially hold for it.

Data-mining and repositories

There has been discussion for a while about the limits of using material from typical repositories.  In the absence of formal user-licences, there is an implicit permission to read material – but what about data-mining?

A formal, cautious approach is to take a lowest-common-denominator position, where the rights to re-use material in a repository are taken as being those of the most restricted piece of content.

Material in a repository is normally pretty heterogeneous with respect to re-use rights.  And where the individual pieces do not have their rights associated with them, liberal-rights pieces cannot be told apart from restrictive-rights pieces.  Material that is truly Open Access is in a minority and will generally result from archiving true Open Access published materials, from BMC for example, where limited named rights are explicitly given to the publisher.  Many more OA published materials are actually restricted in re-use: many OA publishers are anything but true OA. The corollary of true OA publishing is that all rights not granted to the publisher are retained by the author and then, presumably, licensed to the repository.

The majority of content is in a different situation from true OA material.  It will be in there as a result of copyright being transferred or exclusively licensed to a publisher, who has then granted back, or allowed retention of, nominated rights.  In such a circumstance the author (and by extension the repository, see above) has only these certain, nominated rights, and if data-mining or other forms of re-use are not mentioned explicitly, then strictly, no such right exists.  Some publishers explicitly exclude the right to data-mine the article and so, without being able to identify these, the lowest-common-denominator approach kicks in.

The easiest solution for data-mining (and, it could be argued, for open access in general) is for blanket data-mining rights to be retained by funders, or for publicly funded research to be placed in the public domain as regards copyright, as is done in the United States.

All this concern, of course, restricts the full potential of open access from being realised: what assumptions can or should be made, and what liability, if any, should be risked in order to get at this potential?

Mendeley have one solution:  do it and see!  They have just announced a competition to mine the articles that authors have put on their Mendeley accounts.  It will be interesting to see how the rights issue will be handled: it may prove to be a model for others to follow.

Bill

Peering into the PEER report

Looking at the recently released PEER baseline report on authors and users regarding journals and repositories, from Loughborough University.

This is packed full of very useful and very dense information and analysis which will repay many close readings.

For advocates of open access, and in particular institutional repositories, one immediately interesting question is Q 22: What reservations do you have about placing your peer-reviewed journal articles in publicly available repositories?

Responses to this are quite fascinating. As a (very) rough analysis, putting together the figures seems to show that the most significant concern is a reluctance to put research publications in a repository where other materials have not been peer-reviewed, with nearly 50% considering this either very important or important.

Following close behind are concerns about infringing copyright and infringing embargo periods; concern about the paper not having been “properly edited by the publisher”; not knowing of a suitable repository; concern about plagiarism or unknown reuse; then not knowing how to deposit material in a repository and not knowing what a repository was.  Other concerns are then a step change down from these.

If as advocates we want to get more material into repositories, these might well be the key questions for advocacy to address.  Interestingly, none of these is unanswerable or requires policy change or mandates: they revolve around a simple lack of knowledge.

For instance, the top concern of sharing server-space with pre-prints really revolves around a lack of knowledge as to how the open access repository system works.  I doubt if academics really object to their words being held on adjacent tracks on a hard disk to non-peer-reviewed material.  I suspect the worry is that a user accessing the material will be presented with their hard-earned peer-reviewed material “displayed” alongside non-peer-reviewed material.

In other words it is the difference between storage and access.  Material can be deposited and stored in a repository, but users will access the material in a separate fashion and be able to separate it out by subject, peer review status, etc.  If this distinction is not appreciated by an author, then they may well see the repository as both storage and access mechanism: whereas for almost all users the actual repository – and its accompanying content – will be reduced in use to a single cover sheet on the article that they actually want.

The concerns about copyright and embargoes, again, are really a matter of the author being given the right information at the right time.  Repository managers commonly use RoMEO to find out this information: there is a strong case for arguing that RoMEO’s API should be used more widely to embed the information directly into the deposit process.  Or at least, tell authors that copyright and embargo information is readily available and that this should not be an issue for them.
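As a rough illustration of embedding that lookup in a deposit workflow, the sketch below parses a RoMEO-style XML response to pull out a publisher’s pre-print and post-print archiving permissions. The element names and the sample response are assumptions modelled on the classic SHERPA/RoMEO API and should be checked against the live service; a real deposit form would fetch the response over HTTP for the journal’s ISSN.

```python
# Illustrative sketch: surfacing a publisher's archiving policy to an
# author at deposit time. Element names are assumptions modelled on the
# classic SHERPA/RoMEO API's XML and may differ from the live service.
import xml.etree.ElementTree as ET

# Hypothetical sample response for a made-up publisher.
SAMPLE_RESPONSE = """<romeoapi>
  <publishers>
    <publisher>
      <name>Example Press</name>
      <preprints><prearchiving>can</prearchiving></preprints>
      <postprints><postarchiving>restricted</postarchiving></postprints>
    </publisher>
  </publishers>
</romeoapi>"""

def archiving_policy(xml_text):
    """Return (preprint, postprint) archiving permissions from a RoMEO-style response."""
    root = ET.fromstring(xml_text)
    pre = root.findtext(".//preprints/prearchiving", default="unknown")
    post = root.findtext(".//postprints/postarchiving", default="unknown")
    return pre, post

print(archiving_policy(SAMPLE_RESPONSE))  # → ('can', 'restricted')
```

A deposit interface could show these values next to the upload button, so the author never has to leave the form to check their rights.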

Concerns about plagiarism and how the material will be used can also be addressed.  Far from being an invitation to plagiarism, making materials openly accessible simply increases the chance that the plagiarist will be detected.

For those concerned about depositing materials that have not been “properly edited” by the publisher, again the answer is information as to how the system works – allowing, in most cases, the deposit of the author’s final version, after peer review changes.

The other three highest concerns again revolve around a lack of information as to how the system works: not knowing of a suitable repository, not knowing how to deposit, and not knowing what an open access repository is.

Although this question reveals a range of strongly felt concerns which stop authors using repositories, it is nonetheless reassuring to note that none of the concerns need be showstoppers: it’s just an argument for continued, repetitive, hard-slog advocacy of the basics.

Bill

Using institutional repositories to raise compliance

JISC’s work over the last few years in encouraging the growth of institutional repositories means that the UK now has an impressive, virtually unparalleled infrastructure of institutional repositories that covers almost the entire research base of UK higher education.

Of course the issue which faces us all in this area is one of content.  The repositories are there, but the content — at least measured against the potential content — isn’t.

It is therefore an interesting development that, among funder policies requiring deposit, some require deposit in Funder-repositories. While I quite appreciate the political and organisational benefits of having a funder-based repository, the experience of funder mandates so far is of low compliance: the Wellcome Trust report a compliance rate of around 36%.  Some of this lack of compliance is down to individual authors, some down to publishers seemingly not fulfilling their contract to deposit in return for their open access publication fee.

The situation that we seem to have, therefore, is of an already existing network of repositories with institutional staff assigned to deal with deposit, but without any overriding incentive for authors to use them: and the development of a complementary network of Funder-repositories, where there is an incentive for authors to deposit, but with no on-site assistance and low compliance.

As I have suggested elsewhere, I think the best solution is to engage with institutional repository managers, who would be able to provide authors on the spot assistance with depositing material, give person-to-person advice on the suitability of various materials to deposit, and, significantly, to be able to monitor and facilitate compliance.

Of course, the question is then what do the institutional repository managers – and the institutions – get out of it?  This is where the collaborative nature of repository holdings comes in.  If funders ask their authors to deposit into the institutional repository, then it is a simple matter for the Funder-repository to harvest material (metadata and full-text) from the institutional repository.

The advantage of institutional deposit lies in the support and compliance checking that can come from institutional staff, and of course, the author having a “one-stop shop” for deposit.  If all funders harvest material from institutional repositories, then the author only has one interface to learn.  Where an institution offers mediated deposit, then they do not even have to do this — but can let repository staff deposit on their behalf.

Of course, this then brings benefits for the institution, in that it collects a record of the intellectual output of its own staff in its own repository, which can then be used to drive other services within the institution — the generation of publication records, facilitating collection of material for REF, generating staff-web pages, generating research group web pages, etc.

The fact that the material is open means that harvesting into a Funder-repository is straightforward.  Effectively, it means that the institutional repository becomes a personally supported interface or ingest mechanism for the Funder-repository.
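To make the harvesting step concrete: institutional repositories typically expose their metadata over OAI-PMH, which is the obvious channel for a Funder-repository to pull records. The sketch below parses a minimal `ListRecords` response inline rather than fetching one; the repository hostname and record are invented for illustration, and a real harvester would request `?verb=ListRecords&metadataPrefix=oai_dc` over HTTP and follow resumption tokens.

```python
# Minimal sketch of a funder repository harvesting Dublin Core records
# from an institutional repository's OAI-PMH feed. The sample response
# (and the example.ac.uk repository) are invented for illustration.
import xml.etree.ElementTree as ET

NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

SAMPLE = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <header><identifier>oai:eprints.example.ac.uk:1234</identifier></header>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>An Example Paper</dc:title>
          <dc:identifier>http://eprints.example.ac.uk/1234/</dc:identifier>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

def harvest(xml_text):
    """Extract (oai_id, title, link) triples from a ListRecords response."""
    records = []
    for rec in ET.fromstring(xml_text).findall(".//oai:record", NS):
        records.append((
            rec.findtext(".//oai:identifier", default="", namespaces=NS),
            rec.findtext(".//dc:title", default="", namespaces=NS),
            rec.findtext(".//dc:identifier", default="", namespaces=NS),
        ))
    return records

print(harvest(SAMPLE))
```

Because OAI-PMH is already standard across institutional repositories, this side of the arrangement needs little new development; the open question is the richer metadata or deposit formats some funders require, discussed below.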

There is the issue that some Funder-repositories may require different metadata fields, or different metadata standards than a typical institutional repository.  Again, a Funder-repository might require a particular format of deposit — such as XML.

These are certainly issues to consider, but balanced against the support and compliance which could come from such a system, surely an enhanced institutional deposit mechanism to match funders’ requirements is not beyond joint development?

One possible way forward would be for principal UK funders to agree a joint deposit-requirement and suggest this to be adopted by institutional repositories, in exchange for mandates requiring deposit within institutional repositories.

Bill

Cross-linking between repositories

A thread on JISC-Repositories this week has been discussing whether to delete repository records when an academic leaves.  This set me thinking about such policies in general and how the interaction of different policies between repositories may affect access or collections in the long run. It is an example, I think, of the way that institutional repositories work best when seen as a network of interdependent and collaborative nodes that can be driven by their own needs but produce a more general collective system.

Our policy in Nottingham is that we see our repository as a collection of material that is produced by our staff.  Therefore, our policy developed that when a member of staff leaves, we will not delete their items as this is a record of their research production while they were here.

More than that, authors should not expect such deletion even upon their request, except in very unusual circumstances.  If repositories are to be used as trusted sources of information, the stability of the records they hold is very important.

If authors have put material into the repository which includes their “back-catalogue” produced at previous institutions, then that is fine too – we will accept it and keep it.  Strictly, they did not produce this material while they were employed at Nottingham, but if it is not openly accessible elsewhere, why not take it?  It might be slightly anomalous to hold this material, but if it opens access to research information, that’s the basis of what it’s all about.

I think there is a transition period here, while academics adopt the idea of depositing material.  I think it’s likely that academics will put their back-catalogue to date into the first major repository that they use in earnest, if they have the right versions available.  Thereafter, as this material should be kept safe and accessible, they can always link back to it.  In other words, once they have deposited their back-catalogue, they are unlikely to want to do it again at every subsequent institution they move to: as long as they know it will be safe and that they can link to it.  There is an advocacy theme here to help researchers understand that repositories are linked and that the repository – and repository network – will serve them throughout their career.

For a newly-arrived member of staff with material in a previous institution’s repository, it all depends on the new institution’s collection policy as to whether the institution would prefer them to just deposit outputs they produce from that time on; deposit all their own material again; or create a virtual full record of outputs by copying the metadata and linking back to full-text in the previous repository(ies). This will depend in turn on whether the previous repositories are trusted to match the new institution’s own terms for access and preservation.

If the material is held on a repository without long-term assurance of durability, say on a commercial service, and if the institution’s repository works on a level which cannot be matched, then there would be a rationale for holding a local copy of the full-text.  This may be held and exposed, or possibly held in reserve in case of external service failure. Otherwise, simply linking back to the full-text held on the previous repository seems most practical if a full record is required.

If the previous repository is trusted to provide the same level of service in access, preservation, and stability, then it does not really matter which URL or repository underlies the “click for full text” link.  Academics can compile their list of publications and draw from the different institutions at which they have worked: repositories can hold their own copy of metadata records and link to external trusted repositories; and as far as the reader-user is concerned it’s still “search for paper — find — click — get full-text paper”.

This kind of pragmatic approach may well mean that some duplicates (metadata record and/or full-text) get into the system by being held at more than one location.  Duplication, or close-to-duplication, will have to become a non-issue. I cannot see that duplication can be completely avoided in future: it already happens.  As such, handling close and exact duplicates is an issue we cannot avoid and must solve in some way as it inevitably arises. That is not to say that the publisher’s version will automatically become the “official” record in the way that it tends to be used now. We do not know how versions, variants, and dynamic developments of papers will be used and regarded by researchers: we are just at the start of a period of change in research communications. Therefore if a process offers solutions and benefits, the associated risks of duplication are not sufficient to dismiss the process as impractical.
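One pragmatic way of handling the duplicates mentioned above is to match records across repositories on an identifier when one exists, falling back to a normalised title. The sketch below is an illustrative heuristic of that idea, not a production deduplication algorithm; the field names (`doi`, `title`) are assumptions about how records might be represented.

```python
# Illustrative duplicate-matching heuristic for metadata records drawn
# from more than one repository: match on DOI when present, otherwise
# on a normalised title. Field names are assumptions for this sketch.
import re

def normalise_title(title):
    """Lower-case, strip punctuation, and collapse whitespace."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", title.lower())).strip()

def dedup_key(record):
    """Prefer the DOI as the match key; otherwise use the normalised title."""
    doi = (record.get("doi") or "").lower().strip()
    return ("doi", doi) if doi else ("title", normalise_title(record["title"]))

def merge_records(records):
    """Keep one record per key, preferring the first one seen."""
    seen = {}
    for rec in records:
        seen.setdefault(dedup_key(rec), rec)
    return list(seen.values())
```

Title matching alone will mis-merge genuinely distinct papers with identical titles, which is one reason the question of how versions and variants should be treated, raised above, cannot be reduced to a purely technical fix.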

After all, what is the alternative?  If as repository managers we start deleting records when folks leave and have to create/import/ask the academic for a complete set of their outputs when they arrive at a new institution, I think we, and significantly the users of open access research, will very quickly get into a situation where we lose track of what is where.

Even if we try to create policies or systems to replace an old link (to the now-deleted full-text), with a new link (to the full-text in the new repository), I cannot see this working seamlessly and things will get lost.  In addition I think that subsequent moves by the author would create daisy chains of onward references which would be very fragile.

While the use of repository references in citation of materials relates to research practice and so is for resolution between researchers rather than between ourselves, I don’t think we should deliberately disrupt longer-term references to material. Rather, I would see the system building on existing stable records and all institutional repositories able to play a part in the system-wide provision of information as stable sources.

Therefore, I would suggest that repositories should continue to hold staff items after they have left, as this helps fulfil their role as institutional assets and records. Repositories can accept an academic’s back-catalogue, even if it has not been produced at the institution, as being anomalous but in line with our joint overall aim of providing access to research information. Adopting standard practices will help reassure each institution that other repositories can be trusted with access and curation and allow stable cross-linking. Once a repository has material openly accessible, then, given matching service levels, the whole system supports linking to that material, without anything but possible local needs for additional, duplicate copies.   Overall, repositories can follow their institutional self-interest and still create a robust networked system.

Bill