Publication metrics and their subversive effects
The MJA/AMPCo controversy has gradually brought to light important additional concerns about scientific publishing more generally. First came the past conduct of Elsevier; next, the business models of Elsevier and the other big publishing firms; then the question of who actually owns the knowledge that researchers seek to publish. It is now time to focus on an even bigger and broader concern tied to academic publishing – the way it shapes the production and distribution of new knowledge.
Publish or perish
As outlined in Weekly Report 16, during our campaign we have become better informed about a key factor playing into the hands of the major international academic publishers. Both here and abroad, academics are expressing concern over the ‘publish or perish’ syndrome, which drives researchers to publish, sometimes prematurely, and which provides profit-maximising major publishers with free material. This pressure creates a vicious circle in which ‘impact factor’ and ‘citation index’ statistics drive still more publications. This artificial publishing imperative has, directly and indirectly, overloaded the systems for processing genuine, high-quality research articles, and has led to the publication of trivia, to ‘salami-slicing’ and sometimes to fraudulent conduct by researchers. But this is not the only problem with publication metrics.
Most medical researchers will be aware of the original Science Citation Index (SCI) developed by Eugene Garfield in the early 1960s. They are less likely to be aware that the SCI was intended to track the spread of scientific ideas. It was not intended for rating researchers or for ranking the journals in which they publish by ‘impact factor’. Indeed, Garfield warned against this misuse, noting that ‘impact is not the same as importance or significance’. Gradually, data from the SCI and from more sophisticated databases (e.g. Elsevier’s Scopus, Thomson ISI’s Web of Science and Google Scholar) have been used for many other purposes, including creating university ranking tables, assessing research productivity, re-allocating resources between faculties, appointing and promoting faculty members, and awarding research grants and fellowships. Because these databases produce numbers, it has been very attractive to assume that the quantitative data they generate equate, by some magical means, with the quality of the research.
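It is worth remembering just how crude the journal impact factor is: a simple journal-level average of recent citations, which is why it says so little about any individual article. A minimal sketch of the standard two-year calculation (the figures below are invented purely for illustration):

```python
# Sketch of the standard two-year journal impact factor
# (the Garfield/ISI definition). All numbers are invented.

def impact_factor(citations_this_year, citable_items_prev_two_years):
    """Citations received this year to items the journal published in
    the previous two years, divided by the number of citable items
    it published in those two years."""
    return citations_this_year / citable_items_prev_two_years

# A journal that published 200 citable items in 2013-14, whose
# 2013-14 items were cited 500 times during 2015, has a 2015
# impact factor of 2.5.
print(impact_factor(500, 200))  # 2.5
```

Because citations are highly skewed across articles, a handful of heavily cited papers can carry a journal’s impact factor while most of its articles are rarely cited at all – one reason Garfield cautioned against using the number to judge individual researchers or their work.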
The subversive effects of publication metrics
The gradual growth in the application of publication metrics has led to a number of undesirable effects. These include:
- distortion of authorship criteria (including publications with between 1,000 and 4,000 authors!);
- quantity before quality, leading to ‘salami slicing’, papers of little merit and papers never read;
- too many medical journals (print and on-line);
- fraudulent conduct, in response to the pressure of ‘publish or perish’;
- gaming of the system; and
- fake or orchestrated biased peer review.
It is also now clear that publication metrics such as impact factors have changed the behaviour not only of researchers, but also of university administrators, funding agencies and politicians. The changes in behaviour have taken place gradually, so that now, in taking stock, many of us are surprised at the outcomes. Colin Steele, Emeritus Fellow and erstwhile chief librarian at the Australian National University, and convenor of the National Scholarly Communications Forum (see http://www.humanities.org.au/About/AlliedOrganisations/NSCFArchive.aspx), has closely observed these and related changes over decades and has written extensively in the field. His 2014 paper, entitled ‘Scholarly Communication, Scholarly Publishing and University Libraries. Plus ça change’, is highly recommended; it summarises the issues from an historical and an international perspective (https://digitalcollections.anu.edu.au/bitstream/1885/11944/1/Steele%20Scholarly%20Communication%202014.pdf). As Steele points out, scholarly communication has moved from an historically fairly open system to one with paywalls, especially for anyone without access to an institutional library. And those institutional libraries are struggling to meet the ever-increasing cost of journals published by the major conglomerates.
The digital revolution delayed
In the same paper, Steele points out that the advent of the Internet led academics and others to predict a revolution in the way academic outputs are shared, but that, to date, this revolution has not arrived. Indeed, back in 1995, Steele was already asking whether ‘the academic community still needs commercial publishers’. The failure of the revolution to arrive is probably partly related to researcher inertia and partly to the big publishers having the clout and the resources to lobby governments effectively and so stay one step ahead. Researcher inertia is multifactorial and might include lack of awareness of what is happening in the publishing world, a feeling of being locked into the publish or perish cycle, a level of comfort with a system one has grown up with, and the complexities of the digital age (including competing types of open access publication, copyright, embargo periods, widely differing article processing charges and the patchy development of university open access repositories). Because universities and granting bodies rely so heavily on publication metrics, researchers concerned about their advancement and research funding feel obliged to publish in the most prestigious journals – and that prestige rests on impact factor and citation data. Researchers are judged on where and how much they publish, not on what they publish.
The situation is not helped by the fact that the big publishers continue to do deals with the publishing arms of the medical colleges and specialist societies. Recently, decision-makers for the Royal College of Pathologists of Australasia signed up their journal with Elsevier. The leaders of those colleges and societies probably do not appreciate what they are getting into, but once committed, are unlikely to bite the hand that feeds them.
Pressure for change is coming
It does appear that pressure for the revolution is at last building. It is difficult to identify a single key source of this pressure, or a single point in time when it began to build, but there are now several such sources. The following list is neither complete nor in any order of importance:
- Library budgets – the major publishers now command more than half of academic publishing internationally and have used this dominant position to increase charges to libraries. This alone is drawing attention, as is the practice of some publishers of ‘double dipping’ – charging institutions both for subscriptions and for article processing for open access (access that is often delayed or embargoed for several months). The devaluation of the Australian dollar over the last 18 months will significantly increase the cost of Australian university library subscriptions;
- Research assessment role – in both the USA and the UK, there has been a backlash against the overemphasis on publication metrics in assessing research quality (see three of the publications listed below under Recommended Reading: ‘The Metric Tide’, the ‘San Francisco Declaration on Research Assessment’ and ‘Bibliometrics: The Leiden Manifesto for research metrics’);
- The global movement towards open access for publicly funded research, in countries including the UK and the United States, across Europe, and in some countries in Latin America;
- The move to university-based open access digital repositories – an alternative means of making research more widely available. The next Research Excellence Framework (REF) in the UK will include a measure which should promote the growth and use of digital repositories;
- Some funding agencies have become more alert to the problems of overemphasis on publication metrics and have modified their assessment criteria.
What may realistically happen from here?
There are two competing forces at work: the need for change versus the embedded use of metrics in so many aspects of academic life. Academic life is a complex system with built-in conservatism, and it is difficult to see who will push for change. Those at the top of their fields in medicine and science in the universities, research institutes and teaching hospitals have arrived there via the very system that needs to be changed. Those specialist societies and medical colleges which have entered into publishing arrangements with the big conglomerates will not now wish to ‘rock the boat’. Without public or researcher outcry, politicians will be reluctant to tackle a business model that makes money for shareholders and creates employment. Librarians understand the problems very well, but have relatively little influence within their universities to effect significant change. Nor are they able to act collectively, as universities are essentially competitive and act independently.
The most obvious way forward is for universities to change the reward system that encourages publication for its own sake, and publication in allegedly high-impact journals run by the major publishers who now control 53 percent of the biomedical literature. Universities should also encourage researchers merely to license their copyright to publishers, rather than hand it over for free and in perpetuity. These things are now happening at many universities world-wide, including several Australian universities, where resources have been put into establishing open access digital repositories.
If you want the system to change, here are some of the things you might do:
- Place your material in your university’s open access repository;
- Retain the copyright in your published articles;
- Talk to your institution’s librarians and offer support;
- Ensure you follow the NHMRC and Australian Research Council guidelines on open access;
- Raise the problems of publication metrics, if you serve on committees involved in appointments, promotions or grants; and/or
- Sign up to an international boycott (see http://thecostofknowledge.com/).
Recommended Reading
- Scholarly Communication, Scholarly Publishing and University Libraries. Plus ça change. Colin Steele. https://digitalcollections.anu.edu.au/bitstream/1885/11944/1/Steele%20Scholarly%20Communication%202014.pdf
- The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management. Higher Education Funding Council for England. http://www.hefce.ac.uk/pubs/rereports/Year/2015/metrictide/Title,104463,en.html
- San Francisco Declaration on Research Assessment. http://www.ascb.org/dora/wp-content/uploads/2015/07/SFDeclarationFINAL.pdf
- Bibliometrics: The Leiden Manifesto for research metrics. D Hicks, P Wouters, L Waltman, S de Rijcke, I Rafols. http://www.nature.com/news/bibliometrics-the-leiden-manifesto-for-research-metrics-1.17351
- Citation analysis guide. University of Michigan Library. http://guides.lib.umich.edu/citation
- The Oligopoly of Academic Publishers in the Digital Era. V Larivière, S Haustein, P Mongeon. http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0127502
- Academics are being hoodwinked into writing books nobody can buy. The Guardian. http://www.theguardian.com/higher-education-network/2015/sep/04/academics-are-being-hoodwinked-into-writing-books-nobody-can-buy
- Researcher as victim. Researcher as predator. Cameron Neylon. http://cameronneylon.net/blog/researcher-as-victim-researcher-as-predator/
- Editorial: Are Impact Factors corrupting truth and utility in biomedical research? A Suhrbier, G A Poland. Vaccine 2013; 31: 6041-42.
- Ensuring open access for publicly funded research. P Suber. BMJ 2012; 345: e5184.
- Access all areas – making publicly-funded research more accessible. Prof A Tickell.
- Aspects of Open Access for research libraries: a view from UCL.
- Publish or perish culture encourages scientists to cut corners. V Barbour.
- Austrian Open Access Agreement with Publisher Springer.