Medical science publishing: A slow-motion train wreck

Published on Nov 22, 2018 | Bias in Science

I respectfully dedicate this commentary to the memory of Barney Carroll. Those close to him, and those who knew him ‘vicariously’ via the internet, have good reason to look on his legacy with pride, pleasure and admiration.

Bernard J Carroll (b 1940; q 1964; MD, PhD), died from cancer on 10 September 2018


This commentary traces the influences that have shaped medical science publishing. It starts with the origins of the modern model of medical publishing, created by the psychopathic fraudster Robert Maxwell of Pergamon Press in the 1960s, and finishes with the current ‘pay-to-publish’ problem. The number of journals in the medical sciences has increased so greatly in recent decades that even wealthy libraries cannot afford the subscriptions, and many researchers are denied access to papers for want of money. The pool of suitable experts with time to referee the papers offered for publication has shrunk as career and time pressures on academics have steadily grown. Supply and demand make the result inevitable: the standards of editorship, refereeing, and papers have all fallen, to the point where so many papers are so unremarkable that 50% of them are never cited. The accuracy and relevance of the bibliographies in papers have also declined greatly; most referees no longer check citations. Behind-the-scenes manipulation of supposed knowledge continues unabated: ghost-writing has thrived, and ghost-management of the whole medical science knowledge-space can now be considered the norm. The key function, and obligation, of editors and referees, to guard the quality and probity of the scientific literature, was always discharged imperfectly; it is no longer being accomplished to an acceptable standard.

Despite talk about improvements in training and auditing for all of these facets of publishing, little has eventuated, and there is no assessment or oversight of the ‘overseers’. The situation continues to worsen, especially since the recent exponential increase in predatory pay-to-publish journals: these have been enthusiastically embraced, perhaps because they offer an easy route to accumulating the publications required for career progress.

It is time to ask: why still have journals? It is now logical and efficient for ‘papers’ to be archived by institutions, with their merit assigned post-archiving, both by computer-generated algorithms, such as those developed by Google, and by humans. Funding bodies that back ‘Plan S’ deserve support. The changes suggested here will achieve two things: first, they will place responsibility for monitoring the quality of ‘published’ material directly on the university itself, making it a direct indicator and measure of the institution’s perceived excellence and status; second, they will free up billions of dollars currently paid to rich publishers, who add little of value to the scientific endeavour. It is time for academics to consider carefully whether it is smart to continue supporting the current model, and to what extent doing so betrays the values of democratic science.


Mundus vult decipi, ergo decipiatur (‘The world wants to be deceived, so let it be deceived’). Petronius (attributed)

As an independent researcher who is outside of the University system, I have a different perspective from those who are captives of that system. In the decade since my retirement from clinical medical practice I have published a number of papers reviewing various aspects of neuro-pharmacology. As someone with an ‘H-index’ of 26, and more citations (>3,000) than many professors (a typical professor’s H-index is around 14, with total citations ~1,000 (1)), and one who has published in journals in many different disciplines (few of my reviews are in ‘psychiatric’ journals), I speak from a position of considerable breadth of experience, publication-wise.

NB. Citation statistics: half of all published manuscripts are never cited; 10+ citations puts you in the top 24%, and 100+ puts you in the top 2% (2).
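The H-index mentioned above is simple to compute from a list of per-paper citation counts: it is the largest h such that at least h papers have h or more citations each. A minimal sketch follows; the citation counts used are invented, purely for illustration:

```python
def h_index(citations):
    """H-index: the largest h such that at least h papers
    have h or more citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank  # the top `rank` papers all have >= rank citations
        else:
            break
    return h

# Invented example: 8 papers with these citation counts.
papers = [120, 45, 33, 20, 9, 4, 2, 0]
print(h_index(papers))  # prints 5: five papers have 5+ citations each
```

This also makes plain why the index rewards a body of consistently cited work rather than one lucky hit: a single paper with 1,000 citations still yields an H-index of only 1.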

If individuals had to pay a subscription for journals (as I used to do in my early days), the number of journals published would drop precipitously. Individuals rarely subscribe to journals anymore, and virtually nobody handles a ‘hard copy’. Indeed, one might ask: why do we still have journals at all? Precisely what purpose do they serve? What value do they add? Is it time to move beyond journals?

A greater number of journals, greater cost, decreasing quality of contents — less value for money. To paraphrase ‘Parkinson’s law’, ‘The number of articles submitted for publication will expand to fill the total number of journals available to publish them.’

The cost of journals is such that even wealthy university libraries cannot afford the subscriptions their researchers would like; researchers are denied access for want of money. Independent researchers (like me), who do not have access to university services, would be unable to obtain the material they need without asking colleagues to obtain PDFs for them as a favor (a tedious and time-consuming process). I would be unable to do my research were it not for my ability to obtain material by means the publishers describe as illegal. The typical price of a PDF copy is around $35, and 99% of the papers I need to scan turn out to be poor-quality, non-citable work; thus each useful paper retrieved would cost me thousands of dollars. That makes research economically non-viable. It is hard to see why the same logic does not apply to all researchers, and to the subscriptions that university libraries purchase.

In my opinion, academics should think very carefully indeed before providing refereeing assistance to greedy and bullying publishers like Elsevier; it is akin to handing the playground bully the stick with which to beat you. The two preceding links tell more of the story.

PubMed lists about 30,000 medical journals, and there are thousands more that are not listed. This explosive expansion of journals (mostly pay-to-publish; many are shams) has clearly been enthusiastically and complicitly embraced because they offer an easy route to accumulating the publications required for career advancement, especially in less-advantaged countries. It is implausible to suppose that all the academics who publish in these sham journals are so naive as to think they are bona fide.

The argument is strong that journals are no longer necessary for the advancement of science — indeed they are probably now doing science more harm than good — their predominant purpose, in the current era, is to artificially expand the CVs of academics, and to act as a vehicle for pharmaceutical companies to advertise their wares (3).

No conspiracy theory is required to explain why there are more journals that are less often read; that has simply evolved from basic building blocks: the profit motive (in its naked, self-regulating, neo-liberal expression; see Lawson (4)), the requirement for academics to ‘publish or perish’, and the arm’s-length nature of payment to publishers (most who read journals have no idea how much libraries pay for them*). The mechanisms described by Jay, which have biased western ‘democracies’ towards something more like plutocracies, are similar (5). Add in the misconception of the supposed advantage and efficiency of goal-directed, industry-focused, short-term, self-funded research, and presto! A perfect recipe for the decline of good science, and a precipitous decline it has been, and continues to be.

* This is the same as doctors prescribing drugs while having no idea (and often no care) how much they actually cost us all as taxpayers.


Thallus: The theatre isn’t what it used to be.

Aristarchus: No, and I’ll tell you something else. It never was what it used to be.

I Claudius, Robert Graves

Few doctors or academics know that the principles and foundations of the medical science publishing system we now ‘enjoy’ were established by a Second World War spy who reinvented himself after the war and became a spectacular psychopathic fraudster.

I refer to the founder of Pergamon Press, the egregious Robert Maxwell. That was a carefully chosen pseudonym (to accompany his carefully cultivated ‘proper’ English accent; he was an accomplished linguist). He was of Jewish-Czech origin, from a humble background and with little formal education. He escaped the Nazis, fought with the British (and was decorated), ended up in Berlin at the end of the war, then came to London and became a British citizen. He was discharged as a captain, a ‘title’ he liked to use in civilian life. Class-conscious convention does not approve of a ‘mere captain’ using his rank in civilian life (‘not the done thing, old chap’), which is probably why he was often referred to as ‘Captain Bob’: it was a form of mockery (cf. Peter Cook). He established an insatiable taste for the good life, which he assuaged with his duplicitous ingenuity and consolidated with his bullying and dishonesty. He was elected as a member of parliament, and owned a major newspaper and a football club. The banker Jacob Rothschild once said, ‘I’ve shot that man 17 times and he will still not lie down.’

Accounts of his bare-faced dishonesty (of a kind only a psychopath can pull off)* can be found elsewhere. Suffice it to say that when he drowned after ‘falling’ off his luxury yacht in 1991, following an extraordinary career and the accumulation of huge wealth, he was about to be questioned about, or charged with, criminal offences, including war crimes and the embezzlement of what would now be equivalent to many billions of dollars.

*This brief YouTube video strongly substantiates his severe psychopathy, and this gem about Peter Cook (of ‘Private Eye’) and Maxwell is both revealing and funny.

A witty ‘one-liner’ I saw called him ‘the bouncing Czech’ (his ‘Billy Bunter-like’ corpulence made that very apposite).

Our interest in this unsavoury character stems from his typically ingenious and deceitful creation of the modern model of medical publishing, which did not exist before his entrepreneurship; most science had previously been published by learned societies, for their members (6).

In the early days of Pergamon Press, in the 1960s, there were problems, and Maxwell was ousted from the board. At that time he was presciently described by British fraud investigators as ‘unsuited to run a public company’; he nevertheless won back control of Pergamon and continued to ‘get away with it’ for another 30 years.

The size of the beast

Science publishing is a mega-business with global revenues around $10 billion and enviable profit margins. Maxwell was 140 kg in his later years — bloated with profit.

Recently, Buranyi, in an article about Maxwell, quotes one eminent scientist as saying (7):

I have to confess that, quickly realising his predatory and entrepreneurial ambitions, I nevertheless took a great liking to him.

That sounds like a pretty virgin being inducted into a brothel which she believes is a beauty parlour. It is clear that he seduced a great many scientists.

There was a toxic combination of a charming ruthless psychopath manipulating academics, some of whom were naïve and compliant — it is hard not to think of the expression ‘like lambs to the slaughter’. Right from the start Maxwell was overwhelming these people with lavish gifts of wine, cigars, and luxury trips — a well-proven strategy.

When people learn that I write scientific articles and referee papers for scientific journals they say something like ‘that must be useful income in your retirement’: they are astonished when I say that nobody gets paid anything for such work. Therefore, without befuddling everyone with too many complexities, I need to indicate the basics of the business model that Maxwell was responsible for instituting, and which was enthusiastically emulated by other publishers, who were frantically playing catch up with him in the 1960s and ‘70s.

He smartened up the presentation and marketing, and did other clever and innovative things to speed up the processing of manuscripts and the production of the journals. Indeed, he may have been one of the first to introduce desktop publishing procedures and computers into the process. He took some of the key people, whom he appointed as editors and board members of the numerous new journals he created, on trips around the Greek islands whilst they ‘found the right strategy for the journal’, assisted by ‘leggy blonde secretaries’ (as one informant put it).

NB. His daughter, Ghislaine, was convicted of soliciting a minor for prostitution (for her lover, the infamous billionaire businessman Epstein). One wonders where she learnt that tactic! She was his favourite child [he named his yacht after her]: but, by all accounts, he was much less pleasant to his other numerous offspring, and also to his wife (after whom he did not name his yacht). I wonder if Ghislaine ‘solicited’ for him, before Epstein?

Having improved the packaging, he then sold the material back to the scientists and research institutions that had funded the work in the first place, via the libraries that all academic institutions maintain to provide educational material as part of their role. Almost all of the time and work necessary to achieve this was contributed free by the academics themselves.

In no time he had created a merry-go-round and could increase the price steadily. There were endless permutations and combinations which he ingeniously engineered to give leverage to his product. I will not go into those here, but I expect somebody doing a Ph.D. at a business school somewhere has written about it, because Maxwell was a clever and ruthless man. Many psychopathic characters are rather cowardly, but Maxwell was, I suspect, one of the less common breeds of those who actually had considerable physical courage, albeit tinged with recklessness and ruthlessness, as perhaps his incipient war-crimes charges might have revealed, had he lived to face them.

Another key element to understand, and one that Maxwell understood but his early competitors did not, is that the price-moderating effect of competition does not come into play. Every time you create a new journal, like the ‘International Global Journal of Recent Advances in Current Big Toe Surgery’ (pleonasm rules in titles), you create a new niche that makes space for more unnecessary publications and further fuels the fire of publish-or-perish; it does not reduce the market for the pre-existing journal of ‘Foot Surgery’. The bonus is that academics in this field can feel more important (and be appointed as an editor or board member of one or more of the journals) and have their specialist status as ‘big toe surgeons’ aggrandized. The library budget takes another hit!

Why journals?

It has now become pertinent to pose the question, ‘why do medical journals exist at all?’ There are good arguments that journals are no longer necessary for the advancement of science itself — they are doing science more harm than good by promoting quantity (profit), over quality. Their subverted purpose, in the current era, is to artificially expand the CVs of academics, and act as a vehicle for pharmaceutical companies to advertise their wares (3). The vast majority of them are not read by anyone.

The important function of the dissemination of real advances in scientific knowledge does not require journals — it can be achieved more cheaply and efficiently by other means. The fact that academics themselves are driving the expansion of third-rate pay-to-publish journals, in their eagerness to publish their work, illustrates how debased and meaningless the whole charade has become. People are spending too much time ‘manufacturing’ papers and too little time doing decent research, as Altman said 25 years ago (8).

If a system for promulgating quality science were now being designed from the ground up, one doubts the current system would even be considered (see ‘sunk cost fallacy’ below).

Declining standards: Façade triumphs over substance

Sed quis custodiet ipsos custodes? (‘But who will guard the guards themselves?’) Juvenal (9)

An essential duty of the editors and referees of journals is to guard the quality and probity of the scientific literature. One might consider that guardianship to be their only important function. The value of peer review rests on the assumption that it provides a valid measure of the quality of a manuscript — it does not. Solomon discusses how this might improve in future (10). However, there is almost no evidence that peer review improves the general quality of papers (11, 12).

Indeed, serotonin toxicity is a quintessential paradigm of this decay of good science, and as an expert in the field I am peculiarly well placed to comment on a situation in which the superficial appearance of scientific publications receives undue attention, whilst the scientific content of the text, and the accuracy of the citations used to justify its rationale, go largely unexamined. It represents the triumph of façade over substance.

This is, without question, the elephant-in-the-room in relation to the glaring deficiencies of the peer-review process which is presided over largely by persons of uncertain suitability and expertise.

Referees: selection, training, auditing?

First, the answers to those easy questions.

Selection: random (now often generated by a computer algorithm)

Training: none

Auditing: none

The same applies to the editors themselves — sed quis …

As some wag recently commented, there are few important jobs in society for which you need no special competence, no special experience, and no special qualifications; one is refereeing for journals, and the other is representing the people in Parliament (8, 13-20).

Outside a vanishing percentage of ‘top journals’, refereeing is a joke, as Burns argues in ‘Academic journal publishing is headed for a day of reckoning’ (21). Only last year it was shown that three of the top medical journals had each rejected every single one of the 14 most-cited articles of all time in their discipline. Every single one. Epic fail.

The refereeing of grant proposals stifles and stultifies good original science: entertain yourself with the delightfully incisive comment from Morris on the subject of ‘Originality: who is to judge?’ (22, 23), and update that with the recent comments of Prof Ioannidis in Scientific American ‘Science Funding Is Broken’. As professor of medicine and biomedical data science at Stanford he is a respected authority on the probity and quality of science, and he explains how no funding ‘committee’ will ever give a grant for anything that is actually original. His solution is:

Use a lottery to decide which grant applications to fund (perhaps after they pass a basic review). This scheme would eliminate the arduous effort and expenditure that now goes into reviewing proposals and would give a chance to many more investigators.

When somebody of such eminence seriously suggests using a lottery, then you know that the system is …

Whilst the following examples are from my own experience, I do not doubt that many authors could tell similar stories. My most highly cited paper, on TCAs (24), was submitted to an eminent journal in the relevant field (the British Journal of Pharmacology), so one would hope for top-quality refereeing. That paper is now a benchmark in the field and has been cited 400 times.

It was initially rejected out of hand. The two referees offered only a few derisory lines of comment (‘not much used, nothing new’), quite unrelated to the science in the paper. After my discussion with the editor, two new referees, independent of drug companies, were recruited (that editor was fair; most editors would not even have bothered to reply to a protest such as mine). One of the new referees was succinct and simply said it was an excellent paper that should be published. The other began with ‘whilst this is a good paper it suffers from a number of serious errors’, then listed more than a page of what he considered to be punctuation errors and the like (some were ‘correct’*), but made no sensible comment that was remotely scientific; perhaps he was a frustrated schoolmaster. The psychiatrist in me laughed: it was rather sad; he likely suffered from obsessive-compulsive disorder. However, for people whose careers depend on getting things published, such capricious and poor refereeing is far from a joking matter. The driving force behind it is that referees frequently get an ego boost from being asked to give an ‘expert’ opinion. They therefore feel obliged to make some sort of comment, especially when their personality does not allow them to be gracious, as the first of the new referees was.

*The ‘style’ of the journal is a misnamed quality, because it has nothing to do with style in the proper sense of that word: it has more to do with the ‘house rules’ on presentation, and saving space and money on printing costs, which are now increasingly irrelevant. It also has to do with an outmoded formality in scientific writing that hinders communication with non-scientists, and indeed makes scientists an object of derision in some quarters.

Mis-use of references

A crucially important aspect of any paper, apart from its being rational and logically coherent, is that the papers it cites in support of its ideas, facts, and points should be relevant and correctly interpreted; if they are not, a deceit is being perpetrated on the reader. This is where, in my experience of publishing in many different disciplines, the system has broken down badly (though perhaps, as Aristarchus said, ‘it never was what it used to be’). Few referees are both sufficiently diligent and sufficiently knowledgeable to carry out the important task of spotting misinterpreted and misrepresented references; indeed, few referees check the references at all (as an irrepressible maverick I cannot resist slipping in a few deliberate mistakes, just to test people). The comments of other referees also make it plain that many have no idea what the appropriate references actually are: it is rare to see benchmark references suggested by referees when the authors have omitted them. Hence I am confident in asserting that few referees check the references that are given.

A disgraceful example of this was dissected in detail on my website recently: a review paper from the supposedly prestigious Maudsley Hospital in London, to which one of their professors, David Taylor, ‘put his name’* (25). The paper bears the stamp of somebody whose first language is clearly not English, and Taylor did not take enough interest in the manuscript to correct errors obvious from the most cursory scan: how much did he really have to do with it? He, like so many other academics, should be ashamed of his deceitfulness. One referee of their paper, whom I criticized equally frankly for failing to correct its numerous serious errors, wrote me an indignant response in which he justified his lax standards by saying, ‘I accepted that review or that article just because I have feeling that everything on SS should be welcomed’ [sic]. So much for the standards and probity of science: with gatekeepers like that, why have a gate at all? I need hardly say that this ‘scientific’ journal found itself unable to publish a response. I dubbed this Werneke et al. paper ‘an egregious example of ultracrepidarian bloviation’.

*One of my correspondents at a famous research institute told me that if one of these people stopped and spoke to you in the corridor, he expected his name to be on your next paper; that is how meaningless and dishonest the list of authors on papers has historically been.

When we cannot rely on material by professors at ‘Russell group’ establishments, that is unassailable proof that publishing has truly reached rock bottom.

Some might think the accuracy and relevance of references is inconsequential. Wrong. A central pillar of the validity of the scientific literature is that cited references are actually relevant and good papers. Referees are not ensuring that good and appropriate references are used, and thus the basis of the metrics used to assess authors and journals is becoming complete nonsense.

Let me explain this important point a little further; again I have to use my own refereeing experience, because such information is usually confidential, and one does not know what happens in other instances. Because I am a world expert in my particular (small) field of serotonin toxicity*, I know that much of the published material fails to cite the appropriate quality references. Because so many papers do not cite these key references, the references never attain the priority in the field that they merit. Contrariwise, many trivial papers that should never have been cited at all get cited. As a referee, one often gets the impression that references have been scattered through manuscripts like confetti, selected because their titles look vaguely relevant.

*An expert is a man who has made all the mistakes it is possible to make, but only in a very narrow field. Niels Bohr

You do not have to be a mathematician to understand that such practices rapidly make a complete farce of publication metrics. There are papers in my field that should have been cited ten times more frequently, and this failure is caused by sub-standard refereeing and sub-standard literature searches.
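The distortion described here is easy to demonstrate with a toy simulation (all numbers are invented, purely for illustration). It compares a hypothetical field in which referees make authors cite the benchmark paper against one in which citations are scattered ‘like confetti’ among trivial papers; the benchmark’s citation count, and hence every metric built on it, collapses:

```python
import random

random.seed(1)  # reproducible illustration

# Invented field: one benchmark paper and ten trivial ones.
PAPERS = ["benchmark"] + [f"trivial_{i}" for i in range(10)]
N_MANUSCRIPTS = 200
CITATIONS_PER_MS = 5

def simulate(scatter_prob):
    """Tally citations when each one goes astray (to a random
    trivial paper) with probability scatter_prob."""
    counts = dict.fromkeys(PAPERS, 0)
    for _ in range(N_MANUSCRIPTS):
        for _ in range(CITATIONS_PER_MS):
            if random.random() < scatter_prob:
                counts[random.choice(PAPERS[1:])] += 1  # confetti
            else:
                counts["benchmark"] += 1  # the citation it deserved
    return counts

careful = simulate(0.1)   # diligent referees: ~90% cite correctly
confetti = simulate(0.9)  # references scattered like confetti
print(careful["benchmark"], confetti["benchmark"])
```

With these invented parameters the benchmark collects roughly 900 citations under diligent refereeing and roughly 100 under confetti citing: the same underlying science, an order-of-magnitude difference in the metric.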

Here are other points from the cited papers:

Lack of agreement between reviewers

No checking of reviewers regarding financial conflict of interest

Failure to detect errors/fraud, lack of transparency, lack of reliability, potential for bias, potential for unethical practices, lack of objectivity

Lack of recognition and motivation of reviewers

No rating of reviewers’ performance.

One ex-editor (14) stated:

‘despite being central to the scientific process [refereeing] was largely unstudied until various pioneers—including Stephen Lock, former editor of the BMJ, and Drummond Rennie, deputy editor of JAMA— urged that it could and should be studied. Studies so far have shown that it is slow, expensive, ineffective, something of a lottery, prone to bias and abuse, and hopeless at spotting errors and fraud.’

Failure to do quality literature searches

A closely related problem is that these referencing errors are often generated by the failure of the doctors who write these papers to do quality literature searches. Better (librarian-assisted) searches would lead them more reliably to the appropriate papers in the field. An important reason for these failures is the shortage, and under-use, of professional librarians. It is well recognised that an unsophisticated search of a database like PubMed (which is all most doctors do) finds barely half of the relevant material, yet the great majority of published papers use that inadequate strategy (26).
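As a concrete illustration of what ‘better searching’ means in practice: a sensitive strategy ORs together MeSH terms and title/abstract synonyms rather than relying on a single keyword. The sketch below merely constructs query URLs for NCBI’s public E-utilities `esearch` endpoint (it makes no network call); the search terms are illustrative examples, not a validated search filter:

```python
from urllib.parse import urlencode

# NCBI's public E-utilities search endpoint for PubMed.
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

# The naive strategy most doctors use: one keyword.
naive_query = "serotonin syndrome"

# A more sensitive strategy: MeSH term plus title/abstract synonyms.
# (Illustrative terms only, not a validated filter.)
sensitive_query = " OR ".join([
    '"serotonin syndrome"[MeSH Terms]',
    '"serotonin syndrome"[Title/Abstract]',
    '"serotonin toxicity"[Title/Abstract]',
    '"serotonergic toxicity"[Title/Abstract]',
])

def esearch_url(term, retmax=100):
    """Build an esearch URL for a query term (no request is made)."""
    return ESEARCH + "?" + urlencode(
        {"db": "pubmed", "term": term, "retmax": retmax}
    )

print(esearch_url(naive_query))
print(esearch_url(sensitive_query))
```

A librarian would go further still, adding truncation, field tags, and database-specific syntax, which is precisely the expertise the text argues is being under-used.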

Key principles of science abandoned

Key principles of science have been forgotten. First, the principle that everything must be open to refutation (and refutation goes hand in hand with replication). A journal cannot describe itself as ‘scientific’ if it does not publish criticism and refutation of previous material, yet that, effectively, is what has happened in much of scientific publishing, with surprisingly little comment. Many journals now have no facility at all for publishing comment on previous papers. Those that do frequently refuse to publish responses unless they are submitted within an arbitrarily defined, and short, period after the publication in question, and frequently impose an arbitrary and short word limit on comments that debars detailed argument (27).

Second, nothing is established in science unless it has been replicated, preferably more than once. Yet journals discourage and reject papers replicating previous work (28-30). Such journals are not entitled to call themselves scientific. This subject has recently received much coverage in psychology, and it is at least as important in medicine.

Thus, important mechanisms for highlighting errors, on the frequent occasions when these have not been detected by refereeing, are being eschewed. That is nothing less than a mortal blow to the heart of the integrity of science.

Editors: Unqualified and compromised

No, you do not get a prize for guessing that not even editors need any relevant qualifications (31-34). Some are not paid at all, a few get a token part-time salary, and fewer still are remunerated as you would expect for what should be a serious, highly skilled, full-time job.

The ‘Retraction Watch’ website estimates that ‘two-thirds of editors at prominent journals received some type of industry payment over the last few years’; at many journals, editors are never required to disclose such payments.

A widely aired criticism 50 years ago was that editors did not ensure statistical evaluation of research; astonishingly, that situation remains the same to this day: of 114 ‘top’ journals examined, only a third had any statistical review of accepted manuscripts.

You might well think that fact alone is all you need to know to decide that you agree with Prof Ioannidis, that ‘most medical research is wrong’ (35). Many years ago, the eminent Oxford statistician, Altman, stated ‘we need less research, and better research’ (8) — the exact opposite has eventuated.

Few editors have statistical knowledge, never mind expertise. Their role of vetting papers, before sending them out to referees for an opinion, has largely been abandoned: editors have sent me many papers that any competent and diligent editor should have rejected out of hand ‘at a glance’, before imposing them on a referee’s time. It seems most editors simply do not bother; if they did, one imagines they would be sacked.

Many journals, even those from ‘reputable publishers’ (by the time you finish reading this you may share the view that the preceding phrase is an oxymoron), use a computer algorithm to generate suggested referees, to whom requests are then probably sent automatically without the editor doing anything at all. As a psychiatrist I once published in an infectious diseases journal, and I was soon getting requests to referee papers for other journals in the infectious diseases field. Any half-competent editor who looked at my publication record would have known that was inappropriate. One might think this inconsequential, because such requests get refused; but that is wrong: doctors are seduced by being treated as experts, just as they are seduced by a dinner invitation from a drug company. I receive so many of these inappropriate refereeing requests, and replying ‘unsubscribe’ frequently does not stop them, that I started routinely replying with imprecations: that seems more effective. I am obviously not alone in my frustration, because in my researches I found a ‘paper’ entitled ‘Get me off Your Fucking Mailing List’, published by a predatory open-access journal.

I will not dwell on it further, but it is obvious that bogus refereeing is an epidemic (13).

Not only is there bogus refereeing, there are bogus journals (be suspicious of any journal with a pleonastic title like ‘the International Global Journal of …’). Some have been funded by drug companies as a channel for getting dodgy papers published; a few of these have been uncovered, and there are doubtless others still in circulation. There are now many thousands of journals that accept pay-to-publish papers; the category has expanded massively in recent years, and it covers the whole spectrum from dubious to bogus (6, 36-40).

Supply and demand

The problem is simple. Academics have multiple demands on their time, over and above their normal clinical work: teaching, mentoring, sitting on committees, weeks wasted working up grant applications, doing their own research, and, probably near the bottom of the list, refereeing for journals (unpaid and largely unrewarded).

On the other hand, the number of journals has proliferated exponentially over the last decade or two. Increased demand, reduced supply — result, inevitably decreasing standards. It is clear that even the best journals are struggling to find competent referees, which is precisely why many ask authors to suggest referees for their own papers. That opens yet another door for favoritism and cheating, which it is quite clear many people are marching through without a backward glance.

Beyond ghost-writing: ghost-managed medicine

Control and dominate not only medical journal content but also medical education itself, and therein lies immense power to set the agenda and to shape and manage people’s thinking.


Back in 2009, the Institute of Medicine recommended the prohibition of ghost-writing: editors have not instituted systematic assessment of ghost-writing since then (41). I am sure if editors tried to, they would be sacked, because most drug-trial papers from drug companies are ghost-written, and that is where the profit is.

As yet another good editor (Barbour) stated: ‘it threatens the credibility of medical knowledge and medical journals’ (42). Good editors get sacked, like Kassirer and others; now they do not get appointed in the first place. Most of them seem little more than puppets or figureheads.

To parody Saki: ‘The editor was a good editor, as editors go; and as editors go, she went’.

See also:

Ghost marketing: pharmaceutical companies and ghost-written journal articles (43)

Legal remedies for medical ghost-writing: imposing fraud liability on guest authors of ghost-written articles (44)

Systematic review on the primary and secondary reporting of the prevalence of ghost-writing in the medical literature (45)

Ghost-writing revisited: new perspectives but few solutions in sight (46)

Ghost-managed medicine

Ghost-managed medicine describes the behind-the-scenes manipulation of the whole agenda and knowledge-base of medical science and practice, which continues unabated: agnoiology (the study of ignorance) and agnotology (culturally-induced ignorance, or the spreading of misinformation and doubt — not yet in the OED) are thriving following their seeding, and fertilizing with money and ‘think-tanks’, by ‘big tobacco’ decades ago (see (47)). As Sismondo’s new book (Ghost-Managed Medicine: Big Pharma’s Invisible Hands (48)) details, ‘contract research organisations’ and ‘publication planners’ populate the medical knowledge space and orchestrate most of its contents (41, 44, 45, 49-51). Many clinical pharmacology textbooks are now ghost-written too.

In this context one must also recognize the over-arching influence of the corporatization of all academia, especially medical education, as described recently in ‘Higher Ed, Inc. How the university became a profit-generating cog in the corporate machine’ (52) — an overview of that important aspect of the problem, which is complementary to what is discussed herein.

Control journals and control education and … fait accompli.

In short: the ‘medical knowledge space’ is effectively managed by those with the money, largely of course for their own benefit — a direct analogy with plutocratic politics — the weight that opinions carry is proportional to the amount of money behind them.

I cannot suppress a wry smile when I hear medics criticizing the pseudo-knowledge of ‘alternative medicine’. How many of them realize that the overwhelming majority of what they absorb as ‘knowledge’ throughout their careers is funded and directed by drug companies, which are estimated to spend more money influencing doctors each week than all the medical schools in North America cost in a year (53)?

Masochism: the self-imposed burden of publish-or-perish

Another way in which academics have tripped over their own trousers and shot themselves in the foot is by creating the entirely self-imposed system whereby grants, career success, and promotion are inexorably and masochistically bound up with publishing; there are few points for being a good teacher. There are greater rewards for publishing a number of small papers (salami publishing) than for one substantive work, further diminishing the whole endeavour.

A system has been created that rewards quantity rather than quality, and at the same time costs us all a fortune by catalysing the creation of yet more third-rate journals that have to be paid for — some academics may protest that the peer review system stops that happening, but the information herein clearly demonstrates that is a dangerous misconception. It merely fuels the production of yet more poor-quality papers covering different varieties of ‘big-toe surgery’.

It is painfully clear that academics are wasting an immense amount of time and resources producing third-rate publications which nobody is ever going to read, and which certainly are not going to have any impact on the world of science: and all because they create a rod for their own backs through the current ‘publish-or-perish’ mentality. It is time it was stopped. It fuels the fire and provides profits for publishers, who add no significant value to the process.

Furthermore, it may be noted that the corporate publishers have made quite a success of devolving even more of the work onto academics: for instance, a friend told me it took a whole working day to submit a paper to the online computerized system of a major journal (and that person was on the editorial board of that very journal), and that is just the start. I estimate that, for my last few reviews, only 10-20% of the total time taken was writing the actual paper: the rest was stuffing about with the journal and its procedures. Post-archiving assessment would save a great deal of time.

Post-archiving assessment

A post-publication, post-archiving assessment of the value of work has already emerged automatically from the cross-correlation of information that is compiled. It needs to be formalized, augmented, fine-tuned, and organized. Google knows what you have looked at, and for how long, and in what part of a physique you prefer the curves or bulges in the objects of your adoration to be; so it is possible to automatically register the expertise and the viewing predilections of any researcher or reader, and compute a metric to judge work, weighted according to their status and expertise. The citation index is merely a first crude and clumsy step in this direction: for instance, it does not factor in whether the paper was cited for good reasons or bad reasons, or self-cited. Neither does it take account of whether the citation came from a substantial paper by a reputable researcher in the field, or just from a letter to a journal. It could be more sophisticated. Those deficiencies can be corrected, and it is illogical to tolerate their continuance: corporate publishers have little motivation to foster such improvements, nor do they choose to pay for that expertise: why would they? There is a rapidly expanding literature dealing with this (2). It is rather too complex to discuss in detail here, but a recent PhD thesis provides much background, detail, and analysis (4).
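To make the point concrete, the deficiencies listed above (self-citation, citation venue, reason for citing) are all mechanically correctable. The following is a purely hypothetical sketch, not an existing system: the record fields, weights, and function name are illustrative assumptions about how a less crude citation metric might be computed.

```python
# Hypothetical sketch of a citation score addressing the deficiencies the
# text identifies in raw citation counts. All field names and weights are
# illustrative assumptions, not an existing metric.

def weighted_citation_score(citations):
    """Score a paper from a list of citation records.

    Each record is a dict with:
      'self_cite'  - True if a citing author also authored the cited paper
      'venue_type' - 'article' (substantial paper) or 'letter'
      'sentiment'  - 'supportive', 'neutral', or 'critical' (cited for
                     good reasons or bad)
    """
    venue_weight = {'article': 1.0, 'letter': 0.25}       # letters count less
    sentiment_weight = {'supportive': 1.0, 'neutral': 0.6, 'critical': 0.2}

    score = 0.0
    for c in citations:
        if c['self_cite']:            # self-citations contribute nothing
            continue
        score += venue_weight[c['venue_type']] * sentiment_weight[c['sentiment']]
    return score
```

Under this sketch, a self-citation adds nothing, and a passing mention in a letter counts for a fraction of a supportive citation in a substantial paper — exactly the distinctions the raw citation index ignores.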

Post-archiving assessment already exists and works well: it works for mathematics, physics, and computer science (54), where authors post pre- and post-review versions of their work on servers such as arXiv at about $10 per article. Systems of community post-archiving review can be added on top of any computer algorithms that are used. One major benefit of this system is that it focusses on the paper, not the journal. Indeed, journals as physical entities, as most people know and understand them, are teetering on the brink of obsolescence: they are redundant artefacts of ‘paper-processing’. Now that we know the genetic code of all organisms, the ‘tree-of-life’, based essentially on morphology, has been superseded. The same is so of journals, because indexing systems, which can cope with every single word in the whole article (like reading the whole genetic code), represent a much more sophisticated categorisation of papers than their assignment to one or other journal in a somewhat arbitrarily defined ‘discipline’.

There is also a considerable time-saving for researchers in this kind of system, because they will no longer have to spend huge amounts of time writing and rewriting, formatting and reformatting, papers as they offer them to different journals and deal with referees’ comments. Such savings extend to other aspects, such as referees’ time, grant applications, etc. Many parts of the current systems, inherited from hardcopy journals, are inefficient and time-wasting for researchers, especially because journals are adept at devolving work onto other people (i.e. academics) to save themselves money.

The whole process of getting good and original ideas aired and critiqued in the intellectual space is stuck in inefficient antediluvian mire.

It is not difficult: if I, as an expert on ST, spend an hour reading a paper relevant to ST (and Google knows my H-index), then it will weight that information proportionately, compared to a first-year pharmacy student who spends an hour reading the same paper. It would be simple to build in more sophisticated ratings of the quality of material, which might be weighted using agreed algorithms to make them more discriminating. Universities etc. already hold repositories of work that they judge worthy of consideration for indexing. These would be examined by ‘bots’ like those used by Google. It does not take much thought to see how such a system would make standard journals largely, or completely, redundant. Apart from universities, grant agencies and funding charities could hold archives of whatever work they wish.
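The reader-weighting described above can be sketched in a few lines. This is a hypothetical illustration only: the use of H-index as the expertise proxy, the logarithmic weighting, and the one-hour cap are all assumptions chosen for the example, not a proposal for the actual algorithm.

```python
# Hypothetical sketch of the reader-weighted engagement metric described
# in the text: each reading event contributes in proportion to the
# reader's expertise (proxied here by H-index) and the time spent.
# The specific weighting scheme is an assumption for illustration.
import math

def engagement_score(events):
    """events: list of (h_index, minutes_read) tuples for one paper."""
    score = 0.0
    for h_index, minutes in events:
        expertise = math.log1p(h_index)    # diminishing returns on seniority
        attention = min(minutes, 60) / 60  # cap the credit at one hour
        score += expertise * attention
    return score
```

With this weighting, an established expert (H-index 40) reading the paper for an hour contributes substantially, while a first-year student (H-index 0) reading for the same hour contributes nothing; any real system would of course need safeguards against gaming such signals.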

Such a system would have the advantage that fraudulent work probably would be more readily revealed and would disappear from the citation literature. Bad work would soon acquire the equivalent of the mark of the cross of the plague and few would see it or read it. This would improve on the flawed current system where many journals willfully refuse to correct or retract obviously faulty and fraudulent material, and universities refrain from disciplinary measures against those who infringe accepted rules, e.g. ghost-writing.

Contrariwise, important work would ascend to its appropriate ranking more efficiently. All the computer software and protocols to achieve this already exist, to a degree that would make fraud, plagiarism, and gaming the system difficult.

The hundreds of millions, possibly billions, of dollars thus saved could be directed to librarian-run services (librarians would manage their establishment’s archives and check that their establishment’s papers were properly researched and referenced), and a few appropriate regulatory bodies could oversee the system in a democratic and transparent manner. It is difficult to see why this would not be much superior to the current system. The ‘sunk cost fallacy’ (in other words, do not throw good money after bad) is relevant, because continuing to ‘invest’ in journals is a bad policy.

Post-archiving assessment is cheaper, better, quicker, more transparent, and hard to cheat or ‘game’.

Eisen, a founder of PLoS, has said: ‘but my frustration lies primarily with leaders of the science community for not recognizing that open access is a perfectly viable way to do publishing’ … ‘The result is a profoundly anti-scientific discourse that undermines the very idea of scientific collaboration’. Academics themselves could institute such a system and it would be hugely empowering throughout the whole educational world, and it would make all research freely available to everyone. That is democratic.

The corporate publishers have not refrained from self-interested bullying tactics: a group of them (Reed-Elsevier, Wiley-Blackwell, Springer, Taylor & Francis, the American Chemical Society, and Sage Publishing) formed the Coalition for Responsible Sharing (CRS) to pressure the scientist social-networking site ResearchGate into taking down 7 million ‘unauthorized’ copies of their papers. That prompted scientists to consider not publishing in Elsevier journals, and that backlash caused a reversal of their position (or rather a postponement of further action until the fuss died down*). Indeed, referees could choose to decline to assist bullying publishers like these, who use their power to protect their profits and inhibit access to science knowledge.

It is time for academics to send a clear message to all publishers that this greedy, anti-democratic behaviour will not be tolerated: it is close to reaching the stage where academics who assist these publishers might be regarded as betraying ‘democratic science’.

*Proof — since the first draft of this commentary further legal action has been instituted

But one Swedish ISP has just retaliated by blocking Elsevier’s site!

The UK Select Committee on Science and Technology has supported mandating that publicly-funded research be placed in institutional repositories, and the Wellcome Trust has now mandated that all its funded research be made open access (4).

As of November 2018, the Wellcome Trust, the Gates Foundation, and 15 other funders are backing Plan S; in Europe, at least, all science may be open access by 2022 (for an up-to-date summary see here).

It is uncertain how much archived research benefits from also being in a ‘scientific journal’.

And so, we return to the question, ‘why have journals at all?’ and why are academics agreeing to referee for bullies?

Summary and conclusions

Tempora mutantur et nos mutamur in illis

It is time for change.

If a new system for promulgating quality science were being designed from the ground up, the current system would not be considered.

Many journals have descended to a level where they no longer qualify to be called ‘scientific’. The publication of most scientific papers now has little to do with advancing scientific knowledge.

Many journals are no longer ‘scientific’

The last few decades have proved beyond reasonable doubt that profit and good science journals are incompatible.

Whatever individual opinions and preconceptions may be, there is strong evidence that much of the journal publishing enterprise fails to fulfil the functions expected of it; these points emerge from the numerous sources and considerations herein:

It often fails to recognize benchmark papers

It is wildly inconsistent in its assessment of the merit of papers

Supply and demand are driving down standards

It encourages salami publishing

It actively discourages papers replicating previous work

It fails to educate and audit the key players; i.e. editors and referees

Profit comes before science, so good editors get sacked

It represents poor value for money

It has been slow and inefficient in adopting new technology to ease the burden of formatting and referencing papers etc.

It is unnecessarily time-consuming for authors

It has failed to address the issue of vested interests of owners, editors, and referees

It fails to publish criticism and responses

It usually fails to detect fraud and plagiarism

Some journals do not retract fraudulent material

That list could be even longer, but that is quite enough to illustrate that the enterprise is beset with serious faults: so much so that only a tiny proportion of journals meet the definition of being ‘scientific’. The key obligations of editors and referees to guard the quality and probity of the scientific literature are not being fulfilled. It is costing a fortune, a disproportionate share of which goes into the pockets of major publishing houses, which make huge profits without benefiting medicine or publishing much at all. This is in part the poisonous legacy of the psychopathic fraudster Robert Maxwell. Publishers add little to science; many might think of them as more akin to parasites on its body.

Archiving and post-hoc assessment

It is now probably (some would say definitely) more logical and efficient for papers to be archived by institutions and their merit assigned after posting. If sub-standard work archived by institutions is subsequently poorly rated by posterity, then the reputation and ranking of those institutions will suffer. Their ‘institutional impact factor’ would be a more valued and nurtured metric than the distantly related impact factor of the journals in which their members now publish their work. Therefore, they should be strongly motivated to put in place mechanisms to screen material that is ‘archived’ — which effectively will be ‘pre-publication’ review.

Existing experience and research demonstrate that assigning merit after archiving can be achieved reliably and transparently (as already done in some disciplines) using various methods, including computer-generated algorithms. One major advantage of such techniques is that many of them will be automatically retroactive, and will thus benefit all previously published work. Improvements made as the system evolves will also be automatically retroactive.

Academics: nascent power

Academics themselves are the main stumbling block to the institution of new systems, as I suspect Eisen (above) would agree. What will give them the courage and motivation to do something? Will they get sick of being bullied and taken advantage of? The Wellcome Trust, for example, has sufficient to gain from this to become more of a driving force than it already is; indeed, as I write (November 2018), the Wellcome Trust and others are backing Plan S (55) in Europe — see here.

Academics have a great deal of power (but not currently much will) to change this dysfunctional system: for example, referees might now seriously consider declining to assist publishers who do not abide by certain standards, or who engage in bullying, or like Elsevier, who use their power to protect their profits and inhibit access to science knowledge.

At the very least referees might consider a written statement to any publisher they do reviews for, stating clearly that they will not tolerate bullying and the sort of behaviours described above.

I am sure the publishers know that they are treading a fine line, and that they are near the edge of a steep cliff, at the bottom of which may soon lie the broken and bankrupt body of corporate publishing.

Indeed, an interim boycott of refereeing, or a ‘work-to-rule’, whilst a new system is established, would serve as a powerful stimulant for a radical rethink of all aspects of the publishing game. Why are academics agreeing to referee for greedy bullies?

Re-direction of resources

Change will free up for redeployment large amounts of money currently being paid to publishers who add little of value to the scientific endeavour. This money can be redirected to provide various benefits, like remunerating referees, and especially to expand and improve library services and to reinforce the key position and role of librarians, who are, or at least should be, an essential hub at the center of all good research.

With a little determination and imagination, the academic world could bring about tremendous and beneficial improvements in the democratic dissemination of knowledge. It is an exciting prospect.


1. Doja, A, Eady, K, Horsley, T, Bould, MD, et al., The h-index in medical education: an analysis of medical education journal editorial boards. BMC Med Educ, 2014. 14: p. 251.

2. Patience, GS, Patience, CA, Blais, B, and Bertrand, F, Citation analysis of scientific categories. Heliyon, 2017. 3(5): p. e00300.

3. Smith, RL, Medical Journals Are an Extension of the Marketing Arm of Pharmaceutical Companies. PLoS Med, 2005. 2: p. e138.

4. Lawson, Open Access Policy in the UK: From Neoliberalism to the Commons. Doctoral thesis submitted for completion of a PhD in English and Humanities at Birkbeck, University of London, 2018.

5. Jay, A, A New Great Reform Act. CPS, 2009.

6. Suzuki, K, Edelson, A, Iversen, LL, Hausmann, L, et al., A Learned Society’s Perspective on Publishing. J Neurochem, 2016. 139 Suppl 2: p. 17-23.

7. Buranyi, S, Is the staggeringly profitable business of scientific publishing bad for science? Guardian, 2017.

8. Altman, DG, The scandal of poor medical research. BMJ, 1994. 308(6924): p. 283-4.

9. Juvenal, Sed quis custodiet ipsos custodes? Satire, c. 55-140: Satire VI, line 347.

10. Solomon, DJ, The role of peer review for scholarly journals in the information age. Journal of Electronic Publishing, 2007. 10(1).

11. Jefferson, T, Wager, E, and Davidoff, F, Measuring the quality of editorial peer review. JAMA, 2002. 287(21): p. 2786-2790.

12. Jefferson, T, Rudin, M, Folse, SB, and Davidoff, F, Editorial peer review for improving the quality of reports of biomedical studies. Cochrane Database of Systematic Reviews, 2006(1).

13. Ferguson, C, Marcus, A, and Oransky, I, Publishing: The peer-review scam. Nature, 2014. 515(7528): p. 480-2.

14. Smith, R, The trouble with medical journals. J. R. Soc. Med., 2006. 99(3): p. 115-9.

15. Garcia-Larrea, L, Twenty years after: Interesting times for scientific editors. Eur J Pain, 2016. 20(1): p. 3-4

16. Tyrer, P, A handmaiden to science: the role of the editor in psychiatric research. Acta Psychiatr. Scand., 2015. 132(6): p. 428.

17. Lundh, A, Barbateskovic, M, Hrobjartsson, A, and Gotzsche, PC, Conflicts of interest at medical journals: the influence of industry-supported randomised trials on journal impact factors and revenue – cohort study. PLoS Med, 2010. 7(10): p. e1000354.

18. Ray, JG, Judging the judges: the role of journal editors. QJM, 2002. 95(12): p. 769-74.

19. Aisen, ML, Judging the judges: keeping objectivity in peer review. J. Rehabil. Res. Dev., 2002. 39(1): p. vii-viii.

20. Siler, K, Lee, K, and Bero, L, Measuring the effectiveness of scientific gatekeeping. Proc Natl Acad Sci USA, 2015. 112(2): p. 360-5.

21. Burns, P, Academic journal publishing is headed for a day of reckoning., 2017.

22. Collier, J and Vallance, P, Originality: who is to judge? Lancet, 1993. 342(8870): p. 510.

23. Morris, J, Originality: Who is to Judge? 1993, Department of Pathology, Lancaster Moor Hospital, Lancaster LA1 3JR, UK: Lancaster.

24. Gillman, PK, Tricyclic antidepressant pharmacology and therapeutic drug interactions updated. Br J Pharmacol, 2007. 151(6): p. 737-48.

25. Werneke, U, Jamshidi, F, Taylor, DM, and Ott, M, Conundrums in neurology: diagnosing serotonin syndrome–a meta-analysis of cases. BMC Neurology, 2016. 16(1).

26. Lasserre, K, Expert searching in health librarianship: a literature review to identify international issues and Australian concerns. Health Info Libr J, 2012. 29(1): p. 3-15.

27. Altman, DG, Unjustified Restrictions on Letters to the Editor. PLoS Med, 2005. 2(5): p. e126.

28. Real, IP, P-Hacker Confessions: Daryl Bem and Me. Skeptical Inquirer, 2016.

29. Diener, E and Biswas-Diener, R, The replication crisis in psychology. Noba Textbook Series: Psychology, eds Biswas-Diener R, Diener E (DEF Publishers, Champaign, IL), 2017.

30. Moher, D, Glasziou, P, Chalmers, I, Nasser, M, et al., Increasing value and reducing waste in biomedical research: who’s listening? The Lancet, 2016. 387(10027): p. 1573-1586.

31. Barbour, V, Competing interests in journal editors. BMJ, 2017. 359: p. j4819.

32. Galipeau, J, Cobey, KD, Barbour, V, Baskin, P, et al., An international survey and modified Delphi process revealed editors’ perceptions, training needs, and ratings of competency-related statements for the development of core competencies for scientific editors of biomedical journals. F1000Res, 2017. 6: p. 1634.

33. Moher, D, Galipeau, J, Alam, S, Barbour, V, et al., Core competencies for scientific editors of biomedical journals: consensus statement. BMC Med, 2017. 15(1): p. 167.

34. Shamseer, L, Moher, D, Maduekwe, O, Turner, L, et al., Potential predatory and legitimate biomedical journals: can you tell the difference? A cross-sectional comparison. BMC Med, 2017. 15(1): p. 28.

35. Ioannidis, JP, Why most published research findings are false. PLoS Med, 2005. 2(8): p. e124.

36. Akers, KG, New journals for publishing medical case reports. Journal of the Medical Library Association: JMLA, 2016. 104(2): p. 146

37. Simons, MR, Morgan, MK, and Davidson, AS, Time to rethink the role of the library in educating doctors: driving information literacy in the clinical environment. J Med Libr Assoc, 2012. 100(4): p. 291-6.

38. Bjork, BC, Growth of hybrid open access, 2009-2016. PeerJ, 2017. 5: p. e3878.

39. Gadagkar, R, The ‘pay-to-publish’ model should be abolished. Notes Rec R Soc Lond, 2016. 70(4): p. 403-4.

40. Beall, J, Best practices for scholarly authors in the age of predatory journals. Ann. R. Coll. Surg. Engl., 2016. 98(2): p. 77-9.

41. Lacasse, JR and Leo, J, Ghostwriting at elite academic medical centers in the United States. PLoS Medicine, 2010. 7(2): p. e1000230.

42. Barbour, V, How ghost-writing threatens the credibility of medical knowledge and medical journals. Haematologica, 2010. 95(1): p. 1-2.

43. Moffatt, B and Elliott, C, Ghost marketing: pharmaceutical companies and ghostwritten journal articles. Perspect. Biol. Med., 2007. 50(1): p. 18-31.

44. Stern, S and Lemmens, T, Legal remedies for medical ghostwriting: imposing fraud liability on guest authors of ghostwritten articles. PLoS medicine, 2011. 8(8): p. e1001070.

45. Stretton, S, Systematic review on the primary and secondary reporting of the prevalence of ghostwriting in the medical literature. BMJ open, 2014. 4(7): p. e004777.

46. Editors, PM, Ghostwriting revisited: new perspectives but few solutions in sight. PLoS Medicine, 2011. 8(8): p. e1001084.

47. Michaels, D, Doubt is their product: how industry’s assault on science threatens your health. 2010.

48. Sismondo, S, Ghost-Managed Medicine: Big Pharma’s Invisible Hands. 2018.

49. Sismondo, S, Ghost management: how much of the medical literature is shaped behind the scenes by the pharmaceutical industry? PLoS Med, 2007. 4(9): p. e286.

50. Sismondo, S and Doucet, M, Publication ethics and the ghost management of medical publication. Bioethics, 2010. 24(6): p. 273-83.

51. Barbour, V, How ghost-writing threatens the credibility of medical knowledge and medical journals. 2010, Haematologica.

52. Perry, R and Katz, Y, Higher Ed, Inc. How the university became a profit-generating cog in the corporate machine. The Chronicle of Higher Education, 2018. Oct 18.

53. Avorn, J, Teaching clinicians about drugs–50 years later, whose job is it? N. Engl. J. Med., 2011. 364(13): p. 1185-7.

54. Van Noorden, R, Open access: The true cost of science publishing. Nature, 2013. 495(7442): p. 426-9.

55. Else, H, Radical open-access plan could spell end to journal subscriptions. 2018.


