Guidelines: problems aplenty

They fuck you up, the bloody guidelines.   

   They weren’t designed to, but they do.   

They’re filled with faults, then add

   egregious extras, just for you.

A parody: my apologies to Philip Larkin – Original here

Introduction

In a time of universal deceit, telling the truth is a revolutionary act.

Attributed to George Orwell

Guidelines are the ‘final common pathway’ communicating the data generated by randomised controlled trials (RCTs) to practicing doctors. This commentary assembles the evidence that most RCTs, and therefore guidelines, are seriously flawed, and are a negative influence on good clinical practice (1, 2). First and foremost, this is because so many RCTs are based on corrupted data and corrupted methodology, which emanate from the systematic errors, manipulations, and distortions of clinical trial processes (mostly by ‘big pharma’), and the resulting ‘evidence’. A great majority of all trials published are paid for by ‘big pharma’ (3), which distorts the greater part of all published, and unpublished, medical science.

As the Lancet editor, Richard Horton, so scathingly put it: ‘much published drug-trial research is McScience’; it is advertising — not science. He is among a number of journal editors and prominent researchers who have commented in relation to this, along with other ex-editors of leading medical journals (4-10). Horton has reviewed Kassirer’s (ex-NEJM editor) just-published book (11) and comments that ‘The best editors get fired’ [because making money and publishing good science are antithetical enterprises]. All these editors came to the realisation that they were being duped, manipulated and blackmailed into publishing misleading science, through the prestigious publications they were in charge of, which had been turned into cash-cows and, [Horton] ‘little more than information-laundering operations for industry’, and [Smith] ‘extensions of the marketing-arm of pharmaceutical companies’ (7, 12).

Another facet is the erosion of the fundamental pillar of the independence of medical editors. This erosion has resulted from commercial pressures, either directly from the publisher, or via withdrawal of advertising by pharmaceutical companies, or threatening not to purchase ‘reprints’ (13). There have been instances of the direct blocking of publication of research which was unfavourable to particular drugs (14).

Doctors, and other health-care professionals, generally have an insufficient appreciation of just how comprehensively corrupted are the data that are subjected to ‘meta-analysis’ (M-A) and ‘systematic review’, which are the back-bone of evidence-based medicine (EBM), and therefore how corrupted are the guidelines themselves — as Prof Ioannidis recently expressed it: ‘Few systematic reviews and meta-analyses [there are now hundreds of thousands] are both non-misleading and useful’ (15). The crazy position is that there are more systematic reviews and meta-analyses published annually than actual original trials.

They are all merely ‘re-digesting’ the same material, which is sometimes execrably poor, usually producing different ‘results’ and interpretations.

Almost none of these data, relied on by reviews, M-As and guidelines***, have been independently replicated. That which is not replicated is not science.

*** In this commentary, one can generally think of the terms, systematic-review, meta-analysis, EBM, and guidelines, as synonymous.

Big Pharma has played a major part in creating and steering both diagnostic practice and guidelines, which are the coup de grace in this sorry saga of the mutilation and abuse of science.

There has been, quite rightly, a fuss, and much writing, about the various issues relating to the deceit, fraud, bias etc. detailed and discussed in this commentary. On a web site called ‘International Network for the History of Neuropsychopharmacology’, some of the most famous names in the field from the last 60 years have aired opinions agreeing with what I say in this commentary.

However, right from the start, I want to emphasise how little difference this has made to the continuation and dominance of those very same improper and dishonest practices.

Many of the apparent improvements that have been claimed, concerning the probity of science research and publishing, constitute a charade and are no more than a splash of new paint on the facade of a decrepit building.

Guidelines magnify these mis-truths and imbalances and promote a narrow perspective on health-care which puts excessive emphasis on drugs over other non-drug interventions, or no intervention at all.

Independent replication is the corner-stone of all science: if you cannot inspect the original ‘raw’ data (see below), you cannot know if it is sound data, nor whether anyone has replicated it. Most medical research is not independently replicated, ergo, it is not science. It is that simple. No rationalisations or excuses can alter that.

Either you are doing science, or you are not.

Epidemic of meta-analyses: RCTs & ‘coprophagia’

Guidelines have proliferated like rabbits over recent decades, and they are the dominant influence over the treatments chosen by practicing doctors. The ‘meta-analyses’ and ‘systematic reviews’ on which they are based have proliferated even more than the guidelines themselves, to a farcical extent that rabbits (coprophages) cannot match — in a recent paper, ‘The Mass Production of Redundant, Misleading, and Conflicted Systematic Reviews and Meta-analyses’, Professor Ioannidis has detailed how more systematic reviews of trials are published annually than actual new randomized trials. For antidepressant drugs alone there were 185 meta-analyses published between 2007 and 2014 (15). Professor Ioannidis concludes:

‘The production of systematic reviews and meta-analyses has reached epidemic proportions. Possibly, the large majority of produced systematic reviews and meta-analyses are unnecessary, misleading, and/or conflicted’.

Many meta-analyses are indeed industry initiated, organised, sponsored, and conflicted (16).

How better to describe this activity than as ‘intellectual coprophagia’?

The meetings of senior doctors, to craft both the Diagnostic & Statistical Manual (DSM)*** and most guidelines, have been heavily funded by drug companies. Large numbers of eminent American doctors were handsomely remunerated to attend resorts in Palm Springs and like venues, to thrash out guidelines for the use of SSRIs, Xanax, Risperdal, Seroquel … you name it.

*** The DSM is the very profitable product of the ‘American Psychiatric Association’: I have to say that most of my colleagues in USA, those who have retained their probity, regard the APA as a corrupted organisation.

Guidelines now unjustifiably impose themselves on doctors who may not agree with them. That is ‘intellectual imperialism’ (17). I suggest ‘intellectual fascism’ is a more accurate term.

By the way, there are multiple guides to guidelines — honestly (18-22).

Which set of guidelines do you then choose to follow? One might facetiously ask, ‘is there an evidence base for deciding which guideline has the best evidence base’?

Guidelines are contaminated by having expert panel-members who have financial ties to drug companies, even though the Institute of Medicine long ago recommended that no such people should be on guideline-panels (23, 24). Even if panel-members are truly independent, their main currency is still corrupted RCT data, and no-one can overcome that problem, any more than can the statistical legerdemain of meta-analysis — garbage in, garbage out (see below).

There are other good reasons, in addition to the problems with RCTs, to suppose that the evidence-based medicine (EBM) enterprise is diseased from the roots to the shoots (17, 25-28).

Guidelines have morphed. They may well have been intended by some proponents, as exactly that, guides: the sort of kind advice that a senior colleague might give about a difficult case. But they have been seized on by the simple-minded, the lazy, the authoritarian, the managers, the media, and even politicians, as if they were diktats — and that is how one sees them being applied to many patients.

This is a complex topic to deal with and understand. It involves an understanding of history, how businesses work, how medicine works (I refer particularly to the vested interests of specialists and experts), and much else besides. That understanding can only be attained through wide-ranging experience of medicine and life, and extensive reading. Few doctors have the time to do that, except for those like me who are enjoying a comfortable retirement in the sun, which is setting on the age of the polymath.

The books I have listed are in my view the indispensable background to enabling people to see and understand the big picture. I will simply add that as a pharmacologist with a sceptical attitude, I am absolutely certain that vast numbers of people are being treated with expensive drugs that produce little or no benefit, but have many poorly documented and unpublicised ill effects.

Do we need to be reminded that adverse reactions to drugs, and drug-drug interactions, are among the leading causes of hospital admissions and deaths (29-33)?

I am reminded of Shaw’s words:

‘When a stupid man is doing something he is ashamed of, he always declares that it is his duty.’

I expect my readers can translate that into ‘guideline-speak’.

Recommended books

This is a good point at which to recommend books relevant to this subject: I recommend these because they are all written by scientists giving an informed view of the subject. These individuals continue to attract a considerable degree of opprobrium: powerful groups do not like the truth being told.

Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients (Faber and Faber, 2013). Ben Goldacre; Senior Clinical Research Fellow, Centre for Evidence-Based Medicine, University of Oxford.

Pharmageddon. Professor David Healy, Hergest Unit, Bangor, Wales (the best pun title I can remember).

Deadly Medicines and Organised Crime: How Big Pharma Has Corrupted Healthcare. Professor Peter C. Gøtzsche; Danish physician, medical researcher, and leader of the Nordic Cochrane Center at Rigshospitalet in Copenhagen, Denmark. He co-founded, and has written numerous reviews in, the Cochrane Collaboration.

Psychiatry Under the Influence: A Case Study of Institutional Corruption. Professor Lisa Cosgrove and Mr. Robert Whitaker (a medical writer, director of publications at Harvard Medical School), both fellows at the Edmond J. Safra Center for Ethics, Harvard.

The Truth About the Drug Companies: How They Deceive Us and What to Do About It (Random House, 2005). Marcia Angell, M.D., former Editor-in-Chief of the New England Journal of Medicine (she stepped down on June 30, 2000), now a Corresponding Member of the Faculty of Global Health and Social Medicine at Harvard Medical School and Faculty Associate in the Center for Bioethics. The only one on this list that I have not read myself.

Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. Professor Naomi Oreskes, Erik M. Conway. This is a more general historical overview covering tobacco and climate denial etc. which gives a better impression of the enormity and persistence of these big-business tactics.

I do not make a habit of reading books like this, since I have already made most of these points myself; and continuing to read literature which simply agrees with what you already think is not a priority, whereas reading material which disagrees with what you think is what good scientists do. However, when I decided I would write a commentary about guidelines it was necessary to read, or re-read, these texts.

Seriously corrupted data — a core problem

The serious, persistent problems surrounding conflicts of interest and evidence relating to guidelines are undeniable, as evidenced by many recent reviews (20, 34-36), which indicate, as suggested above, that little has changed over the last decade or two, despite all the kerfuffle.

These problems are related to, first and foremost, the appropriation, hiding and distorting of patient data by ‘big-pharma’ (see below), as well as the conflicted handling, and then mis-used third-rate science, that underpins most of the clinical-trial base, and thus of the ‘evidence-based medicine’ enterprise.

This has been fuelled by the massive financial power imbalance in the medical system (pharmaceutical companies have all the money). It has been powerfully catalysed by the weak acquiescence of the medical profession in allowing drug companies to take over the whole trial process, including the actual data — that is, incidentally, a glaring ethical betrayal of patients: but few seem to have commented on that, or even noticed it.

Allowing the partisan drug industry to sequester the data, and refuse to let (even their own) expert ‘authors’ examine it, was a serious tactical error (possession is 9/10 of the law).

It is hard to respect the medical professionals who have colluded in this process, a proportion of whom are undoubtedly wicked, greedy, self-aggrandising and dishonest, even if they convince themselves otherwise.

What do I mean by seriously corrupted data? The concise answer is this: data that any seasoned observer has good reason to suspect are unreliable, and which are not subject to examination and checking by others, nor reproducible (see e.g. Pharmageddon).

If I report that a patient I have assessed was ‘suicidal’ this means little if I do not record exactly what I asked the patient, and what they replied. Needless to say, it means even less if I refuse to show my case records to anybody else, and simply justify my opinion by saying ‘because I say so’. But that is what big Pharma is still getting away with.

There is rather more to it than that: for instance, if the patient does not have a relationship with me, and does not trust me, then they are unlikely to answer truthfully about suicide, for fear of being locked-up.

I am going to have to give a few examples relating to corrupted data here, because, although endless examples and details are in the references and books cited, many will not get around to looking at them. Since these are crucial evidential material, I will give details on one or two, because I can hear some of my colleagues saying, ‘come on Ken, surely you are overstating the case here, it's only a few bad apples etc.’ If only …

Blumsohn was the doctor at Sheffield who lost his job for attempting to insist on seeing the raw data behind the tables in the ghost-written paper he was presented with as ‘author’. He was not prepared, as most are, simply to ‘sign off’ on it. His dean, Eastell, another co-author, did sign off on it, and subsequently appeared before the General Medical Council because he said he had seen the ‘raw data’ when he had not*** (37). The reference has all the details.

***An explanatory comment on this is mandatory. Eastell’s (successful) defence was that he had seen the data, but what he was referring to was the coded data, not the original ‘raw data’. Let us assume that he was not being disingenuous (which may be the case); that still leaves him guilty of being a naive and bad scientist. If scientists did not insist on dealing with original ‘raw’ data then we would all believe in ghosts. It all goes to show how many doctors do not understand science.

Documents revealed in another court case showed a senior company executive commenting on the [established fact of] hiding of adverse-event data from a drug trial, saying in an internal company email, ‘if this comes out I don't know how I will be able to face my wife and children’ — one imagines this was a rather superficial and self-serving mea culpa, but no less revealing for that.

It seems that many companies have been keeping research documentation ‘offshore’, which impedes access to it by legal processes — but such data could only be incriminating if they revealed ‘misrepresentation’, lying, cheating, whatever. It is an instance of ‘excusatio non petita accusatio manifesta’ [he who excuses himself, accuses himself].

I have previously commented on the chicanery involved in the Risperdal trials, and would only repeat here that the much-cited meta-analysis by Leucht (38) failed to cite the classic Huston paper that dissected the deceit pervading Risperdal trials (39): I asked Leucht why he had not cited Huston and he replied that they simply did not know about it. How hard did they look? If I can find it, in my disadvantaged and isolated situation here in tropical North Queensland, how come a professor at a major European university cannot find it? You see what you want to see, and forget what you do not want to remember.

Anyway, the extensive dishonesty involved with Risperdal (and, of course, many other drugs (40)) is well documented elsewhere and I have lost track of the number of successful legal actions against them in relation to this. They must have paid out more than a billion, by now. Ah well, it's just the cost of doing business. Google it, it will astonish you.

Look at the references for a myriad of further examples: when commenting on this sort of thing one feels like a hawk attacking a flock of starlings, there are so many targets that there is a danger of not killing any of them. I have to add here the observation that those are precisely the tactics Big Pharma sometimes uses. Flood the literature with ‘your stuff’ and the contrary view simply gets snowed-under and lost — that is exactly how they dealt with Barry Marshall (Helicobacter, Nobel prize, remember?) in order to maintain sales of the billion-dollar blockbuster anti-ulcer drugs that still had a while to run under patent. Eventually the more effective, life-saving, long-term cure, antibiotics, displaced them: eventually, after many more deaths — it seems collateral damage is acceptable not only in the military.

RCT: gold-standard or fools-gold?

‘Where the outcome at issue is at all substantial then not only is randomisation unnecessary, so also is the use of any formal statistical test of significance’.

Sir Austin Bradford Hill 1965.

Control RCTs. Control clinical practice

Control RCTs and the data they generate, keep the data to yourself (preferably off-shore, along with your tax shell-companies), and you control clinical guidelines and clinical practice: talk about a fait accompli.

The dogma of RCTs as the gold standard has been made to over-shadow other forms of evidence: therefore, controlling clinical practice*** has become, pretty much, a one-step process.

*** In case there is anyone who does not get it — controlling clinical practice means drug companies can make sure all the new expensive drugs are featured prominently as first-line treatment recommendations in the guidelines, and thus maximise their profits.

Therefore, key questions for the incisive analyst become: do RCTs have any special epistemic validity, excellence, or superiority? How much value should we place on RCTs? How are their results demonstrably relevant and beneficial to the typical patient? Do they translate into reliable and meaningful short-term or long-term treatment decisions?

These are crucial questions which have never been well addressed, even though they were raised by eminent statisticians right at the start of the whole EBM/RCT endeavour. As Worrall discusses (41), proponents of EBM advance an exaggerated view of the epistemic virtues of RCTs — here, we might note that Hill himself made a point of endorsing Claude Bernard's view that there is ‘no qualitative epistemic difference between experiment and (properly scientific) observation’ [i.e. clinical experience].

The eminent Australian professor, Gordon Parker, argued, some time ago, that there are major ‘limitations to ‘level 1’ evidence derived from randomised controlled trials … which are no longer producing meaningful clinical results’ (42), and that paper, and others (43-47), are entirely consonant with the major points raised herein.

One could make more of the epistemological, methodological, and statistical faults and problems concerning RCTs (1, 45, 48) [see especially Feinstein, 1997], but that is not the prime purpose of this commentary, other than to raise awareness and persuade readers that there are indeed very serious problems which should have a major influence on how ordinary doctors regard the results of RCTs, and therefore, guidelines.

No lesser authority than Hill himself pointed out that you need neither randomisation, nor statistics to analyse the results, unless the treatment effect is very small (49). Remember that.

Anti-depressant — a meaningless term

There is another point to be borne in mind. The degree of symptom improvement that a drug must exhibit, in order to be approved and officially labelled as an ‘antidepressant’, is minimal. Bearing in mind that such drugs are assessed for effectiveness using the poor and antiquated ‘Hamilton rating scale for depression’, one can easily see how small changes in symptoms that have nothing to do with the core pathology of depressive illness (anergia and anhedonia) are sufficient to get almost any drug with sedative or anxiolytic properties over that hurdle (e.g. see my commentary on quetiapine), even if it has absolutely no effectiveness on the core changes that constitute the illness.

Look at this online version of the HRSD to see what I mean. Qs 4, 9, 10, 11 & 12 might all be improved by any anxiolytic/sedative — a one-gradation change in each of those produces a 5-point improvement in your score, more than double that needed to get a drug approved by the FDA as an AD. Yes, incredible as that may seem to outside observers, it really is that silly (50).

Also, note there is not one single question in HRSD assessing the key core symptom of anhedonia, and precious little for anergia either — absurd, totally absurd.
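The scoring arithmetic above can be made explicit. This is a hypothetical sketch, not a claim about any particular trial: the item numbers come from the text, and the roughly 2-point drug-versus-placebo margin is the approximate figure quoted elsewhere in this commentary, not an official FDA threshold.

```python
# Hypothetical sketch: item numbers (Qs 4, 9, 10, 11, 12) are from the text;
# the ~2-point drug-placebo margin is the approximate figure quoted in this
# commentary, not an official regulatory threshold.
sedation_sensitive_items = [4, 9, 10, 11, 12]

# One-gradation improvement on each sedation/anxiety-sensitive item:
improvement = sum(1 for _ in sedation_sensitive_items)

approval_margin = 2  # approximate drug-vs-placebo HRSD difference

print(improvement)                        # 5
print(improvement > 2 * approval_margin)  # True: more than double the margin
```

Trivial arithmetic, of course — which is exactly the point: a purely sedative effect on a handful of items clears the bar without touching anergia or anhedonia.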

That is not science.

Deliberately dishonest coding

The data gathered in clinical trials are inevitably subject to interpretation and uncertainty. Responses to a series of artificially and rigidly constructed questions, asked by someone unknown to the patient, who is paid by a drug company to go around asking questions from a clipboard!

For the purposes of analysis, the responses are coded by someone. Doctors have abdicated their lead role in trials, so this someone is rarely the doctor who had responsibility for clinical care of the patient, but a technician at the drug company's central office — in fact, companies pay separate ‘clinical trials’ companies, set up specially to manage these things, to do this. Having an arm's-length separation facilitates plausible deniability. A recent painstaking re-analysis of the infamous paroxetine study 329 illustrates many of these points (51).

Furthermore, we know that coding sometimes has been incorrect (or deliberately dishonest), so that suicidal thoughts and feelings and intentions were coded, during the analysis of results, as something different (51-55) — and read ‘Pharmageddon’ for further details and references. Therefore, when the results were presented, and written-up for publication by ‘ghost-writers’, who had nothing at all to do with the actual drug trial — they were probably not even on the same continent — no one at these meetings, neither the presenters nor the attendees, had any idea what had really happened to actual patients.

Such practices have nothing to do with good science and the many doctors that associate themselves with such practices have either been duped, or are dishonest, and are traitors to science.

The medical colleges and authorities have abandoned their ethical principles. That is highlighted by the fact that the ‘famous’ KOL doctors who have allowed their names to be used as authors (front-men) of these kinds of papers have not been struck off the medical register for dishonesty or corruption. Are we so inured to such behaviour that we have lost our capacity to be outraged by it?

It is routine practice for the doctors who participate in these trials, from various different centres, to be refused access to the original aggregated data; they only get to see the data after it has been coded by somebody else. There are now numerous documented examples that this is done misleadingly, erroneously, or dishonestly, and that the practice continues (56, 57). What has recently been put on the Internet in the name of ‘transparency’ is a token: because the data shared are not the original data, they are the coded data. Not the same thing.

That is a mockery of science.

An illustration

The way the pharmaceutical industry presented the benefits and side effects of SSRIs is an illustration of several of the above points concerning misleading manipulation of data and misclassification of side-effects. A major therapeutic effect (it is not a ‘side effect’) of all SSRIs is to inhibit the pathways that lead to sexual climax (no RCT needed there, cf. Hill). The minor effects on anxiety and mood are small by comparison (barely a 2-point difference between drug and placebo on the HDRS).

See my note on citalopram from nearly twenty years ago. [it has been on the site, but not attached to a menu — it is now: which is a reminder to use the search facility]. There are a number of bullet points at the end, one of which points out that the average practitioner would not have been able to discern the difference between those on placebo, vs those on citalopram, at 'endpoint'.

Anyway, the trials of these drugs claimed that inhibition of sexual climax was an uncommon occurrence — well, I was using clomipramine to help premature ejaculation in the 1980s, before ‘Prozac’ even existed. That is how well-known the SRI-effect on ejaculation was. I shall not dwell on this here, but if there is anybody out there who still doubts how the relative prominence of SEs vs benefits has been turned on its head, they might be persuaded if they read the relevant section of Prof Healy’s book Pharmageddon.

That exemplifies well the methodology that was developed for maximising the trivial effects on ‘mood’, by using large numbers of patients to get a marginally significant statistical result (cf. citalopram, above, and more recently of course see Kirsch (58, 59)) whilst at the same time failing to ask appropriate questions to elicit side-effects, or ‘mis-coding’ them (51, 60).

And long-term side effects — not our problem, it is licenced now, up to someone else to do all that.

That is bad science and it is deceitful science; it simply does not, and cannot, get more, how can one put it: incorrect, erroneous, false, fallacious, duplicitous, mistaken, inaccurate, shoddy, corrupt, double-dealing, deceptive, deceitful, crooked, untrustworthy, fraudulent, misleading.

In short: it is as wrong as the parrot was dead. For those interested in rhetoric that is an amusing example of ‘pleonasm’.

Statistics

The adage ‘lies, damned lies, and statistics’ has a long history going back to at least the 19th century. In The Life and Letters of Thomas Henry Huxley is his account of a meeting of the X Club, which was a gathering of eminent thinkers who aimed to advance the cause of science, especially Darwinism: ‘Talked politics, scandal, and the three classes of witnesses — liars, damned liars, and experts’. Even more apposite for our time.

I start with this old adage because it has withstood the test of time, which is telling, and because modern information-laundering, in this post-truth world, has re-invigorated its potency and influence.

Here is a tiny sample of the many references I could give, by eminent researchers, discussing the misuse of statistics in a great proportion of medical studies. Hardly surprising, then, that almost all published medical studies turn out to be wrong, as history indisputably demonstrates (61-67). One recent review by a group of eminent statisticians (68) stated [of the use of such tests]: ‘definitions and interpretations that are simply wrong, sometimes disastrously so — and yet these misinterpretations dominate much of the scientific literature’.

The American Statistical Association (ASA) has commented: ‘Statisticians and others have been sounding the alarm about these matters for decades, to little avail’ (69).

I am not a statistician, so I will merely content myself with pointing out the above references and mentioning that two of the prominent culprits are p-values and the procedure called meta-analysis, which is invariably applied in a pseudo-scientific manner. I have previously described meta-analysis as ‘the phrenology of the third millennium’. I have recently become aware that an eminent researcher from Yale pre-empted me by decades, with a better analogy. He compared it to alchemy, and his detailed criticism of it remains essential reading, two decades later (1, 70). [The 1997 ref is an exemplar of prescience and a ‘must-read’].

Meta-analysis forms the backbone of guidelines, where it reaches its pseudo-scientific zenith. Elsewhere I have quoted Charles Babbage on this subject (GIGO — garbage in, garbage out).

A researcher, whose name is well-known in this field, recently said to me in a private email:

‘I have rather gone off 'meta-analysis' as it is mostly selective/rubbish data in - spurious certainty or continuing uncertainty out, whatever the sophistication of the statistical methods. I include myself in this criticism by the way’.
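That ‘rubbish data in, spurious certainty out’ can be shown in miniature. The sketch below, with entirely invented numbers, uses the standard fixed-effect inverse-variance pooling method: if every trial carries the same bias (from selective coding, say), pooling more of them makes the confidence interval ever narrower while the answer stays just as wrong.

```python
import math

# Invented numbers: every trial estimates a true effect of 0.0, but each
# carries the same bias of 0.3 (e.g. from selective coding of outcomes).
true_effect = 0.0
shared_bias = 0.3
trials = [(true_effect + shared_bias, 0.15) for _ in range(20)]  # (estimate, SE)

# Fixed-effect inverse-variance pooling:
weights = [1 / se ** 2 for _, se in trials]
pooled = sum(w * est for (est, _se), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(round(pooled, 3))     # 0.3   -> precisely, confidently wrong
print(round(pooled_se, 3))  # 0.034 -> a tight CI around the biased value
```

No statistical sophistication in the pooling step can detect, let alone remove, a bias shared by all the inputs — which is the Babbage point: garbage in, garbage out.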

The trials included in M-A have multiple problems (71, 72): they exclude most of the patients that we treat in everyday practice — e.g. the young, the old, those with mild, or particularly serious, illness, those with multiple conditions, those on multiple drugs, and, craziest of all, patients who are suicidal. They may solicit subjects by advertisement, and many of them are now conducted in totally different cultures and settings in China, India, Asia and Africa (some 80% of Chinese trials are thought to be ‘fabricated’). Shi-min Fang (73) exposed scientific misconduct in his native China, for which he won the inaugural Maddox prize in 2012.

RCTs represent an atypical fraction of the real-world treatment population (20).

Methodology and heterogeneity

But, as if all that was not enough, it is not valid to extrapolate from the averaged result of a non-homogeneous group (70), and then apply it to individuals not from that group, but who share a somewhat arbitrary descriptive similarity (a score on a rating scale).

I defy anybody to produce even a skerrick of evidence that the group of patients defined as MDD by DSM is at all likely to represent a patho-physiologically homogeneous group.

Drawing conclusions from, or extrapolating from, RCTs involving groups that, prima facie, cannot be assumed to be, or demonstrated to be, patho-physiologically homogeneous, is incorrect. It is invalid science. Black and white. End of story. No argument.

This is such a fundamentally important scientific fact, that an understandable analogy is required.

Lots of people enjoy gardening: so, let us pretend that the patients are represented by the vegetables (non-homogeneous) in your garden; root vegetables? green vegetables? ‘fruity’ vegetables? etc. (define a vegetable, define depression — there is much mileage in this analogy).

Now then, you have got some super new fertiliser from the garden-centre (organic and terribly expensive) and you want to know if it improves the yield of your vegetable garden. (For aficionados of statistics, that is exactly why the famous statistician Fisher, of ‘Fisher's exact test’ fame, developed his analysis of variance: to help measure the effect of fertilisers on crops at the Rothamsted agricultural research station in the UK.) Would you just scatter it around the garden, then see if your basket of mixed vegetables was a little heavier than before? Or would you test the fertiliser on each separate type of plant, even though some of them look almost the same?

I hope it is obvious that, if the weight of your basket of mixed vegetables was only slightly greater on the new fertiliser, that would not prove all the plants were improving. It might well be only one of them was being helped a lot, and the rest not at all. Indeed, it might be that one or two were poisoned by it, because it was the wrong balance of nitrogen and phosphorus, or too concentrated. Whatever.

I trust that makes the point clear. No qualifications in rocket-science are needed here.
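The fertiliser analogy can be made concrete with a small simulation. The sketch below (Python, with invented numbers purely for illustration — no real trial data) models three pathophysiologically distinct subgroups that all carry the same diagnostic label; only one subgroup actually responds to the drug, and the pooled average badly understates that real subgroup effect.

```python
# Illustrative simulation (invented numbers) of averaging over a
# non-homogeneous group: a large benefit confined to one subgroup
# is diluted into an unimpressive pooled mean.
import random

random.seed(42)

def simulate_trial(n_per_subgroup=1000):
    """Three distinct subgroups, all given the same diagnosis.
    Only subgroup 'A' responds to the drug; 'C' is slightly harmed."""
    effects = {"A": 6.0, "B": 0.0, "C": -1.0}  # true mean change on a rating scale
    results = {}
    pooled = []
    for subgroup, true_effect in effects.items():
        changes = [random.gauss(true_effect, 2.0) for _ in range(n_per_subgroup)]
        results[subgroup] = sum(changes) / len(changes)
        pooled.extend(changes)
    results["pooled"] = sum(pooled) / len(pooled)
    return results

r = simulate_trial()
# The pooled average (~1.7 points) looks like a weak drug, yet subgroup A
# improves by ~6 points and subgroup C is slightly worse off.
print({k: round(v, 2) for k, v in r.items()})
```

The pooled figure is exactly the basket of mixed vegetables: a single number that tells you nothing about which plants the fertiliser helped, and conceals the one it poisoned.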

RCTs, as they are generally conceived and executed, represent science at an astonishingly incompetent level, yet that is what dominates drug research in psychiatry (and much of medicine), and ‘informs’ guidelines. It is hardly any better than the evidence for ‘Alt-Med’.

Presentation is the key: ghost authorship the solution

Despite all the fuss, ghost authorship in industry-initiated trials (i.e. most trials) is still common, perhaps even the rule (74-78).

The commissioning, timing, and placement of these ‘papers’ is orchestrated by …

… the marketing and sales divisions.

Because? Timing, presentation, and placement (key journals) are the sine qua non of optimal marketing and sales.

The medical-writing companies ghost-write and orchestrate it all; lastly, they get key authors on board, and presto …

PLoS Med and the NY Times got a raft of such documents [to do with medical-writing companies] made public (see here), in a court case: Ginny Barbour, editor in chief of PloS Medicine, said she was taken aback by the systematic approach [to generating ghost-written papers] of the [medical writing] agency. ‘I found these documents quite shocking, … They lay out in a very methodical and detailed way how publication was planned’ [before the ‘authors’ ever got involved] (79).

Many doctors routinely take the credit for articles written this way.

Such doctors, let us not mince words, are frauds, cheats, and liars.

But let us start this most serious of issues with something amusing.

A real ghost-author!

In the revealing saga of the Wilmshurst case — Wilmshurst being a man of probity — made well-known through Simon Singh and the UK libel-tourism story, it emerged, when Wilmshurst withdrew from authorship because they refused to give him the original data, that the official list of authors of the published paper included Anthony Rickards.

Anthony Rickards had died before the research was even conducted.

These unprincipled and unpleasant people then sued Wilmshurst for remarks he had made in academic good faith, about the limitations of the conclusions in the paper. This gives everyone a bit of insight into the threatening and bullying which has a major spill-over effect on the willingness of most academics to take on these kinds of people. It is a very insidious influence and totally antithetical to the scientific endeavour. One can see the power of the self-censorship and self-selection effect here: why would a decent, mild-mannered, industrious, conscientious researcher want to get involved in that kind of thing? Those who do get involved may be a ‘different sort’ of person.

The bottom-line

At the end of the day, all the details substantiating the frequency, poor quality, and dishonesty of ghost-written material are contained in the references given herein.

What I would highlight is this, ‘the big picture’: one only has to look at the blossoming of these specialist medical-writing companies, to whom the pharmaceutical companies farm-out their ghost-writing tasks, in order to understand the mega-dollars involved and how common it must thus be, in order to sustain so many profitable enterprises.

Next, look at the number of papers published under the name of doctors (KOLs (80)***) associated with these drugs. You will find there are many academics who have been publishing papers ridiculously frequently (dozens per year), over prolonged periods of time. You cannot possibly write ‘proper’ scientific papers at that rate — so that tells anyone of perspicacity that these people are making a minimal, possibly negligible, contribution to either the research work, or the papers, that bear their prostituted imprimatur.

It is simple. You do not have to be Einstein to work it out.

*** From Moynihan (80), quoting a drug company source: ‘Key opinion leaders were salespeople for us, and we would routinely measure the return on our investment, by tracking prescriptions before and after their presentations, … If that speaker didn’t make the impact the company was looking for, then you wouldn’t invite them back.’

The medical establishment has done nothing to call ghost-writing doctors to account. This is the most astonishing ethical failure, and betrayal of patients, perpetrated by my generation of doctors — we should be profoundly ashamed.

The next step

Another step in this deceitfully orchestrated enterprise is the unscientific manipulation of data using the statistical metric of the p-value and other statistical peregrinations. I will not describe here what that means for non-scientists. Many prominent names in science agree with me (66, 81-83); I could have inserted one hundred references there, just from the last decade. Yet, unbelievably, doctors have colluded with it and swallowed all this in an uncritical and naïve manner.

It is also relevant to remind ourselves that statistical analysis is only really needed to ‘show’ a difference when the treatment effect is small; we did not need statistics to realise that penicillin and chlorpromazine were effective drugs. If complex statistics, and conflation of trials via M-A, are needed to show small treatment-effects of drugs — that covers all drugs in psychiatry in recent times — then the effects are of minimal significance or usefulness, no matter what blandishments may be offered to contradict this. Again, it is that simple.***

Lest anyone think I am going beyond my expertise in asserting this (being ultracrepidarian), I would refer them to the paper by Sir Austin Bradford Hill — he of the smoking-lung-cancer fame, and also the instigator of the first RCT ever carried out — who said of RCTs (49): ‘Where the outcome at issue is at all substantial then not only is randomisation unnecessary, so also is the use of any formal statistical test of significance’.
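Bradford Hill’s point can be illustrated numerically. The sketch below (stdlib Python, with invented numbers — not data from any real trial) shows how a clinically trivial mean difference becomes ‘highly statistically significant’ once the sample is large enough:

```python
# Illustrative sketch (invented numbers): a tiny treatment effect plus a
# large sample yields an impressively small p-value, demonstrating that
# statistical significance is not the same thing as clinical usefulness.
import math
import random

random.seed(1)

def two_sample_p(x, y):
    """Approximate two-sided p-value for a difference in means (z-test)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    z = (mx - my) / math.sqrt(vx / nx + vy / ny)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return mx - my, p

# Hypothetical drug-placebo difference: 1.5 points on a depression rating
# scale (sd 7) — a difference of doubtful clinical relevance.
drug = [random.gauss(10.0 + 1.5, 7.0) for _ in range(2000)]
placebo = [random.gauss(10.0, 7.0) for _ in range(2000)]
diff, p = two_sample_p(drug, placebo)
print(f"difference = {diff:.2f} points, p = {p:.2g}")
```

A penicillin-sized effect needs no such machinery; it is precisely the trivial effects that require large samples and small p-values to be ‘shown’ at all.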

Do RCTs translate usefully to everyday practice?

Contrary to what is strongly contended by many, there is no sound reproducible science that would allow reliable conclusions that RCTs usefully predict everyday efficacy or long-term outcomes (84). They most certainly do not predict long-term side effects.

The EBM approach, based on an insufficiently critical assessment of RCTs, promotes unjustified over-generalization by accepting that the outcomes of RCTs apply generally — unless there is a compelling reason to believe otherwise (71, 72, 85). However, that is turning science on its head, and would certainly not have been accepted by Popper (86).

RCT evidence does not allow us to predict which particular small percentage of patients will experience these slight benefits — revisit the vegetable analogy above.

Generalizations (i.e. guidelines) that certain drugs ‘should be used’ in a large but ill-defined target population are an invitation for poor clinical practice and over-prescribing (2, 84).

Algorithm-guidelines, nurse practitioners

If doctors are pressured and constrained to practice within these guidelines, as they increasingly are, by their colleagues, health service managers and insurance companies, and fear of litigation, then why have doctors at all? All you need is managers and nurse practitioners checking that everyone is given the computer-generated-algorithm-guidelines that dictate treatment: in no time at all you will be able to dispense even with the nurse practitioners and get your treatment ‘instructions’ online and take your algorithm-generated script straight to the pharmacist. After all, most people only get a 10-minute ‘medication-management’ appointment anyway.

Incidentally, a bit of history: this is not a revolutionary idea, but a return to the past. The concept of prescription-only drugs is relatively new in the history of medicine.

And perhaps most sinister of all is the fact that patients worry that, if they do not accept the guideline-recommended treatment, they will be refused reimbursement for any other treatment — now that really is medical fascism.

Bye-bye, it was nice meeting you. May I leave you in the care of ‘Siri’ for psych — anyone remember ‘Eliza’?

My personal experience

My personal experience, my understanding of common practice, the published literature, and the requests I get for opinions on treatment from around the world, all lead me to the opinion that doctors continue to become more proscriptive and prescriptive. Proscriptive (i.e. dogmatic about following guidelines) and prescriptive (authoritarian and unwilling to consider the preferences of patients). It is as if they have come to regard themselves as bound to guidelines — slaves to them, or guardians of them? A bit of both perhaps.

There is a disturbingly prominent vein of authoritarianism present that is in no way justified by the quality or certainty of the evidence and which does not admit discussion, options, choice, preferences, and flexibility (87, 88).


This is abhorrent ‘medical fascism’ and good doctors should have no truck with it, but some will lose their jobs because they try to stand up to it.

The very existence and prominence of guidelines magnifies this authoritarianism because guidelines provide a deceptive aura of authority and certainty. This is mediated by a fundamentally flawed system of narrowly focussed ‘pseudo-evidence’ (sponsored clinical trials and their associated methodological flaws) digested via the non-scientific medium of the statistical procedure that aggrandises itself with the epithet ‘meta-analysis’. Armoured with this false shield, our shining-medical-knights sally forth to do battle with mythical disease-dragons — many have shunned DSM, and the disease-mongering that has accompanied it (89-91).

I referred above to the fact that this was an immense and complex topic. I must indicate what I mean by ‘mythical disease-dragons’. Many informed commentators have noted how the internationally influential American Psychiatric Association manual, called DSM, has over the last few decades served to expand the definition of mental illness to encompass a very substantial proportion of the population, thus legitimising and enabling the ‘medicalisation’ and insurance-remunerated administration of drugs to vast numbers of people (for instance, I think recent figures indicate something like 10% of all American children are on medication for ADHD).

Issue of long-term therapy

A doubtful inference that is made from RCTs (and amplified via guidelines) is that modest short-term treatment effects over a few weeks, even if you accept those are meaningful, extend to long-term treatment and meaningful real-life outcomes (like a reduction in the suicide rate) — as opposed to a small change on a rating scale, which is merely an interim proxy measure. Indeed, if anything, the evidence points in the opposite direction: for instance, lithium has the least effect on short-term scores on the HRSD, but the greatest long-term reduction of suicide and hospitalisation (92-94).

These drugs are almost always given over a period of many months, often years. Indeed, other types of evidence — and this speaks to the almost complete lack of external validity of guidelines — suggest that long-term treatment with most antidepressants (and antipsychotics) does not reduce long-term illness manifestations. Statistics concerning such questions are complex, not always reliable, and much disputed. Disability, the hospital readmission rate, and the suicide rate may be reduced little, if at all (95-102). All of these things are fairly powerful evidence that the drugs have questionable long-term benefit for patients. The long-term side-effects are not in doubt, though. It seems reasonable to suppose that there are substantive benefits for carefully selected severely ill patients; however, widespread use of these drugs probably means that a large proportion of the people who are being given them are being exposed to risks without benefit.

One should note here that this is not to say that ‘antidepressant’ drugs are ineffective for everyone. Experience clearly demonstrates that severely depressed patients experience major benefits from various particular antidepressants. And, even if SSRIs are not really ‘antidepressants’, that is not to say they do not benefit some symptoms in some people.

It is difficult to construct a sound argument that the evidence strengthens the case for using RCTs to guide treatment for the general population. Much evidence supports the opposite point of view.

To listen and advise

Doctors are there to listen and advise, not to dictate and direct with insufficient real evidence, explanation, and discussion. For me at least, it is a fundamental precept of medical practice that we listen and advise and resort to paternalism and authoritarianism as little as possible.

There are few circumstances in clinical medicine in which the underlying science is sufficiently good to confidently dictate one particular form of treatment over another. In psychiatry, there are no circumstances in which the underlying science is sufficiently good to dictate one particular treatment over another.

Guidelines are furthering and fostering medical rigidity and authoritarianism. They must avoid being prescriptive, and their creators need to accept responsibility for how they are used and abused. Then there are other issues, like their period of validity, a clear statement about when they are due for revision (sometimes missing), and what kinds of new evidence might invalidate them. There is no established mechanism for questions and discussions with those who promulgate such ‘edicts’, nor criteria for judging who has valid authority and expertise to participate in issuing such edicts (1). My inner atheist is smiling as it contemplates the myriad parallels between religious texts and guidelines: who decides which texts become an accepted part of the holy book? Who are the anointed priests who determine these things? And should we hold a ‘council of Trent’? The parallels just go on and on, but we must leave it there, despite the rich comic and satirical possibilities.

The creators of guidelines have a clear obligation to get out there and engage in dialogue with the people who are actually expected to use them. At the moment, the whole process bears too close a resemblance to a papal edict. The world of guidelines is rife with schisms, just like the world of religions.

The various churches have generally adopted the wisdom of claiming that their redemptive truths can only be verified in the [anticipated] afterlife.

Summary and conclusion

This commentary has looked at the various problems plaguing RCTs, M-A, EBM, and guidelines. The foremost of these problems, of which practicing doctors will benefit from an awareness, is that the data behind RCTs/guidelines are seriously contaminated by secrecy, corruption, distortion, bias, and misapplied and poor science.

Major problems, of a directly science-related nature — epistemological, methodological, and statistical — are flagged, but not analysed in detail: to do that would require a book. A prominent one is the invalidity of extrapolating from patho-physiologically non-homogeneous trial groups: this afflicts almost all RCTs and it hugely diminishes their value.

A rather doubtful inference is made from RCTs (and amplified via guidelines): that modest short-term treatment effects over a few weeks, often only estimated by proxy measures, extend to long-term treatment and meaningful real-life outcomes.

Guidelines are a gift to the intellectually lazy, and are increasingly treated as inerrant texts with a quasi-religious authority that relieves doctors of their duty of personal thought, judgement, responsibility, and individual consideration — follow the guidelines, and that will relieve you of the necessity to make decisions for yourself.

Despite clear statements in the introductions to many guidelines about their ‘advisory’ nature, and the responsibility of the individual doctor to assess and treat each patient on their merits, this frequently just does not happen: intellectual laziness supervenes. Guideline creators should be obligated to take full responsibility for all aspects and problems created by their ‘product’: how it is presented, promulgated, and updated, how it is abused and misused, and more.

Furthermore, those who now have an increasing influence on the delivery of health care — be they politicians, managers of health care delivery organisations, insurance companies, etc. — make simplistic assumptions and interpretations in relation to what guidelines actually recommend, and use them for their own ends. That can mean refusing treatment-cost reimbursement to patients, or sacking a doctor, for not following ‘the guidelines’. Even if that is infrequent, it does not alter the fact that many doctors who contact me justify not using particular drugs on the basis that ‘it is not recommended in the guidelines, and I will get in trouble’.

Add all these factors together and you have a considerable potential, much of it already realised, for misapplication and patient harm.

We already have a generation of doctors who have not developed clinical experience and expertise in utilising non-standard treatments. Therein lies a downside for progress in clinical practice, because so many advances actually come from observations quite unrelated to clinical trials and purpose-directed research.

However good the intentions might have been, in those who initiated the notion of guidelines, it is well to remember that, as the old saying goes, 'The road to hell is paved with good intentions'.

The ‘gold-standard’ of RCT guideline evidence, when assayed, is found to contain a disconcerting percentage of fool’s gold.

We might end by reminding ourselves of one or two simple observations, discussed above, which suggest that these expensive new treatments have achieved little. Expenditure on drugs for psychiatric illnesses has increased exponentially, by close to 100 times over the course of my career. The suicide rate has decreased little, if at all, and the number of psychiatric patients on disability benefit is much increased in most western countries. Hospitalisation and harm from adverse drug reactions are now among the leading causes of morbidity and mortality.

Massive expenditure, minimal if any advance, much harm.

Assigning greater weight to other possible research methodologies, and to experience and clinical judgement (2), as opposed to ‘RCTs’, is of great, but presently neglected, importance.

A final point to emphasise, for those not familiar with the scientific literature, is that the majority of the references below are by eminent authors, published in the most prestigious journals — Nature, BMJ, Lancet, JAMA, PLoS Medicine, etc. We are not talking about authors on the fringe of medicine publishing in dubious and obscure journals.

References

1.         Feinstein, AR and Horwitz, RI, Problems in the “evidence” of “evidence-based medicine”. The American journal of medicine, 1997. 103(6): p. 529-535.

2.         Sniderman, AD, LaChapelle, KJ, Rachon, NA, and Furberg, CD, The necessity for clinical reasoning in the era of evidence-based medicine. Mayo Clin. Proc., 2013. 88(10): p. 1108-14.

https://www.ncbi.nlm.nih.gov/pubmed/24079680

3.         Lundh, A, Lexchin, J, Mintzes, B, Schroll, JB, et al., Industry sponsorship and research outcome. Cochrane Database Syst Rev, 2017. 2: p. MR000033.

http://onlinelibrary.wiley.com/doi/10.1002/14651858.MR000033.pub3/full

4.         Angell, M, The truth about drug companies: How they deceive us and what to do about it. New York: Random House, 2005: p. 336.

5.         Angell, M, Drug Companies & Doctors: A Story of Corruption. New York Rev Books, 2009. 56: p. http://www.metododibella.org/cms-web/upl/doc/Documenti-inseriti-dal-2-11-2007/Truth About The Drug Companies.pdf.

6.         Smith, R, Travelling but never arriving: reflections of a retiring editor. Br. Med. J., 2004. 329(7460): p. 242-244.

http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=15284125

7.         Smith, RL, Medical Journals Are an Extension of the Marketing Arm of Pharmaceutical Companies. PLoS Med, 2005. 2: p. e138.

http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020138

https://doi.org/10.1371/journal.pmed.0020138

8.         Horton, R, The Dawn of McScience. New York Rev Books, 2004. 51: p. 7-9.

http://www.nybooks.com/articles/2004/03/11/the-dawn-of-mcscience/

9.         Kassirer, JP, On the take: How medicine's complicity with big business can endanger your health. 2004: Oxford University Press.

10.        Kassirer, JP, Commercialism and medicine: an overview. Camb. Q. Healthc. Ethics, 2007. 16(4): p. 377-86; discussion 439-42.

https://www.researchgate.net/profile/Jerome_Kassirer/publication/5828009_Commercialism_and_Medicine_An_Overview/links/0fcfd5114f8dbdc2ee000000.pdf

11.        Horton, R, The best editors get fired. Lancet, 2017. 390.

https://sci-hub.bz/http://dx.doi.org/10.1016/S0140-6736(17)32363-2

12.        Horton, R, Memorandum by Richard Horton (PI 108). The pharmaceutical industry and medical journals. UK Parliament: Select Committee on Health. Minutes of Evidence, 2004: p. https://publications.parliament.uk/pa/cm200405/cmselect/cmhealth/42/4121604.htm.

http://www.parliament.the-stationery-office.co.uk/pa/cm200405/cmselect/cmhealth/42/4121604.htm

13.        Kassirer, JP, Joint ownership: the shared responsibilities of journal editors and publishers. Md Med, 2007. 8(1): p. 10-2.

https://www.ncbi.nlm.nih.gov/pubmed/17472146

14.        Lexchin, J and Light, DW, Commercial influence and the content of medical journals. BMJ, 2006. 332(7555): p. 1444-7.

https://www.ncbi.nlm.nih.gov/pubmed/16777891

15.        Ioannidis, JP, The Mass Production of Redundant, Misleading, and Conflicted Systematic Reviews and Meta-analyses. Milbank Q., 2016. 94(3): p. 485-514.

https://www.ncbi.nlm.nih.gov/pubmed/27620683

16.        Fava, GA, Meta-analyses and conflict of interest. CNS Drugs, 2012. 26(2): p. 93-6.

https://www.ncbi.nlm.nih.gov/pubmed/22149196

17.        Greenhalgh, T, Intuition and evidence--uneasy bedfellows? Br. J. Gen. Pract., 2002. 52(478): p. 395-400.

18.        Schunemann, HJ, Woodhead, M, Anzueto, A, Buist, AS, et al., A guide to guidelines for professional societies and other developers of recommendations: introduction to integrating and coordinating efforts in COPD guideline development. An official ATS/ERS workshop report. Proc Am Thorac Soc, 2012. 9(5): p. 215-8.

https://www.ncbi.nlm.nih.gov/pubmed/23256161

19.        Mercuri, M, Sherbino, J, Sedran, RJ, Frank, JR, et al., When guidelines don't guide: the effect of patient context on management decisions based on clinical practice guidelines. Acad. Med., 2015. 90(2): p. 191-6.

https://www.ncbi.nlm.nih.gov/pubmed/25354075

20.        do Prado-Lima, PAS, The surprising blindness in modern psychiatry: do guidelines really guide? CNS Spectr, 2017. 22(4): p. 312-314.

https://www.ncbi.nlm.nih.gov/pubmed/27866506

21.        Cabarkapa, S, Perera, M, McGrath, S, and Lawrentschuk, N, Prostate cancer screening with prostate-specific antigen: A guide to the guidelines. Prostate Int, 2016. 4(4): p. 125-129.

https://www.ncbi.nlm.nih.gov/pubmed/27995110

22.        Waters, DD and Boekholdt, SM, An Evidence-Based Guide to Cholesterol-Lowering Guidelines. Can. J. Cardiol., 2017. 33(3): p. 343-349.

https://www.ncbi.nlm.nih.gov/pubmed/28034582

23.        Chuang, YC, Chuang, HY, Lin, TK, Chang, CC, et al., Effects of long-term antiepileptic drug monotherapy on vascular risk factors and atherosclerosis. Epilepsia, 2012. 53(1): p. 120-8.

http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=22085257

24.        Lenzer, J, Why we can’t trust clinical guidelines. BMJ, 2013. 346(58): p. f3830.

https://www.ncbi.nlm.nih.gov/pubmed/23771225

25.        Goodman, NW, Who will challenge evidence-based medicine? J. R. Coll. Physicians Lond., 1999. 33(3): p. 249-51.

http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=10402573

26.        Goodman, NW, Criticizing evidence-based medicine. Thyroid, 2000. 10(2): p. 157-160.

27.        Ioannidis, JP, Hijacked evidence-based medicine: stay the course and throw the pirates overboard. J. Clin. Epidemiol., 2016. 73: p. 82-86.

https://doi.org/10.1016/j.jclinepi.2017.02.001

28.        Fava, GA, Evidence-based medicine was bound to fail: a report to Alvan Feinstein. J. Clin. Epidemiol., 2017. 84: p. 3-7.

29.        Pedros, C, Quintana, B, Rebolledo, M, Porta, N, et al., Prevalence, risk factors and main features of adverse drug reactions leading to hospital admission. Eur. J. Clin. Pharmacol., 2014. 70(3): p. 361-7.

https://www.ncbi.nlm.nih.gov/pubmed/24362489

30.        Robb, G, Loe, E, Maharaj, A, Hamblin, R, et al., Medication-related patient harm in New Zealand hospitals. N. Z. Med. J., 2017. 130(1460): p. 21-32.

https://www.ncbi.nlm.nih.gov/pubmed/28796769

31.        Parameswaran Nair, N, Chalmers, L, Bereznicki, BJ, Curtain, C, et al., Adverse Drug Reaction-Related Hospitalizations in Elderly Australians: A Prospective Cross-Sectional Study in Two Tasmanian Hospitals. Drug Saf, 2017. 40(7): p. 597-606.

https://www.ncbi.nlm.nih.gov/pubmed/28382494

32.        Boileau, I, Houle, S, Rusjan, PM, Furukawa, Y, et al., Influence of a low dose of amphetamine on vesicular monoamine transporter binding: a PET (+)[11C]DTBZ study in humans. Synapse, 2010. 64(6): p. 417-20.

http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=20169578

33.        Benard-Laribiere, A, Miremont-Salame, G, Perault-Pochat, MC, Noize, P, et al., Incidence of hospital admissions due to adverse drug reactions in France: the EMIR study. Fundam. Clin. Pharmacol., 2015. 29(1): p. 106-11.

https://www.ncbi.nlm.nih.gov/pubmed/24990220

34.        Cosgrove, L, Krimsky, S, Wheeler, EE, Peters, SM, et al., Conflict of Interest Policies and Industry Relationships of Guideline Development Group Members: A Cross-Sectional Study of Clinical Practice Guidelines for Depression. Account Res, 2017. 24(2): p. 99-115.

https://www.ncbi.nlm.nih.gov/pubmed/27901595

35.        Bastian, H, Nondisclosure of Financial Interest in Clinical Practice Guideline Development: An Intractable Problem? PLoS Med, 2016. 13(5): p. e1002030.

https://www.ncbi.nlm.nih.gov/pubmed/27243232

36.        Campsall, P, Colizza, K, Straus, S, and Stelfox, HT, Financial Relationships between Organizations That Produce Clinical Practice Guidelines and the Biomedical Industry: A Cross-Sectional Study. PLoS Med, 2016. 13(5): p. e1002029.

https://www.ncbi.nlm.nih.gov/pubmed/27244653

37.        Blumsohn, A, Authorship, ghost-science, access to data, and control of the pharmaceutical scientific literature: who stands behind the word? AAAS Professional Ethics Report, 2006. 19: p. 1-4.

https://www.aaas.org/sites/default/files/migrate/uploads/per46.pdf

38.        Leucht, S, Kissling, W, and Davis, JM, Second-generation antipsychotics for schizophrenia: can we resolve the conflict? Psychol Med, 2009. 39(10): p. 1591-602.

http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=19335931

39.        Huston, P and Moher, D, Redundancy, disaggregation, and the integrity of medical research. Lancet, 1996. 347(9007): p. 1024-6.

http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=8606568

40.        Spielmans, GI and Kirsch, I, Drug approval and drug effectiveness. Annu Rev Clin Psychol, 2014. 10: p. 741-66.

https://www.ncbi.nlm.nih.gov/pubmed/24329178

41.        Worrall, J, Causality in medicine: getting back to the Hill top. Prev. Med., 2011. 53(4-5): p. 235-8.

https://www.ncbi.nlm.nih.gov/pubmed/21888926

42.        Parker, G, Evaluating treatments for the mood disorders: time for the evidence to get real. Aust NZ J Psychiatry, 2004. 38(6): p. 408-14.

http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=15209831

43.        Mulder, R, Singh, AB, Hamilton, A, Das, P, et al., The limitations of using randomised controlled trials as a basis for developing treatment guidelines. Evid Based Ment Health, 2017.

https://www.ncbi.nlm.nih.gov/pubmed/28710065

44.        Bothwell, LE, Greene, JA, Podolsky, SH, and Jones, DS, Assessing the Gold Standard--Lessons from the History of RCTs. N. Engl. J. Med., 2016. 374(22): p. 2175-81.

https://www.ncbi.nlm.nih.gov/pubmed/27248626

45.        Naudet, F, Falissard, B, Boussageon, R, and Healy, D, Has evidence-based medicine left quackery behind? Intern Emerg Med, 2015. 10(5): p. 631-4.

https://www.ncbi.nlm.nih.gov/pubmed/25828467

46.        Naudet, F, Boussageon, R, Palpacuer, C, Gallet, L, et al., Understanding the Antidepressant Debate in the Treatment of Major Depressive Disorder. Therapie, 2015. 70(4): p. 321-7.

https://www.ncbi.nlm.nih.gov/pubmed/25679188

47.        Shorter, E, A brief history of placebos and clinical trials in psychiatry. Can. J. Psychiatry., 2011. 56(4): p. 193-7.

https://www.ncbi.nlm.nih.gov/pubmed/21507275

48.        Thompson, RP, Causality, mathematical models and statistical association: dismantling evidence-based medicine. J. Eval. Clin. Pract., 2010. 16(2): p. 267-75.

https://www.ncbi.nlm.nih.gov/pubmed/20367846

49.        Hill, AB, The Environment and Disease: Association or Causation? Proc. R. Soc. Med., 1965. 58: p. 295-300.

http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=14283879

50.        Moncrieff, J, Antidepressants: misnamed and misrepresented. World Psychiatry, 2015. 14(3): p. 302-3.

https://www.ncbi.nlm.nih.gov/pubmed/26407780

51.        Le Noury, J, Nardo, JM, Healy, D, Jureidini, J, et al., Restoring Study 329: efficacy and harms of paroxetine and imipramine in treatment of major depression in adolescence. BMJ, 2015. 351: p. h4320.

http://www.bmj.com/content/351/bmj.h4320.full

52.        Sharma, T, Guski, LS, Freund, N, and Gotzsche, PC, Suicidality and aggression during antidepressant treatment: systematic review and meta-analyses based on clinical study reports. BMJ, 2016. 352: p. i65.

https://www.ncbi.nlm.nih.gov/pubmed/26819231

53.        Moncrieff, J, Misrepresenting harms in antidepressant trials. BMJ, 2016. 352: p. i217.

https://www.ncbi.nlm.nih.gov/pubmed/26823531

54.        Dubicka, B, Cole-King, A, Reynolds, S, and Ramchandani, P, Paper on suicidality and aggression during antidepressant treatment was flawed and the press release was misleading. BMJ, 2016. 352: p. i911.

https://www.ncbi.nlm.nih.gov/pubmed/26883639

55.        Gotzsche, PC, Author's reply to Dubicka and colleagues and Stone. BMJ, 2016. 352: p. i915.

https://www.ncbi.nlm.nih.gov/pubmed/26884436

56.        Healy, D, Clinical trials and legal jeopardy. Bulletin of medical ethics, 1999(153): p. 13-18.

57.        Jureidini, JN, Amsterdam, JD, and McHenry, LB, The citalopram CIT-MD-18 pediatric depression trial: Deconstruction of medical ghostwriting, data mischaracterisation and academic malfeasance. Int J Risk Saf Med, 2016. 28(1): p. 33-43.

http://www.ncbi.nlm.nih.gov/pubmed/27176755

58.        Kirsch, I, Deacon, BJ, Huedo-Medina, TB, Scoboria, A, et al., Initial severity and antidepressant benefits: a meta-analysis of data submitted to the Food and Drug Administration. PLoS Med, 2008. 5(2): p. e45.

http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=18303940

59.        Kirsch, I and Moore, TJ, The Emperor's New Drugs: An Analysis of Antidepressant Medication Data Submitted to the U.S. Food and Drug Administration. Prevention & Treatment, 2002. 5.

http://journals.apa.org/prevention/volume5/pre0050033r.html

60.        Locher, C, Koechlin, H, Zion, SR, Werner, C, et al., Efficacy and Safety of Selective Serotonin Reuptake Inhibitors, Serotonin-Norepinephrine Reuptake Inhibitors, and Placebo for Common Psychiatric Disorders Among Children and Adolescents: A Systematic Review and Meta-analysis. JAMA psychiatry, 2017.

http://jamanetwork.com/journals/jamapsychiatry/article-abstract/2652447

61.        Allison, DB, Brown, AW, George, BJ, and Kaiser, KA, Reproducibility: A tragedy of errors. Nature, 2016. 530(7588): p. 27-9.

https://www.ncbi.nlm.nih.gov/pubmed/26842041

62.        Fountoulakis, KN, McIntyre, RS, and Carvalho, AF, From Randomized Controlled Trials of Antidepressant Drugs to the Meta-Analytic Synthesis of Evidence: Methodological Aspects Lead to Discrepant Findings. Curr Neuropharmacol, 2015. 13(5): p. 605-15.

https://www.ncbi.nlm.nih.gov/pubmed/26467410

63.        Fountoulakis, KN, Samara, MT, and Siamouli, M, Burning issues in the meta-analysis of pharmaceutical trials for depression. J Psychopharmacol, 2014. 28(2): p. 106-17.

https://www.ncbi.nlm.nih.gov/pubmed/24043723

64.        Tendal, B, Nuesch, E, Higgins, JP, Juni, P, et al., Multiplicity of data in trial reports and the reliability of meta-analyses: empirical study. BMJ, 2011. 343: p. d4829.

http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=21878462

65.        Siontis, KC, Evangelou, E, and Ioannidis, JP, Magnitude of effects in clinical trials published in high-impact general medical journals. Int. J. Epidemiol., 2011. 40(5): p. 1280-91.

http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=22039194

66.        Mansournia, MA and Altman, DG, Invited commentary: methodological issues in the design and analysis of randomised trials. Br. J. Sports Med., 2017.

https://www.ncbi.nlm.nih.gov/pubmed/28756393

67.        Altman, D and Bland, JM, Confidence intervals illuminate absence of evidence. Br. Med. J., 2004. 328(7446): p. 1016-7.

http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=15105337

68.        Greenland, S, Senn, SJ, Rothman, KJ, Carlin, JB, et al., Statistical tests, P values, confidence intervals, and power: a guide to misinterpretations. Eur. J. Epidemiol., 2016. 31(4): p. 337-50.

https://www.ncbi.nlm.nih.gov/pubmed/27209009

69.        Wasserstein, RL and Lazar, NA, The ASA's Statement on p-Values: Context, Process, and Purpose. The American Statistician, 2016. 70(2): p. 129-133.

http://dx.doi.org/10.1080/00031305.2016.1154108

70.        Feinstein, AR, Meta-analysis: statistical alchemy for the 21st century. J. Clin. Epidemiol., 1995. 48(1): p. 71-9.

https://www.ncbi.nlm.nih.gov/pubmed/7853050

71.        Fuller, J, Rhetoric and argumentation: how clinical practice guidelines think. J. Eval. Clin. Pract., 2013. 19(3): p. 433-41.

https://www.ncbi.nlm.nih.gov/pubmed/23692224

72.        Fuller, J, Rationality and the generalization of randomized controlled trial evidence. J. Eval. Clin. Pract., 2013. 19(4): p. 644-7.

https://www.ncbi.nlm.nih.gov/pubmed/23368415

73.        White, J, Fraud fighter: 'Faked research is endemic in China'. New Scientist, 2012(2891).

http://www.newscientist.com/article/mg21628910.300-fraud-fighter-faked-research-is-endemic-in-china.html

74.        Gotzsche, PC, Hrobjartsson, A, Johansen, HK, Haahr, MT, et al., Ghost Authorship in Industry-Initiated Randomised Trials. PLoS Med, 2007. 4(1): p. e19.

http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=17227134

75.        Sismondo, S, Ghost management: how much of the medical literature is shaped behind the scenes by the pharmaceutical industry? PLoS Med, 2007. 4(9): p. e286.

https://www.ncbi.nlm.nih.gov/pubmed/17896859

76.        Wislar, JS, Flanagin, A, Fontanarosa, PB, and Deangelis, CD, Honorary and ghost authorship in high impact biomedical journals: a cross sectional survey. BMJ, 2011. 343: p. d6128.

http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=22028479

77.        Lexchin, J, Those Who Have the Gold Make the Evidence: How the Pharmaceutical Industry Biases the Outcomes of Clinical Trials of Medications. Sci Eng Ethics, 2011.

http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=21327723

78.        Ross, JS, Hill, KP, Egilman, DS, and Krumholz, HM, Guest authorship and ghostwriting in publications related to rofecoxib: a case study of industry documents from rofecoxib litigation. JAMA, 2008. 299(15): p. 1800-12.

https://www.ncbi.nlm.nih.gov/pubmed/18413874

79.        Barbour, V, How ghost-writing threatens the credibility of medical knowledge and medical journals. Haematologica, 2010. 95(1): p. 1-2.

http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=20065074

80.        Moynihan, R, Key opinion leaders: independent experts or drug representatives in disguise? BMJ, 2008. 336(7658): p. 1402-3.

https://www.ncbi.nlm.nih.gov/pubmed/18566074

81.        Dechartres, A, Trinquart, L, Atal, I, Moher, D, et al., Evolution of poor reporting and inadequate methods over time in 20 920 randomised controlled trials included in Cochrane reviews: research on research study. BMJ, 2017. 357: p. j2490.

https://www.ncbi.nlm.nih.gov/pubmed/28596181

82.        Freedman, DH, Lies, Damned Lies, and Medical Science [profile of John Ioannidis]. The Atlantic, 2010. November 17th.

http://www.theatlantic.com/magazine/archive/2010/11/lies-damned-lies-and-medical-science/8269/

83.        Dechartres, A, Altman, DG, Trinquart, L, Boutron, I, et al., Association between analytic strategy and estimates of treatment outcomes in meta-analyses. JAMA, 2014. 312(6): p. 623-630.

http://jamanetwork.com/journals/jama/fullarticle/1895246

84.        Steel, N, Abdelhamid, A, Stokes, T, Edwards, H, et al., A review of clinical practice guidelines found that they were often based on evidence of uncertain relevance to primary care patients. J. Clin. Epidemiol., 2014. 67(11): p. 1251-7.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4221610/

85.        Every-Palmer, S and Howick, J, How evidence-based medicine is failing due to biased trials and selective publication. J. Eval. Clin. Pract., 2014. 20(6): p. 908-914.

86.        Popper, K, The Demarcation between Science and Metaphysics, in Conjectures and Refutations: The Growth of Scientific Knowledge. 1963, Ch. 11.

87.        Joiner, K and Lusch, R, Evolving to a new service-dominant logic for health care. Innovation and Entrepreneurship in Health, 2016.

88.        Hoffmann, TC, Montori, VM, and Del Mar, C, The connection between evidence-based medicine and shared decision making. JAMA, 2014. 312(13): p. 1295-1296.

89.        Moncrieff, J and Thomas, P, The pharmaceutical industry and disease mongering. Psychiatry should not accept so much commercial sponsorship. Br. Med. J., 2002. 325(7357): p. 216; author reply 216.

http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=12143863

90.        Moynihan, R, Heath, I, and Henry, D, Selling sickness: the pharmaceutical industry and disease mongering. Br. Med. J., 2002. 324(7342): p. 886-91.

http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=11950740

91.        Moynihan, R and Henry, D, The Fight against Disease Mongering: Generating Knowledge for Action. PLoS Med, 2006. 3(4): p. e191.

http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=16597180

92.        Tiihonen, J, Tanskanen, A, Hoti, F, Vattulainen, P, et al., Pharmacological treatments and risk of readmission to hospital for unipolar depression in Finland: a nationwide cohort study. Lancet Psychiatry, 2017. 4(7): p. 547-553.

https://www.ncbi.nlm.nih.gov/pubmed/28578901

93.        Jones, H, Geddes, J, and Cipriani, A, Lithium and Suicide Prevention, in The Science and Practice of Lithium Therapy, G Malhi, JM Masson, and F Bellivier, Editors. 2017, Springer. p. 223-240.

94.        Wingard, L, Boden, R, Brandt, L, Tiihonen, J, et al., Reducing the rehospitalization risk after a manic episode: A population based cohort study of lithium, valproate, olanzapine, quetiapine and aripiprazole in monotherapy and combinations. J Affect Disord, 2017. 217: p. 16-23.

https://www.ncbi.nlm.nih.gov/pubmed/28364619

95.        Baldessarini, RJ and Tondo, L, International suicide rates versus adequate treatments. The British Journal of Psychiatry, 2017. 210(4): p. 298-299.

96.        Shah, A, Bhat, R, Zarate-Escudero, S, DeLeo, D, et al., Suicide rates in five-year age-bands after the age of 60 years: the international landscape. Aging Ment Health, 2016. 20(2): p. 131-8.

https://www.ncbi.nlm.nih.gov/pubmed/26094783

97.        Curtin, SC, Warner, M, and Hedegaard, H, Increase in Suicide in the United States, 1999-2014. NCHS Data Brief, 2016(241): p. 1-8.

https://www.ncbi.nlm.nih.gov/pubmed/27111185

98.        Viola, S and Moncrieff, J, Claims for sickness and disability benefits owing to mental disorders in the UK: trends from 1995 to 2014. BJPsych Open, 2016. 2(1): p. 18-24.

https://www.ncbi.nlm.nih.gov/pubmed/27703749

99.        Tiihonen, J, Lonnqvist, J, Wahlbeck, K, Klaukka, T, et al., 11-year follow-up of mortality in patients with schizophrenia: a population-based cohort study (FIN11 study). Lancet, 2009. 374(9690): p. 620-7.

http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=19595447

100.      Brown, J, Hanlon, P, Turok, I, Webster, D, et al., Mental health as a reason for claiming incapacity benefit — a comparison of national and local trends. Journal of Public Health, 2008. 31(1): p. 74-80.

101.      Whitaker, R and Cosgrove, L, Psychiatry under the influence. 2015: Palgrave Macmillan.

102.      Gotzsche, PC, Young, AH, and Crace, J, Does long term use of psychiatric drugs cause more harm than good? BMJ, 2015. 350: p. h2435.

https://www.ncbi.nlm.nih.gov/pubmed/25985333