Guidelines: problems aplenty

They fuck you up, the bloody guidelines.   

   They weren’t designed to, but they do.   

They’re filled with faults, then add

   egregious extras, just for you.

A parody: apologies to Philip Larkin – Original here

Introduction

In a time of universal deceit, telling the truth is a revolutionary act.

Attributed to George Orwell (the line does not in fact appear in Nineteen Eighty-Four)

(No prize will be awarded for the correct guess as to whose name has most frequently been associated with this quotation in the last year)

Guidelines are the ‘final common pathway’ for data that is generated by randomised controlled trials (RCTs), and this commentary argues that they are considerably flawed, and are a negative influence on good clinical practice. First and foremost, this is because RCTs are based on corrupted data, which emanates from the systematic manipulation and distortion of clinical trial processes and their results, by ‘big pharma’.

Much published drug-trial research is advertising — not science. It is ‘McScience’, as Horton so scathingly put it. He is among a host of distinguished researchers, including other ex-editors of leading medical journals, who have commented on this (1-5). One might suppose they acquired special knowledge of how they, and the publications they were in charge of, were being duped and manipulated.

Doctors may have an insufficient appreciation of how comprehensively corrupted the data are, and therefore the guidelines themselves, which they expect to be trustworthy, in every sense of that word. The data relied on by guidelines have generally not been independently replicated; so, they are not science.

Big Pharma has played a major part in creating and steering both diagnostic practice and guidelines, which are the coup de grace in this sorry saga of the slow death of science.

There has been, quite rightly, a fuss, and much writing, about the various issues relating to undue influence, fraud, bias etc. discussed in this commentary. However, right from the start, I want to emphasise how little difference this has made to those very same improper and dishonest practices.

Many of the apparent improvements that have been made, concerning the probity of science research and publishing, constitute little more than a splash of new paint on the facade of a decrepit building.

Guidelines magnify the imbalances and promote a narrow perspective on health-care which puts excessive emphasis on drugs over non-drug interventions, or no intervention at all.

Independent replication is the corner-stone of all science. If you cannot inspect the original data, you cannot know whether you have replicated it. Most medical research is not independently replicated, ergo, it is not science. It is that simple. No rationalisations or excuses can alter that.

Either you are doing science, or you are not.

Guidelines have proliferated over recent decades and they are the dominant influence over the treatments chosen by practicing doctors. One suspects that is exactly what some intended right from the start. The meetings of senior doctors, to craft DSM and most guidelines, have been heavily funded by drug companies. Large numbers of eminent American doctors were handsomely remunerated to attend resorts in Palm Springs and like venues, to thrash out guidelines for the use of SSRIs, Xanax, Risperdal, Seroquel … you name it.

Guidelines now unjustifiably impose themselves on doctors who may not agree with them. That is a sort of intellectual bullying.

By the way, there are multiple guides to guidelines — honestly (6-10).

Which set of guidelines do you then choose to follow? I might facetiously ask, ‘is there an evidence base for deciding which guideline has the best evidence base’?

Guidelines are contaminated by having expert panel-members who have financial ties to drug companies, even though the Institute of Medicine long ago recommended that no such people should be on guideline panels (11, 12). Even if panel-members are truly independent, their main currency is still corrupted RCT data, and no-one can overcome that problem, any more than can the statistical procedure of ‘meta-analysis’ (M-A) — garbage in, garbage out (see below).

There are good reasons to suppose that the evidence-based medicine (EBM) enterprise is diseased from the roots to the shoots.

Guidelines have morphed. They may well have been intended by some proponents as exactly that, guides: the sort of kind advice that a senior colleague might give about a difficult case. But they have been seized on by the simple-minded, the lazy, the authoritarian, the managers, the media, and even politicians, as if they were diktats — and that is how I see them being applied to many of the people who contact me.

This is a complex topic to deal with and understand. It involves an understanding of a lot of history, how businesses work, how medicine works (I refer to the vested interests of specialists and ‘experts’), and much else besides. That understanding can only be attained through wide-ranging experience of medicine and life, and extensive reading. Few doctors have the time to do that, except for those like me who are enjoying a comfortable retirement in the sun, which is setting on the age of the polymath.

The books I have listed are in my view the indispensable background to enabling people to see and understand the big picture. I shall say no more about that, otherwise this commentary will be 10,000 words before we know it. I will simply add that as a long-time pharmacologist, with a sceptical attitude, I am absolutely certain that vast numbers of people are being treated with drugs that produce little or no benefit, but have many poorly documented and unpublicised ill effects.

Do we need to be reminded that adverse reactions to drugs are among the leading causes of hospital admissions and deaths (13-16)?

I am reminded of Shaw’s words:

‘When a stupid man is doing something he is ashamed of, he always declares that it is his duty.’ 

I expect you can translate that into ‘guideline-speak’ yourself.

Recommended books

This is a good point at which to recommend books relevant to this subject: I recommend these because they are all written by scientists giving an informed view of the subject. These individuals continue to attract a considerable degree of opprobrium: powerful groups do not like the truth being told.

Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients (Faber and Faber, 2013). Ben Goldacre; Senior Clinical Research Fellow, Centre for Evidence-Based Medicine, University of Oxford.

Pharmageddon. Professor David Healy, Hergest Unit, Bangor, Wales (the best pun title I can remember).

Deadly Medicines and Organised Crime: How Big Pharma Has Corrupted Healthcare. Professor Peter C. Gøtzsche, Danish physician, medical researcher, and leader of the Nordic Cochrane Center at Rigshospitalet in Copenhagen, Denmark. He co-founded the Cochrane Collaboration and has written numerous reviews within it.

Psychiatry Under the Influence: A Case Study of Institutional Corruption. Professor Lisa Cosgrove and Robert Whitaker (a medical writer and former director of publications at Harvard Medical School); both are fellows at the Edmond J. Safra Center for Ethics, Harvard.

The Truth About the Drug Companies: How They Deceive Us and What to Do About It (Random House, 2005). Marcia Angell, M.D., former Editor-in-Chief of the New England Journal of Medicine (she stepped down on June 30, 2000), now a Corresponding Member of the Faculty of Global Health and Social Medicine at Harvard Medical School and Faculty Associate in the Center for Bioethics. The only one on this list that I have not read myself.

Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. Professor Naomi Oreskes and Erik M. Conway. This is a more general historical overview, covering tobacco and climate-change denial, which gives a better impression of the enormity and persistence of these big-business tactics.

I do not make a habit of reading books like this, since I have already made most of these points myself; and continuing to read literature which simply agrees with what you already think is not a priority: reading material which disagrees with what you think is what good scientists do. However, when I decided I would write a commentary about guidelines, it was necessary to read or re-read these texts.

Seriously corrupted data — the core problem

The serious, persistent problems surrounding conflicts of interest and evidence relating to guidelines are undeniable, as evidenced by many recent reviews (8, 17-19), which indicate, as suggested above, that little has changed over the last decade or so, despite all the kerfuffle.

These problems relate, first and foremost, to the appropriation, hiding and distorting of patient data by ‘big pharma’ (see below), as well as to the conflicted handling, and subsequent misuse, of the third-rate science that underpins most of the clinical-trial base, and thus the ‘evidence-based medicine’ enterprise.

This has been fuelled by the massive financial power imbalance in the medical system (pharmaceutical companies have all the money). It has been powerfully catalysed by the weak acquiescence of the medical profession in allowing drug companies to take over the whole trial process, including the actual data — that is, incidentally, a glaring ethical betrayal of patient confidence: but few seem to have commented on that, or even noticed it.

Allowing the partisan drug industry to sequester the data, and refuse to let (even their own) expert ‘authors’ examine it, was a serious tactical error (possession is 9/10 of the law).

It is hard to respect the medical professionals who have colluded in this process, a proportion of whom are undoubtedly wicked, greedy, self-aggrandising and dishonest, even if they convince themselves otherwise.

What do I mean by seriously corrupted data? The concise answer is this: data that are neither reproducible nor open to examination and checking by others (see Pharmageddon for chapter and verse). The longer answer is: study the references given.

If I report that a patient I have assessed was ‘suicidal’ this means little if I do not record, accurately, exactly what I asked the patient, and what they replied. Needless to say, it means even less if I refuse to show my case records to anybody else, and simply justify my opinion by saying ‘because I say so’. But that is what big Pharma is still getting away with.

There is rather more to it than that: for instance, if the patient does not have a relationship with me, and does not trust me, then they are unlikely to answer truthfully, for fear of being locked-up, or whatever.

I am going to have to give a few examples relating to corrupted data here because, although endless examples and details are in the references and books cited, many will not get around to looking at them. Since this is crucial evidential material, I will give details on one or two, because I can hear some of my colleagues saying, ‘come on Ken, surely you are overstating the case here, it's only a few bad apples etc.’ If only …

Blumsohn was the doctor at Sheffield who lost his job for attempting to insist on seeing the raw data for the tables in the ghost-written paper he was presented with as ‘author’. He was not prepared, like most are, to just ‘sign off’ on it. His Dean, Eastell, another co-author, subsequently appeared before the General Medical Council because he said he had seen the ‘raw data’ when he had not*** (20). The reference has all the details.

***An explanatory comment on this is mandatory. Eastell’s (successful) defence was that he had seen the data, but what he was referring to was the coded data, not the original ‘raw data’. Let us assume that he was not being disingenuous (which may be the case); that still leaves him guilty of being a naive and bad scientist. If scientists did not insist on dealing with original data, then we would all believe in ghosts. It all goes to show how many doctors do not understand science.

Documents revealed in another court case showed a senior company executive commenting on the [established fact of] hiding of adverse-event data from a drug trial, saying in an internal company email, ‘if this comes out I don't know how I will be able to face my wife and children’ — one imagines this was a rather superficial and self-serving mea culpa, but no less revealing for that.

It seems that many companies have been keeping documents ‘offshore’, which impedes access to them by legal processes — but such documents could only be incriminating if they revealed misrepresentation, lying, cheating, whatever. It is an instance of ‘excusatio non petita, accusatio manifesta’ [he who excuses himself, accuses himself].

I have previously commented on the chicanery involved in the Risperdal trials, and would only repeat here that the meta-analysis by Leucht (21) failed to cite the classic Huston paper that dissected the deceit pervading Risperdal trials (22): when I asked Leucht why he had not cited Huston, he said they simply did not know about it. How hard did they look? If I can find it, in my disadvantaged and isolated situation here in tropical North Queensland, how come a professor at a major European university cannot find it? You see what you want to see, and forget what you do not want to remember.

Anyway, the extensive dishonesty involved with Risperdal (and, of course, many other drugs (23)) is well documented elsewhere and I have lost track of the number of successful legal actions against them in relation to this. They must have paid out more than a billion, by now. Ah well, it's just the cost of doing business. Google it, it will astonish you.

Look at the references for a myriad of further examples: when commenting on this sort of thing one feels like a hawk attacking a flock of starlings; there are so many targets that there is a danger of not killing any of them.

RCT: gold standard or fool's gold?

Control RCTs. Control clinical practice

Control RCTs and the data they generate, keep that data to yourself (preferably offshore, along with your tax shelf-companies), and you control clinical guidelines and clinical practice: talk about a fait accompli.

The dogma of RCTs as the gold standard has been made to over-shadow other forms of evidence: therefore, controlling clinical practice has become, pretty much, a one-step process.

Therefore, a key question becomes: how much value should we really place on RCTs? And how are their results demonstrably relevant and beneficial to the man in the street? In other words, do they translate into reliable and meaningful treatment decisions? That is a crucial question which has not been well addressed. The eminent Australian professor Gordon Parker argued, some time ago, that there are major ‘limitations to level I evidence derived from randomised controlled trials … which are no longer producing meaningful clinical results’, and that paper is entirely consonant with the major points raised herein (24). Others have made similar points, but that paper is an authoritative exposition of the relevant arguments.

Anti-depressant — a meaningless term

There is another key point to be borne in mind. The degree of symptom improvement that a drug must exhibit, in order to be approved as an ‘antidepressant’, is minimal. Bearing in mind that such drugs are assessed for effectiveness using the poor and antiquated ‘Hamilton rating scale for depression’ (HRSD), one can easily see how small changes in symptoms that have nothing to do with the core pathology of depressive illness (anergia and anhedonia) are sufficient to get almost any drug with sedative or anxiolytic properties over that hurdle (e.g. see my commentary on quetiapine), even if it has absolutely no effectiveness on the core changes that constitute the illness. Look at this online version of the HRSD to see what I mean. Qs 4, 9, 10, 11 & 12 might all be improved by any anxiolytic/sedative — a one-gradation change in each of those produces a 5-point improvement in your score, more than double that needed to get a drug approved by the FDA as an AD. Yes, it really is that silly.
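
To make that arithmetic concrete, here is a minimal sketch in Python. It is purely illustrative: the five item numbers and the roughly 2-point drug-placebo difference are the figures used in this commentary (see the citalopram and Kirsch discussion below), not any official scoring rule or FDA criterion.

```python
# Toy illustration (not an official HRSD scorer): how far a purely
# sedative/anxiolytic effect could move a total HRSD score.

# Items suggested above as ones any sedative/anxiolytic might improve
# (sleep and anxiety/agitation items) -- an illustrative assumption.
sedation_sensitive_items = ["Q4", "Q9", "Q10", "Q11", "Q12"]

# Assume a one-gradation improvement on each of those items.
points_per_item = 1
sedation_improvement = points_per_item * len(sedation_sensitive_items)

# The roughly 2-point drug-vs-placebo difference discussed later in this
# commentary (cf. the citalopram note and the Kirsch meta-analyses).
typical_drug_placebo_gap = 2

print(f"Improvement from sedation alone: {sedation_improvement} points")
print(f"Typical drug-placebo difference: ~{typical_drug_placebo_gap} points")
print(f"Ratio: {sedation_improvement / typical_drug_placebo_gap:.1f}x")
```

In other words, on these assumptions a drug could ‘beat’ the usual drug-placebo margin more than twice over without touching anergia or anhedonia at all.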

Also, note there is not one single question assessing the key core symptom of anhedonia — absurd, totally absurd.

That is not science.

Deliberately dishonest coding

The data gathered in clinical trials are inevitably subject to interpretation and uncertainty. They consist of responses to a series of artificially and rigidly constructed questions, asked by someone unknown to the patient, who is paid by a drug company to go around asking questions from a clipboard!

For the purposes of analysis, they are coded by someone. Doctors have abdicated responsibility for their lead role in trials, so this someone is not the doctor who had responsibility for clinical care of the patient, but a technician at drug-company central office — in fact, the drug companies pay separate ‘clinical trials’ companies, set up specially to manage these things, to do this. Having an arm's-length separation facilitates plausible deniability. A recent painstaking re-analysis of the infamous paroxetine study 329 illustrates many of these points (25).

Furthermore, we know that coding has been incorrect (deliberately dishonest?) in many instances, so that suicidal thoughts, feelings and intentions were coded, during the analysis of results, as something different (25) — and read ‘Pharmageddon’ for further details and references. Therefore, when the results were presented, and written up for publication by ‘ghost-writers’ who had nothing at all to do with the actual drug trial — they were probably not even on the same continent — no one, neither the presenters nor the attendees, had any idea what had really happened to actual patients.

Such practices have nothing to do with good science and any doctors that associate themselves with such practices have either been duped, or are dishonest and traitors to science.

The medical colleges and authorities have pretty much forgotten their ethical principles. That is highlighted by the fact that the ‘famous’ doctors who have allowed their names to be used as authors (front-men) of these kinds of papers have not been struck off the medical register for dishonesty or corruption. Are we so inured to such behaviour that we have lost our capacity to be outraged by it?

It is routine practice for the doctors who participate in these trials, from various different centres, to be refused access to the original aggregated data; they only get to see the data after it has been coded by somebody else. There are now numerous documented examples showing that this is done erroneously or dishonestly, and that the practice continues (26, 27). What has recently been put on the Internet in the name of ‘transparency’ is a token: the data shared are not the original data, but the coded data. Not the same thing.

That is a mockery of science.

An illustration

The way the pharmaceutical industry presented the benefits and side effects of SSRIs illustrates several of the above points concerning misleading manipulation of data and misclassification of side-effects. A major therapeutic effect (it is not a ‘side effect’) of all SSRIs is to inhibit the pathways that lead to sexual climax, in both sexes. The minor effects on anxiety and mood are small by comparison (barely a 2-point difference between drug and placebo on the HRSD).

See my note on citalopram from nearly twenty years ago [it has been on the site, but not attached to a menu — it is now; which is a reminder to use the search facility]. There are a number of bullet points at the end, one of which points out that the placebo patients' degree of improvement after 4 weeks was just as good as the citalopram patients' was after 3 weeks, and that the average practitioner would not have been able to discern the difference between those on placebo or citalopram at 'endpoint'.

Anyway, none of the trials of these drugs indicated that inhibition of sexual climax was anything other than a rare occurrence — I was using clomipramine to help premature ejaculation in the 1980s, before ‘Prozac’ even existed! That is how well-known the SRI effect on ejaculation was. I shall not dwell on this here, but if there is anybody out there who still doubts how the relative prominence of side-effects vs benefits has been turned on its head, they might be persuaded if they read the relevant section of Prof Healy’s book Pharmageddon.

That exemplifies well the methodology that was developed for maximising the trivial effects on mood, by using large numbers of patients to get a marginally significant statistical result (cf. citalopram, above, and more recently of course see Kirsch (28, 29)) whilst at the same time failing to ask appropriate questions to elicit side-effects, or ‘mis-coding’ them (25).

And long-term side effects — not our problem, it is licensed now; up to someone else to do all that.

That is bad science and it is deceitful science; it simply does not, and cannot, get more, how can one put it: incorrect, erroneous, false, inaccurate, fallacious, mistaken, duplicitous, shoddy, corrupt, double-dealing, deceptive, deceitful, crooked, untrustworthy, fraudulent, misleading.

In short: it is as wrong as the parrot was dead. For those interested in rhetoric that is an amusing example of ‘pleonasm’.

Statistics

The sentiment behind the adage ‘lies, damned lies, and statistics’ has a long history, going back to at least the 19th century. In The Life and Letters of Thomas Henry Huxley is his account of a meeting of the X Club, which was a gathering of eminent thinkers who aimed to advance the cause of science, especially Darwinism: ‘Talked politics, scandal, and the three classes of witnesses — liars, damned liars, and experts.’ Even more apposite for our time.

I start with this old adage because it has withstood the test of time, and because modern information-laundering, in this post-truth world, has given it a new potency and influence.

Here is a tiny sample of the many references I could give, by eminent researchers, discussing the misuse of statistics in a great proportion of medical studies. Hardly surprising, then, that almost all published medical studies turn out to be wrong, as history indisputably demonstrates (30-36).

I am not a statistician, so I will merely content myself with pointing out the above references and mentioning that two of the prominent culprits are p-values and the procedure called meta-analysis, which is invariably applied in a pseudo-scientific manner. It forms the backbone of guidelines, where it reaches its pseudo-scientific zenith. Elsewhere I have quoted Charles Babbage on this subject (GIGO — garbage in, garbage out).

A researcher whose name is well-known in this field recently said to me in a private email:

‘I have rather gone off 'meta-analysis' as it is mostly selective/rubbish data in - spurious certainty or continuing uncertainty out, whatever the sophistication of the statistical methods. I include myself in this criticism by the way.’
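
To put a number on that ‘rubbish in, spurious certainty out’ point, here is a minimal simulation sketch in Python (using numpy). It is not a reconstruction of any actual meta-analysis; the assumptions are deliberately crude: a drug with a true effect of zero, many small trials, only the favourable ‘significant’ ones reaching publication, and naive fixed-effect pooling of whatever gets published.

```python
import numpy as np

rng = np.random.default_rng(0)

TRUE_EFFECT = 0.0   # assume the drug truly does nothing
N_PER_ARM = 50      # patients per arm in each small trial
N_TRIALS = 400      # trials actually run (most never see print)

published = []
for _ in range(N_TRIALS):
    # change in rating-scale score; individual scores vary with SD ~8 points
    drug = rng.normal(TRUE_EFFECT, 8.0, N_PER_ARM)
    placebo = rng.normal(0.0, 8.0, N_PER_ARM)
    diff = drug.mean() - placebo.mean()
    se = np.sqrt(drug.var(ddof=1) / N_PER_ARM + placebo.var(ddof=1) / N_PER_ARM)
    # 'publication filter': only favourable, nominally significant results appear
    if diff / se > 1.96:
        published.append((diff, se))

# naive fixed-effect (inverse-variance) pooling of the published trials only
weights = np.array([1.0 / se ** 2 for _, se in published])
effects = np.array([diff for diff, _ in published])
pooled = (weights * effects).sum() / weights.sum()
pooled_se = np.sqrt(1.0 / weights.sum())

print(f"true effect: {TRUE_EFFECT}")
print(f"published: {len(published)} of {N_TRIALS} trials")
print(f"pooled 'meta-analytic' effect: {pooled:.2f} points, "
      f"95% CI {pooled - 1.96 * pooled_se:.2f} to {pooled + 1.96 * pooled_se:.2f}")
```

On these assumptions the pooled estimate comes out comfortably ‘significant’ even though the true effect is zero: the arithmetic of the pooling is impeccable, and the selective input is the whole problem.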

The trials included in M-A have multiple problems (37, 38): they exclude most of the patients that we treat in everyday practice (e.g. pregnant females, the young, the old, those with mild, or particularly serious, illness, those with multiple conditions, and those on multiple drugs). They may solicit subjects by advertisement, and many of them are now conducted in totally different cultures and settings in China, elsewhere in Asia, and Africa (some 80% of Chinese trials are thought to be ‘fabricated’).

RCTs represent an atypical fraction of the real-world treatment population (8).

Methodology and heterogeneity

But, as if all that was not bad enough, it is not valid to extrapolate from the averaged result of a non-homogeneous group and then apply it to individuals who are not from that group, but who merely share some arbitrary descriptive similarity (a score on a rating scale).

I defy anybody to produce evidence that the group of patients defined as MDD by DSM is at all likely to represent a patho-physiologically homogeneous group.

Drawing conclusions from, or extrapolating from, RCTs involving groups that cannot be demonstrated to be patho-physiologically homogeneous is incorrect. It is rubbish science. Black and white. End of story. No argument.

This is such a fundamentally important scientific fact that an understandable analogy is required.

Lots of people enjoy gardening, so let us pretend that the patients are represented by the vegetables (non-homogeneous) in your garden; root vegetables? green vegetables? ‘fruity’ vegetables? etc. (define a vegetable, define depression — there is much mileage in this analogy!)

Now then, you have got some super new fertiliser from the garden centre (organic and terribly expensive) and you want to know if it improves the yield of your vegetable garden. For aficionados of statistics, that is exactly why Fisher, of Fisher's-exact-test fame, developed his analysis of variance: it was to help measure the effect of fertilisers on crops at the Rothamsted agricultural research station in the UK. So, would you just scatter it around the garden, then see if your basket of vegetables was heavier than before? Or would you test the fertiliser on each separate type of plant, even though some of them look almost the same?

I hope it is obvious that, if the weight of your basket of vegetables was only slightly higher on the new fertiliser, that would not prove all the plants were improving. It might well be that only one of them was being helped a lot, and the rest not at all. Indeed, it might be that one or two were poisoned by it, because it was the wrong balance of nitrogen and phosphorus, or too concentrated. Whatever.

I trust that makes the point clear.
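
For readers who prefer numbers to vegetables, here is a minimal simulation sketch in Python of the same point. The three ‘vegetable types’, their numbers, and the assumption that only one type responds are invented for illustration; nothing here is drawn from any real trial.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy 'garden' standing in for a heterogeneous trial population.
# Only one type of plant actually benefits from the fertiliser (the drug).
garden = {
    "root":   {"n": 40, "true_benefit": 0.0},
    "green":  {"n": 40, "true_benefit": 0.0},
    "fruity": {"n": 20, "true_benefit": 3.0},  # the only true responders
}

all_changes = []
for veg, spec in garden.items():
    # observed yield change = true benefit + random noise
    changes = rng.normal(spec["true_benefit"], 1.0, spec["n"])
    all_changes.extend(changes)
    print(f"{veg:6s}: mean yield change {np.mean(changes):+.2f}")

# The whole-garden average is modestly positive, yet two of the three
# types gained nothing; the 'average plant' described by that figure
# does not exist in the garden.
print(f"whole garden: mean yield change {np.mean(all_changes):+.2f}")
```

The averaged result is real enough, but applying it to every plant (read: every patient swept into a DSM-defined group) is exactly the extrapolation error described above.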

RCTs, as they are generally executed, represent science at an astonishingly incompetent level, yet that is what dominates drug research in psychiatry, and ‘informs’ guidelines. It is hardly any better than the evidence for ‘Alt-Med’.

Presentation is the key: ghost authorship the solution

Despite all the fuss, ghost authorship in industry-initiated trials (i.e. most trials) is still common, perhaps even the rule (39-43).

The commissioning, timing and placement of these ‘papers’ is orchestrated by …

… the marketing and sales divisions.

Because? Timing, presentation and placement (key journals) are the keys to optimal marketing and sales.

So, the medical-writing companies ghost-write and orchestrate it all, get key authors on board, and presto …

PLoS Medicine and the NY Times got a raft of such documents [to do with medical-writing companies] made public (see here), in a court case. Ginny Barbour, editor-in-chief of PLoS Medicine, said she was taken aback by the systematic approach [to generating ghost-written papers] of the [medical writing] agency: ‘I found these documents quite shocking … They lay out in a very methodical and detailed way how publication was planned’ [before the ‘authors’ ever got involved] (44).

Many doctors routinely take the credit for articles written this way.

Such doctors are, let us not mince words, frauds, cheats and liars.

But let us start this most serious of issues with something amusing.

A real ghost-author!

In the saga of the Wilmshurst case — Peter Wilmshurst, a man of probity — made well-known because of Simon Singh and the UK libel-tourism story, it was revealed that, when Wilmshurst withdrew from authorship because they refused to give him the original data, the final list of authors of the published paper included Anthony Rickards.

Anthony Rickards had died before the research was even conducted.

These unprincipled and unpleasant people then sued Wilmshurst for remarks he had made, in academic good faith, about the limitations of the conclusions in the paper. This gives everyone a bit of insight into the threats and bullying which have a major spill-over effect on the willingness of most academics to take on these kinds of people. It is a very insidious influence and totally antithetical to the scientific endeavour. One can see the power of the self-censorship and self-selection effect here: why would a decent, mild-mannered, industrious, conscientious researcher want to get involved in that kind of thing? Those who do get involved may be a ‘different sort’ of person.

The bottom line

At the end of the day, all of the detailed evidence substantiating the frequency, poor quality and dishonesty of ghost-written material is contained in the references given herein.

What I would highlight is this, ‘the big picture’: one only has to look at the blossoming of these specialist medical-writing companies, to whom the big pharmaceutical companies farm out their ghost-writing tasks, to understand the mega-dollars involved and how common the practice must therefore be, in order to sustain so many profitable enterprises.

Next, look at the number of papers published under the name of doctors (KOLs (45)***) associated with these drugs. You will find there are many academics who have been publishing papers ridiculously frequently (dozens per year), over prolonged periods of time. You cannot possibly write ‘proper’ scientific papers at that rate — so that tells anyone of any perspicacity whatsoever that these people must be making a minimal contribution to the papers that bear their name.

*** From Moynihan, quoting a drug company source: ‘Key opinion leaders were salespeople for us, and we would routinely measure the return on our investment, by tracking prescriptions before and after their presentations … If that speaker didn’t make the impact the company was looking for, then you wouldn’t invite them back.’

The medical establishment has done almost nothing to call such doctors to account. It is that simple. You do not have to be Einstein to work it out.

The next step

Another step in this deceitful enterprise is the unscientific manipulation of data using the statistical metric of the p-value and other statistical peregrinations. I will not here describe what that means for non-scientists. Many prominent names in science agree with me (35, 46-48); I could have inserted one hundred references there, just from the last decade. Yet, unbelievably, doctors have colluded with it and swallowed all this in an uncritical and naïve manner. My writing is peppered with comments and references demonstrating the poor science with which we are presented.

It is also relevant to remind ourselves that statistical analysis is only really needed to ‘show’ a difference when the treatment effect is small; we did not need statistics to realise that penicillin and chlorpromazine were effective drugs. If complex statistics, and conflation of trials via M-A, are needed to show small treatment-effects of drugs — that covers all drugs in psychiatry in recent times — then the effects are of minimal significance or usefulness, no matter what blandishments may be offered to contradict this. Again, it is that simple.
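
As a minimal sketch of why that matters, the following Python snippet (using scipy) works through an assumed, clinically trivial, true difference of half a rating-scale point (with individual scores varying with an SD of 8) and shows how the expected p-value only creeps under 0.05 once the trial is made very large. The specific numbers are illustrative assumptions, not data from any trial.

```python
import math
from scipy import stats

TRUE_DIFF = 0.5   # assumed true drug-placebo difference: half a rating-scale point
SD = 8.0          # assumed spread of individual score changes

for n_per_arm in (50, 500, 5000):
    # standard error of the difference between two group means
    se = SD * math.sqrt(2.0 / n_per_arm)
    # test statistic expected if the observed difference matched the true one
    z = TRUE_DIFF / se
    p = 2 * stats.norm.sf(z)   # two-sided p-value for that statistic
    print(f"n per arm = {n_per_arm:5d}: SE = {se:.2f}, z = {z:.2f}, p = {p:.4f}")
```

With 50 patients per arm the expected p-value is around 0.75; with 5,000 per arm the very same half-point difference yields p below 0.01. Penicillin and chlorpromazine needed no such amplification; a result that only becomes ‘significant’ when thousands of patients are pooled is telling you how small the effect is, not how important.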

Do RCTs translate usefully to everyday practice?

Contrary to what is strongly contended by many, there is no sound reproducible science that would allow reliable conclusions that RCTs usefully predict everyday efficacy or long-term outcomes. They most certainly do not predict long-term side effects!

The EBM approach, based on RCTs, promotes unjustified over-generalization by accepting that the outcomes of RCTs apply generally, unless there is a compelling reason to believe otherwise (37, 38). However, that is turning science on its head, and would certainly not have been accepted by Popper.

RCT evidence does not allow us to predict which particular small percentage of patients will experience benefit — revisit the vegetable analogy above.

Generalizations (i.e. guidelines) that certain drugs ‘should be used’ in a large target population are an invitation for poor clinical practice and over-prescribing.

Algorithm-guidelines, nurse practitioners

If doctors are pressured and constrained to practice within these guidelines, as they increasingly are, by their colleagues, health service managers and insurance companies, and fear of litigation, then why have doctors at all? All you need is managers and nurse practitioners checking that everyone is given the computer-generated algorithm-guidelines that dictate treatment: in no time at all you will be able to dispense even with the nurse practitioners and get your treatment ‘instructions’ online and take your algorithm-generated script straight to the pharmacist. After all, most people only get a 10-minute ‘medication-management’ appointment anyway.

Incidentally, a bit of history: this is not a revolutionary idea, but a return to the past. The concept of prescription-only drugs is relatively new in the history of medicine.

And perhaps most sinister of all is the fact that patients worry that, if they do not accept the guideline-recommended treatment, they will be refused reimbursement for any other treatment — now that really is medical fascism.

Bye-bye, it was nice meeting you! Now I leave you in the care of ‘Siri’ for psych! — anyone remember ‘Eliza’?

My personal experience

My personal experience, my understanding of common practice, the published literature, and the requests I get for opinions on treatment from around the world, all lead me to the opinion that doctors continue to become more proscriptive and prescriptive: proscriptive (i.e. dogmatic about following guidelines) and prescriptive (authoritarian and unwilling to consider the preferences of patients). It is as if they have come to regard themselves as bound to guidelines — slaves to them, or guardians of them? A bit of both, perhaps.

There is a disturbingly prominent vein of authoritarianism present that is in no way justified by the quality or certainty of the evidence and which does not admit discussion, options, choice, preferences and flexibility.

This is abhorrent ‘medical fascism’ and good doctors should have no truck with it, but some will lose their jobs because they try to stand up to it.

The very existence and prominence of guidelines magnifies this authoritarianism, because guidelines provide a deceptive aura of authority and certainty. This is mediated by a fundamentally flawed system of narrowly focussed ‘pseudo-evidence’ (sponsored clinical trials and their associated methodological flaws) digested via the non-scientific medium of the statistical procedure that aggrandises itself with the epithet ‘meta-analysis’. Armoured with this false shield, our shining medical knights sally forth to do battle with mythical disease-dragons — I have shunned DSM from the start, and the disease-mongering that has accompanied it (49-51).

I referred above to the fact that this was an immense and complex topic. I must indicate what I mean by ‘mythical disease dragons’. Many informed commentators have noted how the internationally influential American Psychiatric Association manual, called DSM, has over the last few decades served to expand the definition of mental illness to encompass a very substantial proportion of the population, thus legitimising and enabling the ‘medicalisation’ and insurance-remunerated administration of drugs to vast numbers of people (for instance, I think recent figures indicate something like 10% of all American children are on medication for ADHD).

Issue of long-term therapy

A doubtful inference that is made from RCTs (and amplified via guidelines) is that modest short-term treatment effects over a few weeks, even if you accept those are meaningful, extend to long-term treatment and meaningful real-life outcomes (like a reduction in the suicide rate) — as opposed to a small change on a rating scale, which is merely an interim proxy measure. Indeed, if anything, the evidence points in the opposite direction: for instance, lithium has the least effect on short-term scores on the HRSD, but the greatest long-term reduction of suicide.

These drugs are almost always given over a period of many months, often years. Indeed, other types of evidence (and this speaks to the almost complete lack of external validity of guidelines) suggest that long-term treatment with most antidepressants (and antipsychotics) does not reduce long-term illness manifestations. Indeed, disability, the hospital readmission rate, and the suicide rate are generally not reduced (52). All those things are fairly powerful evidence that the drugs have little long-term benefit for most patients. The long-term side-effects are not in doubt though.

One should note here that this is not to say that antidepressant drugs are ineffective for everyone. Experience clearly demonstrates that severely depressed patients experience major benefits from various particular antidepressants. And, even if SSRIs as a class barely deserve to be called antidepressants, that is not to say they do not benefit some symptoms in some people.

It is difficult to construct a sound argument that the evidence strengthens the case for using RCTs to guide treatment for the general population. Much evidence supports the opposite point of view.

To listen and advise

Doctors are there to listen and advise, not to dictate and direct with insufficient real evidence, explanation and discussion. For me at least, it is a fundamental precept of medical practice that we listen and advise and resort to paternalism and authoritarianism as little as possible.

There are few circumstances in clinical medicine in which the underlying science is sufficiently good to confidently dictate one particular form of treatment over another. In psychiatry, there are no circumstances in which the underlying science is sufficiently good to dictate one particular treatment over another.

Guidelines are furthering and fostering medical rigidity and authoritarianism. They must avoid being prescriptive, and their creators need to accept more responsibility for how they are used and abused. Then there are various other issues, like their period of validity, a clear statement about when they are due for revision (sometimes missing) and what kinds of new evidence might invalidate them. There is no established mechanism for questions and discussions with those who promulgate such edicts.

The creators of guidelines need to get out there and engage in dialogue with the people who are actually expected to use them. At the moment, the whole process bears too close a resemblance to a papal edict of infallibility. That is the exact opposite of science.

Conclusion

This commentary has summarised the host of sometimes poorly acknowledged problems plaguing guidelines. The first and foremost of these is that they are contaminated to varying extents by corruption, bias, and misapplied and poor science. They are increasingly treated, especially by intellectually lazy doctors, as a blueprint substituting for thought, judgement, responsibility, and individual consideration.

Despite clear statements in the introductions to many guidelines about their ‘advisory’ nature, and the responsibility of the individual treating doctor to assess and treat each patient on their merits, this does not necessarily happen. Guideline creators need to take greater responsibility for how they are presented, and how they are abused and misused.

Furthermore, those who now have an increasing influence on the delivery of health care, be they politicians, or managers of health care delivery organisations, or insurance companies etc. are increasingly prone to make simplistic assumptions and interpretations in relation to what guidelines actually recommend, and use them for their own ends. That can even mean sacking a doctor for not following ‘the guidelines’. Even if that is infrequent, that does not alter the fact that many doctors who contact me justify not using particular drugs on the basis that ‘it is not recommended in the guidelines, and I will get in trouble’.

Add all these factors together and you have a considerable potential, much of it already realised, for misapplication and patient harm.

In the longer term, we will soon have a generation of doctors who have not developed clinical experience and expertise in utilising non-standard treatments. Therein may lie a major downside for progress in clinical practice, because so many advances actually come from observations quite unrelated to clinical trials and purpose-directed research.

However good the intentions might have been, in those who initiated the notion of guidelines, it is well to remember that, as the old saying goes, 'The road to hell is paved with good intentions'.

The ‘gold standard’ of RCT guideline evidence, when fully assayed, may be found to contain an embarrassing percentage of fool's gold.

I suppose we should close by highlighting one or two simple observations which suggest that these new, expensive treatments have achieved very little. Expenditure on drugs for psychiatric illnesses has increased exponentially, by close to 100 times in the past 40 years. The suicide rate has not decreased, and the number of psychiatric patients on disability benefit is much increased in most western countries. Adverse drug reactions are now among the leading causes of hospitalisation, morbidity and mortality.

It is difficult to see how that squares with, or justifies, such a massive expenditure on drugs.

Assigning significant weight to other research methodologies, and to experience and clinical judgement — which is valuably informed by Bayesian reasoning — as against ‘RCTs’, clearly remains important; such evidence should not be diminished or belittled.

A final point to emphasise, for those not familiar with the scientific literature, is that a substantial proportion of the references below are by eminent authors, published in the most prestigious journals: Nature, BMJ, Lancet, JAMA, PLoS Medicine, etc. We are not talking about authors on the fringe of medicine publishing in dubious and obscure journals.

References

1.               Angell, M, The truth about drug companies: How they deceive us and what to do about it. New York: Random House, 2005: p. 336.

2.               Angell, M, Drug Companies & Doctors: A Story of Corruption. New York Rev Books, 2009. 56: p. http://www.metododibella.org/cms-web/upl/doc/Documenti-inseriti-dal-2-11-2007/Truth About The Drug Companies.pdf.

3.               Smith, R, Travelling but never arriving: reflections of a retiring editor. Br. Med. J., 2004. 329(7460): p. 242-244.

http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=15284125

4.               Smith, RL, Medical Journals Are an Extension of the Marketing Arm of Pharmaceutical Companies. PLoS Med, 2005. 2: p. e138.

http://www.plosmedicine.org/perlserv/?request=get-document&doi=10.1371/journal.pmed.0020138

5.               Horton, R, The Dawn of McScience. New York Rev Books, 2004. 51: p. 7-9.

6.               Schunemann, HJ, Woodhead, M, Anzueto, A, Buist, AS, et al., A guide to guidelines for professional societies and other developers of recommendations: introduction to integrating and coordinating efforts in COPD guideline development. An official ATS/ERS workshop report. Proc Am Thorac Soc, 2012. 9(5): p. 215-8.

https://www.ncbi.nlm.nih.gov/pubmed/23256161

7.               Mercuri, M, Sherbino, J, Sedran, RJ, Frank, JR, et al., When guidelines don't guide: the effect of patient context on management decisions based on clinical practice guidelines. Acad. Med., 2015. 90(2): p. 191-6.

https://www.ncbi.nlm.nih.gov/pubmed/25354075

8.               do Prado-Lima, PAS, The surprising blindness in modern psychiatry: do guidelines really guide? CNS Spectr, 2017. 22(4): p. 312-314.

https://www.ncbi.nlm.nih.gov/pubmed/27866506

9.               Cabarkapa, S, Perera, M, McGrath, S, and Lawrentschuk, N, Prostate cancer screening with prostate-specific antigen: A guide to the guidelines. Prostate Int, 2016. 4(4): p. 125-129.

https://www.ncbi.nlm.nih.gov/pubmed/27995110

10.             Waters, DD and Boekholdt, SM, An Evidence-Based Guide to Cholesterol-Lowering Guidelines. Can. J. Cardiol., 2017. 33(3): p. 343-349.

https://www.ncbi.nlm.nih.gov/pubmed/28034582

11.             Kung, J, Miller, RR, and Mackowiak, PA, Failure of clinical practice guidelines to meet institute of medicine standards: Two more decades of little, if any, progress. Arch Intern Med, 2012. 172(21): p. 1628-33.

https://www.ncbi.nlm.nih.gov/pubmed/23089902

12.             Lenzer, J, Why we can’t trust clinical guidelines. BMJ, 2013. 346(58): p. f3830.

13.             Robb, G, Loe, E, Maharaj, A, Hamblin, R, et al., Medication-related patient harm in New Zealand hospitals. N. Z. Med. J., 2017. 130(1460): p. 21-32.

https://www.ncbi.nlm.nih.gov/pubmed/28796769

14.             Parameswaran Nair, N, Chalmers, L, Bereznicki, BJ, Curtain, C, et al., Adverse Drug Reaction-Related Hospitalizations in Elderly Australians: A Prospective Cross-Sectional Study in Two Tasmanian Hospitals. Drug Saf, 2017. 40(7): p. 597-606.

https://www.ncbi.nlm.nih.gov/pubmed/28382494

15.             Peter, JV, Varghese, GH, Alexander, H, Tom, NR, et al., Patterns of Adverse Drug Reaction in the Medical Wards of a Teaching Hospital: A Prospective Observational Cohort Study. Curr Drug Saf, 2016. 11(2): p. 164-71.

https://www.ncbi.nlm.nih.gov/pubmed/26916785

16.             Benard-Laribiere, A, Miremont-Salame, G, Perault-Pochat, MC, Noize, P, et al., Incidence of hospital admissions due to adverse drug reactions in France: the EMIR study. Fundam. Clin. Pharmacol., 2015. 29(1): p. 106-11.

https://www.ncbi.nlm.nih.gov/pubmed/24990220

17.             Cosgrove, L, Krimsky, S, Wheeler, EE, Peters, SM, et al., Conflict of Interest Policies and Industry Relationships of Guideline Development Group Members: A Cross-Sectional Study of Clinical Practice Guidelines for Depression. Account Res, 2017. 24(2): p. 99-115.

https://www.ncbi.nlm.nih.gov/pubmed/27901595

18.             Bastian, H, Nondisclosure of Financial Interest in Clinical Practice Guideline Development: An Intractable Problem? PLoS Med, 2016. 13(5): p. e1002030.

https://www.ncbi.nlm.nih.gov/pubmed/27243232

19.             Campsall, P, Colizza, K, Straus, S, and Stelfox, HT, Financial Relationships between Organizations That Produce Clinical Practice Guidelines and the Biomedical Industry: A Cross-Sectional Study. PLoS Med, 2016. 13(5): p. e1002029.

https://www.ncbi.nlm.nih.gov/pubmed/27244653

20.             Blumsohn, A, Authorship, ghost-science, access to data, and control of the pharmaceutical scientific literature: who stands behind the word? AAAS Professional Ethics Report, 2006. 19: p. 1-4.

https://www.aaas.org/sites/default/files/migrate/uploads/per46.pdf

21.             Leucht, S, Kissling, W, and Davis, JM, Second-generation antipsychotics for schizophrenia: can we resolve the conflict? Psychol Med, 2009. 39(10): p. 1591-602.

http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=19335931

22.             Huston, P and Moher, D, Redundancy, disaggregation, and the integrity of medical research. Lancet, 1996. 347(9007): p. 1024-6.

http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=8606568

23.             Spielmans, GI and Kirsch, I, Drug approval and drug effectiveness. Annu Rev Clin Psychol, 2014. 10: p. 741-66.

https://www.ncbi.nlm.nih.gov/pubmed/24329178

24.             Parker, G, Evaluating treatments for the mood disorders: time for the evidence to get real. Aust NZ J Psychiatry, 2004. 38(6): p. 408-14.

http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=15209831

25.             Le Noury, J, Nardo, JM, Healy, D, Jureidini, J, et al., Restoring Study 329: efficacy and harms of paroxetine and imipramine in treatment of major depression in adolescence. BMJ, 2015. 351: p. h4320.

26.             Healy, D, Clinical trials and legal jeopardy. Bulletin of medical ethics, 1999(153): p. 13-18.

27.             Jureidini, JN, Amsterdam, JD, and McHenry, LB, The citalopram CIT-MD-18 pediatric depression trial: Deconstruction of medical ghostwriting, data mischaracterisation and academic malfeasance. Int J Risk Saf Med, 2016. 28(1): p. 33-43.

http://www.ncbi.nlm.nih.gov/pubmed/27176755

28.             Kirsch, I, Deacon, BJ, Huedo-Medina, TB, Scoboria, A, et al., Initial severity and antidepressant benefits: a meta-analysis of data submitted to the Food and Drug Administration. PLoS Med, 2008. 5(2): p. e45.

http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=18303940

29.             Kirsch, I and Moore, TJ, The Emperor's New Drugs: An Analysis of Antidepressant Medication Data Submitted to the U.S. Food and Drug Administration. Prevention & Treatment, 2002. 5: p. http://journals.apa.org/prevention/volume5/pre0050033r.html.

30.             Allison, DB, Brown, AW, George, BJ, and Kaiser, KA, Reproducibility: A tragedy of errors. Nature, 2016. 530(7588): p. 27-9.

https://www.ncbi.nlm.nih.gov/pubmed/26842041

31.             Fountoulakis, KN, McIntyre, RS, and Carvalho, AF, From Randomized Controlled Trials of Antidepressant Drugs to the Meta-Analytic Synthesis of Evidence: Methodological Aspects Lead to Discrepant Findings. Curr Neuropharmacol, 2015. 13(5): p. 605-15.

https://www.ncbi.nlm.nih.gov/pubmed/26467410

32.             Fountoulakis, KN, Samara, MT, and Siamouli, M, Burning issues in the meta-analysis of pharmaceutical trials for depression. J Psychopharmacol, 2014. 28(2): p. 106-17.

https://www.ncbi.nlm.nih.gov/pubmed/24043723

33.             Tendal, B, Nuesch, E, Higgins, JP, Juni, P, et al., Multiplicity of data in trial reports and the reliability of meta-analyses: empirical study. BMJ, 2011. 343: p. d4829.

http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=21878462

34.             Siontis, KC, Evangelou, E, and Ioannidis, JP, Magnitude of effects in clinical trials published in high-impact general medical journals. Int. J. Epidemiol., 2011. 40(5): p. 1280-91.

http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=22039194

35.             Mansournia, MA and Altman, DG, Invited commentary: methodological issues in the design and analysis of randomised trials. Br. J. Sports Med., 2017.

https://www.ncbi.nlm.nih.gov/pubmed/28756393

36.             Altman, D and Bland, JM, Confidence intervals illuminate absence of evidence. Br. Med. J., 2004. 328(7446): p. 1016-7.

http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=15105337

37.             Fuller, J, Rhetoric and argumentation: how clinical practice guidelines think. J. Eval. Clin. Pract., 2013. 19(3): p. 433-41.

https://www.ncbi.nlm.nih.gov/pubmed/23692224

38.             Fuller, J, Rationality and the generalization of randomized controlled trial evidence. J. Eval. Clin. Pract., 2013. 19(4): p. 644-7.

https://www.ncbi.nlm.nih.gov/pubmed/23368415

39.             Gotzsche, PC, Hrobjartsson, A, Johansen, HK, Haahr, MT, et al., Ghost Authorship in Industry-Initiated Randomised Trials. PLoS Med, 2007. 4(1): p. e19.

http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=17227134

40.             Sismondo, S, Ghost management: how much of the medical literature is shaped behind the scenes by the pharmaceutical industry? PLoS Med, 2007. 4(9): p. e286.

https://www.ncbi.nlm.nih.gov/pubmed/17896859

41.             Wislar, JS, Flanagin, A, Fontanarosa, PB, and Deangelis, CD, Honorary and ghost authorship in high impact biomedical journals: a cross sectional survey. BMJ, 2011. 343: p. d6128.

http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=22028479

42.             Lexchin, J, Those Who Have the Gold Make the Evidence: How the Pharmaceutical Industry Biases the Outcomes of Clinical Trials of Medications. Sci Eng Ethics, 2011.

http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=21327723

43.             Ross, JS, Hill, KP, Egilman, DS, and Krumholz, HM, Guest authorship and ghostwriting in publications related to rofecoxib: a case study of industry documents from rofecoxib litigation. JAMA, 2008. 299(15): p. 1800-12.

https://www.ncbi.nlm.nih.gov/pubmed/18413874

44.             Barbour, V, How ghost-writing threatens the credibility of medical knowledge and medical journals. Haematologica, 2010. 95(1): p. 1-2.

http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=20065074

45.             Moynihan, R, Key opinion leaders: independent experts or drug representatives in disguise? BMJ, 2008. 336(7658): p. 1402-3.

https://www.ncbi.nlm.nih.gov/pubmed/18566074

46.             Dechartres, A, Trinquart, L, Atal, I, Moher, D, et al., Evolution of poor reporting and inadequate methods over time in 20 920 randomised controlled trials included in Cochrane reviews: research on research study. BMJ, 2017. 357: p. j2490.

https://www.ncbi.nlm.nih.gov/pubmed/28596181

47.             Ioannidis, J, Lies, Damned Lies, and Medical Science. Atlantic, 2010. November 17th.

http://www.theatlantic.com/magazine/archive/2010/11/lies-damned-lies-and-medical-science/8269/

48.             Dechartres, A, Altman, DG, Trinquart, L, Boutron, I, et al., Association between analytic strategy and estimates of treatment outcomes in meta-analyses. JAMA, 2014. 312(6): p. 623-630.

49.             Moncrieff, J and Thomas, P, The pharmaceutical industry and disease mongering. Psychiatry should not accept so much commercial sponsorship. Br. Med. J., 2002. 325(7357): p. 216; author reply 216.

http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=12143863

50.             Moynihan, R, Heath, I, and Henry, D, Selling sickness: the pharmaceutical industry and disease mongering. Br. Med. J., 2002. 324(7342): p. 886-91.

http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=11950740

51.             Moynihan, R and Henry, D, The Fight against Disease Mongering: Generating Knowledge for Action. PLoS Med, 2006. 3(4): p. e191.

http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=16597180

52.             Tiihonen, J, Lonnqvist, J, Wahlbeck, K, Klaukka, T, et al., 11-year follow-up of mortality in patients with schizophrenia: a population-based cohort study (FIN11 study). Lancet, 2009. 374(9690): p. 620-7.

http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=19595447