Psychiatry and Real Life

Problems with "Evidence-based Medicine"

By Nick Birthday Published 3 years ago 6 min read

Psychiatry is optimistic about antidepressants. The debate has been won, it is said, as more evidence from better trials comes out in support of their use. I’m not so sure. I support their use only with reservations, and have some thoughts about the translational problems in moving from clinical trials to real-life prescribing. The gap is larger than we doctors seem to realise.

In trials a patient is more “involved” in the therapy. Extensive efforts are made to ensure full compliance with medication, the better to measure the true effect. The thought of being part of a clinical trial - Contributing To Science - can have large and varied effects on the mind of a person afflicted by an absence of meaning.

In clinical practice, a less stringently ordered environment than a clinical trial, compliance and safety monitoring are not as reliable, and all sorts of random effects can ensue.

You get patients not turning up for the routine tests needed to monitor drug safety. Adherence to long-term medication of any kind is estimated at about 50%, and the figure is almost certainly lower for psychiatric medications. Taking medication improperly will likely reduce therapeutic effects more than side effects, given that there are more ways for something to go wrong than to go right. We can correct certain flaws in the evidence, such as publication bias and lack of experimental rigour, without really getting much closer to knowing what effects our prescribing has in the real world.

The best answer given to this is that expert clinical intuition manages the risks of adverse effects, detects individual differences in medication response and adjusts accordingly. But such intuition is post hoc, does not account for the waste of time and money, and demands a sophistication most doctors lack: attaching complicated value judgements to the probability of a given drug response.

An analogy: biologists oppose nature-vs-nurture simplifications to the point of vocal cord exhaustion, telling us that heritability measures only hold true in the environments they were measured in. Obesity is something like 80% heritable, but note that the measurement comes from a society where it is acceptable to fill a shop with brightly packaged junk food in all the most visible places. There are many similar limitations in evidence-based psychiatry. For example, the threshold for prescribing an antidepressant (i.e. its effectiveness in “mild” vs “moderate” depression) is a function of a patient’s unwillingness to honestly self-reflect.

There is a crucial but largely ignored question here: whether it is ethical to prescribe a problem-laden medication when a more sustainable cure is achievable. There are other examples of how prescribing policy proceeds from confused heuristics. A psychiatrist considers treatment options for a depressed patient. Initially believing therapy to be the best option, he comes to realise that the patient is not “psychologically minded”, and so he commences an antidepressant. Presumably such a patient is likely to give a less accurate account of their troubles, at least if you think that, on average, the more you can explore with a patient the better calibrated your treatment will be. We are left with a deeply problematic habit of intervening in a complex system because of an *increase* in uncertainty.

Now, you could say it’s at least worth a try. Give it a few weeks; stop the drug if it isn’t working. In practice this becomes a drawn-out process of trying multiple antidepressants for longer-than-expected durations, owing to variables like missed or unavailable follow-up appointments, or simply forgetting to take the medication. One possible psychological effect is that the patient defers participation in his own recovery: the awaited appointment is where things will be set right. I will stop conjuring scenarios now, because the fundamental point is that intervention in a complex (i.e. unpredictable) system produces complex (and unpredictable) effects, with no intrinsic “out of the woods” time boundary. And the set of possible unwanted effects is vastly larger than the set of desired outcomes.

The problem takes on a different complexion when there is risk to life. If a patient is suicidal, we need to act. I am in favour of this as far as it goes, but there remain big questions about the efficacy of SSRIs in suicide prevention. The 12th edition of the Maudsley guidelines (the 13th is the most recent) claims that “for the most part, suicidality is greatly reduced by the use of antidepressants” (p. 244). That they would say this is astonishing when you look at the three papers offered in citation. To deal briefly with the limitations of these papers, let us just look at the conclusions of each.

Conclusion of the first paper: “Available data do not indicate a significant increase in risk of suicide or serious suicide attempt after starting treatment with newer antidepressant drugs.” Completely different from showing that they reduce suicidality. Regarding suicide prevention, there is no information here that distinguishes the effects of SSRIs from the administration of chocolate to suicidal people.

The second paper collected observations of suicidal behaviour retrospectively, and its sample size prevented it from measuring the effect on completed suicides, surely the statistic of greater interest. Probably unaware of the ex officio promises its findings would be used to support, the authors give the following straight-faced disclaimer: “There is no placebo arm; so, the reduction in suicidal behaviour may be due to nonspecific treatment factors.”

The third paper also sits squarely in the correlational basket. No control group. It tells us “most suicidal patients treated with antidepressants became non-suicidal after treatment, *independent of diagnosis, treatment type or dose*.” Notice a pattern? To a casual observer, of course, such an outcome would be an encouraging sign that the drugs were doing something. As a professional, on the other hand, to infer from this a cause-and-effect relationship between the drugs and suicide reduction is the sort of blasé non-judgment you make when you are safely insulated from the results of your decisions and do not care to know what the “evidence” you invoke consists of.

As a thought exercise, consider an analogy between psychiatric and other medical prescriptions. Think of the hidden vulnerabilities when you need any medication: there are events of low probability that could disrupt the supply - witness the concerns among British diabetics, after Brexit, about the supply of insulin from factories in mainland Europe - and there are side effects that might go undetected in population studies for want of adequate statistical power or enough time spent looking for them. You need your car to reliably carry you to the pharmacist, or kind friends to help if it doesn’t; among many other things, you must hope your doctor knows a faulty trial when she sees it.

There are enough benefits to make these risks worth it most of the time. But that has only come to be after some three centuries of compounding development and application of the scientific method, applied to systems less complex than the mind, where narrower therapeutic goals can be set. Across the history of medicine taken as a whole, the main event has been iatrogenesis. This is a model to be adopted hesitantly.
