I won a bottle of wine this week. At least in theory. I haven't posted to this blog in a long while, but the opportunity to brag about this has overcome inertia.
A few years ago I was at a cookout with some friends and colleagues, and got into a discussion with one of them, an emergency physician and sepsis expert, about activated protein C.
Many modifiers of the immune response in sepsis have been studied in an attempt to reduce the high mortality of severe sepsis, but almost none have shown any benefit. Then, in 2001, a study of activated protein C in severe sepsis was published in the NEJM, and showed a substantial reduction in mortality.
I was suspicious from the beginning, and commented that there had been enough trials in sepsis that we were bound to get a positive result by chance alone. My skepticism, not surprisingly, had little effect on the rapid rise of activated protein C and its manufacturer Lilly. It became a central part of the management of sepsis.
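The "bound to get a positive result by chance alone" intuition is just arithmetic. A minimal sketch (my own illustration, not from the post; the trial counts are hypothetical) shows how fast the probability of at least one false positive grows when many null trials are run at the conventional p < 0.05 threshold:

```python
# If a therapy truly has no effect, each trial still has probability alpha
# of a "significant" result. Across n independent null trials, the chance
# of at least one false positive is 1 - (1 - alpha)^n.
# The trial counts below are hypothetical, chosen only to show the trend.

alpha = 0.05  # conventional significance threshold

for n_trials in (1, 5, 10, 20, 30):
    p_at_least_one = 1 - (1 - alpha) ** n_trials
    print(f"{n_trials:2d} null trials -> P(>=1 false positive) = {p_at_least_one:.2f}")
```

With 20 null trials, the chance of at least one "positive" result by luck alone is already about 64 percent.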
In the original trial there was some suggestion that activated protein C worked better in patients at high risk of death than in those at lower risk. This was somewhat counterintuitive, though certainly possible. To answer the question and expand the indications for the drug, a study was conducted in patients at lower risk of death. This study, also published in the NEJM, in 2005, found no suggestion of benefit in patients with severe sepsis at lower risk of death and was stopped early for futility. Additionally, patients treated with activated protein C were at increased risk of serious bleeding. My skepticism about the drug grew.
Then, in 2007, a study was published in the Lancet showing no benefit of activated protein C in children with severe sepsis. This study, too, was stopped early, for futility and an increase in CNS bleeding.
And so, in the setting of these trials, I found myself debating the merits of activated protein C with a sepsis expert. He took what was, up until last week, the pretty standard position of critical care physicians that what this sequence of studies showed was that activated protein C was only effective in adults with severe sepsis who were at high risk for mortality; it was ineffective or harmful in adults at lower risk for mortality from sepsis and for all children with sepsis.
In contrast, I took the position that the first trial had been a statistical blip and that, in reality, activated protein C was useless or harmful in treating sepsis. At the time of our debate, a very large international trial was getting under way to answer this question, and so we wagered a bottle of wine on the results.
These two ways of looking at trial results come up in all sorts of contexts. One option, when trials seem to disagree with each other, is to try to find the thread that resolves the conflict: the specific patient population that benefited in one trial, or the specific dose that made all the difference, or the specific type of location that explains how a drug could show benefits in one trial and be useless in another. As a web of trials is put together, people will often argue that the "evidence-based" approach is to look to the trials as trees and find, for instance, those medium height oaks that are sure to benefit from a particular therapy that is of no benefit for shorter or taller oaks, or for maples of any height.
Occasionally this approach does make sense, but more often, looking at the forest, it becomes clear that a single positive trial result amongst a network of negative trials (or vice-versa) may be better explained by chance or errors in the conduct of the study than by looking to the individual trees.
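To make the "explained by chance" point concrete, here is a small simulation of my own (all numbers — arm size, baseline mortality, trial count — are made up and do not come from any of the cited trials). It repeatedly runs a two-arm trial of a drug with no true effect and counts how often a trial nonetheless looks "statistically significant":

```python
# Simulate many two-arm trials of an ineffective drug and count how many
# cross the nominal significance threshold purely by chance.
# All parameters here are hypothetical, for illustration only.
import math
import random

random.seed(0)

def simulated_null_trial(n_per_arm=850, mortality=0.30):
    """One trial of a drug with NO real effect: return the z-score for the
    observed mortality difference between the control and treatment arms."""
    deaths_control = sum(random.random() < mortality for _ in range(n_per_arm))
    deaths_treated = sum(random.random() < mortality for _ in range(n_per_arm))
    p_pooled = (deaths_control + deaths_treated) / (2 * n_per_arm)
    se = math.sqrt(2 * p_pooled * (1 - p_pooled) / n_per_arm)
    return ((deaths_control - deaths_treated) / n_per_arm) / se

z_scores = [simulated_null_trial() for _ in range(2000)]
false_positives = sum(abs(z) > 1.96 for z in z_scores)
print(f"'significant' null trials: {false_positives} / 2000 "
      f"({false_positives / 2000:.1%})")  # roughly 5%, by construction
```

Roughly one null trial in twenty comes out "significant", so a lone positive result in a network of otherwise negative trials is exactly what chance predicts.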
Researchers get things wrong, by being unlucky, by making errors, by being foolish, by fabricating data, and for any number of other reasons. When a study result feels unbelievable for good reason, it often should not be believed. In an earlier post I argued that XMRV would not turn out to be the cause of chronic fatigue syndrome. It now appears that lab error was responsible for the early results suggesting such a connection, but I certainly had no training or special knowledge to have predicted what might have gone wrong; I had only the clinical knowledge that made finding any single pathogenic cause of CFS seem extremely unlikely. When neutrinos were found moving faster than the speed of light, scientists were appropriately even more skeptical. (The medical equivalent of this, to my mind, is the occasional study suggesting benefits from homeopathy.)
With activated protein C, the best way to explain the data seemed to be to assume that the one positive trial was incorrect while awaiting the results of the large international trial. Those results have not been published in final form, but the trial was stopped early for futility this week and activated protein C was withdrawn from the market. My friend the sepsis expert owes me a bottle of wine. I'd certainly trade in that bottle of wine, though, if I could figure out a way to get expert clinicians and clinical epidemiologists to step back from apparently conflicting clinical trials and think about what the results as a whole are suggesting, rather than trying to believe that being fair to the evidence involves taking each individual trial result as true. This latter approach does not sufficiently consider the probability that when trials seem to conflict, some of them are likely to simply be wrong.