Dec 01, 2009

Comments

I'm not sure I would trust an HIV vaccine until I was 1,000,000% sure it wouldn't give me the disease. The flu? If the vaccine gives me the flu, the worst I get is a miserable few days. But HIV... that's something that can kill.

Robert:
Among the many potential problems that might be associated with an HIV vaccine, getting HIV from the vaccine itself is not one of them. None of the vaccines being tested in humans contains all of the components HIV needs to replicate, and none contains whole virus at all. (A vaccine could theoretically increase your susceptibility to infection, but only if you were also exposed some other way, such as having sex without condoms or sharing needles with someone who had HIV.)

More importantly,
http://www.cdc.gov/Flu/keyfacts.htm
http://www.kff.org/hivaids/upload/3029-071.pdf
Though the numbers are from different years, it is likely that more people in the United States will die of flu this year than will die of AIDS. At any rate, the two are of the same order of magnitude.

David:
If we think the rate of superinfection is not that high--and if it were very high, I think more gay HIV-positive men in San Francisco would be failing their raltegravir-enfuvirtide-maraviroc-hydroxyurea regimens, and sexually active HIV-positive people in general would be failing their regimens more often--then the likelihood of a successful HIV vaccine, generically speaking, remains uncertain and difficult to determine.

That said, I don't disagree with your overall argument, because that likelihood seems lower all the time for other reasons.

Good post. I think you make too much of an issue of the multiple analyses, however. The data are the data are the data. Why should our inference depend on what we intended to do before the experiment? Sure, multiple analyses were undertaken, and obviously they were not independent, but they all led to approximately the same inference: the vaccine was a little more effective than placebo. All of the analyses provide just about the same confidence interval for the difference between vaccine and placebo. The only reason this becomes an issue is the accursed p-value.

Suppose, for whatever reason, I believe the analysis with the p-value of 4%. Then, if you ask me how effective the vaccine is, I would answer that it confers about a 31% lower risk of HIV infection (if I understood the study correctly). On the other hand, if I believed one of the "non-significant" analyses and you asked me how effective the vaccine is, should I say zero? That doesn't make any sense. Why is it that, if a p-value is a little larger than 5%, we let our estimate of the treatment effect snap back to zero?
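
To make that concrete, here is a minimal sketch in Python, using infection counts in the ballpark of the trial's modified intention-to-treat results (roughly 51 infections among ~8,200 vaccine recipients versus 74 among ~8,200 placebo recipients; treat these as illustrative, not as the source data). The point estimate and its confidence interval fall straight out of the counts, with no reference to any 5% threshold:

```python
import math

# Illustrative counts, approximating the reported modified
# intention-to-treat results (a sketch, not the source data).
vacc_inf, vacc_n = 51, 8197   # infections / participants, vaccine arm
plac_inf, plac_n = 74, 8198   # infections / participants, placebo arm

# Relative risk of infection and vaccine efficacy (VE = 1 - RR).
rr = (vacc_inf / vacc_n) / (plac_inf / plac_n)
ve = 1 - rr

# 95% CI for RR via the usual normal approximation on log(RR).
se = math.sqrt(1/vacc_inf - 1/vacc_n + 1/plac_inf - 1/plac_n)
rr_lo = math.exp(math.log(rr) - 1.96 * se)
rr_hi = math.exp(math.log(rr) + 1.96 * se)

print(f"VE point estimate: {ve:.1%}")                        # ~31%
print(f"95% CI for VE: {1 - rr_hi:.1%} to {1 - rr_lo:.1%}")  # barely excludes zero
```

The estimate is about 31% whether or not some p-value happens to cross 0.05; nothing in the arithmetic snaps back to zero at 5%.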

Further, it is not at all uncommon for us to believe, and act upon, analyses that are unplanned and multiple. It especially happens when we consider the safety of treatments. We stand on our heads, in talking about effectiveness, to define primary and secondary objectives, pre-hoc and post-hoc analyses, etc., but when it comes to safety we let data exploration rule the day. We are willing to believe the difference in effectiveness between vaccine and placebo only if we see a statistically significant difference in the outcome we defined in advance, using the analysis we defined in advance. But if we see more vaccine receivers than placebo receivers experiencing hiccups we take that as gospel and an effect of the vaccine, even though nobody had a thought about hiccups beforehand.
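
A quick simulation shows how easily unplanned safety comparisons manufacture such findings. This is a hedged sketch with made-up rates and endpoint counts: two arms with identical true adverse-event rates, fifty event categories, and chance alone supplies a few p-values under 0.05.

```python
import random
from scipy.stats import fisher_exact

random.seed(1)
n_per_arm, base_rate, n_endpoints = 8000, 0.01, 50  # all hypothetical

false_alarms = 0
for _ in range(n_endpoints):
    # Same true adverse-event rate in both arms: any "signal" is noise.
    vacc = sum(random.random() < base_rate for _ in range(n_per_arm))
    plac = sum(random.random() < base_rate for _ in range(n_per_arm))
    _, p = fisher_exact([[vacc, n_per_arm - vacc],
                         [plac, n_per_arm - plac]])
    if p < 0.05:
        false_alarms += 1

print(f"{false_alarms} of {n_endpoints} null endpoints reached p < 0.05")
```

Typically a couple of the fifty null comparisons come out "significant": the hiccups problem in miniature.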

The logic we bring to bear on RCTs needs re-examination, and I respect you and the others who are trying to do so. Thanks.

Tom Spradlin's point is a fair one. The issue to my thinking, though, is not that these multiple analyses somehow unfairly skew our view of reality, but rather that the act of bobbing for p values (sticking with my bites at the apple metaphor) is relevant because of the importance that journals and reviewers place on the holy p = 0.05.

In the HIV paper, the various analyses achieved p values of 0.08, 0.16, and 0.04, all with efficacy point estimates in the 25-30% range. Had the paper reported a p value of 0.16, the journals (and the media reporting on the results) would have said the study showed no benefit with the vaccine. By getting that 0.04 p value out there, the spin was completely different.
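
For illustration, here are three hypothetical 2×2 tables, chosen only so the point estimates cluster in that 25-30% range while the p-values land on both sides of 0.05 (these are stand-ins, not the trial's actual analysis populations):

```python
from scipy.stats import fisher_exact

# Hypothetical analysis populations; numbers chosen so the efficacy
# estimates cluster near 25-30%, not taken from the paper.
analyses = [
    ("analysis A", 51, 8197, 74, 8198),  # infections and N, each arm
    ("analysis B", 56, 8202, 76, 8200),
    ("analysis C", 36, 6176, 50, 6366),
]

for name, v_inf, v_n, p_inf, p_n in analyses:
    efficacy = 1 - (v_inf / v_n) / (p_inf / p_n)
    _, p = fisher_exact([[v_inf, v_n - v_inf],
                         [p_inf, p_n - p_inf]])
    print(f"{name}: efficacy = {efficacy:.0%}, p = {p:.3f}")
```

The estimated benefit barely moves from row to row; only the p-value bobs across the 0.05 line, and with it the headline.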

For my part, I was pointing out why I'm dubious that the vaccine worked. Had the authors been stuck with a p value of 0.16, I wouldn't even have needed to voice my skepticism, since everyone would be saying the trial was negative. That this, too, would be an inaccurate interpretation of the results and their p values is, of course, correct.

Do these HIV vaccines really work? I have several doubts about that...
