I am breaking my recent vow to post just one blog entry a week because I have finished reading a pre-publication draft of “Health Insurance and Mortality in US Adults” by five Harvard MDs (Amer. J of Pub Health, December 2009) and that paper has really got my dander up. Some of the authors of this paper also gave us the “half of all bankruptcies are due to health spending” paper. I have a paper and blog post forthcoming on the latter claim, but the journal has placed a pre-publication embargo on it.
This time the doctors from Harvard claim that lack of insurance causes 44,000 additional deaths annually. Let me stipulate that being uninsured in the U.S. stinks. We need to find a way to cover as many Americans as possible. Moreover, the figure of 44,000 additional deaths from lack of insurance might be correct. So might a figure of 10,000 or fewer. Or 70,000 or more. As far as I can tell, all of these figures lie within the confidence interval in the Harvard study.
As it turns out, a figure of 0 additional deaths might also be correct. That is because the Harvard study uses a methodology that is inherently biased. (Note to the authors of the study: I did not say that you were inherently biased. I said that your methodology is biased. On the off chance that you see this blog and care to respond, let’s keep the discussion centered on methodology. I have had enough of your personal attacks.)
Now I have to get a bit technical. In regression and related analyses, a critical assumption is that unobserved factors affecting the outcome are uncorrelated with the included regressors, in this case with insurance status. Translation: if the regression model does not include every factor that might predict mortality, and even one of the omitted factors is correlated with insurance status, then the reported coefficient on insurance status is biased. This is an onerous requirement for sure, but it must be met if bias is to be avoided. Without this full set of variables, and in the absence of a randomized experimental design, it is still possible to avoid bias by using advanced statistical techniques such as “instrumental variables” regression. But the Harvard study does not use this technique.
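A toy simulation makes the point concrete. This is entirely my own construction, not the study’s data or model: I build a world where insurance has, by design, zero causal effect on mortality, but where an unobserved confounder (call it “health”) makes people both more likely to be insured and less likely to die. A regression that omits the confounder finds a large, spurious “insurance effect” anyway.

```python
# Omitted variable bias, illustrated with fake data. The true causal effect
# of insurance on mortality is set to ZERO below, yet the regression that
# omits the confounder "health" reports insurance as strongly protective.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

health = rng.normal(size=n)                                # unobserved confounder
insured = (health + rng.normal(size=n) > 0).astype(float)  # healthier people are more likely insured
true_effect = 0.0                                          # insurance does NOTHING here, by construction
mortality = 0.5 - 0.3 * health + true_effect * insured + rng.normal(size=n)

def coef_on_first(y, *regressors):
    """OLS with an intercept; return the coefficient on the first regressor."""
    X = np.column_stack([np.ones(len(y))] + list(regressors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

print(coef_on_first(mortality, insured))          # omits health: large negative, looks "protective"
print(coef_on_first(mortality, insured, health))  # includes health: near zero, the truth
```

The first regression is not wrong about the correlation; it is wrong about causation, because insurance status is standing in for the omitted health variable.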
The implication is that their results are biased. We can even guess at the direction of the bias. They started with a few control variables and the estimated impact of being uninsured was much bigger than 44,000 lives lost. When they added additional control variables, the estimated impact fell. It is plausible to suppose that with additional controls, the estimated impact would shrink further, perhaps to the point of statistical insignificance. I don’t know that for sure. No one does.
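The shrinking-coefficient pattern can also be reproduced in a simulation of my own (again, fake data, not the study’s): give insurance zero true effect, but let two correlated confounders, “health” and “income,” both raise the odds of being insured and lower mortality. Each control added moves the estimated insurance coefficient toward zero, exactly the trajectory described above.

```python
# Fake-data illustration of the attenuation pattern: the true insurance
# effect is ZERO, but each added control shrinks the spurious estimate.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

health = rng.normal(size=n)
income = 0.7 * health + 0.7 * rng.normal(size=n)                    # income correlates with health
insured = (health + income + rng.normal(size=n) > 0).astype(float)  # both raise odds of insurance
mortality = -0.4 * health - 0.2 * income + rng.normal(size=n)       # insurance absent: true effect is zero

def insurance_coef(*controls):
    """OLS of mortality on insurance plus the given controls; coefficient on insurance."""
    X = np.column_stack([np.ones(n), insured, *controls])
    beta, *_ = np.linalg.lstsq(X, mortality, rcond=None)
    return beta[1]

print(insurance_coef())                # no controls: large spurious "effect"
print(insurance_coef(income))          # one control: estimate shrinks
print(insurance_coef(income, health))  # both controls: estimate near zero
```

Of course, nothing guarantees the Harvard estimate would shrink all the way to zero with more controls; the simulation only shows that the observed pattern is exactly what omitted confounders would produce.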
In academia, the numbers matter. Politicians use our numbers as if they actually mean something. We must strive to use research methods that generate unbiased results. The Harvard study does not meet this simple, unobjectionable criterion.