CEPR - Center for Economic and Policy Research



Honest Piece by Casey Mulligan on Medicaid Expansion

Wednesday, 26 June 2013 05:15

Some applause please for Casey Mulligan. Mulligan has been a strong opponent of the Affordable Care Act and the expansion of Medicaid provided under the act. However, he used his column today to dispel a misunderstanding of a study of the health impact of increased Medicaid enrollment in Oregon.

The study was written up in an article in the New England Journal of Medicine, which noted that the study found no statistically significant impact of Medicaid enrollment on health. However, Mulligan makes the point that the study actually did find that the people enrolled in Medicaid had improved health by several important measures. While the improvements were not large enough to meet standard tests of statistical significance, this does not mean that they were not important. As Mulligan notes, given the limited number of people in the study and the relatively short time frame (two years), it would have been highly unlikely that the study could have found statistically significant gains in health outcomes.

Mulligan deserves credit for clarifying this point, especially when the implications seem to be directly at odds with his view of the policy. It would be great if debates on economic policy were always like this. 



Addendum:

I'm glad to see that I have people knowledgeable about statistics reading this blog. Since I was apparently too quick in my post, and folks apparently did not read the Mulligan piece or the study, let me be a bit clearer. The study had very little power: there were not enough people in it. As a result, relatively few participants had any specific condition, which meant that it would be almost impossible to find statistically significant results.

To see this point, suppose we chose 100 people at random for a study to determine whether drug X was effective in preventing heart attacks. We gave 50 people drug X and the other 50 a placebo. After a year, two people in the placebo group had a heart attack, but only one person in the treatment group did. This is a nice result, but almost certainly not statistically significant. Since we had not selected people with heart conditions, and heart attacks are relatively infrequent in the population as a whole, it would have been almost impossible to get a statistically significant finding.
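
To put numbers on this hypothetical, a one-sided Fisher's exact test can be computed for exactly this 2x2 table using only the Python standard library (a sketch of the textbook calculation on the made-up drug-X counts, not anything from the Oregon study):

```python
from math import comb

def fisher_exact_one_sided(a, b, c, d):
    """One-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    probability of `a` or fewer events in the treatment row, given the margins."""
    n = a + b + c + d          # total subjects (100)
    events = a + c             # total heart attacks across both groups (3)
    row1 = a + b               # treatment group size (50)
    denom = comb(n, row1)
    # hypergeometric lower tail: P(treatment events <= a) under the null
    return sum(comb(events, k) * comb(n - events, row1 - k)
               for k in range(0, a + 1)) / denom

# 1 heart attack among 50 on drug X, 2 among 50 on placebo
p = fisher_exact_one_sided(1, 49, 2, 48)
print(round(p, 3))  # → 0.5
```

The one-sided p-value is 0.5: at this sample size, a halving of the heart-attack rate in the treatment group is no more evidence than a coin flip, which is the power problem in miniature.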

That is the story of the Oregon study. It had some encouraging results. They were not statistically significant, but it would have been almost impossible given the design of the study to have statistically significant results. That was the point of Mulligan's piece -- and he is 100 percent right.

Comments (18)
written by xteeth, June 26, 2013 6:32
I am sorry to see you make this kind of statistical blunder. It is never the case that one can attribute support to a statistical result that doesn't meet the criteria set before the study is done. That is scientific cheating and denigrates not only the study done but all attempts at scientific proof. Shame.
written by MacCruiskeen, June 26, 2013 6:47
"While the improvements were not large enough to meet standard tests of statistical significance this does not mean that they were not important."

Actually, that is what that means. But that's okay, we don't really expect economists to understand how science works.
Commenters mistaken
written by BH, June 26, 2013 7:26
The study found what it found. The researchers were unable to determine whether the likelihood of the result being due to chance was less than 5 percent, but the study was too small to possibly meet that threshold. Positive effects were nevertheless found.
written by Ryan, June 26, 2013 8:35
Brad DeLong obsessed over this some time ago, fairly entertaining. I should think a subsequent study, of longer time frame and with more people, would be something all could agree to, if only to increase the power of the study.
written by skeptonomist, June 26, 2013 8:54
The study did find that Medicaid reduced "financial strain" on participants, so it has apparently had some demonstrable economic effect. The summary does not give the statistics on that (the paper is behind a paywall). If Medicaid reduced the incidence of emergency-room visits (not reported), it probably improved the overall efficiency of the health-care system. The long-term health parameters that were reported - selected because they are easy to measure - are only a part of the intended benefits of Medicaid. Probably its main purpose is to extend care for major acute and chronic illnesses without causing bankruptcy or resort to the emergency room.
experimental design
written by this is me being generous, June 26, 2013 9:21
A little but insufficient knowledge is a dangerous thing, as some of these comments show. The Oregon study's statistical inferences are worthless crap. Because the study ignored the basic principles of experimental design, the data was not powerful enough to draw a conclusion from, for various reasons. Doctors do this too frequently. How many times have I heard a statistician complain about a doctor collecting data that can't be cleaned enough to answer the question that the doctor spent hundreds of thousands, even millions, to answer? But the article got published in a top journal, so now that turd looks respectable in a tux.

Commenters: first read "How to lie with statistics", you got taken by one of the oldest tricks in the book
critical error is "encouraging results"
written by pete, June 26, 2013 9:34
There are no encouraging results, unless one is encouraged that Medicaid did no harm. Certainly there is no evidence that it did good. That is the point of the study, I think. Trying to get blood from this turnip just ain't going to work.

I hate presenters who state "I get the right sign but it is not statistically significant." This is a clear logical error...there is no "sign," it is not different than 0! Maybe this is why Dean does no peer reviewed research...
Non-statistical significance does not mean failure
written by Jennifer, June 26, 2013 10:23
It is absolutely true that the study was underpowered, so nobody should be taking definitive policy directions from it. It is perfectly reasonable to take the outcomes as suggestions, and there were several promising things in this study. Improvements in mental health and blood pressure were something. Most importantly, the fact that having Medicaid helped financially is important, as you could argue this is the fundamental reason why anybody gets any kind of insurance. This is a good summary of the study from TIE, which is a blog anybody interested in health care policy issues should be reading.
written by skeptonomist, June 26, 2013 1:03
Actually the study reported on things that can be measured in everyone, including "blood-pressure, cholesterol, and glycated hemoglobin levels", so it is not unreasonable to expect a significant difference with a sample of thousands - the difference was just below significance, which is a result of sorts. They did not really try to evaluate uncommon conditions, even those as frequent as heart attacks - these require larger samples. As it turned out, the results were not significant, but again, Medicaid was never really intended to improve cholesterol levels or even necessarily to reduce the incidence of heart attacks, but to ensure that everyone got basic care without financial disaster.

Probably they did get data on heart attacks, but did not mention this in the preview because the results were far from significant.
A relatively meaningless result can be more significant than an encouraging one
written by David M, June 26, 2013 2:10
The case of Oomph vs. Precision may be an issue here.
written by MacCruiskeen, June 26, 2013 3:37
re: your addendum: if that is all there was to it, then you were still wrong to say the results were important.
written by NWsteve, June 26, 2013 10:49
it would appear from all of these mostly sincere but still didactic statements that:
1.: "statistical-significance" is one measure.
2.: "significance", at any other level, is quite another measure.
both have the potential for validity, or not...usage, definition, and context become quite important...

thank you very much to **DAVID M** for his very appropriate link--it was most educational, humorous, and right-on-point for this whole discussion...recommended reading for all...

Dr. Baker: incomplete dissenters notwithstanding: thank you for your "Addendum"...
(suggestion for future addenda: would it be possible to "time-stamp" them?)
written by watermelonpunch, June 26, 2013 11:18
The results were probably important to the winners who got to go to the doctor without having to short their landlord on the rent. In a sense, I would say that some landlords & grocers were also winners of the Oregon Medicaid lottery.

Even really crappy access is better than no access at all. And even the most crappy insurance is better than complete financial ruin.

If you were stranded on a deserted island - would you prefer to have access to an emotionally questionable surgeon with father issues... or no doctor at all?
Enough said.

Has Medicaid perhaps led one woman to finally get an appointment with a gynecologist and find out if she has ovarian cancer? Perhaps early enough to treat it? Or at least give her some peace that she's not in immediate danger of dying young like her grandmother did?

Did it save one 50-year-old man who paid for private insurance for 20 years, only to lose it in the recession, from a $26,000 bill for a life-saving appendectomy that would've cost $10,000 or less in much of Europe?

Did the Oregon lottery provide one 12-year-old with a $50 tooth extraction that, left untreated, could've led to $250,000 in unpaid brain surgery & death otherwise?

And how many family members & co-workers benefited from the people who finally got their depression treated?

Did a neighborhood benefit from someone who won the lottery & finally got their schizophrenia managed again?

If the Oregon study can't tell you about this shit... then it doesn't tell you much of anything regarding the reason for Medicaid.

I recognize that scientifically, the statistical analysis of the Oregon Medicaid study is just a whole lot of not enough of anything to prove a negative or whatever...

That said, I think the whole exercise of opining on that is a case of missing the forest for the trees.

The roads in Pennsylvania are ATROCIOUS.
If someone suggested the solution to the problem of the horrendous roads & failing bridges, is to just completely disregard the roads & let people find ways to traverse the terrain if they're determined to travel or commute... the person who suggested that would be called insane, surely.
Yet that's exactly what some people seem to be considering regarding health care for scores living in our communities.

You know we're living in a society here.
Thomas Edsall at NYTimes has a long article claiming the inequality is DEcreasing!
written by A Populist, June 26, 2013 11:51
Seriously - this looks like some press that needs a beating!
written by Tim Bartik, June 27, 2013 7:59
Some of the points made by commenters reflect confusion about how to interpret statistical results.

It is true that if results are not statistically significantly different from zero, we cannot use them to argue that the evidence clearly indicates that the policy has large effects.

On the other hand, if the point estimate implies large results, or alternatively, the confidence interval includes effects that are large, the results also cannot be used to argue that the evidence clearly indicates that the policy does NOT have large effects.

In other words, the results are inconclusive.

In fact, if the point estimate is large, then it is somewhat more likely that the effect is large than that the effect is small or zero. However, this "somewhat more likely" is not ENOUGH more likely for us to judge it statistically significant at conventional levels.

Some of the commenters instead are taking the perspective that all policy effects are to be deemed zero unless proven otherwise. This is a stance that unduly privileges a point estimate of zero over other possible point estimates. There is no reason to do this unless there are strong theoretical grounds to prefer a point estimate of zero.

All that estimates tell us is some probability distribution of possible policy effects based on the sample evidence. If the point estimate is "large" but the confidence interval includes zero, then there is a non-negligible probability that either "large" or "zero" are the true policy effect.
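
This can be illustrated with the hypothetical drug-X numbers from the addendum (made-up counts, and a normal approximation that is itself strained at this sample size): the point estimate of the risk difference is large relative to the baseline risk, yet the 95 percent confidence interval comfortably includes zero.

```python
from math import sqrt

# hypothetical drug-X example: 2/50 heart attacks on placebo, 1/50 on treatment
n = 50
p_placebo, p_treat = 2 / n, 1 / n

diff = p_placebo - p_treat                    # point estimate: risk cut in half
se = sqrt(p_placebo * (1 - p_placebo) / n +   # standard error of the difference
          p_treat * (1 - p_treat) / n)
lo, hi = diff - 1.96 * se, diff + 1.96 * se   # 95% confidence interval

print(f"estimate {diff:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

The interval is consistent with zero effect, but also with an absolute risk reduction several times larger than the point estimate. The data cannot distinguish these possibilities, which is what "inconclusive" means.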
written by watermelonpunch, June 27, 2013 8:52
Some of the commenters instead are taking the perspective that all policy effects are to be deemed zero unless proven otherwise. This is a stance that unduly privileges a point estimate of zero over other possible point estimates.

I thought of that too. It sounds a bit like some perverse version of the gambler's fallacy. And the arguments remind me of debates about the existence of god among students in my Philosophy 101 class two decades ago.

"Econ Talk" had 2 podcasts about the Oregon study. I thought they were pretty informative about the issues involved.
written by Ken Schulz, June 30, 2013 4:32
xteeth, MacCruiskeen, Raven, pete: There is a grain of truth in your arguments. The logic of the Null Hypothesis Significance Test requires one to pretend that if the test statistic does not fall in the rejection region, nothing can be learned from the data. The proper conclusion to come to is that the NHST is a lousy way to do science.

Dr. Baker, Prof. Mulligan, Jennifer, BH, Tim Bartik and I do not have to play by these silly rules. We're free to consider the evidence from a Bayesian, or information-theoretic, point of view, both of which are much more efficient at extracting information from datasets. In that case, these positive results provide (albeit weak) support for a benefit from increased Medicaid enrollment.

If you are interested in understanding this better, I suggest you read up on Bayesian statistics, or the Theory of Signal Detection (essentially the logic of the NHST, but correctly and rigorously developed, so that it makes use of all available information).

this is me being generous: Actually, these researchers had one of the most powerful tools of experimental method: random assignment, which is unusual for this kind of field study. The lack of _statistical_ power was due to insufficient n. Nothing to do with data contamination/cleaning, probably everything to do with budget. Do you have anything to back up your bald assertions?
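
As a sketch of the Bayesian alternative on the same hypothetical drug-X counts (the uniform Beta(1, 1) priors are an assumption, and this is not the Oregon analysis), the posterior probability that the treatment lowers the event rate can be estimated by Monte Carlo:

```python
import random

random.seed(0)

# posteriors with uniform Beta(1, 1) priors:
# treatment: 1 event in 50  -> Beta(1 + 1, 1 + 49)
# placebo:   2 events in 50 -> Beta(1 + 2, 1 + 48)
samples = 200_000
wins = sum(
    random.betavariate(2, 50) < random.betavariate(3, 49)
    for _ in range(samples)
)
prob = wins / samples
print(f"P(treatment rate < placebo rate) ~ {prob:.2f}")
```

The probability comes out well above one-half (roughly two-thirds with these counts): weak evidence for a benefit, rather than no information at all, which is the point being made above about insignificant-but-positive results.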


About Beat the Press

Dean Baker is co-director of the Center for Economic and Policy Research in Washington, D.C. He is the author of several books, his latest being The End of Loser Liberalism: Making Markets Progressive. Read more about Dean.