Odds Are, It’s Wrong

There’s a good article by Tom Siegfried in Science News about what’s wrong with statistics.

Take the idea of statistical significance. Much of social science is built on the (very arbitrary) convention that a correlation counts as real only if there’s less than a 5% chance of seeing a result like it when nothing is actually going on.

[And as any sociology grad student knows, if you run 20 random regressions, one of them, on average, will come up “significant” at the p < .05 level. This is why quantitative methods are no substitute for having a brain.]
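
If you want to see this in action, here’s a rough little simulation in Python (assuming numpy and scipy are handy; the setup is mine, not anything from the article) that regresses pure noise on pure noise 20 times and counts how many runs come up “significant”:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

false_positives = 0
for _ in range(20):
    x = rng.normal(size=100)   # a predictor that is pure noise
    y = rng.normal(size=100)   # an outcome that is pure noise
    result = stats.linregress(x, y)
    if result.pvalue < 0.05:   # the magic "significance" threshold
        false_positives += 1

print(f"'Significant' regressions out of 20 pure-noise runs: {false_positives}")
# On average about 1 of the 20 clears p < .05 even though nothing is going on.
```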

More counter-intuitive is the idea that a 5% chance that the findings are random is most definitely not the same as saying there’s a 95% chance the result is accurate.

Take drug testing. You pay some private company too much money to test 400 people. 400 people go to some office and pee in a cup. 38 test positive. So what percent take drugs? You might guess that 38 positives mean 38 people (or 9.5%) take drugs. But you’d be very wrong. Oh, those stats… they are slippery!

The answer is we don’t know. The results by themselves mean very little. We need more information. And there are two questions that might be asked here: 1) what is the level of drug use overall and 2) does a certain individual take drugs. The former is a bit easier because you can adjust for errors. But when you’re talking about an individual, there is very little room for error.

Let’s say we knew the tests are 95% accurate (a big if). If there were 40 drug takers, the test would correctly come back positive for 38 of those 40. So if we got 38 positives out of 400, could we say that drug use overall is 40 out of 400, or 10%? No. That’s not how it works in the real world, because the test also falsely flags some of the people who don’t take drugs, and those false positives get mixed in with the real ones.

Now let’s say we know there are 20 drug users out of 400. Of course in the real world it’s hard to imagine knowing this “prior probability” before you gave a drug test. And let’s say we also knew that the test was 95% accurate. If we knew all this, then we could say that of the 20 who take drugs, 19 would test positive. And of the 380 who do not take drugs, another 19 would also test (falsely) positive.

So after the tests, the people making money looking at pee would report that based on their test (which they advertise is “95% accurate!”), 38 of 400 people tested positive for drugs. But we started this based on the assumption that there are only 20 drug users! Of these 38 “positive” results, only 50% are actually drug takers. Half. And that’s a 50% error rate for a test that is “95% accurate”!
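
If you’d rather make the computer do that arithmetic, here’s a tiny sketch of the same calculation in Python, assuming “95% accurate” means the test is right 95% of the time for users and non-users alike:

```python
users, total = 20, 400
accuracy = 0.95  # assumed to apply equally to users and non-users

true_positives = users * accuracy                    # 20 * 0.95 = 19 real users caught
false_positives = (total - users) * (1 - accuracy)   # 380 * 0.05 = 19 innocents flagged
all_positives = true_positives + false_positives     # 38 "positives" reported

share_real = true_positives / all_positives
print(f"{all_positives:.0f} positives, of which only {share_real:.0%} are actual users")
# -> 38 positives, of which only 50% are actual users
```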

Look at it this way: if nobody among the 400 took drugs, 20 people (5%) would still test positive!

Want your money back?

Now if you just cared about the overall usage rate, you could make a simple little table that tells you, based on the accuracy of the tests and the total number of positive results, what percentage of the group actually takes drugs. (Though keep in mind that as long as fewer than half the group actually uses drugs, a test like this will always report more “positives” than there are actual users.) Such data would be useful for researchers and Major League Baseball.
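
For the curious, here’s one rough way that table could be built in Python, again assuming the test is 95% accurate in both directions (the function name is just for illustration):

```python
def true_usage_rate(observed_positive_rate, accuracy=0.95):
    """Invert  observed = p*accuracy + (1 - p)*(1 - accuracy)  to recover p,
    the group's actual usage rate, from the share of people testing positive."""
    return (observed_positive_rate - (1 - accuracy)) / (2 * accuracy - 1)

for positives in (20, 38, 60, 100):
    rate = positives / 400
    print(f"{positives:>3} positives out of 400 -> roughly {true_usage_rate(rate):.1%} actual users")
# 38 positives out of 400 comes back as roughly 5.0% actual users -- the 20 from the example above.
```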

But the tests don’t tell you which of those who test positive are actually guilty. This, as you can imagine, is a big problem. Consider the employee not hired or the paroled man sent back to the joint for pissing hot.

Of course you could improve the drug test or re-test those who test positive. But even with a retest at 95% accuracy, 1 in 400 non-users (5% of 5%) would still falsely test positive twice. Sucks to be him.
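
A quick back-of-the-envelope sketch of that retest math in Python, assuming the two tests make their mistakes independently (which real labs may well not):

```python
false_positive_rate = 0.05

# Chance a non-user fails two independent tests in a row: 5% of 5%.
double_false_positive = false_positive_rate ** 2
print(f"Innocent but positive twice: 1 in {1 / double_false_positive:.0f}")   # 1 in 400

# Starting from the 38 positives above (19 users, 19 non-users), a retest keeps
# about 19 * 0.95 = 18.05 of the users and 19 * 0.05 = 0.95 of the non-users.
users_after_retest = 19 * 0.95
innocents_after_retest = 19 * 0.05
share_real = users_after_retest / (users_after_retest + innocents_after_retest)
print(f"Share of double-positives who actually use drugs: {share_real:.0%}")  # about 95%
```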

Back in the real world, at least in our free country, we continually re-test those who test negative. But if “positives” are only 50% accurate, what’s the point?

What does this mean for the world of science and statistics? “Any single scientific study alone is quite likely to be incorrect, thanks largely to the fact that the standard statistical system for drawing conclusions is, in essence, illogical.” Put that in your quantitative pipe and smoke it.

2 thoughts on “Odds Are, It’s Wrong”

  1. OK, I read it again, and it makes sense now. Thank you. 🙂

    This is one of the reasons I stay away from both history and statistics in my drug policy presentations. I'm sure both fields contain information that supports the need to end prohibition, but I don't trust myself with either of these topics. Mostly I focus on logic and my own experiences in law enforcement.
