
Wednesday, February 23, 2011

Cell-phones and the Brain

I know it's popular lately to pick on established institutions and point out how much they suck. I'm going to resist the urge to do that, and simply point out one example of how a generally good institution, Scientific American, seems to have lost the plot.

(Before I start, it's disclaimer time: although I no longer work for a cellular operator, I'm a fan of cellular technology in general. So while I may have some emotional bias in this area, I try very hard to be rational about it - it's not like my livelihood depends on it or anything.)

Unless you've been living under a rock for the last ten years, you've no doubt come across the claim that cell-phones are somehow bad for your health. The most popular claim is that they somehow cause brain cancer.

While there's still scope for long-term epidemiological studies to provide further insight, the best data we have so far suggests that there is no such effect. And if there is an effect, it's probably so small as to be virtually indistinguishable from chance.

Not only that, but the whole idea that cell-phones could do any kind of damage like that is highly implausible. They just don't work that way - the radio waves they emit are non-ionising, carrying far too little energy to break chemical bonds or damage DNA.

For a more comprehensive look at the issue, head over to Steve Novella's Neurologica Blog. He knows way more about this than I do.

However, data has never got in the way of a good health scare, and misinformed people have been making a fuss over the health risks of cell-phone radiation for as long as there have been cell-phones. And it's not a small fuss either - people have been losing their goddam minds over this. It's becoming a real problem!

Given that cell-phone health concerns are such a hot-button topic right now, why the hell would a publication with a reputation as a bastion of reason and rationality, such as Scientific American, publish a steaming pile of bullshit like this: "Cell phone emissions change brain metabolism - By Katherine Harmon"?

What's the Story?

In summary, the article reports on a study published in the Journal of the American Medical Association (how it got into that publication is a mystery to me). The study is an experimental one in which they expose healthy human subjects to cell-phone radiation for a bit, then run them through a PET scanner to see what effect it had on the brain's glucose metabolism. And guess what? They found an effect.

Wow! That's hectic, right?

No. It's bullshit, and here's why. Without going into any deep analysis on their results, I see two huge problems with this study.

Problem the First

This was an experimental study in which they recruited volunteers to participate. Not 100 000 volunteers. Not 10 000. Not even 1000. Those would be pretty good studies. No, they recruited 47 volunteers. Forty-seven.

I don't have to be a professional scientist to know that a sample of 47 is way too small to produce any meaningful results. All you need is one anomalous result and the whole graph is thrown out of kilter. That's not enough people!

Problem the Second

They found that the brain glucose metabolism in areas of the brain close to the active antenna was "significantly higher". What do they define as "significantly"? Seven percent. That's right, seven percent.

I don't have to be a professional statistician to know that 7% is not significant. It's around what you'd expect to see with random noise in the result. And considering they had such a small sample (forty seven!) that noise ratio would probably be even higher!

So What Am I Saying?

This study has produced practically nothing. At best this might be considered an interesting preliminary result, prompting further study. But I think even that would be generous. I don't think it's interesting at all - it looks like a negative result to me.

That said, I can't really fault the researchers here. Despite their willingness to speculate wildly on all sorts of ways cell-phones might be killing us, at least they're doing science (albeit bad science).

I don't even blame Katherine Harmon. While her piece was a little alarmist, she at least made an effort to include some sceptical opinion in there. While I think it leans a little too far towards the false balance side of things, at least there was balance of some sort.

No, the person I blame is the idiot Scientific American editor who put that sensationalist headline on the piece. It's clearly deliberately provocative. And thanks to that guy (or girl), the cell-phone radiation cranks will be all over this shit like teenage white girls on Justin Bieber. We won't hear the end of it!

Thanks for nothing, dumbass!

2 comments:

  2. Owen, you’re very probably right that the article, via its title, is alarmist/sensationalist. If you are right though, it’s not because your intuitive grasp of epidemiological methods and their use of statistics is necessarily satisfactory. You’d be right because you got lucky, not because your reasoning is sound.

    The first point to note is that, provided certain requirements are met, a sample of 47 can be sufficient to demonstrate quite convincingly that a significant correlation exists between a test variable and a response variable. It is spurious to dismiss a study purely on the grounds that it involves only a few subjects. (Where indeed is the cut-off that separates sufficiency from insufficiency?) Perhaps the most important (and most difficult) requirement is being certain that the sample was randomly drawn (and, where relevant, randomly separated into test and control groups). In this case, volunteers were used and it isn’t clear how their involvement was solicited, so a sample selection bias is certainly possible.

    The second point to consider is that a difference of “just” 7% in observed effect between control and test conditions doesn’t mean the difference can be dismissed as minor or due to chance. If the measurement error of the response variable (brain glucose metabolism) is small, a difference of a few per cent can be significant, provided the spread in observed values under both test AND control conditions is small. To illustrate: suppose the experiment showed a certain average for the response variable, with little variation, before the test variable (cell phone EM exposure) was introduced, and that upon its introduction ALL (or nearly all) subjects showed an increase in the response variable with very little variation. Then the test can be taken as showing a real phenomenon. To be more specific, if the test group responded with an average increase of +7% that varied uniformly across the group between +4% and +10%, that would be much more convincing than an average change of +7% that varied uniformly between –10% and +24%. In the latter case, the spread of observed values is much wider. I hope the principle here is clear enough.
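
    To make this concrete, here is a rough sketch in Python (using the illustrative numbers above, not anything from the actual study) of how the same +7% average carries very different evidential weight depending on the spread:

        # Illustrative only: hypothetical per-subject changes, not study data.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n = 47  # same number of subjects as the study

        # Scenario A: +7% average change, tightly spread between +4% and +10%
        tight = rng.uniform(4, 10, size=n)
        # Scenario B: +7% average change, widely spread between -10% and +24%
        wide = rng.uniform(-10, 24, size=n)

        for label, changes in [("tight spread", tight), ("wide spread", wide)]:
            # One-sample t-test: is the mean change different from zero?
            t, p = stats.ttest_1samp(changes, popmean=0.0)
            print(f"{label}: mean={changes.mean():.1f}%  t={t:.1f}  p={p:.2g}")

        # Both scenarios have roughly the same ~7% average, but the tight-spread
        # data yields a far larger t-statistic and a far smaller p-value, i.e.
        # much stronger evidence that the change is real rather than noise.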

    Statistics has precise methods for quantifying the significance of an effect, taking the spreads of values (technically, “variances”) and their distribution types into account. A further point to bear in mind is that most epidemiological studies are conducted at a significance level of α = 0.05, which means that, on average, about one in twenty studies of an effect that isn’t really there will throw up a false positive purely by chance.
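
    And to show what that α = 0.05 threshold means in practice, here is a small simulation sketch (again purely hypothetical noise values, nothing from the study):

        # Illustrative only: simulate many studies of a purely null effect and
        # count how often a test at alpha = 0.05 declares "significance" anyway.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        n_studies, n_subjects, alpha = 10_000, 47, 0.05

        false_positives = 0
        for _ in range(n_studies):
            # No real effect: per-subject "changes" are pure noise centred on zero
            changes = rng.normal(loc=0.0, scale=5.0, size=n_subjects)
            _, p = stats.ttest_1samp(changes, popmean=0.0)
            false_positives += p < alpha

        # Prints roughly 0.05, i.e. about one in twenty null studies comes up
        # "significant" purely by chance.
        print(f"false positive rate: {false_positives / n_studies:.3f}")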
