Comments on "01 and the universe: Cell-phones and the Brain" (blog by Owen Swart)

Comment — Annika, 2011-02-25 21:06 (+02:00):

Owen, you’re very probably right that the article, via its title, is alarmist and sensationalist. If you are right, though, it’s not because your intuitive grasp of epidemiological methods and their use of statistics is necessarily satisfactory. You’d be right because you got lucky, not because your reasoning is sound.

The first point to note is that, provided certain requirements are met, a sample of 47 can be sufficient to demonstrate quite convincingly that a significant correlation exists between a test variable and a response variable. It is spurious to dismiss a study purely on the grounds that it involves only a few subjects. (Where, indeed, is the cut-off that separates sufficiency from insufficiency?) Perhaps the most important (and most difficult) requirement in this regard is being certain that the sample was randomly drawn (and, where relevant, randomly separated into test and control groups). In this case volunteers were used, and it isn’t clear how their involvement was solicited, so a sample-selection bias is certainly possible.

The second point to consider is that a difference of “just” 7% in observed effect between control and test conditions doesn’t mean the difference can be dismissed as minor or due to chance. If the measurement error of the response variable (brain glucose metabolism) is small, a difference of a few per cent can be significant, provided the spread in observed values under both the test and the control conditions is small.
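The point about spread can be sketched numerically. This is a hypothetical illustration only: the sample size matches the study (47 subjects), but the uniform ranges are the comment's made-up example numbers, not real data, and the one-sample t statistic is a standard stand-in for whatever test the study actually used.

```python
# Hedged sketch: the same +7% average change is far stronger evidence when the
# per-subject spread is narrow. All numbers here are illustrative, not study data.
import math
import random
import statistics

random.seed(1)
n = 47  # subjects, as in the study under discussion

def t_statistic(changes):
    """One-sample t statistic for the mean per-subject change vs. zero."""
    m = statistics.mean(changes)
    s = statistics.stdev(changes)
    return m / (s / math.sqrt(len(changes)))

narrow = [random.uniform(4, 10) for _ in range(n)]    # +4%..+10%, mean about +7%
wide = [random.uniform(-10, 24) for _ in range(n)]    # -10%..+24%, mean about +7%

print(f"narrow spread: mean={statistics.mean(narrow):.1f}%, t={t_statistic(narrow):.1f}")
print(f"wide spread:   mean={statistics.mean(wide):.1f}%, t={t_statistic(wide):.1f}")
# Both averages sit near +7%, but the narrow-spread t statistic is several
# times larger, so the same average change is much more convincing there.
```

The point is that the t statistic divides the mean change by its standard error, so shrinking the spread inflates the evidence even though the average is unchanged.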
To illustrate: suppose the experiment showed a certain average for the response variable, with little variation, before the test variable (cell-phone EM exposure) was introduced, and that, upon introduction of the test variable, all (or nearly all) subjects showed an increase in the response variable, again with very little variation. Then the test can be taken as showing a real phenomenon. To be even more specific: a test group responding with an average increase of +7% that varied uniformly across the group between +4% and +10% would be much more convincing than an average change of +7% that varied uniformly between −10% and +24%. In the latter case the spread of observed values is much wider. I hope the principle here is clear enough.

Statistics has precise methods for quantifying the significance of an effect, taking the spreads of values (technically, “variances”) and their distribution types into account. A further point to bear in mind is that most epidemiological studies are conducted at a significance level of α = 0.05, which means that, on average, one in twenty studies of a truly non-existent effect will still throw up a false positive.

— Annika

Comment — Annika, 2011-02-25 20:52 (+02:00): This comment has been removed by the author.