The New York Times ran a story yesterday, highlighting the findings of a paper coming out in the journal Food and Chemical Toxicology. The study reports that rats fed genetically modified corn developed more tumors and died more quickly than rats not fed genetically modified corn.
The study will no doubt ignite a firestorm on par with the Monarch butterfly scandal a decade ago (in that episode, Cornell researchers originally reported butterflies being killed by crops containing the Bt gene, but later studies published in the Proceedings of the National Academy of Sciences concluded the effects were negligible).
In many ways, I applaud the efforts of the French scientists conducting the research. This is how science is done. Publish a result. Be upfront and honest with the methods. Others will see if they can replicate.
That said, a reasonable person must interpret these new results in light of the existing knowledge on the science of eating GM foods. The new study did not appear in a vacuum, and there are a large number of similar studies finding no such effects from eating GM food. Given this large baseline of previous research, we can't expect the present study to have much influence on our prior beliefs. This is especially true in light of the fact that the statistical analysis used by at least some of these authors has been questioned before by none other than the European Food Safety Authority, and that the supposed causal mechanism between the effects the authors report and the genes involved in conveying resistance to herbicide seems, to me, highly speculative at best.
I am not an expert on rat feeding trials. But the first thing that stood out to me about this study was the very small sample size: for each gender, there are only 10 rats per treatment group. It would be difficult, if not impossible, to publish an experimental paper in an economics journal with such a small sample size. Why? Because with such a small sample you can never really be sure whether the outcomes observed are simply due to chance.
Using a standard sample size calculation, we find that with a sample of 10 individuals, the margin of error on a dichotomous variable (like whether a tumor is present or not) is over 30%. That means, assuming the researchers found that 50% of rats had a tumor, that if we repeated the study over and over and over, 95% of the time we'd expect to find tumor rates between roughly 20% and 80%. In other words, we cannot have much confidence that the effect the authors observe is "really there" rather than simply due to chance.
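For readers who want to check the arithmetic, here is a minimal sketch of that margin-of-error calculation in Python. It uses the standard normal approximation for a sample proportion; the 50% tumor rate is the worst-case (widest-interval) assumption, not a figure from the paper.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sample proportion (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case: an observed tumor rate of 50% in a treatment group of 10 rats
moe = margin_of_error(0.5, 10)
print(f"margin of error: {moe:.1%}")                      # about 31%
print(f"95% interval: {0.5 - moe:.0%} to {0.5 + moe:.0%}")  # roughly 20% to 80%
```

With n = 10 the margin of error is about 31 percentage points, so almost any observed tumor rate is statistically compatible with the data. (The normal approximation is itself shaky at n = 10; an exact binomial interval would be wider still, which only strengthens the point.)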