This study in the American Journal of Clinical Nutrition illustrates two related problems: 1) publication bias in the academic publication process (papers that find positive results are more publishable than papers that find negative ones), and 2) epidemiological studies built on cross-sectional data and simple regression analysis, which conflate correlation with causation.
The authors picked 50 common ingredients from random recipes in a cookbook and then scoured the published academic literature for studies purporting a link between each ingredient and cancer. Here is what they found:
Forty ingredients (80%) had articles reporting on their cancer risk. Of 264 single-study assessments, 191 (72%) concluded that the tested food was associated with an increased (n = 103) or a decreased (n = 88) risk; 75% of the risk estimates had weak (0.05 > P ≥ 0.001) or no statistical (P > 0.05) significance. Statistically significant results were more likely than nonsignificant findings to be published in the study abstract than in only the full text (P < 0.0001).
Here is one of the study's authors in the Washington Post:
"I was constantly amazed at how often claims about associations of specific foods with cancer were made, so I wanted to examine systematically the phenomenon,” e-mails study author John Ioannidis ”I suspected that much of this literature must be wrong. What we see is that almost everything is claimed to be associated with cancer, and a large portion of these claims seem to be wrong indeed.”
For reference, here are the 40 ingredients for which at least one cancer study turned up:
veal, salt, pepper spice, flour, egg, bread, pork, butter, tomato, lemon, duck, onion, celery, carrot, parsley, mace, sherry, olive, mushroom, tripe, milk, cheese, coffee, bacon, sugar, lobster, potato, beef, lamb, mustard, nuts, wine, peas, corn, cinnamon, cayenne, orange, tea, rum, and raisin
The conclusion isn't that nothing causes cancer. Rather, one should be wary of over-interpreting the results of one or two studies based on correlational analysis.
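To see how easy it is to manufacture spurious food-cancer “associations,” here is a minimal simulation sketch in Python. Everything in it is made up for illustration: the sample sizes, the baseline incidence, and the choice of test are assumptions, not anything taken from the study. It screens 50 null ingredients, none of which truly affects cancer risk, and counts how many clear p < 0.05 anyway.

import random
from statistics import NormalDist

random.seed(42)

N_FOODS = 50        # assumed: number of ingredients screened
N = 500             # assumed: subjects per exposure group
BASE_RATE = 0.10    # assumed: cancer incidence, identical in both groups (no true effect)
ALPHA = 0.05

def two_prop_p(x1, x2, n):
    # Two-sided p-value for a two-proportion z-test with equal group sizes.
    p1, p2 = x1 / n, x2 / n
    pooled = (x1 + x2) / (2 * n)
    se = (2 * pooled * (1 - pooled) / n) ** 0.5
    if se == 0:
        return 1.0
    z = (p1 - p2) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

spurious = 0
for _ in range(N_FOODS):
    # Under the null, "eaters" and "non-eaters" get cancer at the same rate.
    cases_eaters = sum(random.random() < BASE_RATE for _ in range(N))
    cases_non_eaters = sum(random.random() < BASE_RATE for _ in range(N))
    if two_prop_p(cases_eaters, cases_non_eaters, N) < ALPHA:
        spurious += 1

print(f"{spurious} of {N_FOODS} harmless foods look 'associated' with cancer at p < {ALPHA}")

At the 0.05 threshold you expect roughly 50 × 0.05 ≈ 2-3 chance hits even though no food has any effect; if only those hits make it into abstracts and headlines, the published record starts to look like almost everything causes cancer.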