It’s likely you know someone who has bought into the notion that nutrition is everything, the source of all health and the cause of all illness. Nutrition is very important, to be sure, but it is only one of many possible causes of disease, and if you live in a Western industrialized nation you probably have adequate nutrition. The notion that food can heal, however, is powerfully alluring, and it makes great headlines. The result is that people who follow the headlines for the latest food to avoid, or the latest ingredient that will make them live longer or stave off disease, seem to have an association for every food. Eating with them means being constantly told that food X is good for you and will prevent Y, or that some other food should be avoided because it causes Z.
Red peppers will help prevent cancer and help you lose weight. Garlic will help prevent heart disease and aid iron metabolism. Cayenne pepper prevents strokes. Peaches prevent heart disease and cancer. In fact, think of any food at random, type its name plus “health benefits” into Google, and chances are you will be rewarded with a list of the amazing health benefits of whatever food you chose.
My usual response when offered such advice is – you know, food is healthy for you. I recommend you eat food every day. Food is full of nutrition, essential vitamins and minerals, and will give you energy. If you don’t eat food, your health will dramatically suffer. But don’t eat too much food – that’s not healthful.
It is helpful to have published evidence and statistics to back up my casual observation – that nearly all foods are touted as having health benefits or risks. Earlier this year Schoenfeld and Ioannidis did just that. They selected 50 common ingredients at random out of cookbooks, then scoured the literature looking for studies showing an association (positive or negative) with cancer. They found that 80% of the ingredients had such published studies:
Forty ingredients (80%) had articles reporting on their cancer risk. Of 264 single-study assessments, 191 (72%) concluded that the tested food was associated with an increased (n = 103) or a decreased (n = 88) risk; 75% of the risk estimates had weak (0.05 > P ≥ 0.001) or no statistical (P > 0.05) significance. Statistically significant results were more likely than nonsignificant findings to be published in the study abstract than in only the full text (P < 0.0001). Meta-analyses (n = 36) presented more conservative results; only 13 (26%) reported an increased (n = 4) or a decreased (n = 9) risk (6 had more than weak statistical support). The median RRs (IQRs) for studies that concluded an increased or a decreased risk were 2.20 (1.60, 3.44) and 0.52 (0.39, 0.66), respectively. The RRs from the meta-analyses were on average null (median: 0.96; IQR: 0.85, 1.10).
These results mean a few things. First – about 60% of all food ingredients will either increase or decrease your cancer risk, if you believe the studies included in this review. That is a lot of ingredients to keep track of – imagine designing your diet around this information, and this is only for cancer specifically. However, meta-analysis decreased that number by about 2/3, to about 20%. That’s a bit more manageable, but still quite a large number of ingredients.
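The rough percentages above can be checked directly against the numbers quoted in the abstract. A quick sketch (the rounding to “about 60%” and “about 20%” is approximate):

```python
# Figures quoted from the Schoenfeld & Ioannidis abstract
total_ingredients = 50
with_cancer_articles = 40          # ingredients with published cancer-risk articles (80%)
single_study_assessments = 264
concluded_risk = 191               # concluded increased (103) or decreased (88) risk
meta_concluded_risk = 13           # meta-analyses still reporting a risk effect

# 80% of ingredients had at least one published cancer association
print(with_cancer_articles / total_ingredients)              # 0.8

# ~72% of single-study assessments claimed an effect
print(concluded_risk / single_study_assessments)             # ~0.72

# Share of all 50 ingredients implicated by single studies:
# 80% had articles, and ~72% of assessments claimed an effect
print(0.80 * concluded_risk / single_study_assessments)      # ~0.58, "about 60%"

# After meta-analysis, only 13 of the 50 ingredients retain a claimed effect
print(meta_concluded_risk / total_ingredients)               # 0.26, roughly "20%"
```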
Fortunately (well, depending on your perspective) you can probably ignore most of this data. Most of the studies were one-off studies, and 2/3 of those that were replicated did not fare well according to the meta-analysis. Further, study replication tended to diminish the magnitude of the relative risk (either increase or decrease).
This is a pattern that has been found generally in the literature, including by Ioannidis himself. Preliminary studies tend to have a large effect size which diminishes throughout replication and with more rigorous studies. Often that effect size declines to zero. This phenomenon is called “the decline effect”, and is at least in part due to publication bias.
Generally speaking, preliminary studies tend to have a positive bias, caused by both a researcher bias and a publication bias. With replication and more rigorous studies, most of these preliminary (and positive) results are discovered to be wrong (Ioannidis again).
Those effects that survive replication tend to diminish in magnitude (the decline effect). The simplest explanation for this is regression to the mean – the tendency for any extreme result (which is likely to get published) to return to a more statistically average result. Further, practitioners are probably adopting interventions too early, before they have finished declining in the research, and are subject to later reversals with more rigorous data.
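A toy Monte Carlo simulation makes this mechanism concrete (the effect sizes here are made up for illustration and have nothing to do with the actual study): if only “statistically significant” results get published, the published estimates are systematically inflated, and unfiltered replications regress back toward the true effect:

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.1   # the (small) real effect -- hypothetical value
SE = 0.5            # standard error of each small, noisy study

def run_study():
    # Each study's estimate is the true effect plus sampling noise
    return random.gauss(TRUE_EFFECT, SE)

# Simulate many initial studies; only "significant" ones get published
initial = [run_study() for _ in range(10_000)]
published = [e for e in initial if abs(e) > 1.96 * SE]

# Replicate each published finding with a fresh study, no publication filter
replications = [run_study() for _ in published]

print(f"true effect:                  {TRUE_EFFECT}")
print(f"mean |published effect|:      {statistics.mean(map(abs, published)):.2f}")
print(f"mean |replication effect|:    {statistics.mean(map(abs, replications)):.2f}")
```

The published estimates must exceed the significance cutoff by construction, so their average magnitude is far above the true effect; the replications, facing no such filter, average out close to the truth. That gap is the decline effect in miniature.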
In this current study on food and cancer, we are seeing all of these same effects, just applied to the relative risk of cancer associated with specific foods and ingredients. There is a large volume of preliminary and low grade evidence, most of it shows a positive result (an association), and most of that does not survive later replication.
Also reassuring is that when you look at the relative risk of various foods and cancer as revealed in the meta-analysis, they all average out. As the authors say – “The RRs from the meta-analyses were on average null.”
This can mean one of two things (or both): either there is some risk or benefit to certain foods, but all these effects average out to zero, or we are just seeing a scatter of random results in the literature that average out to zero – meaning the results themselves may not be real.
Either way, it seems to me that these results suggest it is probably a waste of time to obsess over the health risks and benefits of every food and ingredient.
The more research we do on the health effects of food the more the aggregate of this research supports the following simple rule: eat a variety of food, don’t eat anything to excess, and like your mother said, make sure you eat your vegetables.