Blog

Does Diet Coke Cause Fat Babies?

O.k., I just couldn't let this one slide.  I've seen the results of this study in JAMA Pediatrics discussed in a variety of news outlets with the claim that researchers have found a link between mothers drinking artificially sweetened beverages and the subsequent weight of their infants.

I'm going to be harsh here, but this sort of study represents everything wrong with a big chunk of the nutrition and epidemiology studies that get published and with how they're covered by the media.  

First, what did the authors do?  They looked at babies' weights one year after birth and at how those weights correlated with whether (and how much) Coke or Diet Coke the mom reported drinking, in a survey, during pregnancy.  

The headline result is that moms who drank artificially sweetened beverages every day in pregnancy had slightly larger babies, on average, a year later than the babies from moms who didn't drink any artificially sweetened beverages at all.  Before I get to the fundamental problem with this result, it is useful to look at a few more results contained in the same study which might give us pause.

  • Moms drinking sugar-sweetened beverages (in any amount) had no effect on infants' later body weights.  So drinking a lot of sugar didn't affect babies' outcomes at all, but drinking artificial sweeteners did?
  • The researchers only found an effect for moms who drank artificially sweetened beverages every day.  Compared to moms who never drank them, those who drank diet sodas less than once a week actually had lighter babies (though the result isn't statistically significant).  Also, moms drinking artificially sweetened beverages 2-6 times per week had roughly the same weight babies as moms who never drank artificially sweetened beverages.  In short, there is no evidence of the dose-response relationship one would expect to find if a causal relationship were at play.  

And, that's the big issue here: causality.  The researchers found a single statistically significant correlation in one of the six comparisons they made (three levels of drinking compared to none, for both sugar-sweetened and artificially sweetened beverages).  But, as the researchers themselves admit, this is NOT a causal link (somehow that didn't prevent the NYT editors from using the word "link" in the title of their story).  

Causality is what we want to know.  An expecting mother wants to know: if I stop drinking Diet Coke every day will that lower the weight of my baby?  That's a very different question than what the researchers actually answered: are the types of moms who drink Diet Coke every day different from moms who never drink Diet Coke in a whole host of ways, including how much their infants weigh?  

Why might this finding be only a correlation and not causation?  There are a bunch of possible reasons.  For example, moms who expect their future children might have weight problems may choose to drink diet instead of regular.  If so, the moms drinking diet have selected themselves into a group that is already likely to have heavy children.  Another possible explanation: moms who never drink Diet Coke may be more health conscious overall.  That attitude is likely to carry over to how they feed and raise their children, which will affect their children's weight in ways that have nothing to do with artificially sweetened beverages.

Fortunately, economics (at least applied microeconomics) has undergone a bit of a credibility revolution.  If you attend a research seminar in virtually any economics department these days, you're almost certain to hear questions like, "What is your identification strategy?" or "How did you deal with endogeneity or selection?"  In short, the question is: how do we know the effects you're reporting are causal effects and not just correlations?  

It's high time for a credibility revolution in nutrition and epidemiology.  

Lab grown meat

Quartz.com just ran a piece taken from one of the chapters of Unnaturally Delicious on lab grown meat.  Here's the start:

On Aug. 5, 2013, Mark Post went out to grab a hamburger. This was no drive-through Big Mac. Rather, Post bit into his $325,000 burger in front of an invitation-only crowd of journalists, chefs, and food enthusiasts in the heart of London.

The strangest part wasn’t the cost or the crowd but the meat. Post, a professor of vascular physiology at Maastricht University in the Netherlands, grew the burger himself. Not from a cow on his farm, mind you, but from a bovine stem-cell in a petri dish in his lab. Post’s research, partially funded by Sergey Brin, one of Google’s co-founders, has the potential to upend conventional wisdom on the environmental, animal welfare, and health impacts of meat eating.

Ironically enough, I first met Post at a meeting of some of the world’s largest hog producers.

The Quartz editors left out what I think is one of the most important points made in the chapter, about the relative inefficiencies of meat eating.  So, for the sake of completeness, here's the segment they left out (long-time readers will recognize that I've touched on this theme in previous blog posts).

****************************

More broadly, this line of argument – that meat production (inside the lab or out) is “wasteful” because it requires feed inputs that humans might use – is misplaced.  To see this, it is useful to consider a thought experiment – an imaginary story that might help us get to the bottom of things. 

Imagine a biologist on an excursion to the Amazon looking for new plant species.  She comes across a new grass she’s never before seen, and brings it back home to her lab.  She finds that the grass grows exceedingly well in greenhouses with the right fertilizer and soil, and she immediately moves to field trials.  She also notices that the grass produces a seed that is durable, storable, and extraordinarily calorie dense.  The scientist immediately recognizes the potential for the newly discovered plant to meet the dietary demands of a growing world population.

But, there is a problem.  Lab analysis reveals that the seeds are, alas, toxic to humans.  Despite the set-back, the scientist doesn’t give up.  She toils away year after year until she creates a machine that can convert the seeds into a food that is not only safe for humans to consume but that is incredibly delicious to eat.  There are a few downsides.  For every five calories that go into the machine, only one comes out.  Plus, the machine uses water, runs on electricity, burns fossil fuels, and creates carbon emissions. 

Should the scientist be condemned for her work?  Or, hailed as an ingenious hero for finding a plant that can inexpensively produce calories, and then creating a machine that can turn those calories into something people really want to eat?  Maybe another way to think about it is to ask whether the scientist’s new food can - despite its inefficiencies (which will make the price higher than it otherwise would be) - compete against other foods in the marketplace?  Are consumers willing to pay the higher price for this new food? 

Now, let’s call the new grass corn and the new machine cow. 

This thought experiment is useful in thinking about the argument that corn is “wasted” in the process of feeding animals (or growing lab grown meat).  Yet, the idea that food fed to animals is “wasted” is a common view.  For example, one set of authors in the journal Science wrote,

“Although crops used for animal feed ultimately produce human food in the form of meat and dairy products, they do so with a substantial loss of caloric efficiency. If current crop production used for animal feed and other nonfood uses (including biofuels) were targeted for direct consumption, ~70% more calories would become available, potentially providing enough calories to meet the basic needs of an additional 4 billion people. The human-edible crop calories that do not end up in the food system are referred to as the ‘diet gap.’”

The argument isn’t as convincing as it might first appear.  Few people really want to eat the calories that come directly from corn or other common animal feeds like soybeans.  Unlike in my hypothetical example, corn is not toxic to humans (although some of the grasses cows eat really are inedible to humans), but most people don’t want to eat field corn.

So if we don’t want to directly eat the stuff, why do we grow so much corn and soy?  They are incredibly efficient producers of calories and protein.  Stated differently, these crops (or grasses if you will) allow us to produce an inexpensive, bountiful supply of calories in a form that is storable and easily transported. 

The assumption seems to either be that the “diet gap” will be solved by convincing people to eat the calories in corn and soy directly, or that there are other tasty crops that can be widely grown instead of corn and soy which can produce calories as efficiently as corn and soy.  Aside from maybe rice or wheat (which also require some processing to become edible), the second assumption is almost certainly false.  Looking at current consumption patterns, we should also be skeptical that large swaths of people will want to voluntarily consume substantial calories directly from corn or soy.

What we typically do is take our relatively un-tasty corn and soy, and plug them into our machine (the cow or pig or chicken, or in Post’s case the Petri dish) to get a form of food we want to eat.  Yes, it seems inefficient on the surface of it, but the key is to realize that the original calories from corn and soy were not in a form most humans find desirable.  As far as the human palate is concerned, not all calories are created equal; we care a great deal about the form in which the calories are delivered to us.

The grass-machine analogy also helps make clear that it is probably a mistake to compare the calorie and carbon footprint of corn directly with the cow.  Only a small fraction of the world’s caloric consumption comes from directly consuming the raw corn or soybean seeds.  It takes energy to convert these seeds into an edible form – either through food processing or through animal feeding.  So, what we want to compare is beef with other processed foods.  Otherwise we’re comparing apples and oranges (or in this case, corn and beef).

The more relevant question in this case is whether lab grown meat uses more or less corn, and creates more or fewer environmental problems, than animal grown meat does.  

Restaurant Performance Index

I was recently made aware of the so-called Restaurant Performance Index (RPI) put out by the National Restaurant Association (NRA).  According to their website: 

The RPI is based on the responses to the National Restaurant Association’s Restaurant Industry Tracking Survey, which is fielded monthly among more than 400 restaurant operators nationwide on a variety of indicators including sales, traffic, labor and capital expenditures.

and

Index values above 100 indicate that key industry indicators are in a period of expansion, while index values below 100 represent a period of contraction for key industry indicators. The Index consists of two components – the Current Situation Index, which measures current trends in four industry indicators (same-store sales, traffic, labor and capital expenditures), and the Expectations Index, which measures restaurant operators’ six-month outlook for four industry indicators (same-store sales, employees, capital expenditures and business conditions).

I was a bit curious.  How well do some of the variables I'm tracking in my Food Demand Survey (FooDS) follow the NRA's RPI?  By looking back at past releases, I was able to pull together monthly data on the overall RPI, the current situation index, and the expectations index, and I merged these with data from FooDS on average reported spending on food away from home and changes in anticipated spending on food away from home each month.  

First, the good news.  Spending on food away from home (as measured by FooDS) seems to track closely with the RPI-current situation index.  

It probably isn't too surprising that the two are positively correlated, since two of the measures in the RPI-current situation index are same-store sales and traffic volume; however, it is still comforting to see my FooDS data roughly track this measure from the NRA.  One benefit of FooDS is that we release the data in a more timely manner.  Right now, the latest figures available from the National Restaurant Association are for March.  However, I already have a measure of April's away-from-home food expenditures from FooDS (it's $55.43).  A simple linear regression predicts that the NRA's current situation index will be 102.3 for April.  We're already fielding May's FooDS right now, so I can make an even more up-to-date forecast in a few days.
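For anyone who wants to replicate the idea, the forecast above is nothing fancier than a bivariate regression of the RPI-current situation index on the FooDS spending measure.  Here's a minimal sketch in Python; the monthly series below are hypothetical placeholders rather than the actual FooDS or NRA data, and the only number taken from above is April's $55.43 spending figure.

```python
# Minimal sketch of the forecasting exercise (not the actual analysis code).
# The two series below are hypothetical placeholders standing in for the
# monthly FooDS away-from-home spending measure and the NRA's current
# situation index, matched by month.
import numpy as np

foods_spending = np.array([53.1, 54.2, 52.8, 55.0, 53.7, 54.9])     # placeholder: FooDS spending ($)
rpi_current = np.array([100.8, 101.9, 100.2, 102.5, 101.1, 102.0])  # placeholder: RPI current situation index

# Fit a simple linear regression: rpi_current = a + b * foods_spending.
b, a = np.polyfit(foods_spending, rpi_current, deg=1)

# Plug in April's FooDS spending figure ($55.43, from the text above) to
# forecast the not-yet-released April value of the current situation index.
april_spending = 55.43
april_forecast = a + b * april_spending
print(f"Forecasted current situation index for April: {april_forecast:.1f}")
```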

Now, the not-so-good news.  In FooDS, we track consumers' stated intentions to increase or decrease spending on food away from home in the following weeks.  One "anomaly" present in the FooDS data is that, every month, people say they plan to spend less on food away from home next month.  Over the three years we've been doing FooDS, the measure ranges from a -2.4% cutback in spending to a -1.05% cutback (the average is -1.59%).  I interpret this as people feeling guilty about spending on food away from home, and perhaps even as evidence of a self-control problem: people plan to cut back on eating out but rarely actually do.  In any event, despite this "bias", trends in this variable might still be useful, since in some months people plan to cut back more than in others.  

Here's a plot of the planned spending change on food away from home from FooDS alongside the expectations portion of the NRA's Restaurant Performance Index:  

While the correlation between the two is positive, it is pretty weak.  The two track each other pretty closely through about mid-2014 and then start moving in opposite directions.  The two measures are getting at slightly different things.  One measures how restaurant owners/managers are planning to change capital expenditures and staffing, and what they think sales will be in the future; the other measures how much consumers think they'll change their spending.  Maybe all this says is that restaurant operators' expectations are not the same as restaurant consumers' expectations.  

So, here's a little test.  How well do restaurant owners' expectations this month correlate with their own "actual" performance (the current situation index) the following month?  The correlation between the lagged NRA expectations index and the current-period NRA current situation index is 0.35.  O.k., so there is some accuracy associated with the NRA's expectations index.

Now, let's do the same thing with the FooDS data.  How well do consumers' expected spending changes this month correlate with their own "actual" food-away-from-home spending the next month?  The correlation between the two is 0.68.  So, despite the fact that consumers' expectations are downwardly biased as discussed above, changes in their expectations seem quite predictive of next month's spending on food away from home.    
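These lagged correlations are easy to compute: just shift one monthly series forward by a month and take the ordinary correlation with the other.  A minimal sketch, again with placeholder numbers standing in for the real FooDS and NRA series:

```python
# Sketch of the lagged-correlation check (placeholder data, not the real series).
import numpy as np

def lagged_corr(expectations, actuals, lag=1):
    """Correlate this month's expectations with next month's actuals:
    expectations[t] is paired with actuals[t + lag]."""
    x = np.asarray(expectations[:-lag], dtype=float)
    y = np.asarray(actuals[lag:], dtype=float)
    return np.corrcoef(x, y)[0, 1]

# Hypothetical placeholder series (one observation per month).
nra_expectations = [101.2, 100.5, 102.0, 101.7, 100.9, 101.4]   # NRA expectations index
nra_current = [100.9, 101.1, 100.3, 101.8, 101.2, 100.8]        # NRA current situation index
foods_planned_change = [-1.8, -1.2, -2.0, -1.5, -1.1, -1.7]     # FooDS planned % change in spending
foods_actual_spending = [53.5, 54.1, 52.9, 54.6, 55.0, 53.8]    # FooDS reported spending ($)

# Operators: this month's expectations vs. next month's current situation index.
print(lagged_corr(nra_expectations, nra_current))
# Consumers: this month's planned change vs. next month's reported spending.
print(lagged_corr(foods_planned_change, foods_actual_spending))
```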

So, it seems consumers (in aggregate) know their own futures a bit better than restaurant operators know theirs.  

Books of possible interest

Food and Nutrition Economics by George Davis and Elena Serrano published by Oxford University Press.  

Food and Nutrition Economics offers a much-needed resource for non-economists looking to understand the basic economic principles that govern our food and nutritional systems. Comprising both a quick grounding in nutrition with the fundamentals of economics and expert applications to food systems, it is a uniquely accessible and much-needed bridge between previously disparate scholarly and professional fields.

Douglas Southgate sent me a review copy of his book with Lois Roberts, Globalized Fruit, Local Entrepreneurs: How One Banana-Exporting Country Achieved Worldwide Reach, just published by University of Pennsylvania Press.  Here's how the book starts:

A tropical commodity bought and sold by the boatload throughout the world. Agribusinesses with worldwide reach, including a firm that has been a lightning rod for anti-corporate criticism since the Great Depression. Minor Latin American states on the receiving end of globalization. An uncomplicated, and oft-told, story of banana republics and the misfortunes visited on them by multinational companies. What more need be said?

Bananas are the ultimate nonlocal food.

But . . . (from the publisher):

Instead, Southgate and Roberts show that a competitive market for tropical fruit exists in and around Guayaquil, a port city dedicated to international commerce for centuries. Moreover, that market has consistently rewarded productive entrepreneurship.

Wheat breeding

One of my favorite interviews in Unnaturally Delicious was with Brett Carver, who is a fellow professor at Oklahoma State.  Carver is a wheat breeder.  He took me out to some fields I drive past every day.

Carver took me out to the middle of an unusual-looking wheat field. The feeling of awe and beauty that comes when you look out at amber waves of grain arises, in part, from the many acting as one: each stalk and head of grain is about the same height and size, and the group moves in unison with the wind. But this wasn’t that type of field. Carver’s field looked a bit like a bad hair day. It was chaotic. Some stalks of wheat were almost up to my waist, others were only a bit taller than ankle height. Some stalks were golden yellow, others were darker brown. Some spikes scrawny, others fat. Long bristles protruded from most of the plants’ heads, but some had no bristles. Carver’s goal is to create a new wheat variety.

and

Standing in the middle of the proverbial haystack he planted, Carver said, “There are sixty-six thousand different strains out here. I’ll pick one of them, and it will ultimately be grown on millions of acres. It’s a big responsibility.” Carver developed all four of the top varieties of wheat planted in the state of Oklahoma—Duster, Endurance, Gallagher, and Ruby Lee—where farmers planted more than five million acres of wheat in 2015.

One of the most fascinating lessons I learned was about the history of wheat.

Even though wheat has been around since the dawn of civilization, it is actually a product of biotechnology. But, as Carver said, “Man didn’t do it. . . . God did it or nature did it, but it wasn’t man.” He added, “If I tried to do this today, I’d be labeled a mad scientist who’s creating some sort of evil genetically modified food.”

The history of wheat can be found in its DNA. Unlike humans, wheat does not have one father and mother but three fathers and three mothers. Rather than a single pairing of genes, which is what occurs in humans (a diploid), wheat has three sets of chromosomes, and each set exists as a pair—something called a hexaploid. This somewhat strange state of affairs came about when one species mated with another, and then it happened yet again. Carver explained that about 300,000 years ago one grassy weed species crossed with another—a spiky, unruly-looking plant that eventually led to the plant we call emmer. Then, about ten thousand years ago, this crossbreed mated yet again, with another grassy species, one of the many goatgrasses. The result is our modern wheat used for making bread . . . All this makes Carver’s job more complex. Whereas humans have an estimated twenty to twenty-five thousand genes, wheat has 164,000 to 334,000 genes.