Blog

What's going on in your brain?

Ever wonder why you choose one food over another?  Sure, you might have reasons you tell yourself for why you picked, say, cage vs. cage-free eggs.  But are these the real reasons?

I've been interested in these sorts of questions for a while and, along with several colleagues, have turned to a new tool - functional magnetic resonance imaging (fMRI) - to peek inside people's brains as they're choosing between different foods.  You might be able to fool yourself (or survey administrators) about why you do something, but your brain activity doesn't lie (at least we don't think it does).

In a new study just released in the Journal of Economic Behavior and Organization, my co-authors and I sought to explore some issues related to food choice.  The main questions we wanted to answer were: 1) does one of the core theories for how consumers choose between goods of different qualities (think cage vs. cage-free eggs) have any support in neural activity? and 2) after only seeing how your brain responds to images of eggs with different labels, can we actually predict which eggs you will ultimately choose in a subsequent choice task?

Our study suggests the answers to these two questions are "maybe" and "yes".  

First, we asked people to just look at eggs with different labels while they were lying in the scanner.  The labels showed either a high price, a low price, a "closed" production method (caged or confined), or an "open" production method (cage free or free range), as the image below suggests.  As participants looked at the different labels, we observed whether blood flow to different parts of the brain increased or decreased when they saw, say, higher prices vs. lower prices.

We focused on a specific area of the brain, the ventromedial prefrontal cortex (vmPFC), which previous research has identified as a region associated with forming value.

What did this stage of the study find?  Not much.  There were no significant differences in vmPFC activation when looking at high vs. low prices or at open vs. closed production methods.  However, there was a lot of variability across people, and we conjectured that this variability might predict which eggs people would choose in a subsequent task.

So, in the second stage of the study, we gave people a non-hypothetical choice like the following, which pitted a more expensive carton of eggs produced in a cage-free system against a lower-priced carton of eggs from a cage system.  People answered 28 such questions in which we varied the prices, the wording (e.g., free range instead of cage free), and the order of the options.  One of the choices was randomly selected as binding, and people had to buy the option they chose in that binding task.
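For readers curious about the mechanics, here is a minimal sketch of how a binding-choice task of this kind could be put together; the prices, wording, and design cells below are hypothetical stand-ins, not our actual experimental design.

```python
# A hypothetical sketch of a binding-choice task: vary prices, label wording, and
# option order across 28 binary questions, then draw one question at random to be
# binding. All specific values below are made up for illustration.
import itertools
import random

prices_open = [3.00, 3.50, 4.00, 4.50]            # hypothetical prices for "open" eggs
prices_closed = [2.00, 2.50]                       # hypothetical prices for "closed" eggs
wording = [("cage free", "cage"), ("free range", "confined")]
orders = ["open first", "closed first"]

# Full factorial: 4 x 2 x 2 x 2 = 32 cells; keep a random 28 of them
design = random.sample(list(itertools.product(prices_open, prices_closed, wording, orders)), 28)

answers = []
for q, (p_open, p_closed, (open_word, closed_word), order) in enumerate(design, start=1):
    print(f"Q{q}: {open_word} eggs at ${p_open:.2f} vs. {closed_word} eggs at ${p_closed:.2f} ({order})")
    answers.append(random.choice(["open", "closed"]))   # stand-in for a respondent's answer

# One randomly selected question is binding; the respondent buys what they chose there
binding = random.randrange(len(design))
print(f"Binding question: Q{binding + 1}; purchase the {answers[binding]} option chosen there")
```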

Our main question was this: can the brain activation we observed in the first step, where people were just looking at eggs with different labels, predict which eggs they would choose in the second step?

The answer is "yes".  In particular, the difference in vmPFC activation when looking at eggs with an "open" label vs. a "closed" label is significantly related to the propensity to choose the higher-priced open eggs over the lower-priced closed eggs (it should be noted that we did not find any predictive power in the difference in vmPFC activation when looking at high- vs. low-priced egg labels).

Based on a statistical model, we can even translate these differences in brain activation into willingness-to-pay (WTP) premiums.

Here's what we say in the text:

Moving from the mean value of approximately zero for vmPFC_method,i to twice the standard deviation (0.2) in the sample while holding the price effect at its mean value (also approximately zero), increases the willingness-to-pay premium for cage-free eggs from $2.02 to $3.67. Likewise, moving two standard deviations in the other direction (-0.2) results in a discount of about 38 cents per carton. The variation in activations across our participants fluctuates more than 80 percent, a sizable effect that could be missed by simply looking at the vmPFC_method value alone and misinterpreting its zero mean as the lack of an effect.
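To make the logic of that translation concrete, here is a stylized sketch with made-up coefficients (not the estimates from our paper): in a simple logit-style choice model where the vmPFC contrast shifts the value of the "open" label, the implied WTP premium is the label term divided by the negative of the price coefficient.

```python
# Stylized illustration with hypothetical coefficients (not the paper's estimates).
# Utility sketch: U = beta_price * price + (beta_open + gamma * vmPFC_contrast) * open_label,
# so the willingness-to-pay premium for the open-label eggs is
# (beta_open + gamma * vmPFC_contrast) / -beta_price.

def wtp_premium(vmpfc_contrast, beta_open=1.0, gamma=4.0, beta_price=-0.5):
    """WTP premium ($/carton) for open-label eggs given a person's vmPFC contrast."""
    return (beta_open + gamma * vmpfc_contrast) / -beta_price

for contrast in (-0.2, 0.0, 0.2):   # roughly two standard deviations either side of the mean
    print(f"vmPFC contrast {contrast:+.1f}: implied premium ${wtp_premium(contrast):.2f}")
```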

Does Diet Coke Cause Fat Babies?

O.k., I just couldn't let this one slide.  I've seen the results of this study in JAMA Pediatrics discussed in a variety of news outlets with the claim that researchers have found a link between mothers drinking artificially sweetened beverages and the subsequent weight of their infants.

I'm going to be harsh here, but this sort of study represents everything wrong with a big chunk of the nutrition and epidemiology studies that get published and with how they're covered by the media.

First, what did the authors do?  They looked at the weight of babies one year after birth and at how those weights correlated with whether (and how much) Coke or Diet Coke the mom drank during pregnancy, as indicated in a survey.

The headline result is that moms who drank artificially sweetened beverages every day during pregnancy had, on average, slightly larger babies a year later than moms who didn't drink any artificially sweetened beverages at all.  Before I get to the fundamental problem with this result, it is useful to look at a few more results contained in the same study that might give us pause.

  • Moms' drinking of sugar-sweetened beverages (in any amount) had no effect on infants' later body weights.  So drinking a lot of sugar didn't affect babies' outcomes at all, but drinking artificial sweeteners did?
  • The researchers only found an effect for moms who drank artificially sweetened beverages every day.  Compared to moms who never drank them, those who drank diet sodas less than once a week actually had lighter babies (though the result isn't statistically significant).  Also, moms who drank artificially sweetened beverages 2-6 times per week had roughly the same weight babies as moms who never drank them.  In short, there is no evidence of the dose-response relationship one would expect to find if a causal relationship were at play; a rough illustration of this kind of dose-response check follows this list.
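Here is a rough, simulated illustration of the dose-response point (the data and effect sizes are invented, not the JAMA study's): when an effect appears only in the top consumption category, the group means and the overall trend tell very different stories.

```python
# Simulated illustration of a dose-response check; all numbers are made up.
import numpy as np

rng = np.random.default_rng(0)
dose_labels = ["never", "<1/week", "2-6/week", "daily"]
dose = rng.integers(0, 4, size=3000)                          # consumption category per mom
weight_z = rng.normal(0, 1, size=3000) + 0.2 * (dose == 3)    # bump only in the "daily" group

group_means = {label: round(weight_z[dose == d].mean(), 3) for d, label in enumerate(dose_labels)}
trend_slope = np.polyfit(dose, weight_z, 1)[0]                # simple linear trend in the dose score

print("group means (infant weight z-score):", group_means)
print("linear trend per category:", round(trend_slope, 3))
```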

And that's the big issue here: causality.  The researchers found a single statistically significant correlation in one of the six comparisons they made (three levels of drinking compared to none, for both sugar-sweetened and artificially sweetened beverages).  But, as the researchers themselves admit, this is NOT a causal link (somehow that didn't prevent the NYT editors from using the word "link" in the title of their story).

Causality is what we want to know.  An expecting mother wants to know: if I stop drinking Diet Coke every day, will that lower the weight of my baby?  That's a very different question from the one the researchers actually answered: are the types of moms who drink Diet Coke every day different from moms who never drink Diet Coke in a whole host of ways, including how much their infants weigh?

Why might this finding be only a correlation and not causation?  There are a bunch of possible reasons.  For example, moms who expect their future children might have weight problems may choose to drink diet instead of regular.  If so, the moms drinking diet have selected themselves into a group that is already likely to have heavier children.  Another possible explanation: moms who never drink Diet Coke may be more health conscious overall.  That attitude is likely to carry over to how they feed and raise their children, which will affect their children's weight in ways that have nothing to do with artificially sweetened beverages.
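A toy simulation makes the selection story concrete (the numbers are entirely hypothetical): a "health consciousness" trait drives both diet-soda avoidance and lighter children, so a naive daily-vs.-never comparison finds a "link" even though the true causal effect of diet soda is zero.

```python
# Toy confounding simulation; every number here is hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
health_conscious = rng.random(n) < 0.5
# Assume health-conscious moms are less likely to drink diet soda daily...
daily_diet_soda = rng.random(n) < np.where(health_conscious, 0.1, 0.4)
# ...and, through diet and upbringing, have lighter one-year-olds (no soda effect at all)
baby_weight_kg = 10.0 - 0.5 * health_conscious + rng.normal(0, 1.0, n)

naive_diff = baby_weight_kg[daily_diet_soda].mean() - baby_weight_kg[~daily_diet_soda].mean()
print(f"Naive daily-vs.-never difference: {naive_diff:+.2f} kg (true causal effect: 0)")
```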

Fortunately, economics (at least applied microeconomics) has undergone a bit of a credibility revolution.  If you attend a research seminar in virtually any economics department these days, you're almost certain to hear questions like, "What is your identification strategy?" or "How did you deal with endogeneity or selection?"  In short, the question is: how do we know the effects you're reporting are causal effects and not just correlations?

It's high time for a credibility revolution in nutrition and epidemiology.

Economics of Food Waste

There seems to be a lot of angst these days about food waste.  Last month, National Geographic focused a whole issue on the topic.  While there has been a fair amount of academic research on the topic, there has been comparatively little on the economics of food waste.  Brenna Ellison from the University of Illinois and I just finished up a new paper to help fill that void.

Here's the core motivation.

Despite growing concern about food waste, there is no consensus on the causes of the phenomenon or solutions to reduce waste. In fact, many analyses of food waste seem to conceptualize food waste as a mistake or inefficiency, and in some popular writing a sinful behavior, rather than an economic phenomenon that arises from preferences, incentives, and constraints. In reality, consumers and producers have time and other resource constraints, which implies that it simply will not be worth it to rescue every last morsel of food in every instance, nor should it be expected that consumers with different opportunity costs of time or risk preferences will arrive at the same decisions on whether to discard food.

So, what do we do?

First, we create a conceptual model based on Becker's model of household production to show that waste is indeed "rational" and responds to various economic incentives like time constraints, wages, and prices.  
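As a back-of-the-envelope illustration of that logic (my own toy example, not the model in the paper), the discard decision can be framed as a simple cost comparison: throw the food out whenever the value of what could be rescued falls short of the time and risk cost of rescuing it.

```python
# Toy discard rule in the spirit of a household-production model; numbers are illustrative.

def discard(food_value, rescue_minutes, hourly_wage, risk_cost=0.0):
    """Return True if tossing the food is the cheaper option for this household."""
    rescue_cost = (rescue_minutes / 60.0) * hourly_wage + risk_cost
    return food_value < rescue_cost

# A $1.25 half-carton of milk vs. 5 minutes of a $30/hour person's time plus a
# perceived sour-milk risk they value at $1:
print(discard(food_value=1.25, rescue_minutes=5, hourly_wage=30.0, risk_cost=1.00))   # True
```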

We use some of these insights to design a couple of empirical studies.  One problem is that it is really tough to measure waste, and people aren't likely to be very accurate at telling you, on a survey, how much food they waste.  Thus, we got a bit creative and came up with a couple of vignette designs that focused on very specific situations.

In the first study, respondents were shown the following verbiage.  The variables that were experimentally varied across people are in brackets (each person only saw one version).  

Imagine this evening you go to the refrigerator to pour a glass of milk. While taking out the carton of milk, which is [one quarter; three quarters] full, you notice that it is one day past the expiration date. You open the carton and the milk smells [fine; slightly sour]. [There is another unopened carton of milk in your refrigerator that has not expired; no statement about replacement]. Assuming the price of a half-gallon carton of milk at stores in your area is [$2.50; $5.00], what would you do?

More than 1,000 people responded to versions of this question with either "pour the expired milk down the drain" or "go ahead and drink the expired milk."  

Overall, depending on the vignette seen, the percentage of people throwing milk down the drain ranged from 41% to 86%.

Here is how the decision to waste varied with changes in the vignette variables.

The only change that had much impact on food waste was the food safety cue.  The percentage of people who said they'd discard the milk fell by 38.5 percentage points, on average, when the milk smelled fine vs. sour.  The paper also reports how these results vary across people with different demographics like age, income, etc.
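For the curious, here is roughly how percentage-point effects like these can be recovered from a 2x2x2x2 vignette design.  The responses below are simulated (only the 38.5-point smell effect is borrowed from the discussion above; the rest are made up), and a linear probability model is just one way to do it.

```python
# Simulated vignette responses run through a linear probability model.
import itertools
import numpy as np

rng = np.random.default_rng(2)
# Attribute dummies: carton 3/4 full, smells sour, replacement on hand, high price
cells = np.array(list(itertools.product([0, 1], repeat=4)), dtype=float)
X = np.repeat(cells, 70, axis=0)                        # ~70 hypothetical respondents per cell
true_effects = np.array([0.02, 0.385, 0.05, -0.03])     # only the smell effect mirrors the text
p_discard = 0.45 + X @ true_effects
discard = (rng.random(len(X)) < p_discard).astype(float)

design = np.hstack([np.ones((len(X), 1)), X])
coefs, *_ = np.linalg.lstsq(design, discard, rcond=None)
for name, b in zip(["baseline", "3/4 full", "smells sour", "replacement", "high price"], coefs):
    print(f"{name:12s}: {b:+.3f}")
```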

We conducted a separate study (with another 1,000 people) in which we changed the context from milk to meal leftovers.  Each person was randomly assigned to a group (or vignette) where they saw the following (experimentally manipulated variables are in brackets).

Imagine you just finished eating dinner [at home; out at a restaurant]. The meal cost about [$8; $25] per person. You’re full, but there is still food left on the table – enough for [a whole; half a] lunch tomorrow. Assuming you [don’t; already] have meals planned for lunch and dinner tomorrow, what would you do?

People had two response options: “Throw away the remaining dinner” or “Save the leftovers to eat tomorrow”.

Across all the vignettes, the percent throwing away the remaining dinner ranged from 7.1% to 19.5%.  

Here is how the results varied with changes in the experimental variables.

Meal cost had the biggest effect.  Eating a meal that cost $25/person instead of one that cost only $8/person reduced the percentage of people discarding the meal by an average of 5.8 percentage points.  People were also less likely to throw away home cooked meals than restaurant meals.  

There's a lot more in the paper if you're interested.

Do Survey Respondents Pay Attention?

Imagine taking a survey that had the following question. How would you answer?

If you answered anything but "None of the Above," I caught you in a trap.  You were being inattentive.  If you had read the question carefully, you would have noticed that the text explicitly asks the respondent to check "None of the Above."

Does it matter whether survey-takers are inattentive?  First, note that surveys are used all the time to inform us on a wide variety of issues, from who is most likely to be the next US president to whether people want mandatory GMO labels.  How reliable are these estimates if people aren't paying attention to the questions we're asking?  If people aren't paying attention, perhaps it's no wonder they tell us things like wanting mandatory labels on food containing DNA.

The survey-takers aren't necessarily to blame.  They're acting rationally.  They have an opportunity cost of time, and time spent taking a survey is time not making money or doing something else enjoyable (like reading this post!).  Particularly in online surveys, where people are paid when they complete the survey, the incentive is to finish - not necessarily to pay 100% attention to every question.

In a new working paper with Trey Malone, we sought to figure out whether missing a "long" trap question like the one above, or missing "short" trap questions, influences the willingness-to-pay estimates we get from surveys.  Our longer traps "catch" a whopping 25%-37% of respondents; shorter traps catch 5%-20%, depending on whether they appear in a list or in isolation.  In addition, Trey had the idea of going beyond the simple trap question and prompting people when they got it wrong.  If you've been caught in our trap, we'll let you out, and hopefully we'll get better survey responses.
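The basic bookkeeping behind that comparison is simple; here is a sketch with invented numbers (the catch rate is loosely in the range mentioned above, and the WTP distributions are hypothetical) that flags respondents who miss a trap and compares mean WTP and its spread across the two groups.

```python
# Flag trap-question misses and compare WTP across groups; all data are simulated.
import numpy as np

rng = np.random.default_rng(3)
n = 1000
missed_trap = rng.random(n) < 0.30                      # roughly a long trap's catch rate
# Hypothetical WTP: inattentive respondents assumed noisier and higher on average
wtp = np.where(missed_trap,
               rng.normal(6.0, 3.0, n),
               rng.normal(4.5, 1.5, n))

for label, group in [("attentive", wtp[~missed_trap]), ("inattentive", wtp[missed_trap])]:
    print(f"{label:11s}: mean WTP ${group.mean():.2f}, std ${group.std():.2f}")
```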

Here's the paper abstract.

This article uses “trap questions” to identify inattentive survey participants. In the context of a choice experiment, inattentiveness is shown to significantly influence willingness-to-pay estimates and error variance. In Study 1, we compare results from choice experiments for meat products including three different trap questions, and we find participants who miss trap questions have higher willingness-to-pay estimates and higher variance; we also find one trap question is much more likely to “catch” respondents than another. Whereas other research concludes with a discussion of the consequences of participant inattention, in Study 2, we introduce a new method to help solve the inattentive problem. We provide feedback to respondents who miss trap questions before a choice experiment on beer choice. That is, we notify incorrect participants of their inattentive, incorrect answer and give them the opportunity to revise their response. We find that this notification significantly alters responses compared to a control group, and conclude that this simple approach can increase participant attention. Overall, this study highlights the problem of inattentiveness in surveys, and we show that a simple corrective has the potential to improve data quality.

Experimental Auction Summer School 2016

Applications are now being accepted for a summer school on Experimental Auctions organized by Maurizio Carnavari and co-taught with Rudy Nayga and Andreas Drichoutis.  This will be our 5th installment.  In the past we've held the summer school near Bologna, Italy, but last year we ventured out to Crete, Greece, with great success.  This year's course is scheduled for July 5 to July 12, 2016 in Catania, Sicily.

Experimental auctions are a technique used to measure consumer willingness-to-pay for new food products, which in turn is used to project demand, market share, and the benefits/costs of public policies.  Two weeks ago, I got to meet with a company in Amsterdam, Veylinx (see my previous mention of them here), which is using the method in an online format at a commercial level for marketing research.  The content of the course is mainly targeted toward graduate students and early-career professionals (or marketing researchers interested in learning about a new technique).  You can find out more and register here.
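To give a flavor of the method, here is a minimal sketch of one common mechanism used in experimental auctions, a sealed-bid second-price (Vickrey) auction; the course itself covers a wider family of elicitation formats, and the bids below are invented.

```python
# Sealed-bid second-price (Vickrey) auction: the highest bidder wins but pays the
# second-highest bid, which makes bidding one's true value the best strategy.

def second_price_auction(bids):
    """Return (winning bidder index, price paid)."""
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    return order[0], bids[order[1]]

bids = [3.25, 4.10, 2.80, 3.90]    # hypothetical bids ($) for a new food product
winner, price = second_price_auction(bids)
print(f"Bidder {winner} wins and pays ${price:.2f}")
```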

Here's last year's class in Crete.

And, of course, one shouldn't forget what is perhaps the most valuable part - the after-class networking and brainstorming sessions!