Blog

NYT Editorial on My Food Policy Study

Yesterday, the New York Times ran an editorial on the political fight over GMO labeling.  In the piece, the editorial board cited one of my studies (with Marco Costanigro) in the following passage:

There is no harm in providing consumers more information about their food. A study published in the journal Food Policy in 2014 found that labels about genetic modification did not influence what people thought about those foods.

I want to add a clarification and caveat to that statement.  What we found (in the context of an internet survey) is that the addition of GMO labels didn't make people more concerned about GMOs than they already were.  That is, the addition of a label didn't seem to send a signal that GMOs were riskier than consumers already thought they were.

However, we did find that consumers would attempt to avoid foods with a GMO label.  Consumers' choices in our study implied they were willing to pay as much as $1.98/lb to avoid an apple with a mandatory "genetically engineered" label relative to an unlabeled apple.  As I discussed just yesterday, it is precisely this issue that is the big potential driver of the costs of mandatory labeling.  That is, if some segment of consumers tries to avoid GMO labels, retailers and food manufacturers may respond by trying to source more costly non-GMO crops.

Finally, I'll note that despite the above quote, different types of GE labels in fact had very big effects on what people "thought" about, and were willing to pay for, GE foods.  In particular, we compared how willingness-to-pay (WTP) for an unlabeled apple varied when there were apples with mandatory labels (i.e., "genetically engineered") vs. voluntary labels (i.e., "not genetically engineered").

We found that the WTP premium for the unlabeled apple relative to the apple labeled "genetically engineered" was the aforementioned $1.98/lb.  However, the WTP premium for apples labeled "not genetically engineered" relative to the unlabeled apple was only $0.81/lb.  Thus, the implied willingness-to-pay to avoid GE was [(1.98 − 0.81)/0.81] × 100 ≈ 144% higher in the mandatory labeling treatment than in the voluntary labeling treatment.  In the paper, we write:

The differences in responses to mandatory vs. voluntary labels may result from the asymmetric negativity effect, which may in turn result from differences in what these two labels signal about the relative desirability of the unlabeled product. The differences in the “contains” vs. “does not contain” may also send different signals and change beliefs about the likelihood that the unlabeled product is GE or non-GE.
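For readers who want to trace the arithmetic behind that 144% figure, here's a minimal sketch in Python; the two dollar figures are the premiums estimated in the paper, and everything else is just illustration:

```python
# WTP premiums estimated in the study (dollars per pound)
premium_mandatory = 1.98  # unlabeled apple vs. apple labeled "genetically engineered"
premium_voluntary = 0.81  # "not genetically engineered" apple vs. unlabeled apple

# Relative difference between the two premiums
relative_diff = (premium_mandatory - premium_voluntary) / premium_voluntary * 100
print(f"{relative_diff:.0f}% higher under mandatory labeling")  # -> 144% higher
```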

One more point that I just can't let slide.  The editorial also mentions the following:

Various polls have found that about 90 percent of Americans favor mandatory labels for genetically modified foods.

Yes, but about the same percentage of consumers say they want mandatory labels on foods with DNA.  And, when you directly ask people, the vast majority say they don't want the issue decided by state ballot initiatives but rather by the FDA.  And, we've had real-life ballot initiatives in five states now, and all have failed to garner more than 50% support.  Whatever positive reasons may exist for mandatory labeling, the cited "90% of people want it" reason is the most dubious and misleading.

Consumer Uncertainty about GMOs and Climate Change

A lot of the debate and discussion surrounding public policies toward controversial food and agricultural issues like GMOs or climate change revolves around public sentiment.  We ask people survey questions like "Do you support mandatory labeling of GMOs?"  However, as I've pointed out, consumers may not even want to have to make this sort of decision; they would prefer to defer to experts.  Thus, we're presuming a level of understanding and interest that consumers may not actually have.  This is related to the recent discussion started by Tamar Haspel in the Washington Post about whether the so-called food movement is large or small.  Are "regular" people actually paying much attention to this food stuff that occupies the attention of so many journalists, researchers, writers, and non-profits?

I had these thoughts in mind as I went back and looked at this post by Dan Kahan who took issue with Pew's survey on public opinions about GMOs (this was the survey that attracted a lot of attention because it showed a large gap in public and scientific opinion on GMOs).  Kahan wrote:

the misimpression that GM foods are a matter of general public concern exists mainly among people who inhabit these domains, & is fueled both by the vulnerability of those inside them to generalize inappropriately from their own limited experience and by the echo-chamber quality of these enclaves of thought.

and

That people are answering questions in a manner that doesn’t correspond to reality shows that the survey questions themselves are invalid. They are not measuring what people in the world think—b/c people in the world (i.e., United States) aren’t thinking anything at all about GM foods; they are just eating them.

The only things the questions are measuring—the only thing they are modeling—is how people react to being asked questions they don’t understand.

This led me to think: what if we asked people whether they even wanted to express an opinion about GMOs?  So, in the latest issue of my Food Demand Survey (FooDS) that went out last week, I did just that.  I took my sample of over 1,000 respondents and split them in half.  For half of the sample, I first asked, "Do you have an opinion about the safety of eating genetically modified food?"  Then, only for people who said "yes," I posed the following: "Do you think it is generally safe or unsafe to eat genetically modified foods?"  For the other half of the sample, I just asked the latter question about safety beliefs and added the option of "I don't know."  This question, by the way, is the same one Pew asked in their survey, and they didn't even offer a "don't know" option; it had to be volunteered by the respondent.  So, what happens when you allow for "I don't know" in these three different ways?

When "don't know" is asked 1st in sequence before the safety question, a whopping 43% say they don't have an opinion!  By contrast, only 28% say "don't know" when it is offered simultaneously with the safety question.  And, as the bottom pie graph shows, only about 6% of respondents in the Pew survey voluntarily offer "don't know".  Thus, I think Kahan's critique has a lot of merit: a large fraction of consumers gave an opinion in the Pew survey, when in fact, they probably didn't have one when this option was allowed in a more explicitly matter.  

Moreover, allowing (or not allowing) for "don't know" in these different ways generates very different conclusions about consumers' beliefs about the safety of GMOs.  Conditional on having an opinion, the percent saying "generally safe" varies from 40% in the sequential question to 50% in the simultaneous question to 39% in the Pew format, which didn't offer "don't know."  That the results can vary so widely depending on how "don't know" is asked is hardly indicative of stable, firm beliefs about GMOs among the general public.
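For anyone who wants to replicate this kind of tabulation, here is a minimal sketch, assuming hypothetical response codes and made-up data (these are not the actual FooDS variables or results):

```python
from collections import Counter

# Hypothetical responses from one survey arm; the codes are illustrative,
# not the actual FooDS data.
responses = ["safe", "dont_know", "unsafe", "safe", "dont_know", "safe",
             "unsafe", "dont_know", "safe", "safe"]

counts = Counter(responses)
n_total = len(responses)
n_opinion = n_total - counts["dont_know"]  # respondents who expressed an opinion

pct_dont_know = counts["dont_know"] / n_total * 100
pct_safe_conditional = counts["safe"] / n_opinion * 100  # conditional on an opinion

print(f"don't know: {pct_dont_know:.0f}%")
print(f"generally safe, conditional on having an opinion: {pct_safe_conditional:.0f}%")
```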

In last week's survey I also carried out the same exercise with Pew's questions on climate change.  For half of my sample, I first asked whether people had an opinion about the causes of changes in the earth's temperature; for the other half, I included "don't know" as an option simultaneously with the question itself.  Here are the results compared to Pew's, which again did not explicitly offer a "don't know."

Again, we see big differences in the extent to which "don't know" is expressed depending on question format, varying from 37% in the sequential version to only 2% in Pew's survey.  In this case, it appears that people who would have said "don't know" in the sequential format are more likely to pick response categories that disagree with scientists when "don't know" isn't explicitly offered.

What can we learn from all this?  Just because people express an opinion on surveys doesn't mean they actually have one (or at least not a very firmly held one).  

Do Survey Respondents Pay Attention?

Imagine taking a survey that had the following question. How would you answer?

If you answered anything but "None of the Above", I caught you in a trap.  You were being inattentive.  If you read the question carefully, the text explicitly asks the respondent to check "None of the Above."  

Does it matter whether survey-takers are inattentive?  First, note that surveys are used all the time to inform us on a wide variety of issues, from who is most likely to be the next US president to whether people want mandatory GMO labels.  How reliable are these estimates if people aren't paying attention to the questions we're asking?  If people aren't paying attention, perhaps it's no wonder they tell us that they want mandatory labels on foods with DNA.

The survey-takers aren't necessarily to blame.  They're acting rationally.  They have an opportunity cost of time, and time spent taking a survey is time not making money or doing something else enjoyable (like reading this post!).  Particularly in online surveys, where people are paid when they complete the survey, the incentive is to finish - not necessarily to pay 100% attention to every question.

In a new working paper with Trey Malone, we sought to figure out whether missing a "long" trap question like the one above, or missing "short" trap questions, influences the willingness-to-pay estimates we get from surveys.  Our longer traps "catch" a whopping 25%-37% of the respondents; shorter traps catch 5%-20%, depending on whether they're in a list or in isolation.  In addition, Trey had the idea of going beyond the simple trap question and prompting people if they got it wrong.  If you've been caught in our trap, we'll let you out, and hopefully we'll find better survey responses.
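To make the mechanics concrete, here is a rough sketch of the flag-and-reprompt logic in Python; the prompt text, function names, and flow are hypothetical stand-ins, not the actual survey instrument:

```python
# Hypothetical sketch of the trap-question-with-feedback logic described above.
TRAP_CORRECT = "None of the Above"

def passed_trap(answer: str) -> bool:
    """Did the respondent give the answer the trap question asks for?"""
    return answer == TRAP_CORRECT

def administer_trap(get_answer):
    """Ask the trap question; if missed, notify the respondent and allow one revision."""
    answer = get_answer()
    attentive = passed_trap(answer)
    if not attentive:
        print("Please re-read the question; it asks you to select a specific option.")
        answer = get_answer()  # one chance to revise the response
    return answer, attentive

# Example: a respondent who ignores the instruction the first time
# answer, attentive = administer_trap(lambda: "Option A")
```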

Here's the paper abstract.

This article uses “trap questions” to identify inattentive survey participants. In the context of a choice experiment, inattentiveness is shown to significantly influence willingness-to-pay estimates and error variance. In Study 1, we compare results from choice experiments for meat products including three different trap questions, and we find participants who miss trap questions have higher willingness-to-pay estimates and higher variance; we also find one trap question is much more likely to “catch” respondents than another. Whereas other research concludes with a discussion of the consequences of participant inattention, in Study 2, we introduce a new method to help solve the inattentive problem. We provide feedback to respondents who miss trap questions before a choice experiment on beer choice. That is, we notify incorrect participants of their inattentive, incorrect answer and give them the opportunity to revise their response. We find that this notification significantly alters responses compared to a control group, and conclude that this simple approach can increase participant attention. Overall, this study highlights the problem of inattentiveness in surveys, and we show that a simple corrective has the potential to improve data quality.

Did the Cancer Announcement Affect Bacon Demand?

On October 26, 2015 the International Agency for Research on Cancer (IARC) — an agency within the World Health Organization — released its report indicating that processed meat is carcinogenic.  

The announcement sparked a lot of media coverage with titles like "Bad Day for Bacon."  (Here were my thoughts shortly after the announcement, along with some survey responses based on the news.)

Despite the news coverage after the announcement, I haven't seen much investigation of whether it impacted meat markets.  Thus, I thought I'd take a look at the data, recognizing it is probably impossible at this point to conclusively identify whether the IARC report caused a shift in demand.

I turned to the USDA Ag Marketing Service's daily reporting of pork primal composite values.  Rather than just looking at what happened to the price of bacon (or rather, pork belly) in isolation, it is probably useful to look at it in relation to another cut that may be less affected by the announcement.  I chose the pork loin.  This is an attempt to control for any changes over time happening on the supply side (the quantity of loin from a pig is, at least in the short run, in fixed proportion to the quantity of pork belly).

I calculated the ratio of pork belly prices to pork loin prices over the past year.  The graph below shows the price ratio before and after the IARC announcement.  In the few weeks before the announcement, bellies were selling at 1.9 times the price of loins.  In the few weeks after the announcement, bellies were selling at only 1.5 times the price of loins.  Thus, there has been a roughly 21% drop in the relative value of bacon.
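For anyone who wants to repeat the calculation, here is a minimal sketch using pandas with made-up prices; the numbers and column names are illustrative, not the actual AMS series:

```python
import pandas as pd

# Hypothetical daily primal values (USD/cwt); column names are illustrative,
# not the actual AMS report fields.
prices = pd.DataFrame(
    {"belly": [160.0, 158.0, 120.0, 118.0], "loin": [84.0, 83.0, 80.0, 79.0]},
    index=pd.to_datetime(["2015-10-12", "2015-10-19", "2015-11-02", "2015-11-09"]),
)

announcement = pd.Timestamp("2015-10-26")  # IARC report release date
ratio = prices["belly"] / prices["loin"]   # belly price relative to loin price

before = ratio[ratio.index < announcement].mean()
after = ratio[ratio.index >= announcement].mean()
print(f"before: {before:.2f}, after: {after:.2f}, drop: {(before - after) / before:.0%}")
```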

At this point, I'd be hesitant to say that the IARC announcement is THE cause of this change, but the large immediate drop just following the release date is suggestive of some impact.  


The Cost of Others Making Choices for You

The journal Applied Economics just released a paper entitled "Choosing for Others" that I coauthored with Stephan Marette and Bailey Norwood.  The paper builds on our previous research on the value people place on the freedom of choice by trying to explicitly calculate the cost of others making choices for you (at least in our experimental context).

The motivation for the study:

It is not uncommon for behavioural economic studies to utilize experimental evidence of a bias as the foundation for advocating for a public policy intervention. In these cases, the paternalist/policymaker is a theoretical abstraction who acts at the will of the theorist to implement the preferred plan. In reality, paternalists are flesh-and-blood people making choices on the behalf of others. Yet, there is relatively little empirical research (Jacobsson, Johannesson, and Borgquist 2007 being a prominent exception) exploring the behaviour of people assigned to make choices on another’s behalf.

The essence of the problem is as follows:

When choices are symmetric, the chooser gives the same food to others as they take for themselves, and assuming the recipient has the same preferences as the chooser, the choice inflicts no harm. However, when asymmetric choices occur, an individual receives an inferior choice and suffers a (short-term) welfare loss. Those losses might be compensated by other benefits if the chooser helps the individual overcome behavioural obstacles to their own, long-run well-being. However, the short-term losses that arise from a mismatch between outcomes preferred and received should not be ignored, though they often are, and this study seeks to measure their magnitude in a controlled experiment.

What do we find?

We find that a larger fraction of individuals made the same choices for themselves as for others in the US than in France, and this fraction increased in both locations after the provision of information about the healthfulness of the two choices.

and

What is interesting is that the per cent of paternalistic choices declined in both the US and France after information was revealed, with a very small decline in France and a considerable decline in the US. The per cent of indulgent choices also declined after information, so the effect of information was that it largely reduced asymmetric choices. Information substituted for paternalism. After information, choosers selected more apples for themselves and more apples for others, such that there was less need for paternalism to increase apple consumption.