
Consumer Uncertainty about GMOs and Climate Change

A lot of the debate and discussion surrounding public policies toward controversial food and agricultural issues like GMOs or climate change revolves around public sentiment.  We ask people survey questions like "Do you support mandatory labeling of GMOs?"  However, as I've pointed out, consumers may not even want to have to make this sort of decision; they would prefer to defer to experts.  Thus, we're presuming a level of understanding and interest that consumers may not actually have.  This is related to the recent discussion started by Tamar Haspel in the Washington Post about whether the so-called food movement is large or small.  Are "regular" people actually paying much attention to this food stuff that occupies the attention of so many journalists, researchers, writers, and non-profits?

I had these thoughts in mind as I went back and looked at this post by Dan Kahan who took issue with Pew's survey on public opinions about GMOs (this was the survey that attracted a lot of attention because it showed a large gap in public and scientific opinion on GMOs).  Kahan wrote:

the misimpression that GM foods are a matter of general public concern exists mainly among people who inhabit these domains, & is fueled both by the vulnerability of those inside them to generalize inappropriately from their own limited experience and by the echo-chamber quality of these enclaves of thought.

and

That people are answering questions in a manner that doesn’t correspond to reality shows that the survey questions themselves are invalid. They are not measuring what people in the world think—b/c people in the world (i.e., United States) aren’t thinking anything at all about GM foods; they are just eating them.

The only things the questions are measuring—the only thing they are modeling—is how people react to being asked questions they don’t understand.

This led me to think: what if we asked people whether they even wanted to express an opinion about GMOs?  So, in the latest issue of my Food Demand Survey (FooDS) that went out last week, I did just that.  I took my sample of over 1,000 respondents and split them in half.  For half of the sample, I first asked, "Do you have an opinion about the safety of eating genetically modified food?"  Then, only for people who said "yes", I posed the following: "Do you think it is generally safe or unsafe to eat genetically modified foods?" For the other half of the sample, I just asked the latter question about safety beliefs and added the option of "I don't know".  This question, by the way, is the same one Pew asked in their survey, and they didn't even offer a "don't know" option - it had to be volunteered by the respondent.  So, what happens when you allow for "I don't know" in these three different ways? 

When "don't know" is asked first in sequence, before the safety question, a whopping 43% say they don't have an opinion!  By contrast, only 28% say "don't know" when it is offered simultaneously with the safety question.  And, as the bottom pie graph shows, only about 6% of respondents in the Pew survey voluntarily offered "don't know".  Thus, I think Kahan's critique has a lot of merit: a large fraction of consumers gave an opinion in the Pew survey when, in fact, they probably didn't have one - as many admitted when the "don't know" option was offered more explicitly.  

Moreover, allowing (or not allowing) for "don't know" in these different ways generates very different conclusions about consumers' beliefs about the safety of GMOs.  Conditional on having an opinion, the percent saying "generally safe" varies from 40% in the sequential question to 50% in the simultaneous question to 39% in the Pew format, which didn't offer "don't know."  That responses can vary so widely depending on how "don't know" is asked is hardly indicative of stable, firm beliefs about GMOs among the general public. 
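The arithmetic linking the "don't know" shares to the safety beliefs can be sketched in a few lines of Python. The inputs are the rounded figures reported above, so the implied shares of the full sample are only approximate:

```python
# Rounded figures from the three question formats discussed above:
# "dk" = share answering "don't know" (or reporting no opinion),
# "safe_cond" = share saying "generally safe" among those with an opinion.
formats = {
    "sequential":   {"dk": 0.43, "safe_cond": 0.40},
    "simultaneous": {"dk": 0.28, "safe_cond": 0.50},
    "pew":          {"dk": 0.06, "safe_cond": 0.39},
}

for name, f in formats.items():
    has_opinion = 1 - f["dk"]                    # share expressing any opinion
    safe_uncond = f["safe_cond"] * has_opinion   # implied share of the full sample
    print(f"{name}: {safe_uncond:.0%} of all respondents say 'generally safe'")
```

Run on these rounded inputs, the implication is that only about 23% of the full sample calls GM food "generally safe" under the sequential format, versus roughly 37% under the Pew format - another way of seeing how much the question design matters.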

In last week's survey, I also carried out the same exercise with Pew's questions on climate change.  For half of my sample, I first asked whether people had an opinion about the causes of changes in the earth's temperature; for the other half, I included "don't know" as a response option alongside the question itself.   Here are the results compared to Pew's, which again did not explicitly offer a "don't know."  

Again, we see big differences in the extent to which "don't know" is expressed depending on question format, varying from 37% in the sequential version to only 2% in Pew's survey.  In this case, it appears that people who would have said "don't know" in the sequential question format are more likely to pick response categories that disagree with scientists, when they are given questions where "don't know" isn't so explicitly allowed.  

What can we learn from all this?  Just because people express an opinion on surveys doesn't mean they actually have one (or at least not a very firmly held one).  

Do Survey Respondents Pay Attention?

Imagine taking a survey that had the following question. How would you answer?

If you answered anything but "None of the Above", I caught you in a trap.  You were being inattentive.  If you read the question carefully, the text explicitly asks the respondent to check "None of the Above."  

Does it matter whether survey-takers are inattentive?  First, note that surveys are used all the time to inform us on a wide variety of issues, from who is most likely to be the next US president to whether people want mandatory GMO labels.  How reliable are these estimates if people aren't paying attention to the questions we're asking?  If people aren't paying attention, perhaps it's no wonder they tell us things like wanting mandatory labels on food containing DNA.

The survey-takers aren't necessarily to blame.  They're acting rationally.  They have an opportunity cost of time, and time spent taking a survey is time not making money or doing something else enjoyable (like reading this post!).  Particularly in online surveys, where people are paid when they complete the survey, the incentive is to finish - not necessarily to pay 100% attention to every question.

In a new working paper with Trey Malone, we sought to figure out whether missing a "long" trap question like the one above, or missing "short" trap questions, influences the willingness-to-pay estimates we get from surveys.  Our longer traps "catch" a whopping 25%-37% of respondents; shorter traps catch 5%-20%, depending on whether they appear in a list or in isolation.  In addition, Trey had the idea of going beyond the simple trap question and prompting people if they got it wrong.  If you've been caught in our trap, we'll let you out, and hopefully we'll get better survey responses.  

Here's the paper abstract.

This article uses “trap questions” to identify inattentive survey participants. In the context of a choice experiment, inattentiveness is shown to significantly influence willingness-to-pay estimates and error variance. In Study 1, we compare results from choice experiments for meat products including three different trap questions, and we find participants who miss trap questions have higher willingness-to-pay estimates and higher variance; we also find one trap question is much more likely to “catch” respondents than another. Whereas other research concludes with a discussion of the consequences of participant inattention, in Study 2, we introduce a new method to help solve the inattentive problem. We provide feedback to respondents who miss trap questions before a choice experiment on beer choice. That is, we notify incorrect participants of their inattentive, incorrect answer and give them the opportunity to revise their response. We find that this notification significantly alters responses compared to a control group, and conclude that this simple approach can increase participant attention. Overall, this study highlights the problem of inattentiveness in surveys, and we show that a simple corrective has the potential to improve data quality.

Did the Cancer Announcement Affect Bacon Demand?

On October 26, 2015 the International Agency for Research on Cancer (IARC) — an agency within the World Health Organization — released its report indicating that processed meat is carcinogenic.  

The announcement sparked a lot of media coverage with titles like "Bad Day for Bacon".  (Here were my thoughts shortly after the announcement, along with some survey responses based on the news.)

Despite the news coverage after the announcement, I haven't seen much investigation of whether it impacted meat markets.  Thus, I thought I'd take a look at the data, recognizing it is probably impossible at this point to conclusively identify whether the IARC report caused a shift in demand.

I turned to the USDA Ag Marketing Service's daily reporting of pork primal composite values.  Rather than just looking at what happened to the prices of bacon (or rather pork belly) in isolation, it is probably useful to look in relation to another cut that may be less affected by the announcement.  I chose the pork loin.  This is an attempt to control for any changes over time happening on the supply-side (the quantity of loin from a pig is, at least in the short run, in fixed proportion to the quantity of pig belly).

I calculated the ratio of pork belly prices to pork loin prices over the past year.  The graph below shows the price ratio before and after the IARC announcement.  In the few weeks before the announcement, bellies were selling at 1.9 times the price of loins.  In the few weeks after the announcement, bellies were selling at only 1.5 times the price of loins.  Thus, there has been a drop of roughly 21% ((1.9 − 1.5)/1.9 ≈ 0.21) in the relative value of bacon. 
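The ratio comparison is simple enough to sketch in code. The price levels below are made-up illustrative numbers chosen only to reproduce the 1.9 and 1.5 ratios; the actual analysis uses USDA AMS daily pork primal composite values:

```python
# Hypothetical average primal values ($/cwt), chosen only to match the
# reported ratios; the real analysis uses USDA AMS daily data.
before_belly, before_loin = 152.0, 80.0   # pre-announcement: ratio = 1.9
after_belly,  after_loin  = 120.0, 80.0   # post-announcement: ratio = 1.5

ratio_before = before_belly / before_loin
ratio_after = after_belly / after_loin

# Decline measured against the pre-announcement ratio
drop = (ratio_before - ratio_after) / ratio_before
print(f"relative value of bellies fell about {drop:.0%}")
```

Note that the size of the decline depends on the base: measured against the pre-announcement ratio it is about 21%, while using the lower post-announcement ratio as the base gives about 27%.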

At this point, I'd be hesitant to say that the IARC announcement is THE cause of this change, but the large immediate drop just following the release date is suggestive of some impact.  


The Cost of Others Making Choices for You

The journal Applied Economics just released a paper entitled "Choosing for Others" that I coauthored with Stephan Marette and Bailey Norwood.  The paper builds on our previous research on the value people place on the freedom of choice, this time trying to explicitly calculate the cost of others making choices for you (at least in our experimental context).  

The motivation for the study:

It is not uncommon for behavioural economic studies to utilize experimental evidence of a bias as the foundation for advocating for a public policy intervention. In these cases, the paternalist/policymaker is a theoretical abstraction who acts at the will of the theorist to implement the preferred plan. In reality, paternalists are flesh-and-blood people making choices on the behalf of others. Yet, there is relatively little empirical research (Jacobsson, Johannesson, and Borgquist 2007 being a prominent exception) exploring the behaviour of people assigned to make choices on another’s behalf.

The essence of the problem is as follows:

When choices are symmetric, the chooser gives the same food to others as they take for themselves, and assuming the recipient has the same preferences as the chooser, the choice inflicts no harm. However, when asymmetric choices occur, an individual receives an inferior choice and suffers a (short-term) welfare loss. Those losses might be compensated by other benefits if the chooser helps the individual overcome behavioural obstacles to their own, long-run well-being. However, the short-term losses that arise from a mismatch between outcomes preferred and received should not be ignored, though they often are, and this study seeks to measure their magnitude in a controlled experiment.

What do we find?

We find that a larger fraction of individuals made the same choices for themselves as for others in the US than in France, and this fraction increased in both locations after the provision of information about the healthfulness of the two choices.

and

What is interesting is that the per cent of paternalistic choices declined in both the US and France after information was revealed, with a very small decline in France and a considerable decline in the US. The per cent of indulgent choices also declined after information, so the effect of information was that it largely reduced asymmetric choices. Information substituted for paternalism. After information, choosers selected more apples for themselves and more apples for others, such that there was less need for paternalism to increase apple consumption.

The Future for GMO Foods

On a number of occasions, I've been asked questions like, "What will it take for consumers to become accepting of GMO foods?"  My guess is that we probably aren't going to see much movement resulting from new information or new communication strategies; rather, I suspect a bigger catalyst may be the technology itself.  When scientists produce a product people really want, consumers probably won't care whether it's labeled, and they'll overlook whatever small perceived risks are present.  

A while back, when writing about the duplicity of many food companies on the issue of GMO labeling, I wrote:

For now, food companies are not required to add labels indicating the presence of genetically engineered ingredients. But, it might ultimately be in their best interest to do it voluntarily, and in a way that avoids the negative connotations implied by the labels that would have been mandated in state ballot initiatives.

Some day in the near future, after concerted efforts to educate the public and create consumer-oriented biotechnologies, we may see food companies clamoring to voluntarily add a label that proclaims: proudly made with biotechnology.

I've been reading Dan Charles's 2001 book Lords of the Harvest.  While I could quibble with some of the book's tone and framing of the issues, overall it is an educational and fascinating historical account of the emergence of biotech crops, including many first-hand interviews with the key players (many of whom are still active today).  

Writing about a new genetically engineered tomato that had a longer shelf life and better processing characteristics that preserved taste, Charles includes a passage that indicates how GMOs might have evolved differently (and might still evolve differently) in the public perception.  He writes the following about activities circa 1996:

Best and his colleagues at Zeneca Plant Sciences had spent an enormous amount of time cultivating British journalists and lining up partners in the food business. They’d already decided that this tomato paste would be packaged in special cans and labeled as the product of ‘genetically altered tomatoes,’ even though such labels weren’t required. Two large supermarket chains, Sainsbury and Safeway, agreed to carry the product and promote it. They even turned genetic engineering into a marketing gimmick, advertising the launch of the tomato paste as ‘a world-first opportunity to taste the future.’

The Zeneca tomato paste was in fact purely an experiment in marketing. The tomatoes were grown during a single summer in California and processed using conventional methods, then packaged and flown to Europe. As a consequence, the genetically engineered paste actually cost more to produce than conventional tomato paste and tasted exactly the same. Yet Zeneca and its partners decided to charge less than the going rate for it. They were willing to take a financial loss just to find out if the British public would buy a genetically engineered product.

The answer turned out to be an unequivocal ‘yes.’ Through the summer of 1996 Zeneca’s red cans of tomato paste, proudly labeled ‘genetically altered,’ outsold all competitors.

‘You need to give the consumer a choice,’ says Best. ‘Once they had that choice, eaten it for a couple of years, found that there was no big deal, I think the whole thing would have gone away.’

So, what happened?  A confluence of events.  Mad cow disease was soon discovered in Britain, which heightened food fears and undermined food regulatory agencies (who'd previously promised it was safe to eat beef).  Charles seems to blame Monsanto, which he argues focused more on gaining regulatory approval than on charting a path that would engage the public on the issue.  In several spots in the book, Charles talks about how Best, and Salquist with Calgene in the US, masterfully shaped public acceptance for their tomato products before bringing them to market.   

But, as I see it, it was also the technology itself.  While farmers could clearly see the benefits of herbicide-resistant and Bt crops, and they quickly snatched them up in every location where they were allowed, consumers couldn't and still can't.  Fast forward 20 years, and while "GMOs" have become a lightning rod and a proxy fight for all sorts of agricultural issues, the underlying reality of who is perceived to benefit still hasn't changed.   I think the anti-biotech crowd knows this, because they've fought hard to keep some of the most promising consumer-oriented products from the market.  

So, what will it take to change consumer acceptance of GMOs?  New companies with new products that want to sell and tout the use of biotechnology rather than hide it.  One of the implicit lessons of Charles's book is that companies that seem dominant and powerful today are often upended by entrepreneurs with new products and a new vision for the future.  My bet is that the same forces will eventually end our current and long-standing quagmire in public perceptions of GMO foods.