
How do people respond to scientific information about GMOs and climate change?

The journal Food Policy just published a paper by Brandon McFadden and me that explores how consumers respond to scientific information about genetically engineered foods and about climate change.  The paper was motivated by some previous work we'd done where we found that people didn't always respond as anticipated to television advertisements encouraging them to vote for or against mandatory labels on GMOs.  

In this study, respondents were shown a collection of statements from authoritative scientific bodies (like the National Academies of Science and the United Nations) about the safety of eating approved GMOs or the risk of climate change.  Then we asked respondents whether they were now more or less likely to believe that GMOs are safe to eat, or that the earth is warming more than it otherwise would due to human activities.

We classified people as "conservative" (if they stuck with their prior beliefs regardless of the information), "convergent" (if they changed their beliefs in a way consistent with the scientific information), or "divergent" (if they changed their beliefs in a way inconsistent with the scientific information). 

We then explored the factors that explained how people responded to the information.  As it turns out, one of the most important factors determining how you respond to information is your prior belief.  If your priors were that GMOs were safe to eat and that global warming was occurring, you were more likely to find the information credible and respond in a "rational" (or Bayesian updating) way.  
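
For concreteness, here is a minimal sketch of the Bayesian benchmark just described: a respondent holds a prior probability that "GMOs are safe to eat" and updates it after seeing a scientific statement asserting safety. The prior and credibility numbers are my own illustrative assumptions, not values from the study.

```python
# A minimal sketch (not from the paper) of Bayesian belief updating.

def bayesian_update(prior_safe, credibility):
    """Posterior P(safe | statement says "safe") via Bayes' rule.

    prior_safe  : prior probability the respondent assigns to "GMOs are safe"
    credibility : assumed P(statement says "safe" | safe); the statement is
                  assumed to say "safe" with probability 1 - credibility
                  when GMOs are in fact unsafe.
    """
    p_statement = credibility * prior_safe + (1 - credibility) * (1 - prior_safe)
    return credibility * prior_safe / p_statement

# A "believer" (high prior) and a "denier" (low prior) see the same statement.
for label, prior in [("believer", 0.80), ("denier", 0.20)]:
    posterior = bayesian_update(prior, credibility=0.90)
    print(f"{label}: prior = {prior:.2f} -> posterior = {posterior:.2f}")
```

Either way, a Bayesian respondent moves toward the information; the "divergent" respondents described below move in the opposite direction.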

Here are a couple of graphs from the paper illustrating that result (where believers already tended to believe the information contained in the scientific statements and deniers did not).  As the results below show, the "deniers" were more likely to be "divergent" - that is, the provision of scientific information caused them to be more likely to believe the opposite of the message conveyed in the scientific information.

We also explored a host of other psychological factors that influenced how people responded to scientific information.  Here's the abstract:

The ability of scientific knowledge to contribute to public debate about societal risks depends on how the public assimilates information resulting from the scientific community. Bayesian decision theory assumes that people update a belief by allocating weights to a prior belief and new information to form a posterior belief. The purpose of this study was to determine the effects of prior beliefs on assimilation of scientific information and test several hypotheses about the manner in which people process scientific information on genetically modified food and global warming. Results indicated that assimilation of information is dependent on prior beliefs and that the failure to converge a posterior belief to information is a result of several factors including: misinterpreting information, illusionary correlations, selectively scrutinizing information, information-processing problems, knowledge, political affiliation, and cognitive function.

An excerpt from the conclusions:

Participants who misinterpreted the information provided did not converge posterior beliefs to the information. Rabin and Schrag (1999) asserted that people suffering from confirmation bias misinterpret evidence to conform to a prior belief. The results here confirmed that people who misinterpreted information did indeed exhibit confirmation, as well as people who conserved a prior belief. This is more evidence that assuming optimal Bayesian updating may only be appropriate when new information is somewhat aligned with a prior belief.

Why people lie on surveys and how to make them stop

Companies spend millions (perhaps billions?) of dollars every year surveying consumers to figure out what they want.  Environmental, health, and food economists do the same to try to figure out the costs and benefits of various policies.  What are people willing to pay for organic or non-GMO foods, or for country of origin labels on meat?  These are the sorts of questions I'm routinely asked.

Here's the problem: there is ample evidence (from economics and marketing among other disciplines) that people don't always do what they say they will do on a survey.  A fairly typical result from the economics literature is that the amount people say they are willing to pay for a new good or service is about twice what they'll actually pay when money is on the line.  It's what we economists call hypothetical bias.

We don't yet have a solid theory that explains this phenomenon in every situation, and it likely results from a variety of factors, including:

  • Social desirability bias - we give the answers we think the surveyor wants to hear.
  • Warm glow, yea-saying, and self-presentation bias - it feels good to support "good" causes and say "yes," so why not say we're willing to do something, particularly when doing so costs nothing and makes us look and feel good about ourselves?
  • Idealized responses - we imagine whether we'd ever buy the good when we eventually have the money and the time is right, rather than answering whether we'd buy it here and now.
  • Strategic responses - if we think our answers to a survey question can influence the eventual price that is charged, or whether the good is actually offered, we might over- or under-state our willingness to buy.
  • Uncertainty - research suggests a lot of the hypothetical bias comes from people who say they aren't sure whether they'd buy the good.

What to do?

Various fixes have been proposed over the years.

  • Calibration.  Take responses from a survey and reduce them by some factor so that they more closely approximate what consumers will actually do.  The problem: calibration factors are unknown and vary across people and goods.
  • Cheap talk.  On the survey, explain the problem of hypothetical bias and explicitly ask people to avoid it.  The problem: it doesn't always "work" for all people (particularly experienced people familiar with the good), and there is always some uncertainty over whether you've simply introduced a new bias.
  • Certainty scales.  Ask people how sure they are about their answers, and for people who indicate a high level of uncertainty, re-code their "yes" answers to "no" (a small sketch of this recoding, along with the calibration fix above, follows this list).  The problem: the approach is ad-hoc, and it is hard to know a priori what the cut-off on the certainty scale should be.  Moreover, it only works for simple yes/no questions.
  • Use particular question formats.  Early practitioners of contingent valuation (an approach for asking willingness-to-pay popular in environmental economics) swear by a "double-bounded dichotomous choice, referendum question," which they believe has good incentives for truth telling if respondents believe their answers might actually influence whether the good is provided (i.e., if the answer is consequential).  I'm skeptical.  I'm more open to the use of so-called "choice experiments," where people make multiple choices between goods that have different attributes, and where we're only interested in "marginal" trade-offs (i.e., whether you want good A vs. good B).  There is likely more bias in the "total" (i.e., whether you want good A or nothing).
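
To make the calibration and certainty-scale fixes concrete, here is a small illustrative sketch. The calibration factor of 0.5 and the certainty cutoff of 8 are assumed values chosen only for illustration; as noted above, the "right" numbers aren't known in advance and vary across goods and people.

```python
import pandas as pd

# Hypothetical survey responses (made-up data).
survey = pd.DataFrame({
    "stated_wtp": [6.0, 3.5, 8.0, 0.0, 5.0],        # stated willingness to pay ($)
    "said_yes":   [True, True, True, False, True],   # stated purchase intent
    "certainty":  [9, 5, 7, 2, 10],                  # 1 = very unsure, 10 = very sure
})

# 1. Calibration: scale stated WTP down by an assumed factor.
CALIBRATION_FACTOR = 0.5
survey["calibrated_wtp"] = survey["stated_wtp"] * CALIBRATION_FACTOR

# 2. Certainty-scale recoding: treat uncertain "yes" responses as "no".
CERTAINTY_CUTOFF = 8
survey["recoded_yes"] = survey["said_yes"] & (survey["certainty"] >= CERTAINTY_CUTOFF)

print(survey)
```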

There is another important alternative.  If the problem is that surveys don't prompt people to act as they would in a market, well, why don't we just create a real market?  A market where people have to give up real money for real goods - where we make people put their money where their mouth is?  It is an approach I wrote about in the book Experimental Auctions with Jason Shogren, and it is the approach I teach with Rudy Nayga, Andreas Drichoutis, and Maurizio Canavari in the summer school we have planned for this summer in Crete (sign up now!).  It is an approach with a long history, stemming mainly from the work of experimental economists.

One drawback of the experimental market approach is that it is often limited to a particular geographic region.  You've got to recruit people and get them in a room (or, as John List and others have done, go to an existing real-world market and bend it to your research purposes).

Well, there's now a new option with a much wider reach.  Several months ago I was contacted by Anouar El Haji, who is at the Business School at the University of Amsterdam.  He's created a simple online platform he calls Veylinx where researchers can conduct real auctions designed to give participants an incentive to truthfully reveal their maximum willingness-to-pay.  The advantage is that one can reach a large number of people across the US (and potentially across the world).  It's a bit like eBay, but with a much simpler environment (which researchers can control) and a clearer incentive for people to bid their maximum willingness-to-pay.
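
The post doesn't spell out the exact mechanism Veylinx uses, but the classic demand-revealing design for this purpose is a sealed-bid second-price (Vickrey) auction, in which the winner pays the second-highest bid. Here's a small simulation sketch (all numbers assumed) of why bidding your true maximum willingness-to-pay is the best strategy in that format:

```python
import random

# One bidder with a true value of $5 tries three bidding strategies against
# random rival bids in a sealed-bid second-price auction.

def payoff(my_bid, my_value, rival_bids):
    """Win iff my_bid is highest; pay the highest rival bid."""
    top_rival = max(rival_bids)
    if my_bid > top_rival:
        return my_value - top_rival   # surplus does not depend on my own bid
    return 0.0

random.seed(1)
my_value = 5.0
strategies = {"truthful": 5.0, "shade low": 3.5, "overbid": 6.5}
results = {name: 0.0 for name in strategies}

for _ in range(10_000):
    rivals = [random.uniform(0, 10) for _ in range(3)]
    for name, bid in strategies.items():
        results[name] += payoff(bid, my_value, rivals)

for name, total in results.items():
    print(f"{name:10s} average surplus: {total / 10_000:.3f}")
```

Shading your bid only forfeits profitable wins, and overbidding risks paying more than the good is worth to you; in neither case can you lower the price you would have paid anyway.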

One of the coolest parts is that you can even sign up to participate in the auctions.  I've done so, and encourage you to do the same.  Hopefully, we'll eventually get some auctions up and running that relate specifically to food and agriculture.  

2015 Summer School on Experimental Auctions

Applications are now being accepted for a summer school on Experimental Auctions that I've co-taught with Rudy Nayga and Andreas Drichoutis for three years.  In the past we've held the summer school near Bologna, Italy, organized by Maurizio Canavari, but this year we're venturing out to Crete, Greece.  The course runs from July 7 to July 14, 2015 at the Mediterranean Agronomic Institute of Chania.

Experimental auctions are a technique used to measure consumer willingness-to-pay for new food products, which in turn is used to project demand, market share, and benefits/costs of public policies.  We've had a fantastic time in the past and I'm looking forward to this fourth offering, which is approved for credit hours through the University of Bologna.  The content is mainly targeted toward graduate students or early career professionals (or marketing researchers interested in learning about a new technique).  You can find out more and register here.

For a little enticement, here's a picture of the venue.

Impact of Academic Journals

Dan Rigby, Michael Burton, and I just published an article in the American Journal of Agricultural Economics on the impact of academic journals - as seen through the eyes of the academics who write journal articles.  

Motivating the work is the fact that more emphasis is being placed on the "impact" of our academic work.  This can be seen most directly in places like the UK, where funding directly follows measures of impact.  At my own university, we have to write annual "impact statements," and it is commonplace in promotion and tenure decisions for candidates to have to document "impact."  One of the most common metrics used to identify impact is the Impact Factor of the journal in which an author's article appears.  A journal's Impact Factor in a given year is the number of citations received that year by articles the journal published in the previous two years, divided by the number of articles it published in those two years.  There are many critiques of the use of the Impact Factor, and my own research with Tia Hilmer shows that using the Impact Factor of a journal to measure the impact of a particular article is potentially misleading: some articles published in low Impact Factor journals receive many more citations than some articles published in high Impact Factor journals.
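
As a toy worked example of that calculation (all numbers made up):

```python
# Two-year Impact Factor for 2014: citations received in 2014 to articles
# published in 2012-2013, divided by the number of articles published then.

articles_published = {2012: 40, 2013: 45}   # citable items in the prior two years
citations_in_2014 = {2012: 60, 2013: 30}    # 2014 citations to those items

impact_factor_2014 = sum(citations_in_2014.values()) / sum(articles_published.values())
print(f"2014 Impact Factor: {impact_factor_2014:.2f}")   # 90 / 85 = 1.06
```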

In our current research, we wanted to know what academics themselves think of the impact of different journals, where "impact" can mean several different things.  We surveyed agricultural and environmental economists who were members of at least one of the seven largest agricultural economics associations throughout the world.  We asked respondents to tell us which (of a set of 23 journals) they thought 1) would "most/least enhance your career progression, whether at your current institution or another at which you would like to work" and 2) was "the journal whose papers you think have most/least impact beyond academia (i.e., on policy makers, business community, etc.)."  We compared the journal rankings based on these two measures of impact to each other and to the aforementioned Impact Factor based on citations data.

We find:

We find no significant correlation between the journal scores based on the two criteria, nor between them and the journals’ impact factors. These results suggest that impact beyond academia is poorly aligned with career incentives and that citation measures reflect poorly, if at all, peers’ esteem of journals.

My favorite part of the paper is a set of graphs Dan put together plotting the various measures of impact against each other.  Here's one showing a journal's Impact Factor vs. respondents' perceptions of the career impact of publishing in the journal.
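
For readers who want to replicate the flavor of that comparison with their own data, a rank correlation (the kind of statistic behind the "no significant correlation" finding) takes only a few lines. The journal scores below are made-up numbers, not the paper's data.

```python
from scipy.stats import spearmanr

# Illustrative comparison of citation-based Impact Factors against survey-based
# scores for career impact and impact beyond academia (all values invented).
impact_factor = [3.1, 2.4, 1.9, 1.5, 1.2, 0.9, 0.7, 0.5]
career_score  = [0.8, 0.9, 0.4, 0.7, 0.3, 0.5, 0.2, 0.1]
outside_score = [0.2, 0.3, 0.7, 0.1, 0.8, 0.4, 0.6, 0.5]

for name, scores in [("career", career_score), ("outside academia", outside_score)]:
    rho, p = spearmanr(impact_factor, scores)
    print(f"Impact Factor vs {name}: rho = {rho:.2f}, p = {p:.2f}")
```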

What's going on inside people's heads when they see controversial food technologies?

That was the question I attempted to answer with several colleagues (John Crespi, Brad Cherry, Brandon McFadden, Laura Martin, and Amanda Bruce) in research that was just published in the journal Food Quality and Preference.

We put people in an fMRI machine and recorded their neural activations when they saw pictures of (or made choices between) milk jugs that had different prices and were labeled as being produced with (or without) added growth hormones or cloning.  

What did we find?

Our findings are consistent with the evidence that the dlPFC is involved in resolving tradeoffs among competing options in the process of making a choice. Because choices in the combined-tradeoff condition requires more working memory (as multiple attributes are compared) and because this condition explicitly required subjects to weigh the costs and benefits of the two alternatives, it is perhaps not surprising that greater activation was observed in the dlPFC than in the single-attribute choices in the price and technology conditions. Not only did we find differential dlPFC activations in different choice conditions, we also found that activation in this brain region predicted choice. Individuals who experienced greater activation in the right dlPFC in the technology condition, and who were thus perhaps weighing the benefits/costs of the technology, were less likely to choose the higher-priced non-hormone/non-cloned option in the combined-tradeoff condition.

and

Greater activation in the amygdala and insula when respondents were making choices in the price condition compared to choices in the combined-tradeoff condition might have resulted from adverse affective reactions to high prices and new technologies, although our present research cannot conclusively determine whether this is a causal relationship. In the price condition, the only difference between choice options was the price, and the prices ranged from $3.00 to $6.50, an increase of more than 100% from the lowest to the highest. Such a large price difference could be interpreted as a violation of a social norm or involve a fearful/painful/threatening response, which, as just indicated, has been associated with activity in the amygdala and insula. Kahneman (2011, p. 296) argues that these particular brain responses to high prices are consistent with the behavioral-economic concept of loss aversion, in this case, a feeling that the seller is overcharging the buyer.

The punchline:

Estimates indicate that the best fitting model is one that included all types of data considered: demographics, psychometric scales, product attributes, and neural activations observed via fMRI. Overall, neuroimaging data adds significant predictive and explanatory power beyond the measures typically used in consumer research.
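
To illustrate the kind of model comparison that punchline describes (a simulated sketch, not the study's actual model, variables, or data), one can fit a choice model with and without a neural predictor and compare out-of-sample accuracy:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Simulate a binary choice driven by "conventional" predictors (standing in for
# demographics, psychometric scales, and product attributes) plus one extra
# signal standing in for dlPFC activation.  All data here are invented.
rng = np.random.default_rng(0)
n = 400
conventional = rng.normal(size=(n, 4))
dlpfc = rng.normal(size=(n, 1))
logits = conventional @ np.array([0.5, -0.3, 0.4, 0.2]) + 0.8 * dlpfc[:, 0]
choice = (logits + rng.normal(size=n) > 0).astype(int)

# Cross-validated accuracy of a choice model without vs. with the neural feature.
base = cross_val_score(LogisticRegression(), conventional, choice, cv=5).mean()
full = cross_val_score(LogisticRegression(), np.hstack([conventional, dlpfc]),
                       choice, cv=5).mean()
print(f"accuracy without fMRI feature: {base:.2f}")
print(f"accuracy with fMRI feature:    {full:.2f}")
```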