
How do people respond to scientific information about GMOs and climate change?

The journal Food Policy just published a paper by Brandon McFadden and me that explores how consumers respond to scientific information about genetically engineered foods and about climate change.  The paper was motivated by some previous work we'd done where we found that people didn't always respond as anticipated to television advertisements encouraging them to vote for or against mandatory labels on GMOs.  

In this study, respondents were shown a collection of statements from authoritative scientific bodies (like the National Academies of Science and the United Nations) about the safety of eating approved GMOs or the risk of climate change.  Then we asked respondents whether, after seeing the information, they were more or less likely than they would have been otherwise to believe that GMOs were safe to eat or that the earth was warming due to human activities.

We classified people as "conservative" (if they stuck with their prior beliefs regardless of the information), "convergent" (if they changed their beliefs in a way consistent with the scientific information), or "divergent" (if they changed their beliefs in a way inconsistent with the scientific information). 
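
For concreteness, here is a minimal sketch of how such a classification rule might look in code; the 0-to-1 belief scale and the function name are my own illustration, not the paper's actual coding.

```python
def classify_response(prior: float, posterior: float, signal: float) -> str:
    """Classify a respondent by how their belief moved relative to the
    scientific information.  Beliefs and the signal share a common scale,
    e.g., a 0-1 probability that GMOs are safe to eat.  (Illustrative
    only; the paper's actual coding may differ.)
    """
    if posterior == prior:
        return "conservative"  # stuck with the prior belief
    moved_toward = abs(posterior - signal) < abs(prior - signal)
    return "convergent" if moved_toward else "divergent"

# A skeptic (prior 0.2) shown a reassuring statement (signal 0.9) who
# becomes even more skeptical (posterior 0.1) counts as divergent.
print(classify_response(0.2, 0.1, 0.9))  # divergent
```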

We then explored the factors that explained how people responded to the information.  As it turns out, one of the most important factors determining how you respond to information is your prior belief.  If your priors were that GMOs were safe to eat and that global warming was occurring, you were more likely to find the information credible and respond in a "rational" (or Bayesian updating) way.  
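
As a point of reference, the simplest version of that weighted-updating idea can be written as follows (my notation, not necessarily the model estimated in the paper):

```latex
\pi_{\text{post}} = \alpha \, \pi_{\text{prior}} + (1 - \alpha) \, s, \qquad 0 \le \alpha \le 1
```

Here s is the belief implied by the scientific statement and α is the weight placed on the prior.  A "conservative" response behaves as if α ≈ 1, a "convergent" response as if α < 1, and a "divergent" response moves away from s - something no weight between 0 and 1 can produce.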

Here are a couple of graphs from the paper illustrating that result (where "believers" already tended to believe the information contained in the scientific statements and "deniers" did not).  As the results below show, the "deniers" were more likely to be "divergent" - that is, the provision of scientific information made them more likely to believe the opposite of the message it conveyed.

We also explored a host of other psychological factors that influenced how people responded to scientific information.  Here's the abstract:

The ability of scientific knowledge to contribute to public debate about societal risks depends on how the public assimilates information resulting from the scientific community. Bayesian decision theory assumes that people update a belief by allocating weights to a prior belief and new information to form a posterior belief. The purpose of this study was to determine the effects of prior beliefs on assimilation of scientific information and test several hypotheses about the manner in which people process scientific information on genetically modified food and global warming. Results indicated that assimilation of information is dependent on prior beliefs and that the failure to converge a posterior belief to information is a result of several factors including: misinterpreting information, illusionary correlations, selectively scrutinizing information, information-processing problems, knowledge, political affiliation, and cognitive function.

An excerpt from the conclusions:

Participants who misinterpreted the information provided did not converge posterior beliefs to the information. Rabin and Schrag (1999) asserted that people suffering from confirmation bias misinterpret evidence to conform to a prior belief. The results here confirmed that people who misinterpreted information did indeed exhibit confirmation, as well as people who conserved a prior belief. This is more evidence that assuming optimal Bayesian updating may only be appropriate when new information is somewhat aligned with a prior belief.

Consumer sovereignty vs. scientific integrity

This post by Olga Khazan at Atlantic.com highlights some recent food company decisions to remove ingredients of concern to certain consumers.  Yet, the best science we have available suggests these same ingredients are perfectly safe.

Examples mentioned in the story include announcements that Diet Pepsi is removing aspartame, that Ben and Jerry's and Chipotle are removing GMOs (the former company's decision is a bit ironic given that they're essentially selling frozen fat with sugar; the latter's is duplicitous, since they're still selling sodas and cheese that contain GMOs), that Pepsi is dropping high fructose corn syrup in some of its drinks, and that Clif's Luna Bars are going gluten-free.  To that we could add a long list of others, such as Cheerios dropping GMOs, many milk brands years ago dropping rBST, etc.

It's difficult to know what to make of these moves.  On the one hand, we ought to champion consumer freedom and sovereignty.   Whatever one might think about the "power" of Big Food, these examples clearly show food companies willing to bend over backwards to meet customer demands.  That, in principle, is a good thing.  

The darker side of the story is that many consumers have beliefs about food ingredients that don't comport with the best scientific information we have available.  As a result, food companies are making a variety of cost-increasing changes that only convey perceived (but not real) health benefits to consumers.  

The longer-run potential problem for food companies is that they may inadvertently be fostering a climate of distrust.  Rather than creatively defending the use of ingredient X and taking the opportunity to talk about the science, their moves come across as an admission of some sort of guilt: Oh, you caught us!  You found out we use X.  Now we'll remove it.  All the while, we'll donate millions to causes that promote X or prevent labeling of X, while offering brands that promote the absence of X.  It's little wonder people get confused, lose trust, and question integrity.

I'm not sure there is an easy answer to this conundrum.  In a competitive environment, I'm not sure I'd expect (or shareholders would expect) one food company to make a principled stand for ingredient X while their competitor is stealing market share by advertising "no-X".  On the other hand, I'd like consumers to make more informed decisions, but I'm not all that sure "education" has much impact, or that, at least for many middle- to upper-income consumers, the price of food gives them much economic incentive to adjust their prior beliefs.

Faced with the conundrum, I suspect some people would advocate for some sort of policy (i.e., ban ingredient X or prevent claims like "no-X"), but I don't think that's the right answer.  Despite my frustration, I suspect the marketplace will work it out in a messy way.  Some companies will adopt "no-X", incur higher costs than their consumers are willing to pay, and go out of business or go back to X.  Some companies that are seen as lacking integrity will lose market share.  Some consumers will pay more for "no-X" only to find out later that it wasn't worth it, and switch back.  Or maybe the scientists will wind up being wrong, some consumers will have avoided X for good reason, and all companies will drop X.  The dynamism of the marketplace that is at times frustrating is also the key to ultimately resolving some of those same frustrations.

Impotence or Death?

Last week I was in Italy teaching a short course and speaking at a conference.  At the conference, I attended a session where the author described an experiment on alcohol warning labels.  He had people choose between different bottles of wine that had different warning labels.

I thought this was a bit of a strange experiment because once you've seen one bottle with a warning label, doesn't it tell you something about all the bottles?  When I voiced this concern, my friend Maurizio Canavari pointed out that in Italy, different cigarette packages have different warning labels (apparently determined at random).

He sent me this picture yesterday, which reminds me of the joke he told me after the session.  A man walks into a tobacco shop and asks for a pack.  On his way out, he notices the warning label on the pack says that smoking may cause problems in the bedroom (e.g., see the label above, "Il fumo riduce la fertilità" - "smoking reduces fertility").  He goes back in, hands the pack back to the shop owner, and says: I'll take the one that just kills you.

Seriously, I wonder about the effectiveness of spreading information out over multiple packs vs. trying to cram it all onto one.  And I do wonder whether people are more or less likely to pick packs with certain labels, despite the fact that the labels warn about smoking in general and not about the effects of one particular pack or brand over another.


Why people lie on surveys and how to make them stop

Companies spend millions (perhaps billions?) of dollars every year surveying consumers to figure out what they want.  Environmental, health, and food economists do the same to try to figure out the costs and benefits of various policies.  What are people willing to pay for organic or non-GMO foods, or for country-of-origin labels on meat?  These are the sorts of questions I'm routinely asked.

Here's the problem: there is ample evidence (from economics and marketing among other disciplines) that people don't always do what they say they will do on a survey.  A fairly typical result from the economics literature is that the amount people say they are willing to pay for a new good or service is about twice what they'll actually pay when money is on the line.  It's what we economists call hypothetical bias.

We don't yet have a solid theory that explains this phenomenon in every situation, and it likely results from a variety of factors (among other possible reasons):

  • Social desirability bias.  We give answers we think the surveyor wants to hear.
  • Warm glow, yea-saying, and self-presentation bias.  It feels good to support "good" causes and say "yes" - and why not say we're willing to do something, particularly when there is no cost to doing so and it can make us look and feel good about ourselves?
  • Idealized responses.  We imagine whether we'd ever buy the good when we eventually have the money and the time is right, rather than answering whether we'd buy it here and now.
  • Strategy.  If we think our answers to a survey question can influence the eventual price that is charged, or whether the good is actually offered, we might over- or under-state our willingness to buy.
  • Uncertainty.  Research suggests a lot of the hypothetical bias comes from those who say they aren't sure whether they'd buy the good.

What to do?

Various fixes have been proposed over the years.

  • Calibration.  Take responses from a survey and reduce them by some factor so that they more closely approximate what consumers will actually do.  The problem: calibration factors are unknown and vary across people and goods.
  • Cheap talk.  On the survey, explain the problem of hypothetical bias and explicitly ask people to avoid it.  The problem: it doesn't always "work" for all people (particularly experienced people familiar with the good), and there is always some uncertainty over whether you've simply introduced a new bias.
  • Certainty scales.  Ask people how sure they are about their answers and, for people who indicate a high level of uncertainty, re-code their "yes" answers to "no".  The problem: the approach is ad hoc, and it is hard to know a priori what the cut-off on the certainty scale should be.  Moreover, it only works for simple yes/no questions.  (A minimal code sketch of this recoding, along with calibration, appears after this list.)
  • Use particular question formats.  Early practitioners of contingent valuation (an approach for asking willingness-to-pay popular in environmental economics) swear by a "double-bounded dichotomous choice, referendum question," which they believe has good incentives for truth telling if respondents believe their answers might actually influence whether the good is provided (i.e., if the answer is consequential).  I'm skeptical.  I'm more open to the use of so-called "choice experiments," where people make multiple choices between goods that have different attributes, and where we're only interested in "marginal" trade-offs (i.e., whether you want good A vs. good B).  There is likely more bias in the "total" question (i.e., whether you want good A or nothing).
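
To make the calibration and certainty-scale fixes concrete, here is a minimal sketch in Python; the 10-point scale, the cutoff of 8, and the 0.5 calibration factor are illustrative assumptions, not settled values from the literature.

```python
def recode_by_certainty(answer: str, certainty: int, cutoff: int = 8) -> str:
    """Recode a hypothetical "yes" to "no" when self-reported certainty
    (here a 1-10 scale) falls below the cutoff.  The cutoff is ad hoc,
    which is exactly the criticism noted above."""
    if answer == "yes" and certainty < cutoff:
        return "no"
    return answer

def calibrate_wtp(stated_wtp: float, factor: float = 0.5) -> float:
    """Scale stated willingness-to-pay down by a calibration factor.
    The 0.5 default reflects the rough "twice what they'll actually pay"
    figure mentioned earlier; true factors vary across people and goods."""
    return stated_wtp * factor

print(recode_by_certainty("yes", certainty=5))  # "no"
print(calibrate_wtp(10.00))                     # 5.0
```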

There is another important alternative.  If the problem is that surveys don't prompt people to act as they would in a market, why don't we just create a real market?  A market where people have to give up real money for real goods - where we make people put their money where their mouth is?  It is an approach I wrote about in the book Experimental Auctions with Jason Shogren, and it is the approach I teach with Rudy Nayga, Andreas Drichoutis, and Maurizio Canavari in the summer school we have planned for this summer in Crete (sign up now!).  It is an approach with a long history, stemming mainly from the work of experimental economists.

One of the drawbacks with the experimental market approach is that it is often limited to a particular geographic region.  You've got to recruit people and get them in a room (or as people like John List and others have done, go to a real-world market already in existence and bend it to your research purposes).   

Well, there's now a new option with a much wider reach.  Several months ago I was contacted by Anouar El Haji, who is at the Business School at the University of Amsterdam.  He's created a simple online platform he calls Veylinx, where researchers can conduct real auctions designed to give participants an incentive to truthfully reveal their maximum willingness-to-pay.  The advantage is that one can reach a large number of people across the US (and potentially across the world).  It's a bit like eBay, but with a much simpler environment (which researchers can control) and a clearer incentive to get people to bid their maximum willingness-to-pay.
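
The post doesn't spell out which auction format Veylinx uses, but the second-price (Vickrey) sealed-bid auction is the textbook incentive-compatible design in the experimental auctions literature.  Here is a minimal sketch, under that assumption, of why bidding your true value is the safe strategy.

```python
def second_price_auction(bids: dict[str, float]) -> tuple[str, float]:
    """Sealed-bid second-price (Vickrey) auction: the highest bidder
    wins but pays the second-highest bid.  Because the price paid does
    not depend on the winner's own bid, bidding one's true maximum
    willingness-to-pay is a weakly dominant strategy."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    return ranked[0], bids[ranked[1]]

# Illustrative values only.  A bidder whose true WTP is $4.00 can't gain
# by shading (risks losing a profitable win) or inflating (risks winning
# at a price above value).
bids = {"alice": 4.00, "bob": 3.25, "carol": 2.10}
print(second_price_auction(bids))  # ('alice', 3.25)
```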

One of the coolest parts is that you can even sign up to participate in the auctions.  I've done so, and encourage you to do the same.  Hopefully, we'll eventually get some auctions up and running that relate specifically to food and agriculture.  

How effective is education at correcting misperceptions?

Whether it's GMOs, pesticides, or the economic effects of various food policies, it seems that the public often holds beliefs that are at odds with what the experts believe.  A natural tendency - especially for someone who is an educator - is to propose that we need more education on these topics.

But, how effective are we at changing people's minds?  This article in Pacific Standard by the psychologist David Dunning might give us pause.  

The research suggests:

What’s curious is that, in many cases, incompetence does not leave people disoriented, perplexed, or cautious. Instead, the incompetent are often blessed with an inappropriate confidence, buoyed by something that feels to them like knowledge.

But, before you start feeling too confident in your own abilities, read the following:

An ignorant mind is precisely not a spotless, empty vessel, but one that’s filled with the clutter of irrelevant or misleading life experiences, theories, facts, intuitions, strategies, algorithms, heuristics, metaphors, and hunches that regrettably have the look and feel of useful and accurate knowledge. This clutter is an unfortunate by-product of one of our greatest strengths as a species. We are unbridled pattern recognizers and profligate theorizers. Often, our theories are good enough to get us through the day, or at least to an age when we can procreate. But our genius for creative storytelling, combined with our inability to detect our own ignorance, can sometimes lead to situations that are embarrassing, unfortunate, or downright dangerous—especially in a technologically advanced, complex democratic society that occasionally invests mistaken popular beliefs with immense destructive power (See: crisis, financial; war, Iraq). As the humorist Josh Billings once put it, “It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.” (Ironically, one thing many people “know” about this quote is that it was first uttered by Mark Twain or Will Rogers—which just ain’t so.)

Several studies seem to suggest that providing people with a little information may not lead to more agreement on an issue, but rather can result in polarized opinions.  The reason is that information makes us feel more informed, and lets us feel more confident in whatever our political or cultural tendencies would lead us to believe in the first place.  That is, people bend information to reinforce their identity and cultural beliefs.