Blog

Consumer sovereignty vs. scientific integrity

This post by Olga Khazan at The Atlantic highlights some recent decisions by food companies to remove ingredients of concern to certain consumers.  Yet, the best science we have available suggests these same ingredients are perfectly safe.

Examples mentioned in the story include announcements that Diet Pepsi is removing aspartame, Ben & Jerry's and Chipotle are removing GMOs (the former company's decision is a bit ironic given that they're essentially selling frozen fat with sugar; the latter's is duplicitous since they're still selling sodas and cheese that will contain GMOs), Pepsi dropping high fructose corn syrup in some of their drinks, and Clif's Luna Bars going gluten-free.  To that we could add a long list of others, such as Cheerios dropping GMOs, many milk brands dropping rBST years ago, etc.

It's difficult to know what to make of these moves.  On the one hand, we ought to champion consumer freedom and sovereignty.   Whatever one might think about the "power" of Big Food, these examples clearly show food companies willing to bend over backwards to meet customer demands.  That, in principle, is a good thing.  

The darker side of the story is that many consumers have beliefs about food ingredients that don't comport with the best scientific information we have available.  As a result, food companies are making a variety of cost-increasing changes that only convey perceived (but not real) health benefits to consumers.  

The longer-run potential problem for food companies is that they may inadvertently be fostering a climate of distrust.  Rather than creatively defending the use of ingredient X and taking the opportunity to talk about the science, their moves come across as an admission of some sort of guilt:  Oh, you caught us!  You found out we use X.  Now we'll remove it.  All the while, we'll donate millions to causes that promote X or prevent labeling of X, while offering brands that promote the absence of X.  It's little wonder people get confused, lose trust, and question integrity.

I'm not sure there is an easy answer to this conundrum.  In a competitive environment, I'm not sure I'd expect (or shareholders would expect) one food company to make a principled stand for ingredient X while their competitor is stealing market share by advertising "no-X".  On the other hand, I'd like consumers to make more informed decisions, but I'm not all that sure "education" has much impact or that, given the price of food, many middle- to upper-income consumers have much economic incentive to adjust their prior beliefs.

Faced with the conundrum, I suspect some people would advocate for some sort of policy (i.e., ban ingredient X or prevent claims like "no-X"), but I don't think that's the right answer.  Despite my frustration, I suspect the marketplace will work it out in a messy way.  Some companies will adopt "no-X", incur higher costs than their consumers are willing to pay, and go out of business or go back to X.  Some companies that are seen as lacking integrity will lose market share.  Some consumers will pay more for "no-X" only to later find out it wasn't worth it, and switch back.  Maybe the scientists wind up being wrong, some consumers avoided X for good reason, and all companies drop X.  The dynamism of the marketplace that is, at times, frustrating is also the key to ultimately solving some of those same frustrations.

Impotence or Death?

Last week I was in Italy teaching a short course and speaking at a conference.  At the conference, I attended a session where the author described an experiment on alcohol warning labels.  He had people choose between different bottles of wine that had different warning labels.

I thought this was a bit of a strange experiment because once you've seen one bottle with a warning label, doesn't it tell you something about all the bottles?  When I voiced this concern, my friend Maurizio Canavari pointed out that in Italy, different cigarette packages have different warning labels (apparently determined at random).

He sent me this picture yesterday, which reminds me of the joke he told me after the session.  A man walks into a tobacco shop and asks for a pack.  On his way out, he notices the warning label on the pack says that smoking may cause problems in the bedroom (e.g., see the above label "Il fumo riduce la fertilità," i.e., smoking reduces fertility).  He goes back in, hands the pack back to the shop owner, and says: I'll take the one that just kills you.

Seriously, I wonder about the effectiveness of spreading information out over multiple packs vs. trying to cram it all on one.  And, I do wonder if people are more/less likely to pick packs with certain labels despite the fact that the labels warn about smoking in general and not about the effects of one particular pack or brand over another.

 

Why people lie on surveys and how to make them stop

Companies spend millions (perhaps billions?) of dollars every year surveying consumers to figure out what they want.  Environmental, health, and food economists do the same to try to figure out the costs and benefits of various policies.  What are people willing to pay for organic or non-GMO foods, or for country-of-origin labels on meat?  These are the sorts of questions I'm routinely asked.

Here's the problem: there is ample evidence (from economics and marketing among other disciplines) that people don't always do what they say they will do on a survey.  A fairly typical result from the economics literature is that the amount people say they are willing to pay for a new good or service is about twice what they'll actually pay when money is on the line.  It's what we economists call hypothetical bias.

We don't yet have a solid theory that explains this phenomenon in every situation, and it likely results from a variety of factors:

  • Social desirability bias.  We give answers we think the surveyor wants to hear.
  • Warm glow, yea-saying, and self-presentation bias.  It feels good to support "good" causes and say "yes", and why not say we're willing to do something, particularly when there is no cost to doing so and it can make us look and feel good about ourselves?
  • Idealized responses.  We imagine whether we'd ever buy the good when we eventually have the money and the time is right, rather than answering whether we'd buy it here and now.
  • Strategy.  If we think our answers to a survey question can influence the eventual price that is charged, or whether the good is actually offered, we might over- or under-state our willingness to buy.
  • Uncertainty.  Research suggests a lot of the hypothetical bias comes from those who say they aren't sure about whether they'd buy the good.

There are other possible reasons as well.

What to do?

Various fixes have been proposed over the years.

  • Calibration.  Take responses from a survey and reduce them by some factor so that they more closely approximate what consumers will actually do.  The problem: calibration factors are unknown and vary across people and goods.
  • Cheap talk.  On the survey, explain the problem of hypothetical bias and explicitly ask people to avoid it.  The problem: it doesn't always "work" for all people (particularly experienced people familiar with the good), and there is always some uncertainty over whether you've simply introduced a new bias.
  • Certainty scales.  Ask people how sure they are about their answers, and for people who indicate a high level of uncertainty, re-code their "yes" answers to "no".  The problem: the approach is ad-hoc, and it is hard to know a priori what the cut-off on the certainty scale should be.  Moreover, it only works for simple yes/no questions.
  • Use particular question formats.  Early practitioners of contingent valuation (an approach for eliciting willingness-to-pay popular in environmental economics) swear by a "double-bounded dichotomous choice, referendum question", which they believe has good incentives for truth telling if respondents believe their answers might actually influence whether the good is provided (i.e., if the answer is consequential).  I'm skeptical.  I'm more open to the use of so-called "choice experiments", where people make multiple choices between goods that have different attributes, and where we're only interested in "marginal" trade-offs (i.e., whether you want good A vs. good B).  There is likely more bias in the "total" (i.e., whether you want good A or nothing).
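To make the first and third fixes concrete, here is a minimal sketch, using made-up survey numbers and an assumed "divide by two" calibration factor and certainty cutoff (both are illustrative choices, not settled values from the literature):

```python
# Illustrative sketch of two hypothetical-bias fixes applied to made-up data.

def calibrate_wtp(stated_wtp, factor=0.5):
    """Scale stated willingness-to-pay down by a calibration factor.
    The 0.5 default (stated values are roughly twice real ones) is only
    a rough rule of thumb; true factors vary across people and goods."""
    return [w * factor for w in stated_wtp]

def recode_by_certainty(answers, certainty, cutoff=8):
    """Recode 'yes' answers to 'no' when self-reported certainty
    (on a 1-10 scale) falls below the cutoff.  The cutoff is ad hoc."""
    return ["yes" if a == "yes" and c >= cutoff else "no"
            for a, c in zip(answers, certainty)]

stated = [6.00, 4.50, 8.00]           # hypothetical stated WTP ($)
print(calibrate_wtp(stated))          # [3.0, 2.25, 4.0]

answers   = ["yes", "yes", "no"]      # hypothetical purchase intentions
certainty = [9, 5, 10]                # the middle respondent is unsure
print(recode_by_certainty(answers, certainty))  # ['yes', 'no', 'no']
```

The sketch also makes the drawbacks visible: both the calibration factor and the certainty cutoff are free parameters the researcher must pick.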

There is another important alternative.  If the problem is that surveys don't prompt people to act as they would in a market, well, why don't we just create a real market?  A market where people have to give up real money for real goods - where we make people put their money where their mouth is?  It is the approach I wrote about in the book Experimental Auctions with Jason Shogren, and it is the approach I teach with Rudy Nayga, Andreas Drichoutis, and Maurizio Canavari in the summer school we have planned for this summer in Crete (sign up now!).  It is an approach with a long history, stemming mainly from the work of experimental economists.

One of the drawbacks with the experimental market approach is that it is often limited to a particular geographic region.  You've got to recruit people and get them in a room (or as people like John List and others have done, go to a real-world market already in existence and bend it to your research purposes).   

Well, there's now a new option with much wider reach.  Several months ago I was contacted by Anouar El Haji, who is at the Business School at the University of Amsterdam.  He's created a simple online platform he calls Veylinx, where researchers can conduct real auctions designed to give participants an incentive to truthfully reveal their maximum willingness-to-pay.  The advantage is that one can reach a large number of people across the US (potentially across the world).  It's a bit like eBay, but with a much simpler environment (which researchers can control) and a clearer incentive to get people to bid their maximum willingness-to-pay.
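I won't describe Veylinx's exact mechanism here, but the classic incentive-compatible design behind this kind of platform is the second-price (Vickrey) sealed-bid auction: the highest bidder wins but pays only the second-highest bid, so bidding your true maximum willingness-to-pay is a dominant strategy.  A generic sketch (hypothetical bidders and bids, not Veylinx's actual code):

```python
# Second-price (Vickrey) sealed-bid auction: the winner pays the runner-up's
# bid, which removes the incentive to shade your bid below your true value.

def second_price_auction(bids):
    """bids: dict mapping bidder -> bid amount.  Returns (winner, price)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # winner pays the second-highest bid, not their own
    return winner, price

bids = {"ann": 4.00, "bob": 3.25, "cat": 5.50}  # hypothetical bids ($)
print(second_price_auction(bids))  # ('cat', 4.0)
```

Note that cat wins but pays ann's bid of $4.00: since your own bid never sets the price you pay, overstating risks paying more than the good is worth to you, and understating risks losing a good you'd have happily bought.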

One of the coolest parts is that you can even sign up to participate in the auctions.  I've done so, and encourage you to do the same.  Hopefully, we'll eventually get some auctions up and running that relate specifically to food and agriculture.  

How effective is education at correcting misperceptions?

Whether it's GMOs or pesticides or the economic effects of various food policies, it seems that the public often holds beliefs that are at odds with what the experts believe.  A natural tendency - especially for someone who is an educator - is to propose that we need more education on these topics.

But, how effective are we at changing people's minds?  This article in Pacific Standard by the psychologist David Dunning might give us pause.  

The research suggests:

What’s curious is that, in many cases, incompetence does not leave people disoriented, perplexed, or cautious. Instead, the incompetent are often blessed with an inappropriate confidence, buoyed by something that feels to them like knowledge.

But, before you start feeling too confident in your own abilities, read the following:

An ignorant mind is precisely not a spotless, empty vessel, but one that’s filled with the clutter of irrelevant or misleading life experiences, theories, facts, intuitions, strategies, algorithms, heuristics, metaphors, and hunches that regrettably have the look and feel of useful and accurate knowledge. This clutter is an unfortunate by-product of one of our greatest strengths as a species. We are unbridled pattern recognizers and profligate theorizers. Often, our theories are good enough to get us through the day, or at least to an age when we can procreate. But our genius for creative storytelling, combined with our inability to detect our own ignorance, can sometimes lead to situations that are embarrassing, unfortunate, or downright dangerous—especially in a technologically advanced, complex democratic society that occasionally invests mistaken popular beliefs with immense destructive power (See: crisis, financial; war, Iraq). As the humorist Josh Billings once put it, “It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.” (Ironically, one thing many people “know” about this quote is that it was first uttered by Mark Twain or Will Rogers—which just ain’t so.)

Several studies seem to suggest that providing people with a little information may not lead to more agreement on an issue, but rather can polarize opinions.  The reason is that information makes us feel more informed, and lets us feel more confident in whatever our political or cultural tendencies would lead us to believe in the first place.  That is, people bend information to reinforce their identity and cultural beliefs.

What's going on inside people's heads when they see controversial food technologies?

That was the question I attempted to answer with several colleagues (John Crespi, Brad Cherry, Brandon McFadden, Laura Martin, and Amanda Bruce) in research that was just published in the journal Food Quality and Preference.

We put people in an fMRI machine and recorded their neural activations when they saw pictures of (or made choices between) milk jugs that had different prices and were labeled as being produced with (or without) added growth hormones or cloning.  

What did we find?

Our findings are consistent with the evidence that the dlPFC is involved in resolving tradeoffs among competing options in the process of making a choice. Because choices in the combined-tradeoff condition requires more working memory (as multiple attributes are compared) and because this condition explicitly required subjects to weigh the costs and benefits of the two alternatives, it is perhaps not surprising that greater activation was observed in the dlPFC than in the single-attribute choices in the price and technology conditions. Not only did we find differential dlPFC activations in different choice conditions, we also found that activation in this brain region predicted choice. Individuals who experienced greater activation in the right dlPFC in the technology condition, and who were thus perhaps weighing the benefits/costs of the technology, were less likely to choose the higher-priced non-hormone/non-cloned option in the combined-tradeoff condition.

and

Greater activation in the amygdala and insula when respondents were making choices in the price condition compared to choices in the combined-tradeoff condition might have resulted from adverse affective reactions to high prices and new technologies, although our present research cannot conclusively determine whether this is a causal relationship. In the price condition, the only difference between choice options was the price, and the prices ranged from $3.00 to $6.50, an increase of more than 100% from the lowest to the highest. Such a large price difference could be interpreted as a violation of a social norm or involve a fearful/painful/threatening response, which, as just indicated, has been associated with activity in the amygdala and insula. Kahneman (2011, p. 296) argues that these particular brain responses to high prices are consistent with the behavioral-economic concept of loss aversion, in this case, a feeling that the seller is overcharging the buyer.

The punchline:

Estimates indicate that the best fitting model is one that included all types of data considered: demographics, psychometric scales, product attributes, and neural activations observed via fMRI. Overall, neuroimaging data adds significant predictive and explanatory power beyond the measures typically used in consumer research.