Blog

Pew Survey on Consumers, GMOs, and Trust in Science

About a week ago, the Pew Research Center released a new report (report summary here) on GMOs, organic food, and trust in food science.  The report has already been covered quite a bit in the media, but I thought I'd share a few observations on the study's headline results.

First, the study finds:

Four-in-ten Americans (40%) say that most (6%) or some (34%) of the foods they eat are organic.

It's hard to know what to make of this claim, as "some" is a pretty loose category.  One important point to keep in mind here is that USDA data reveal that, with a few exceptions like lettuce or carrots, the share of production that is organic is typically far less than 5% for most foods.

The study also finds:

The minority of U.S. adults who care deeply about the issue of GM foods (16%) . . . are also much more likely to consider organic produce healthier

The finding is consistent with prior research showing that WTP for organic is heavily influenced by the desire to avoid pesticides and GMOs.  In fact, a lot of attention was given to the organic industry's support of the new mandatory labeling law for GMOs, which allowed disclosure via relatively innocuous QR codes.  The organic industry has worked to make sure people know non-GMO is not synonymous with organic.  In other words, these two attributes (organic and non-GMO) are likely demand substitutes for consumers, and the organic industry knows this.

One of the highlighted conclusions from the study is:

The divides over food do not fall along familiar political fault lines.

I'm not so sure.  While I agree things like concern about GMOs or preferences for organic don't have strong correlations with political ideology, the same can't be said for people's desire to regulate GMOs (say, via labels or bans) or to subsidize organics.  See, for example, the paper entitled "The political ideology of food" that I published in Food Policy in 2012. From the abstract:

Food ideology was related to conventional measures of political ideology with, for example, more liberal respondents desiring more government involvement in food than more conservative respondents . . .

As I've written about in the past, I think it is important to separate "food preferences" from "policy preferences," and on this last issue, there are big partisan and ideological divides.

Much of the news coverage I saw about the report focused on the results related to Americans' trust in scientists and GMOs.  The study reports:

Americans have limited trust in scientists connected with genetically modified foods.

The study also reveals that only about half the respondents think scientists believe GMOs are safe to eat.  Pew's other research shows the actual figure is more like 88%.  Thus, people underestimate scientists' beliefs about the safety of GMOs.  One might think, then, that the answer is to just tell people about the scientific consensus regarding GMOs.  However, my research with Brandon McFadden suggests this probably won't have much effect.  In our study, the biggest determinant of how an individual responded to information about the science on GMOs was their prior belief about the safety of GMOs.  In fact, about a third of the people who thought GMOs were unsafe prior to information said they thought GMOs were even more unsafe after receiving statements from the National Academies of Sciences, the American Medical Association, etc. indicating GMOs were safe (we called these folks "divergent"); the plurality of people who thought GMOs were unsafe simply ignored the scientific information indicating GMOs were safe.  This behavior is a form of motivated reasoning that Dan Kahan has discussed extensively in his work on cultural cognition.  We look for the information that supports our prior beliefs and ignore or discount the rest.

On this issue of trust in scientists and GMO foods: it is important to note that trust in virtually ALL institutions is down over time.  Gallup has been tracking trust in about a dozen institutions since the 1970s.  Aside from a few exceptions (like the military and police), trust is way down for most institutions.  For example, over 65% of people had a great deal or a lot of trust in "church or organized religion" in the 1970s, whereas today the figure is 41%.  For "public schools," confidence was running about 60% in the mid-1970s, but today it is only 30%.  Newspapers went from around 40% to around 20%.  "Big business" went from around 30% to around 18%.  "The medical system" went from 80% to 39%.  Similar trends exist for Congress, the presidency, organized labor, banks, and so on.

As a result, it is important to ask: how much trust is there in scientists . . . compared to what?  I haven't asked this question specifically in regard to GMOs in particular or food science in general, but a while back I asked on my monthly Food Demand Survey (FooDS):  “How trustworthy is information about meat and livestock from the following sources?” Fifteen sources were listed (the order randomly varied across respondents), and respondents had to place five sources in the most trustworthy category and five sources in the least trustworthy category. A trustworthiness score was created by calculating the proportion of times a meat and livestock information source was ranked most trustworthy minus the proportion of times it was ranked least trustworthy.
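For concreteness, here is a minimal sketch of how that kind of net trustworthiness score can be computed.  The counts below are made up purely for illustration; they are not the actual FooDS data.

```python
# Net trust score sketch: share of respondents placing a source in the
# "most trustworthy" group minus the share placing it in the "least
# trustworthy" group. Counts are illustrative only.
n_respondents = 1000
most_counts = {"USDA": 620, "FDA": 600, "Chipotle": 40}
least_counts = {"USDA": 120, "FDA": 110, "Chipotle": 730}

net_trust = {src: (most_counts[src] - least_counts[src]) / n_respondents
             for src in most_counts}

for src, score in sorted(net_trust.items(), key=lambda kv: -kv[1]):
    print(f"{src}: {score:+.2f}")
```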

We found:

The USDA and FDA were reported as most trustworthy, with 50% more people indicating the source as most trustworthy than least. A university professor from Harvard was seen as slightly more trustworthy than one from Texas A&M, but both were viewed as less trustworthy than interest groups like the Farm Bureau, the CSPI, or the HSUS.

News organizations, and particularly food companies, were viewed as least trustworthy. Chipotle was seen as the least trustworthy organization studied – the restaurant chain was placed in the least trustworthy category 69% more often than in the most trustworthy category.

While individual scientists at either Harvard or Texas A&M were less trusted than some other sources, perhaps that was because the question referred to a single professor rather than a group of professors.  Indeed, the four top groups are all collections of scientists (among other people).  A subsequent survey asked how much people knew about each of these individuals and institutions, and while CSPI is trusted, it isn't well known.  I suspect people were responding to the word "science" in its name.  So, I think there is good reason to suspect people trust scientists as much as or more than other societal institutions.

The issue of trust and acceptance of GMOs has been researched quite heavily in the academic literature (e.g., see several studies by Lynn Frewer).  In this paper, she and her coauthors show that people's response to information on GMOs doesn't depend on how much they trust the source per se; rather, it's the other way around: people trust the sources giving them the information that fits with their prior beliefs.  So, again, we're back to motivated reasoning.  Still, we should acknowledge some research showing that "information matters."  I've done work on this topic, as have Matt Rousu, Wally Huffman, and Jason Shogren.  This last set of researchers shows, for example, that relatively uninformed people are influenced by information from both interested and third-party sources.

There is a lot more in the Pew report, but I think I'll leave it here for now.

 

The Benefits of Mandatory GMO Labeling

I ran across this post over at RegBlog which notes that the USDA will have to do a cost-benefit analysis of the new mandatory labeling law for GMOs.  The post relies heavily on this paper by Cass Sunstein written back in August.  Sunstein's article discusses the fact that regulatory agencies typically do a very bad job at quantifying the benefits of mandatory labeling policies (and identifying when or why those benefits only apply to mandatory rather than voluntary labels).

Sunstein argues that, in theory, consumer willingness-to-pay (WTP) is the best way to measure the benefits of a labeling policy.  I wholeheartedly agree (and have even written papers using WTP to estimate the benefits of GMO labels), but I want to offer a couple of important caveats.

The issue in ascertaining the value of a label isn't whether consumers are willing to pay a premium for non-GM over GM food.  Rather, as emphasized in this seminal paper by Foster and Just, what is key is whether the added information would have changed what people bought.  If you learn a food you're eating contains GMOs (via a mandatory label) but you're still unwilling to pay the premium for the non-GMO alternative, then the label has produced no measurable economic value.  Thus, a difference in WTP for GMO and non-GMO foods is a necessary but not sufficient condition for a labeling policy to have economic value.

The Foster and Just paper outlines the theory behind the value of information.  Here's the thought experiment.  Imagine you regularly consume X units of a product.  Some new information comes along that lowers your value for the product (you find out it isn't as safe, not as high quality, or whatever).  Thus, at the same price, you'd now prefer to instead consume only Y units of the product.  The value of the information is the amount of money I'd have to give you to keep consuming X (the amount you consumed in ignorance) in spite of the fact you'd now like to consume only Y.  Given an estimate of demand (or WTP) before and after information, economists can back out this inferred value of information.      
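To make that thought experiment concrete, here is one stylized way to write it down (my simplification, assuming quasilinear utility; it is not necessarily the exact Foster and Just formulation).  Let $u_1(q)$ be the consumer's post-information utility from consuming $q$ units at price $p$.  If $Y$ maximizes $u_1(q) - pq$ but the uninformed consumer keeps buying $X$, the value of the information (the "cost of ignorance") is

$$VOI = \big[u_1(Y) - pY\big] - \big[u_1(X) - pX\big] \ge 0,$$

which equals zero exactly when $X = Y$, that is, when the new information would not have changed the purchase.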

But here is a really important point: this conception of the value of information only logically applies in the case of so-called "experience" goods - goods for which you know afterward whether the quality was "high" or "low."  Foster and Just's empirical example related to a food safety scare in milk.  In their study, people continued to drink milk because they didn't know it had been tainted.  By comparing consumer demand (or consumer WTP) for milk before and after the contamination was finally disclosed, the authors could estimate the value of the information.  In this case, the information had real value because people faced real short- and long-term health consequences if they kept consuming X when they would have wanted to consume only Y.

It is less clear to me that this same conceptual thinking about the value of information and labels applies to the case of so-called "credence" goods.  These are goods for which the consumer never knows the quality even after consumption.  Currently marketed GMOs are credence goods from the consumers' perspective.  Unless you're told by a credible source, you'll never know whether you ate a GMO or not.  So, even if a consumer learned a food was GMO when they thought it was non-GMO, and wanted to consume Y instead of X units, it is unclear to me that the consumer experienced a compensable loss.  

Expressing a view with which I'm sympathetic, Sunstein also notes that mandatory labels on GMOs don't make much sense because the scientific consensus is that they don't pose heightened health or environmental risks.  Coupling this perspective with the credence-good discussion above reminds me a bit of this philosophical puzzle published by Paul Portney back in 1992 in an article entitled "Trouble in Happyville".  

You have a problem. You are Director of Environmental Protection in Happyville, a community of 1000 adults. The drinking water supply in Happyville is contaminated by a naturally occurring substance that each and every resident believes may be responsible for the above-average cancer rate observed there. So concerned are they that they insist you put in place a very expensive treatment system to remove the contaminant. Moreover, you know for a fact that each and every resident is truly willing to pay $1000 each year for the removal of the contaminant.

The problem is this. You have asked the top ten risk assessors in the world to test the contaminant for carcinogenicity. To a person, these risk assessors - including several who work for the activist group, Campaign Against Environmental Cancer - find that the substance tests negative for carcinogenicity, even at much higher doses than those received by the residents of Happyville. These ten risk assessors tell you that while one could never prove that the substance is harmless, they would each stake their professional reputations on its being so. You have repeatedly and skillfully communicated this to the Happyville citizenry, but because of a deep-seated skepticism of all government officials, they remain completely unconvinced and truly frightened - still willing, that is, to fork over $1000 per person per year for water purification.

What should the Director do?  My gut response to this dilemma is the same as what my Ph.D. adviser Sean Fox wrote in a chapter for a book I edited a few years ago:

It’s a difficult question of course, and the answer is well beyond both the scope of this chapter and the philosophical training of the author.

 

 

How much do millennials like to eat out?

A recent article in Forbes discussed millennials' eating habits utilizing, it seems, a report from the Food Institute and USDA Economic Research Service data.

The Forbes article writes:

Millennials spend 44 percent of their food dollars – or $2,921 annually – on eating out, according to the Food Institute’s analysis of the United States Department of Agriculture’s food expenditure data from 2014. That represents a 10.7 percent increase from prior data points in 2010.

In contrast, baby boomers in 2014 spent 40 percent of their food dollars on eating out or $2,629 annually.

It's a little hard from this article to get a clean comparison of millennials' food spending without controlling for differences in income and total spending on food at home and away from home.  Thus, I turned to the data from my Food Demand Survey (FooDS), where we've been asking, for more than three years, how much people spent on food at home and away from home.

Here is a breakdown of spending on food away from home (expressed as a share of total household income) by age and by income.  The black and red dashed lines are the two age groups that could be considered millennials.  The results show that for incomes less than about $80,000/year, millennials do indeed spend a larger share of their income on food away from home than do other generations; however, the same isn't necessarily true for higher income households.  People in the two oldest age categories spend a lower share of their income on food away from home at virtually every income level.  For each age group, the curves are downward sloping, as suggested by Engel's Law: the share of income spent on food falls as income rises.
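For readers curious how these shares are tabulated, here is a minimal sketch of the calculation behind the figures.  The column names and values are hypothetical placeholders, not the actual FooDS variables.

```python
import pandas as pd

# Hypothetical survey extract; columns and values are illustrative only.
df = pd.DataFrame({
    "age_group": ["18-24", "25-34", "35-44", "45-54", "55-64", "65+"],
    "income":    [30000, 60000, 75000, 90000, 45000, 40000],   # annual household income ($)
    "food_home": [3000, 4200, 4500, 4800, 3500, 3400],         # annual food-at-home spending ($)
    "food_away": [2500, 3800, 3600, 3200, 2000, 1800],         # annual food-away spending ($)
})

df["income_bin"] = pd.cut(df["income"], bins=[0, 40000, 80000, 200000],
                          labels=["<$40k", "$40-80k", ">$80k"])
df["away_share_of_income"] = df["food_away"] / df["income"]
df["away_share_of_food"]   = df["food_away"] / (df["food_home"] + df["food_away"])

# Average shares by age group and income bracket (the quantities plotted in the figures)
print(df.groupby(["age_group", "income_bin"], observed=True)[
      ["away_share_of_income", "away_share_of_food"]].mean())
```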

The next graph below shows the same but for spending on food at home.  For the lowest income categories, the youngest individuals spend more of their income on food at home than do older consumers; however, at higher income levels, all age groups are fairly similar.  Coupling the insights from the two graphs suggests that, at incomes less than about $60,000, younger folks are spending more of their income on food (combined at home and away from home) than older folks.   

Finally, here is the share of total food spending that goes toward food away from home by age group and income level.  In general, as incomes rise, people spend more of their food budget away from home.  That is, richer people eat out more.  No surprise there. 

Generally speaking, consumers younger than 44 years of age spend more of their food budget away from home than do older consumers.  The 24-34 year old age group that is firmly in the millennial generation consistently spends more of their food budget away from home than other age groups at almost every income level.   

What's going on in your brain?

Ever wonder why you choose one food over another?  Sure, you might have the reasons you tell yourself for why you picked, say, cage vs. cage free eggs. But, are these the real reasons?

I've been interested in these sorts of questions for a while and, along with several colleagues, have turned to a new tool - functional magnetic resonance imaging (fMRI) - to peek inside people's brains as they're choosing between different foods.  You might be able to fool yourself (or survey administrators) about why you do something, but your brain activity doesn't lie (at least we don't think it does).

In a new study just released by the Journal of Economic Behavior and Organization, my co-authors and I sought to explore some issues related to food choice.  The main questions we wanted to answer were: 1) does one of the core theories for how consumers choose between goods of different qualities (think cage vs. cage-free eggs) have any support in neural activity? and 2) after only seeing how your brain responds to images of eggs with different labels, can we actually predict which eggs you will ultimately choose in a subsequent choice task?

Our study suggests the answers to these two questions are "maybe" and "yes".  

First, we asked people to simply look at eggs with different labels while they were lying in the scanner.  The labels were either a high price, a low price, a "closed" production method (caged or confined), or an "open" production method (cage free or free range), as the image below suggests.  As participants were looking at different labels, we observed whether blood flow increased or decreased in different parts of the brain when seeing, say, higher prices vs. lower prices.

We focused on a specific area of the brain, the ventromedial prefrontal cortex (vmPFC), which previous research has identified as a brain region associated with forming value.

What did this stage of the research find?  Not much.  There were no significant differences in brain activation in the vmPFC when looking at high vs. low prices or when looking at open vs. closed production methods.  However, there was a lot of variability across people.  And we conjectured that this variability across people might predict which eggs people would choose in a subsequent task.

So, in the second stage of the study, we gave people a non-hypothetical choice like the following, which pitted a more expensive carton of eggs produced in a cage free system against a lower priced carton of eggs from a cage system.  People answered 28 such questions where we varied the prices, the words (e.g., free range instead of cage free), and the order of the options.  One of the choices was randomly selected as binding and people had to buy the option they chose in the binding task.  

Our main question was this: can the brain activation we observed in the first step, where people were just looking at eggs with different labels, predict which eggs they would choose in the second step?

The answer is "yes".  In particular, the difference in brain activation in the vmPFC when looking at eggs with an "open" label vs. a "closed" label is significantly related to the propensity to choose the higher-priced open eggs over the lower-priced closed eggs (it should be noted that we did not find any predictive power from the difference in vmPFC activation when looking at high- vs. low-priced egg labels).

Based on a statistical model, we can even translate these differences in brain activation into willingness-to-pay (WTP) premiums.

Here's what we say in the text:

Moving from the mean value of approximately zero for vmPFCmethodi to twice the standard deviation (0.2) in the sample while holding the price effect at its mean value (also approximately zero), increases the willingness-to-pay premium for cage-free eggs from $2.02 to $3.67. Likewise, moving two standard deviations in the other direction (-0.2) results in a discount of about 38 cents per carton. The variation in activations across our participants fluctuates more than 80 percent, a sizable effect that could be missed by simply looking at vmPFCmethod value alone and misinterpreting its zero mean as the lack of an effect.
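To give a rough sense of the kind of calculation involved, here is a stylized sketch of translating an activation difference into a WTP premium using a simple linear-in-attributes choice model.  The functional form and every parameter value below are hypothetical placeholders, not the estimates from our paper.

```python
# Hypothetical illustration of turning a brain activation difference into a WTP premium.
# All coefficient values are placeholders, NOT the estimates from the study.
alpha = 1.0        # baseline preference for the cage-free (open) carton
beta_method = 4.0  # effect of the vmPFC activation difference (open minus closed labels)
lam = 0.5          # marginal utility of money (price coefficient)

def wtp_premium(vmpfc_method_diff):
    """Price premium at which the consumer is indifferent between cartons:
    alpha + beta_method * activation - lam * premium = 0."""
    return (alpha + beta_method * vmpfc_method_diff) / lam

for activation in (-0.2, 0.0, 0.2):
    print(f"activation diff {activation:+.1f} -> WTP premium ${wtp_premium(activation):.2f}")
```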

Polling 101

I teach a graduate-level course every spring semester on survey and experiment methods in economics and the social sciences.  In this election season, I thought it might be worthwhile to share a few of the things I discuss in the course so that you might more intelligently interpret some of the survey research results being continuously reported in the newspapers and on the nightly news.

You've been hiding under a rock if you haven't by now seen reports of polls on the likelihood of Trump or Clinton winning the presidential election.  Almost all these polls will report (often in small font) something like "the margin of error is plus or minus 3 percent".  

What does this mean?

In technical lingo it means the "sampling error" is +/- 3% with 95% confidence.  This is the error that comes about from the fact that the polling company doesn't survey every single voter in the U.S.  Because not every single voter is sampled, there will be some error, and this is the error you see reported alongside the polls.  Let's say the projected percent vote for Trump is 45% with a "margin of error" of 3%.  The interpretation would be that if we were to repeatedly sample potential voters, 95% of the time we would expect to find a voting percentage for Trump that is between 42% and 48%.

The thought experiment goes like this: imagine you had a large basket full of a million black and white balls.  You want to know the percentage of balls in the basket that are black.  How many balls would you have to pull out and inspect before you could be confident of the proportion of balls that are black?  We can construct many such baskets where we know the truth about the proportion of black balls and try different experiments to see how accurate we are in many repeated attempts where we, say, pull out 100, 1,000, or 10,000 balls.  The good news is that we don't have to manually do these experiments because statisticians have produced precise mathematical formulas that give us the answers we want.  
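If you wanted to run the basket experiment yourself rather than rely on those formulas, a quick simulation (a sketch of the thought experiment above, not anything from the pollsters) would look like this:

```python
import random

# Repeatedly draw n balls from a basket where 45% are black and see how
# much the sample estimate bounces around across experiments.
true_share = 0.45
n_draws = 1067
n_experiments = 10_000

estimates = []
for _ in range(n_experiments):
    sample = [random.random() < true_share for _ in range(n_draws)]
    estimates.append(sum(sample) / n_draws)

estimates.sort()
lower = estimates[int(0.025 * n_experiments)]
upper = estimates[int(0.975 * n_experiments)]
print(f"95% of estimates fell between {lower:.3f} and {upper:.3f}")  # roughly 0.42 to 0.48
```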

As it turns out, you need to sample about 1,000 to 1,500 people (the answer is 1,067 to be precise) out of the U.S. population to get a sampling error of 3%, and thus most polls use this sample size.  Why not a 1% sampling error, you might ask?  Well, you'd need to survey almost 10,000 respondents to achieve a 1% sampling error, and the nearly tenfold increase in cost is probably not worth a measly two percentage point increase in accuracy.
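These back-of-the-envelope numbers come from the standard margin-of-error formula for a proportion (95% confidence, assuming simple random sampling and the worst case of a 50/50 split):

```python
# Required sample size for a target margin of error (95% confidence, p = 0.5 worst case):
#   n = (1.96^2 * 0.25) / moe^2
for moe in (0.03, 0.01):
    n = (1.96 ** 2 * 0.25) / moe ** 2
    print(f"margin of error {moe:.0%}: n = {n:.0f}")
# -> about 1,067 respondents for +/- 3% and about 9,604 for +/- 1%
```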

Here is a key point: the 3% "margin of error" you see reported on the nightly news is only one kind of error.  The true error rate is likely something much larger because there are many additional types of error besides just sampling error. However, these other types of errors are more difficult to quantify, and thus, are not reported.

For example, a prominent kind of error is "selection bias" or "non-response error," which comes about because the people who choose to answer the survey or poll may be systematically different from the people who choose not to.  Alas, response rates to surveys have been falling quite dramatically over time, even for "gold standard" government surveys (see this paper or listen to this podcast).  Curiously, those nightly news polls don't tell you the response rate, but my guess is that it is typically far less than 10% - meaning that less than 10% of the people they tried to contact actually told them whether they intend to vote for Trump or Clinton or someone else.  That means more than 90% of the people they contacted wouldn't talk to them.  Is there something special about the ~10% willing to talk to the pollsters that is different from the ~90% of non-respondents?  Probably.  Respondents are probably much more interested and passionate about their candidate and politics in general.  And yet we - the consumers of polling information - are rarely told anything about this potential error.

One way pollsters try to partially "correct" for non-response error is through weighting.  To give a sense of how this works, consider a simple example.  Let's say I surveyed 1,000 Americans and asked whether they prefer vanilla or chocolate ice cream.  When I get my data back, I find that there are 650 males and 350 females.  Apparently males were more likely to take my survey.  Knowing that males might have different ice cream preferences than females, I know that my estimate of the most popular ice cream flavor will likely be biased if I don't do something.  So, I can create a weight.  I know that the U.S. population is roughly 50% male and 50% female (in actuality, there are slightly more females than males, but let's put that to the side).  So, what I need to do is make the female respondents "count" more in the final answer than the males.  When we typically take an average, each person has a weight of one (we add up all the answers - implicitly multiplied by a weight of one - and divide by the total number of respondents).  A simple correction in our ice cream example would be to give females a weight of 0.5/0.35 = 1.43 and males a weight of 0.5/0.65 = 0.77.  Females will count more than one and males will count less.  Then I report a weighted average: add up all the female answers (each multiplied by 1.43), add to them all the male answers (each multiplied by 0.77), and divide by the sum of the weights.
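Here is the ice cream example worked out in a few lines (a sketch of the post-stratification weighting just described; the vanilla/chocolate counts are made up):

```python
# Post-stratification weights for the ice cream example above.
# Made-up data: 650 male and 350 female respondents.
respondents = [("M", "vanilla")] * 400 + [("M", "chocolate")] * 250 \
            + [("F", "vanilla")] * 120 + [("F", "chocolate")] * 230

pop_share = {"M": 0.5, "F": 0.5}              # population shares we want to match
sample_share = {"M": 650 / 1000, "F": 350 / 1000}
weight = {g: pop_share[g] / sample_share[g] for g in pop_share}   # M ~ 0.77, F ~ 1.43

# Weighted share preferring vanilla
weighted_vanilla = sum(weight[g] for g, flavor in respondents if flavor == "vanilla")
total_weight = sum(weight[g] for g, _ in respondents)
print(f"unweighted vanilla share: {520 / 1000:.3f}")
print(f"weighted vanilla share:   {weighted_vanilla / total_weight:.3f}")
```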

Problem solved, right?  Hardly.  For one, gender is not a perfect predictor of ice cream preference.  And the reason someone chooses to respond to my survey almost certainly has to do with more than gender.  Moreover, weights can only be constructed using variables for which we know the "truth" - that is, variables for which Census Bureau data reveal the characteristics of the whole population.  But in the case of political polling, we aren't trying to match the universe of U.S. citizens but the universe of U.S. voters.  Determining the characteristics of voters is a major challenge that is in constant flux.

In addition, when we create weights, we could end up with a few people having a disproportionate effect on the final outcome - dramatically increasing the possible error rate. Yesterday, the New York Times ran a fantastic story by Nate Cohn illustrating exactly how this can happen.  Here are the first few paragraphs:

There is a 19-year-old black man in Illinois who has no idea of the role he is playing in this election.

He is sure he is going to vote for Donald J. Trump.

And he has been held up as proof by conservatives — including outlets like Breitbart News and The New York Post — that Mr. Trump is excelling among black voters. He has even played a modest role in shifting entire polling aggregates, like the Real Clear Politics average, toward Mr. Trump.

How? He’s a panelist on the U.S.C. Dornsife/Los Angeles Times Daybreak poll, which has emerged as the biggest polling outlier of the presidential campaign. Despite falling behind by double digits in some national surveys, Mr. Trump has generally led in the U.S.C./LAT poll. He held the lead for a full month until Wednesday, when Hillary Clinton took a nominal lead.

Our Trump-supporting friend in Illinois is a surprisingly big part of the reason. In some polls, he’s weighted as much as 30 times more than the average respondent, and as much as 300 times more than the least-weighted respondent.

Here's a figure they produced showing how this sort of "extreme" weighting affects the polling result reported:

The problem here is that when one individual in the sample counts 30 times more than the typical respondent, the effective sample size is actually much smaller than the nominal sample size, and the "margin of error" is much higher than +/- 3%.
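One common way to quantify this is Kish's approximation for the effective sample size under weighting: n_eff = (sum of weights)^2 / (sum of squared weights).  The weights below are illustrative, not the actual poll's:

```python
# Kish's effective sample size under weighting.
# Illustrative example: 999 respondents with weight 1 and one respondent with weight 30.
weights = [1.0] * 999 + [30.0]

n_eff = sum(weights) ** 2 / sum(w ** 2 for w in weights)
print(f"nominal n = {len(weights)}, effective n = {n_eff:.0f}")

# The margin of error scales with the effective sample size, not the nominal one:
moe = 1.96 * (0.25 / n_eff) ** 0.5
print(f"approximate margin of error: {moe:.1%}")
```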

There are many additional types of biases and errors that can influence survey results (e.g., How was the survey question asked? Is there an interviewer bias? Is the sample drawn from a list of all likely voters?).   This doesn't make polling useless.  But, it does mean that one needs to be a savvy consumer of polling results.  It's also why it's often useful to look at aggregations across lots of polls or, my favorite, betting markets.