Blog

Impacts of Agricultural Research and Extension

About a month ago, I posted on some new research suggesting declining rates of productivity growth in agriculture.  Last week at a conference in Amsterdam, I ran into Wally Huffman from Iowa State University, and knowing he's done work in this area, I asked him if he had any thoughts on the issue.  As it turns out, he and Yu Jin have a new paper forthcoming in the journal Agricultural Economics on agricultural productivity and the impacts of state and federal spending on agricultural research and extension.

Jin and Huffman also find evidence of a slowdown in productivity growth, writing: 

We find a strong impact of trended factors on state agricultural productivity of 1.1 percent per year. The most likely reason is continued strong growth in private agricultural R&D investments. The size and strength of this trend makes it unlikely for average annual TFP growth for the U.S. as a whole to become negative in the near future. However, for two-thirds of the states, the forecast of the mean ln(TFP) over 2004-2010 is less than trend. The primary reason is under-investment in public agricultural research and extension in the past. For public agricultural research where the lags are long, it will be impossible for these states to exceed the trend rate of growth for TFP in the near future.

They also find large returns to spending on agricultural research, and even larger returns to spending on extension:

For public agricultural research with a productivity focus the estimated real [internal rate of return] is 67%, and for narrowly defined agricultural and natural resource extension is over 100%. Stated another way, these public investment projects could pay a very high interest rate (66% for agricultural research and 100% for extension) and still have a positive net present value. Hence, these [internal rate of return] estimates are quite large relative to alternative public investments in programs of education and health. In addition, there is no evidence of low returns to public agricultural extension in the U.S., or that public funds should be shifted from public agricultural extension to agricultural research. In fact, if any shifting were to be recommended, it would be to shift some funds from public agricultural research to extension.
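For readers unfamiliar with the jargon, the internal rate of return is the discount rate at which a project's net present value falls to zero, so any lower discount rate leaves the NPV positive.  Here's a toy calculation (with invented cash flows, not the paper's estimates) that makes the point concrete:

```python
# A toy illustration (invented cash flows, not Jin and Huffman's data) of the
# IRR/NPV relationship quoted above: the IRR is the discount rate at which an
# investment's net present value hits zero, so a project with a 67% IRR has
# positive NPV at any lower rate.

def npv(rate, cash_flows):
    """Net present value, where cash_flows[t] arrives in year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

flows = [-100, 167]                # spend 100 today, receive 167 next year
print(round(npv(0.67, flows), 2))  # 0.0   -> NPV is zero exactly at the IRR
print(round(npv(0.10, flows), 2))  # 51.82 -> comfortably positive at 10%
```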

The paper includes a couple of really interesting graphs on research spending and extension employment over time.  First, they show that for four major agricultural states, real spending on agricultural research peaked in the mid-1990s.

And, while extension staff has declined in some states, it hasn't in others.  

The Behavioral and Neuroeconomics of Food and Brand Decisions

That's the title of a special issue I helped edit with John Crespi and Amanda Bruce, appearing as the latest issue of the Journal of Agricultural & Food Industrial Organization.

Here's an excerpt from our summary:

To economists interested in food decisions, progress seen in other fields ought to be exciting. In the articles for this special issue, we gathered information from a wide range of research related to food decisions from behavioral economics, psychology, and neuroscience. The articles, we hope, will provide a useful reference to researchers examining these techniques for the first time…The variety of papers in this special issue of JAFIO should provide readers with a broad introduction to newer methodological approaches to understanding food choices and human decision-making

A complete listing of the authors and papers is below (all of which can be accessed here).

• The Behavioral and Neuroeconomics of Food and Brand Decisions: Executive Summary
  Bruce, Amanda / Crespi, John / Lusk, Jayson

• Cognitive Neuroscience Perspectives on Food Decision-Making: A Brief Introduction
  Lepping, Rebecca J. / Papa, Vlad B. / Martin, Laura E.

• Marketing Placebo Effects – From Behavioral Effects to Behavior Change?
  Enax, Laura / Weber, Bernd

• The Role of Knowledge in Choice, Valuation, and Outcomes for Multi-Attribute Goods
  Gustafson, Christopher R.

• Brands and Food-Related Decision Making in the Laboratory: How Does Food Branding Affect Acute Consumer Choice, Preference, and Intake Behaviours? A Systematic Review of Recent Experimental Findings
  Boyland, Emma J. / Christiansen, Paul

• Modeling Eye Movements and Response Times in Consumer Choice
  Krajbich, Ian / Smith, Stephanie M.

• Visual Attention and Choice: A Behavioral Economics Perspective on Food Decisions
  Grebitus, Carola / Roosen, Jutta / Seitz, Carolin Claudia

• Towards Alternative Ways to Measure Attitudes Related to Consumption: Introducing Startle Reflex Modulation
  Koller, Monika / Walla, Peter

• I Can’t Wait: Methods for Measuring and Moderating Individual Differences in Impulsive Choice
  Peterson, Jennifer R. / Hill, Catherine C. / Marshall, Andrew T. / Stuebing, Sarah L. / Kirkpatrick, Kimberly

• A Cup Today or a Pot Later: On the Discounting of Delayed Caffeinated Beverages
  Jarmolowicz, David P. / Lemley, Shea M. / Cruse, Dylan / Sofis, Michael J.

• Are Consumers as Constrained as Hens are Confined? Brain Activations and Behavioral Choices after Informational Influence
  Francisco, Alex J. / Bruce, Amanda S. / Crespi, John M. / Lusk, Jayson L. / McFadden, Brandon / Bruce, Jared M. / Aupperle, Robin L. / Lim, Seung-Lark

Big Fat Surprise

I just finished reading Nina Teicholz’s best-selling book The Big Fat Surprise, which takes issue with our long-held belief that low-fat diets in general, and diets free of animal fat in particular, best promote good health.

It was an enjoyable read, and the history of the development of our dietary beliefs and guidelines is both fascinating and eye-opening.  The author has a tendency to nitpick any study that doesn’t support her hypothesis without applying the same skepticism to the studies that do, but overall, she makes a compelling case.  I probably found chapter 10, "Why Saturated Fat Is Good for You," most interesting in that regard.  Teicholz lays bare the sad state of the science behind much of the nutritional advice we’re given.  One takeaway is that we really don’t know as much as is often presumed about which sorts of diets increase or decrease the chances of heart attack or cancer.

There is one nit I want to pick with a phrase in Teicholz’s book.  It is a technical one, but because it is the sort of thing I expect my students to fully understand, I'll delve into it.  On page 167 of the paperback version she writes (about an epidemiological study finding no relationship between breast cancer and consumption of dietary fat), “These conclusions were all associations.  But although epidemiology cannot demonstrate causation, it can be used to reliably show the absence of a connection.” (the emphasis is hers)

That claim is patently false (I'm presuming by "connection" she means "causation").  The trouble with the sort of correlation analysis used in many epidemiology studies is omitted variables.  We can't observe everything about people's behaviors or about the effects of dietary changes, and leaving those factors out produces "omitted variable bias."  That bias can inflate or shrink the size of a measured effect.  In fact, contrary to Teicholz's claim, omitted variable bias can make a "real" effect look like nothing.

Wikipedia describes the problem, but similar treatments can be found in almost any introductory econometrics textbook.     

Suppose we have the following true relationship:

y = b0 + b1*x + b2*z + e

where y is the chance of breast cancer among women, x is the amount of fat consumed, and z is a personality trait reflecting the person's overall health conscientiousness.  The "true" effect we want to know is b1.

But suppose we only observe y and x, not z.  Also suppose that z is related to x in the following way: z = a0 + a1*x + u.  Substituting this expression for z into the first equation shows what the epidemiologist actually estimates when regressing y on x alone:

y = b0 + b1*x + b2*(a0 + a1*x + u) + e

or, re-writing:

y = (b0 + b2*a0) + (b1 + b2*a1)*x + (b2*u + e).

So, the researcher looks at the relationship between x and y, and thinks they're estimating the "true" effect b1, but in reality, they're estimating the effect (b1+b2*a1), which could be larger or smaller than b1.  

Suppose b2 is negative, say -1.5 (more conscientious women are less likely to develop breast cancer), and a1 is also negative, say -2 (more conscientious women pay attention to all that health advice and eat less fat, so higher fat consumption goes along with lower conscientiousness).  This means the bias term b2*a1 is positive: (-1.5)*(-2) = +3.  Now suppose the true effect is negative, say b1 = -3.  The positive bias b2*a1 = +3 exactly offsets the negative true effect b1 = -3, so the estimated effect is 3 - 3 = 0.  It will look like there is no effect even though there really is one.  Even if the two effects don't precisely cancel, the estimated effect could be small enough that the researcher concludes it isn't statistically different from zero.
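For readers who prefer simulation to algebra, here is a minimal Python sketch (using the made-up coefficients from the example above) in which a real effect of -3 washes out to zero once z is omitted:

```python
# A minimal simulation (my illustration, with the made-up coefficients from
# the example above) of an omitted variable masking a real effect.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

x = rng.normal(size=n)                       # fat consumed
z = -2 * x + rng.normal(size=n)              # conscientiousness: a1 = -2
y = -3 * x - 1.5 * z + rng.normal(size=n)    # true model: b1 = -3, b2 = -1.5

# Regressing y on x alone (z omitted): slope = cov(x, y) / var(x)
b_omitted = np.cov(x, y)[0, 1] / np.var(x)
print(round(b_omitted, 2))                   # ~0.0: the true effect vanishes

# Controlling for z recovers the true coefficients
X = np.column_stack([np.ones(n), x, z])
print(np.round(np.linalg.lstsq(X, y, rcond=None)[0], 2))  # ~[0, -3, -1.5]
```

Controlling for z recovers the truth, but an epidemiologist who never measured z has no way to do that.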

Now, I'm not saying that there is a relationship between fat consumption and breast cancer - rather, I'm just making the conceptual point that omitted variables can result in upward or downward bias.  What I can more confidently say is that only one part of Teicholz's claim holds up: "epidemiology cannot demonstrate causation."

Now, there are regression methods that can get us much closer to the truth, but I don't often see them used in epidemiology studies.  In economics, the so-called "credibility revolution" has led to more specification testing and more attention to causal identification using instrumental variables, discontinuity designs, difference-in-differences, and other tools.  A good introduction to the topics and methods is given in Mostly Harmless Econometrics.
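To give a flavor of those methods, here is a minimal instrumental-variables sketch (simulated data; the instrument is invented purely for illustration).  The idea is to find a variable that shifts x but affects y only through x; comparing how y and x move with that variable isolates the causal effect:

```python
# A minimal sketch (simulated data; the instrument w is invented for
# illustration) of the instrumental-variables idea: w shifts x but affects y
# only through x, so it recovers the causal effect despite the unobserved
# confounder u.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

w = rng.normal(size=n)                       # instrument
u = rng.normal(size=n)                       # unobserved confounder
x = w + u + rng.normal(size=n)
y = -3 * x + 2 * u + rng.normal(size=n)      # true effect of x on y is -3

# Naive OLS slope is biased by the confounder:
print(round(np.cov(x, y)[0, 1] / np.var(x), 2))            # ~ -2.33, not -3

# IV (Wald) estimator: cov(w, y) / cov(w, x)
print(round(np.cov(w, y)[0, 1] / np.cov(w, x)[0, 1], 2))   # ~ -3.0
```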


How do people respond to scientific information about GMOs and climate change?

The journal Food Policy just published a paper by Brandon McFadden and me that explores how consumers respond to scientific information about genetically engineered foods and about climate change.  The paper was motivated by some previous work we'd done where we found that people didn't always respond as anticipated to television advertisements encouraging them to vote for or against mandatory labels on GMOs.  

In this study, respondents were shown a collection of statements from authoritative scientific bodies (like the National Academies of Science and United Nations) about the safety of eating approved GMOs or the risk of climate change.  Then we asked respondents whether they were more or less likely to believe that GMOs were safe to eat, or that the earth was warming more than it would have otherwise due to human activities.

We classified people as "conservative" (if they stuck with their prior beliefs regardless of the information), "convergent" (if they changed their beliefs in a way consistent with the scientific information), or "divergent" (if they changed their beliefs in a way inconsistent with the scientific information). 

We then explored the factors that explained how people responded to the information.  As it turns out, one of the most important factors determining how you respond to information is your prior belief.  If your priors were that GMOs were safe to eat and that global warming was occurring, you were more likely to find the information credible and respond in a "rational" (or Bayesian updating) way.  
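To make the Bayesian benchmark concrete, here is a minimal sketch (my own stylized illustration, not the model estimated in the paper) of updating as a weighted average of a prior and a signal, along with the three response categories:

```python
# A minimal sketch (my stylized illustration, not the paper's econometric
# model) of weighted belief updating on a 0-1 scale:
# posterior = (1 - w) * prior + w * signal.

def update_belief(prior, signal, w):
    """Posterior belief as a weighted average of the prior and new information."""
    return (1 - w) * prior + w * signal

def classify(prior, posterior, signal, tol=1e-9):
    """Label a response using the three categories described above."""
    if abs(posterior - prior) < tol:
        return "conservative"                      # stuck with the prior
    toward = (signal - prior) * (posterior - prior) > 0
    return "convergent" if toward else "divergent"

signal = 1.0  # a pro-safety scientific statement, coded as belief = 1

# A "believer" (prior 0.8) who finds the source credible (w = 0.3):
post = update_belief(0.8, signal, 0.3)
print(round(post, 2), classify(0.8, post, signal))   # 0.86 convergent

# A "denier" (prior 0.2) who ignores the source entirely (w = 0):
post = update_belief(0.2, signal, 0.0)
print(round(post, 2), classify(0.2, post, signal))   # 0.2 conservative

# A backlash response amounts to putting negative weight on the signal:
post = update_belief(0.2, signal, -0.1)
print(round(post, 2), classify(0.2, post, signal))   # 0.12 divergent
```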

Here are a couple of graphs from the paper illustrating that result (where believers already tended to believe the information contained in the scientific statements and deniers did not).  As the results below show, the "deniers" were more likely to be "divergent" - that is, the provision of scientific information caused them to be more likely to believe the opposite of the message conveyed in the scientific information.

We also explored a host of other psychological factors that influenced how people responded to scientific information.  Here's the abstract:

The ability of scientific knowledge to contribute to public debate about societal risks depends on how the public assimilates information resulting from the scientific community. Bayesian decision theory assumes that people update a belief by allocating weights to a prior belief and new information to form a posterior belief. The purpose of this study was to determine the effects of prior beliefs on assimilation of scientific information and test several hypotheses about the manner in which people process scientific information on genetically modified food and global warming. Results indicated that assimilation of information is dependent on prior beliefs and that the failure to converge a posterior belief to information is a result of several factors including: misinterpreting information, illusionary correlations, selectively scrutinizing information, information-processing problems, knowledge, political affiliation, and cognitive function.

An excerpt from the conclusions:

Participants who misinterpreted the information provided did not converge posterior beliefs to the information. Rabin and Schrag (1999) asserted that people suffering from confirmation bias misinterpret evidence to conform to a prior belief. The results here confirmed that people who misinterpreted information did indeed exhibit confirmation, as well as people who conserved a prior belief. This is more evidence that assuming optimal Bayesian updating may only be appropriate when new information is somewhat aligned with a prior belief.

Why people lie on surveys and how to make them stop

Companies spend millions (perhaps billions?) of dollars every year surveying consumers to figure out what they want.  Environmental, health, and food economists do the same to try to figure out the costs and benefits of various policies.  What are people willing to pay for organic or non-GMO foods or for country-of-origin labels on meat?  These are the sorts of questions I'm routinely asked.

Here's the problem: there is ample evidence (from economics and marketing among other disciplines) that people don't always do what they say they will do on a survey.  A fairly typical result from the economics literature is that the amount people say they are willing to pay for a new good or service is about twice what they'll actually pay when money is on the line.  It's what we economists call hypothetical bias.

We don't yet have a solid theory that explains this phenomenon in every situation; it likely results from a variety of factors, including the following (among other possible reasons):

  • Social desirability bias: we give the answers we think the surveyor wants to hear.
  • Warm glow, yea-saying, and self-presentation bias: it feels good to support "good" causes and say "yes," particularly when doing so costs nothing and can make us look and feel good about ourselves.
  • Idealized responses: we imagine whether we'd ever buy the good once we have the money and the time is right, rather than whether we'd buy it here and now.
  • Strategic responses: if we think our answers to a survey question can influence the eventual price that is charged or whether the good is actually offered, we might over- or under-state our willingness to buy.
  • Uncertainty: research suggests a lot of the hypothetical bias comes from those who say they aren't sure whether they'd buy the good.

What to do?

Various fixes have been proposed over the years.

  • Calibration.  Take responses from a survey and reduce them by some factor so that they more closely approximate what consumers will actually do.  The problem: calibration factors are unknown and vary across people and goods.
  • Cheap talk.  On the survey, explain the problem of hypothetical bias and explicitly ask people to avoid it.  The problem: it doesn't always "work" for all people (particularly experienced people familiar with the good), and there is always some uncertainty over whether you've simply introduced a new bias.
  • Certainty scales.  Ask people how sure they are about their answers, and for people who indicate a high level of uncertainty, re-code their "yes" answers to "no" (a minimal sketch of this re-coding appears after this list).  The problem: the approach is ad hoc, and it is hard to know a priori what the cut-off on the certainty scale should be.  Moreover, it only works for simple yes/no questions.
  • Use particular question formats.  Early practitioners of contingent valuation (an approach for eliciting willingness-to-pay popular in environmental economics) swear by a "double-bounded dichotomous choice, referendum question," which they believe has good incentives for truth-telling if respondents believe their answers might actually influence whether the good is provided (i.e., if the answer is consequential).  I'm skeptical.  I'm more open to the use of so-called "choice experiments," where people make multiple choices between goods that have different attributes, and where we're only interested in "marginal" trade-offs (i.e., whether you want good A vs. good B).  There is likely more bias in the "total" question (i.e., whether you want good A or nothing).
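As promised above, here is a minimal sketch of the certainty-scale re-coding (the responses and the cut-off are hypothetical, my own choices for illustration):

```python
# A minimal sketch (hypothetical responses and an arbitrary cut-off of my own
# choosing) of the certainty-scale re-coding described above: "yes" answers
# from respondents below a certainty threshold are treated as "no".

responses = [
    {"answer": "yes", "certainty": 9},   # certainty on a 1-10 scale
    {"answer": "yes", "certainty": 4},
    {"answer": "no",  "certainty": 7},
    {"answer": "yes", "certainty": 8},
]

CUTOFF = 7  # ad hoc, as noted above; conclusions can be sensitive to this

def recode(resp, cutoff=CUTOFF):
    # Only uncertain "yes" answers are re-coded; "no" answers are left alone.
    if resp["answer"] == "yes" and resp["certainty"] < cutoff:
        return "no"
    return resp["answer"]

print([recode(r) for r in responses])   # ['yes', 'no', 'no', 'yes']
```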

There is another important alternative.  If the problem is that surveys don't prompt people to act as they would in a market, why don't we just create a real market - one where people have to give up real money for real goods, where we make people put their money where their mouth is?  It is an approach I wrote about in the book Experimental Auctions with Jason Shogren, and it is the approach I teach with Rudy Nayga, Andreas Drichoutis, and Maurizio Canavari in the summer school we have planned for this summer in Crete (sign up now!).  It is an approach with a long history, stemming mainly from the work of experimental economists.

One of the drawbacks with the experimental market approach is that it is often limited to a particular geographic region.  You've got to recruit people and get them in a room (or as people like John List and others have done, go to a real-world market already in existence and bend it to your research purposes).   

Well, there's now a new option with much wider reach.  Several months ago I was contacted by Anouar El Haji, who is at the Business School at the University of Amsterdam.  He's created a simple online platform he calls Veylinx where researchers can conduct real auctions designed to give participants an incentive to truthfully reveal their maximum willingness-to-pay.  The advantage is that one can reach a large number of people across the US (and potentially across the world).  It's a bit like eBay, but with a much simpler environment (which researchers can control) and a clearer incentive for people to bid their maximum willingness-to-pay.
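The incentive to reveal your true willingness-to-pay typically comes from a demand-revealing rule like the second-price (Vickrey) sealed-bid auction.  Here is a minimal sketch (made-up bidders; I'm using the generic Vickrey rule as a stand-in, not describing Veylinx's exact mechanism) of why bidding your true value weakly dominates shading your bid:

```python
# A minimal sketch (made-up bidders; a generic second-price rule, not a claim
# about Veylinx's exact mechanism) of why truthful bidding weakly dominates
# shading in a Vickrey auction: your bid determines only whether you win,
# never the price you pay.
import random

def second_price_auction(bids):
    """Highest bid wins and pays the second-highest bid."""
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    return order[0], bids[order[1]]          # (winner index, price)

def payoff(value, bid, rival_bids):
    winner, price = second_price_auction([bid] + rival_bids)
    return value - price if winner == 0 else 0.0

random.seed(0)
value = 10.0
for _ in range(5):
    rivals = [random.uniform(0, 20) for _ in range(4)]
    truthful = payoff(value, value, rivals)        # bid your true value
    shaded = payoff(value, 0.5 * value, rivals)    # bid half your value
    assert truthful >= shaded                      # shading never helps
    print(round(truthful, 2), round(shaded, 2))
```

Because the winner pays the second-highest bid, shading your bid can only cost you auctions you would have won at a profitable price, which is the sense in which this design gets people to "put their money where their mouth is."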

One of the coolest parts is that you can even sign up to participate in the auctions.  I've done so, and encourage you to do the same.  Hopefully, we'll eventually get some auctions up and running that relate specifically to food and agriculture.