
Dealing with Lazy Survey Takers

A tweet by @thefarmbabe earlier this week has renewed interest in my survey result from back in January 2015, in which more than 80% of respondents said they wanted mandatory labels on foods containing DNA. For interested readers, see this discussion of the result, a follow-up survey where the question was asked in a different way with essentially the same result, or this peer-reviewed journal article with Brandon McFadden where we found basically the same result in yet another survey sample. No matter how we asked the question, it seems roughly 80% of survey respondents want labels on foods simply because they contain DNA.

All this is probably good motivation for this recent study that Trey Malone and I just published in the journal Economic Inquiry. While there are many possible reasons for the DNA-label results (as I discussed here), one possibility is that survey takers aren’t paying very close attention to the questions being asked.

One method that has been around a while to control for this problem is the “trap question.” The idea is to “trap” inattentive respondents by making it appear that one question is being asked when, in fact, a close reading reveals that a different question is being asked. Here are two of the trap questions we studied.

[Figure: the two trap questions studied, panels 2A and 2B]

About 22% missed the first trap question (they did not click “high” on the last item in figure 2A), and about 25% missed the second (they clicked an emotion rather than “none of the above” in question 2B). So far, this isn’t all that new.

Trey’s idea was to prompt people who missed a trap question. Participants who responded incorrectly were given the following prompt: “You appear to have misunderstood the previous question. Please be sure to read all directions clearly before you respond.” The respondent then had the chance to revise their answer to the trap question they had missed before proceeding to the rest of the survey. Among the “trapped” respondents, about 44% went back and correctly answered the first question, and about 67% went back and correctly answered the second. Thus, this “nudge” increased attentiveness among a non-trivial number of respondents.
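
To see what the prompt buys in the aggregate, here is a quick back-of-the-envelope calculation using the rates reported above. It assumes the correction rates apply to everyone who was trapped and that revised answers were in fact correct; only the variable names are mine.

```python
# Back-of-the-envelope: share of the full sample still answering a trap
# question incorrectly after the corrective prompt.
# Rates are the ones reported above; variable names are ours.

miss_rate_q1 = 0.22   # missed the first trap question (figure 2A)
miss_rate_q2 = 0.25   # missed the second trap question (question 2B)

fix_rate_q1 = 0.44    # of those trapped on Q1, share who revised correctly
fix_rate_q2 = 0.67    # of those trapped on Q2, share who revised correctly

residual_q1 = miss_rate_q1 * (1 - fix_rate_q1)  # ~12.3% of all respondents
residual_q2 = miss_rate_q2 * (1 - fix_rate_q2)  # ~8.3% of all respondents

print(f"Still inattentive on Q1 after the prompt: {residual_q1:.1%}")
print(f"Still inattentive on Q2 after the prompt: {residual_q2:.1%}")
```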

After the trap questions and any prompts, respondents answered several discrete choice questions about which beer brands they’d prefer at different prices. Here are the key findings:

We find that individuals who miss trap questions and do not correctly revise their responses have significantly different choice patterns than individuals who answer the trap question correctly. Adjusting for these inattentive responses has a substantive effect on estimated policy impacts. Results based on attentive participants’ responses indicate that a minimum beer price would have to be quite high to substantially reduce beer demand.
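
To make the first finding concrete, here is a toy illustration, not the model from the paper: simulate purchase decisions where an attentive majority responds to price and an inattentive minority chooses essentially at random, then estimate a simple logit separately for each group. All numbers and variable names below are invented for illustration.

```python
# Toy illustration (not the paper's model): inattentive respondents add
# noise, which flattens the estimated price response in a simple logit.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
price = rng.uniform(2, 10, n)             # hypothetical beer prices
attentive = rng.random(n) < 0.8           # ~80% pass the trap question

# Attentive respondents respond to price; inattentive ones choose ~randomly.
util = 3.0 - 0.5 * price
p_buy = np.where(attentive, 1 / (1 + np.exp(-util)), 0.5)
buy = (rng.random(n) < p_buy).astype(int)

X = sm.add_constant(price)
for label, mask in [("attentive", attentive), ("inattentive", ~attentive)]:
    fit = sm.Logit(buy[mask], X[mask]).fit(disp=0)
    print(label, "price coefficient:", round(fit.params[1], 3))
```

In this stylized setup the attentive group’s estimated price coefficient sits near the true -0.5 while the inattentive group’s sits near zero, so pooling the two would attenuate the estimated price response; that is exactly the kind of distortion that matters once you start simulating price policies.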

In our policy simulations, we find a counterintuitive result: a minimum beer price (as implemented in some parts of the UK) might actually increase alcohol consumption because it induces substitution from lower- to higher-alcohol-content beers.
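
The mechanism is easy to see with a back-of-the-envelope calculation. A minimum unit price compresses the price gap between cheap low-alcohol beer and stronger beer, so some drinkers trade up, and total ethanol consumed can rise even as the volume of beer falls. All figures below are hypothetical, chosen only to illustrate the logic; they are not numbers from the paper.

```python
# Hypothetical illustration of the substitution mechanism, not the paper's
# simulation: a price floor narrows the low/high-ABV price gap, the drinker
# trades up, and total ethanol rises even though beer volume falls.

# Before the floor: 10 pints/week of a cheap 4% ABV beer.
pints_before, abv_before = 10, 0.04
ethanol_before = pints_before * abv_before   # 0.40 pints of ethanol

# After the floor, the cheap beer costs nearly as much as a 7% ABV beer,
# so the drinker switches and cuts back to 8 pints/week.
pints_after, abv_after = 8, 0.07
ethanol_after = pints_after * abv_after      # 0.56 pints of ethanol

print(f"Ethanol before: {ethanol_before:.2f} pints/week")
print(f"Ethanol after:  {ethanol_after:.2f} pints/week (+40%)")
```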

In another paper, published back in July in the European Review of Agricultural Economics, Trey and I proposed a different, easy-to-interpret measure of (and way to correct for) inattention bias in discrete choice statistical models.

Taken together, these papers show that inattention is a significant problem in surveys and that adjusting for it can substantively alter one’s results.

We haven’t yet studied whether people who say they want DNA labels are more or less likely to miss trap questions or to exhibit other forms of inattention bias, but that seems a natural question to ask. Still, inattention can’t be the full explanation for absurd label preferences: we’ve never found inattention bias as high as the level of support for mandatory labels on foods indicating the presence/absence of DNA.