Blog

Similarity and substitution: Using pile sorting methods to explore economic behavior

That’s the title of a new paper I’ve co-authored with Amelia Ales and Vincenzina Caputo that has been accepted for publication in the Journal of the Agricultural and Applied Economics Association.

It has long been conjectured in economics that goods that are more similar will be stronger substitutes, but there is surprisingly scant evidence for this assertion.

To explore this issue, we turn to a research method called “pile sorting.” Pile sorting, also known as card sorting, is a method long used in qualitative social sciences, but it is largely unknown or unused by economists. The method, in use for over 50 years, entails asking people to sort items or concepts into piles or groups according to their similarities or dissimilarities. Responses are used to identify the structure of the cognitive relationships between items through cluster analysis and multidimensional scaling.
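To make the mechanics concrete, here is a minimal sketch, with invented items and responses rather than our actual data, of how pile-sort responses can be turned into a pairwise similarity matrix of the kind that feeds into cluster analysis or multidimensional scaling:

```python
from itertools import combinations

# Hypothetical pile-sort responses: each respondent sorts the items into piles.
# Item names and groupings below are invented for illustration.
items = ["apple", "banana", "beef", "pb_burger", "pb_sausage"]
responses = [
    [{"beef", "pb_burger", "pb_sausage"}, {"banana", "apple"}],
    [{"beef"}, {"pb_burger", "pb_sausage"}, {"banana", "apple"}],
    [{"beef", "pb_burger"}, {"pb_sausage"}, {"banana", "apple"}],
]

def similarity_matrix(items, responses):
    """Fraction of respondents who placed each pair of items in the same pile."""
    counts = {pair: 0 for pair in combinations(sorted(items), 2)}
    for piles in responses:
        for pile in piles:
            for pair in combinations(sorted(pile), 2):
                counts[pair] += 1
    n = len(responses)
    return {pair: c / n for pair, c in counts.items()}

sim = similarity_matrix(items, responses)
print(sim[("apple", "banana")])          # fruits always sorted together -> 1.0
print(sim[("pb_burger", "pb_sausage")])  # 2 of 3 respondents -> about 0.67
```

Each cell of the resulting matrix is the share of respondents who judged a pair of items similar enough to sort together, which is the raw material for the clustering and scaling steps.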

In this new paper, we explore whether perceptions of similarity or dissimilarity can help explain why consumers treat products as utility substitutes or complements. We also introduce an approach to modeling pile-sorting data that avoids downsides of common analytic techniques used in previous work.

Here’s a figure showing the analysis of the pile sort data for foods purchased in a grocery setting. People tend to rate the plant-based (PB) meats as similar to each other and very different from bananas, strawberries, and apples.

Are products perceived as more similar stronger demand substitutes? No, not necessarily. In fact, in a grocery setting, foods that are perceived as more similar are more likely to be demand complements (i.e., purchased together).

We find the opposite in a restaurant setting.

There’s a lot more in the paper including a discussion of implications for food marketing.

Some new papers

I’ve been fortunate to have several papers accepted for publication in the past few days - one on meat demand, another on plant-based meat alternatives, and two papers on consumer research methods. Below is a summary of each, starting with the research methods papers.

1) A Basket-Based Choice Experiment with Vincenzina Caputo in Food Policy. Here’s the abstract:

Although economic research on food consumer demand has exploded in recent years, most survey demand elicitation approaches have substantial limitations for food policy evaluations as they involve consumers choosing only one item out of a bundle. There is a need to design a more flexible approach capturing more realistic consumption patterns. This study introduces such an approach – a basket-based choice experiment – where consumers select their preferred food item or combination thereof. Our basket-based choice experiment includes 21 food items that can be freely combined to construct over 2 million possible baskets. Our results show that when given the opportunity, consumers select multiple items for their basket, most commonly three or four items. A composite conditional likelihood function approach is used to reduce the computational burden associated with modeling the choice of over 2 million possible baskets, and estimates are utilized in a multivariate logit model to calculate the probability of bundle selection and individual food price elasticities. Unlike typical choice experiments utilizing multinomial logit model variants, which force products to be demand substitutes, our basket-based approach is able to capture a rich set of substitution and complementary patterns, and we find that most of the 21 food items studied are demand complements. The BBCE is used to explore policy questions related to the impacts of changing prices on the healthfulness of consumer dietary choices and the welfare effects of product bans, such as Meatless Monday.
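As a back-of-the-envelope check on the basket count, and to see how an interaction term in a multivariate logit can generate complementarity, here is a toy two-good sketch. The parameter values are invented for illustration; this is not the paper's estimated model:

```python
import math

n_items = 21
n_baskets = 2 ** n_items  # every subset of the 21 items, incl. the empty basket
print(n_baskets)  # 2097152 -- the "over 2 million possible baskets" in the abstract

# Toy multivariate logit for two goods: item utilities a1, a2 plus an
# interaction g that makes the pair complements (g > 0) or substitutes (g < 0).
def basket_prob(a1, a2, g):
    utils = {(0, 0): 0.0, (1, 0): a1, (0, 1): a2, (1, 1): a1 + a2 + g}
    denom = sum(math.exp(u) for u in utils.values())
    return {b: math.exp(u) / denom for b, u in utils.items()}

p = basket_prob(a1=0.5, a2=0.5, g=1.0)
print(p[(1, 1)])  # joint-purchase probability, boosted by the positive interaction
```

With g = 0 the two purchase decisions are independent; a positive g raises the probability of the joint basket above what independence implies, which is the sense in which the model can recover complementarity that a standard multinomial logit rules out.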

2) A Calibrated Choice Experiment Method with Lauren Chenarides, Carola Grebitus, and Iryna Printezis in the European Review of Agricultural Economics. Here’s the abstract:

Although choice experiments have emerged as the most popular stated preference method in applied economics, the method is not free from biases related to order and presentation effects. This paper introduces a new preference elicitation method referred to as a calibrated choice experiment, and we explore the ability of the new method to alleviate starting point bias. The new approach utilizes the distribution of preferences from a prior choice experiment to provide real-time feedback to respondents about our best guess of their willingness-to-pay for food attributes, and allows respondents to adjust and calibrate their values. The analysis utilizes data collected in 2017 in two U.S. cities, Phoenix and Detroit, on consumer preferences for local and organic tomatoes sold through supermarkets, urban farms, and farmers markets to establish a prior preference distribution. We re-conduct the survey in May 2020 and implement the calibrated choice experiment. Conventional analysis of the 2020 choice experiment data shows willingness-to-pay is strongly influenced by a starting point: the higher the initial price a respondent encountered, the higher the absolute value of their willingness-to-pay. Despite this bias, we show that when respondents have the opportunity to update their willingness-to-pay when presented with the best-guess, the resulting calibrated willingness-to-pay is much less influenced by the random starting point.

3) Benchmarking US Consumption and Perceptions of Beef and Plant-Based Proteins with Hannah Taylor, Glynn Tonsor, and Ted Schroeder in Applied Economic Perspectives and Policy. Here’s the abstract:

This article uses two complementary analyses to document consumption of beef and plant-based proteins along with perceptions held by US consumers. Beef is chosen three times more often than plant-based proteins and consumers hold a positive image of beef overall. Key differences are outlined between regular meat consumers and those declaring alternative diets. Combined these findings extend understanding in the dynamic situation presented by plant-based proteins in the US market.

4) U.S. perspective: Meat Demand Outdoes Meat Avoidance with Glynn Tonsor in Meat Science. Here’s the abstract:

Despite ample discussion of health, environment, and animal welfare effects of meat production and consumption, this article documents past, current, and projected consumption patterns reflecting robust meat demand in the United States. There is some evidence of meat avoidance behavior among a segment of the population, including younger, higher educated, higher income consumers in the Western United States. At the same time, the majority of U.S. residents self-declare as regularly consuming products from animals, and there is evidence of strong demand growth for meat products in recent years. Key factors influencing protein purchasing decisions are presented revealing critical roles of taste, freshness, and safety. Combined this article summarizes both the aggregate and more refined, household-level situation underlying robust meat demand in the U.S.

Bacon Causes Cancer: Do Consumers Care?

That’s the title of a new working paper I’ve co-authored with Purdue PhD student, Xiaoyang He. The answer to the question is: “yes,” retail bacon prices and sales fell following the pronouncement that processed meat was classified as a carcinogen; however, we did not find the same for other processed meat categories, ham and sausage. Maybe all those headlines like “The great bacon freak-out” and “Eating just one slice of bacon a day linked to higher risk…” really served to focus people’s attention. Here is the abstract:

In October 2015, the International Agency for Research on Cancer (IARC) released a report classifying processed meat as a type 1 carcinogen. The report prompted headlines and attracted immediate public attention, but the economic impacts remain unknown. In this paper, we investigate the impacts of the IARC report on processed meat prices and purchases using retail scanner data from U.S. grocery stores. We compare changes in prices and sales of processed meat products to a constructed synthetic control group (using a convex combination of non-meat food products). We find a significant decrease in bacon prices and revenues in the wake of the IARC report release, but we find no evidence of a demand reduction in ham and sausage. At the same time, we find beef sales and revenue increased significantly after the report, while beef price significantly fell.

That bacon prices fell alongside the volume sold is a clear signal that consumer demand for bacon fell as a result of the IARC report.

As we discuss in the paper, a key challenge with identifying the effects of the IARC report rests in constructing a counter-factual prediction of what would have happened to prices and sales of processed meat products had the IARC report not been released. We cannot use data from an unaffected location because the media reports were widely distributed across the U.S. Instead, we use statistical methods (the so-called synthetic control method) to identify alternative food products as controls. We describe the approach as follows:

The synthetic control method sidesteps this problem and uses a combination of candidate controls instead. We use Nielsen retail scanner data to determine the effect of the IARC report on processed meat markets. This data contains weekly information regarding sales, price, and revenue for processed meat categories as well as categories that are included in the synthetic control group. We use the data from 2014 to 2016, which includes approximately one year of data before and one year of data after the IARC report release date. The post-IARC time period is long enough to determine, if any impact exists, how long it lasts.

In essence, we use the estimated relationship between dozens of candidate grocery item prices and bacon prices prior to the IARC report release to predict what bacon prices would have been had the report not been released. Here is a comparison of actual and counter-factual bacon prices ($/oz) before and after the report release:

[Figure: actual vs. counter-factual bacon prices ($/oz) before and after the IARC report release]

After a few weeks of bacon prices remaining above their predicted values, bacon prices ultimately averaged 6.5% lower than what we predict would have occurred had the IARC report not been released.
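For readers curious about the mechanics, here is a stylized sketch of the synthetic control idea with invented numbers and just two candidate control series; the paper uses many more candidate products and a proper constrained optimizer rather than a grid scan:

```python
# Stylized synthetic control: find convex weights on two control price series
# that best match the treated (bacon) series pre-treatment, then use those
# weights to predict the counter-factual post-treatment path. Data are made up.
treated_pre  = [1.00, 1.02, 1.04, 1.06]
control1_pre = [1.00, 1.01, 1.02, 1.03]
control2_pre = [1.00, 1.03, 1.06, 1.09]

def best_weight(treated, c1, c2, steps=1000):
    """Scan convex combinations w*c1 + (1-w)*c2 for the best pre-period fit."""
    best_w, best_sse = 0.0, float("inf")
    for i in range(steps + 1):
        w = i / steps
        sse = sum((t - (w * a + (1 - w) * b)) ** 2
                  for t, a, b in zip(treated, c1, c2))
        if sse < best_sse:
            best_w, best_sse = w, sse
    return best_w

w = best_weight(treated_pre, control1_pre, control2_pre)

# Post-period: counter-factual from the weighted controls vs. the observed series.
control1_post = [1.04, 1.05]
control2_post = [1.12, 1.15]
counterfactual = [w * a + (1 - w) * b for a, b in zip(control1_post, control2_post)]
actual_post = [1.02, 1.00]  # treated series drops after the "report"
effect = [o - c for o, c in zip(actual_post, counterfactual)]
print(effect)  # negative values -> prices below the no-report prediction
```

The estimated effect is simply the gap between the observed path and the weighted-control prediction, which is the quantity behind the 6.5% figure above.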

You can read the whole thing here.

Who are you calling food insecure?

Every year, the USDA Economic Research Service (ERS) reports rates of food security in the United States. In 2018, 11.1% of U.S. households were estimated to be food insecure, down from a recent-history high of 14.9% in 2011.

These official statistics on food security are often interpreted in the media and by lay audiences as a measure of hunger. But that’s not exactly what the USDA-ERS measures. A new paper by Sunjin Ahn, Travis Smith, and Bailey Norwood in Applied Economic Perspectives and Policy does a great job de-mystifying how official government measures of food insecurity are actually calculated. They also ably explain what other survey researchers must do to produce results that approximate the official measures.

Food insecurity is measured by the US Census Bureau asking a large sample of nationally representative U.S. households a series of 10 questions (plus an additional 8 questions if there are children in the household) like how often, “In the last 12 months, were you ever hungry, but didn't eat, because you couldn't afford enough food?” or how often “I couldn’t afford to eat balanced meals.” A score is then calculated based on the frequency with which people respond affirmatively to the questions. If the score is high enough, the household is deemed food insecure. Seen in this way, food insecurity is probably best interpreted as a measure of a household’s perception of food affordability, although it is almost surely positively correlated with hunger. The ERS has more information on how food security differs from hunger, and on the details of their measurement of food security, here.
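A stylized version of the scoring step looks like the sketch below; the question labels are hypothetical shorthand, and the cutoff of 3 is illustrative of how the official scale works rather than a restatement of its exact severity thresholds:

```python
# Stylized food security scoring: "often" or "sometimes" (vs. "never") counts
# as an affirmative response; enough affirmatives and the household is
# classified as food insecure. Labels and cutoff are illustrative.
def raw_score(answers):
    """Count affirmative responses; `answers` maps question label -> response."""
    return sum(1 for a in answers.values() if a in ("often", "sometimes"))

answers = {
    "hungry_but_didnt_eat": "sometimes",
    "couldnt_afford_balanced_meals": "often",
    "cut_size_of_meals": "never",
}
score = raw_score(answers)
food_insecure = score >= 3  # illustrative cutoff on the affirmative count
print(score, food_insecure)  # 2 False
```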

Ahn, Smith, and Norwood point out another issue that is not widely appreciated. They write:

To avoid overburdening respondents with unnecessary questions in the CPS‐FSS [Census Bureau Current Population Survey - Food Security Supplement] survey, surveyors first conduct a screening process. If a household’s income is greater than 185% of the poverty threshold, and they answer

(1) “no” to “… did you ever run short of money and try to make your food or your money go further,” or

(2) “enough of the kinds of food (I/we) want to eat” from the question “Which of these statements best describes the food eaten in your household …,”

they are assumed to be food secure and are not administered the Food Security questionnaire (ERS 2015b). This screening process varies: In a 2012 design description, the first of the above questions was not used (ERS 2012a), and documentation of the survey suggests sometimes the income threshold is 200% of the poverty threshold. Though it is recognized that some of the individuals screened out of the questions will in fact be food insecure, the screening was still seen as desirable because it reduces respondent burden (ERS 2015a). Thus, the CPS‐FSS food insecurity rates are a function of responses to food insecurity questions conditional on the statistical screening procedures employed.

Ahn, Smith, and Norwood’s paper is mainly framed around the question of whether opt-in, internet-based surveys can mimic the official government estimates of food insecurity. However, their results make abundantly clear the critical role of the income threshold in setting official food insecurity rates. In short, if we simply counted the scores on the food insecurity questions and ignored income, we would find MUCH higher rates of measured food insecurity. Before applying the income-cutoff, Ahn, Smith, and Norwood find food insecurity rates of 43% (in a 2016 survey) and 31% (in a 2017 survey). After applying the income cut-offs (essentially assuming anyone with an income over 185% of the poverty line can’t be food insecure) and some demographic weighting, the authors find opt-in internet surveys can produce estimates of food insecurity that are similar to those reported by the USDA-ERS.
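To see how the income screen mechanically lowers measured food insecurity, here is a toy example with invented households; the threshold follows the 185% rule quoted above, but the actual CPS-FSS screening procedure is more involved:

```python
# Illustrative screening: households above 185% of the poverty line that report
# no food hardship on the screener are assumed food secure and never asked the
# full module. Households and scores are invented to show the mechanics.
households = [
    {"income_ratio": 2.5, "screener_hardship": False, "raw_score": 4},
    {"income_ratio": 2.0, "screener_hardship": True,  "raw_score": 3},
    {"income_ratio": 1.2, "screener_hardship": True,  "raw_score": 5},
    {"income_ratio": 0.9, "screener_hardship": False, "raw_score": 0},
]

def insecure_rate(hhs, screen=True, threshold=3):
    """Share of households classified food insecure, with/without the screen."""
    flagged = 0
    for hh in hhs:
        screened_out = (screen and hh["income_ratio"] > 1.85
                        and not hh["screener_hardship"])
        if not screened_out and hh["raw_score"] >= threshold:
            flagged += 1
    return flagged / len(hhs)

print(insecure_rate(households, screen=False))  # 0.75: scores alone
print(insecure_rate(households, screen=True))   # 0.5: the screen removes household 1
```

The first household would be counted as food insecure on its raw score alone, but the income screen removes it before the questions are ever asked, which is exactly the gap between the 43%/31% raw rates and the official-style estimates.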

I’m a little unsure of how to interpret these findings. On the one hand, I’m left with a sense that the official food insecurity statistics are heavily influenced by a somewhat arbitrary income cut-off, and that perhaps the official measures of food insecurity are too imprecise at capturing the construct we are really after. Another reasonable, albeit alarming, conclusion is that there may be a lot more food insecure people than we thought.

Experimental Auctions - What's New?

It is hard to believe it’s been over a decade since my book with Jason Shogren on experimental auctions was first published. We’ve learned a lot and the field has evolved in the intervening years. As a result, I’m happy to announce a new review article on experimental auctions, just released by the European Review of Agricultural Economics, co-authored with Maurizio Canavari, Andreas Drichoutis, and Rudy Nayga. Maurizio, Andreas, Rudy, and I have been hosting a summer school on this topic in various European locations since 2011, and our annual discussions have been very useful in thinking about what works well and what doesn’t when conducting an experimental auction.

For readers of this blog who aren’t academic economists, you might be wondering: what, exactly, is an experimental auction, and why would you want to conduct one? The motivation for the method comes from the widely known fact that people’s answers on surveys don’t always align with their behavior in a grocery store. A general rule of thumb is that the average willingness-to-pay one finds in a survey can be divided by two if one wants to know what people will actually pay when money is on the line.

The problem is that we often want to know the value people place on items that aren’t regularly traded in a market, where real economic incentives are at play. An experimental auction solves this non-market problem by creating a market in a lab or online setting. An experimental auction involves people bidding real money to obtain (or exchange) real goods (typically food in my applications) in a type of auction whose rules give people an incentive to truthfully reveal their preferences.
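One classic incentive-compatible format used in experimental auctions is the second-price (Vickrey) auction: the highest bidder wins the good but pays the second-highest bid, so bidding your true value is a weakly dominant strategy. A quick sketch with made-up numbers:

```python
# Second-price (Vickrey) auction payoffs: the highest bidder wins the good but
# pays the highest competing bid. All values and bids below are made up.
def vickrey_payoff(my_bid, my_value, other_bids):
    """Bidder's surplus: value minus the top competing bid if they win, else 0."""
    top_other = max(other_bids)
    if my_bid > top_other:   # win and pay the highest competing bid
        return my_value - top_other
    return 0.0               # lose and pay nothing

value = 5.0
others = [3.0, 4.0]
truthful = vickrey_payoff(5.0, value, others)   # win, pay 4.0 -> surplus 1.0
shaded = vickrey_payoff(3.5, value, others)     # bid below value -> lose, surplus 0.0
overbid = vickrey_payoff(7.0, 5.0, [6.0])       # bid above value -> win, pay 6.0, surplus -1.0
print(truthful, shaded, overbid)  # 1.0 0.0 -1.0
```

Because what you pay doesn’t depend on what you bid (only on whether you win), shading your bid only risks losing surplus you would have enjoyed, and overbidding risks paying more than the good is worth to you.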

Here’s the abstract:

In this paper, we review recent advances in experimental auctions and provide practical advice and guidelines for researchers. We focus on issues related to randomisation to treatment and causal identification of treatment effects, design issues such as selection between different elicitation formats, multiple auction groups in a single session and house money effects. We also discuss sample size and power analysis issues in relation to recent trends in experimental research about pre-registration and pre-analysis plans. We position our discussion with respect to how the agricultural economics profession could benefit from practices adopted in the experimental economics community. We then present the pros and cons of moving auction studies from the laboratory to the field and review the recent literature on behavioural factors that have been identified as important for auction outcomes.

For Ph.D. students, or anyone looking for a new idea to work on, I’ll note that the conclusions section has a slew of ideas for future research.