Blog

Who are the vegetarians?

One of the challenges researchers face in trying to learn about the characteristics of vegetarians is that there are so few of them.  I've seen estimates that put the percentage of vegetarians in the US population as high as 13%, but most estimates are closer to 5%.  That means that a survey with 1,000 respondents (a pretty typical sample size for pollsters) will contain only about 50 vegetarians - hardly a large enough subsample to say anything meaningful.
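
To see why 50 is so limiting, consider the margin of error on any estimate computed within that subsample.  A quick back-of-envelope in Python (the 50-respondent figure comes from the 5% estimate above):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of a 95% confidence interval for a proportion p estimated from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Full sample of 1,000: estimates about the whole population are fairly tight.
print(round(margin_of_error(0.5, 1000), 3))   # ~0.031, i.e. +/- 3.1 points

# But any estimate computed within the ~50 vegetarians is much noisier.
print(round(margin_of_error(0.5, 50), 3))     # ~0.139, i.e. +/- 13.9 points
```

So a characteristic measured among the vegetarians alone carries an uncertainty of nearly 14 percentage points in either direction.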

We've been running the Food Demand Survey (FooDS) for 19 months now, and each monthly survey has over 1,000 respondents.  I took the first year's data (from July 2013 to July 2014), which consists of responses from over 12,000 individuals.  This sample is potentially large enough to begin to make some more comprehensive statements about how vegetarians might differ from meat eaters in the US.

Applying weights that force the sample to match the population in terms of age, gender, region of residence, etc., we find that 4.2% of respondents say "yes" to the following question: "Are you a vegetarian or a vegan?", which means that 95.8% say "no".  
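
The survey doesn't spell out its weighting procedure here, but weights of this kind are commonly built by raking (iterative proportional fitting), which rescales respondent weights until the weighted sample margins match population targets.  A minimal sketch, with made-up respondents and made-up population shares:

```python
# Illustration of raking (iterative proportional fitting): adjust respondent
# weights so weighted margins match population targets.  The respondents and
# population shares below are invented for illustration.

respondents = [
    {"gender": "f", "age": "young"}, {"gender": "f", "age": "old"},
    {"gender": "m", "age": "young"}, {"gender": "m", "age": "young"},
    {"gender": "m", "age": "old"},
]
targets = {
    "gender": {"f": 0.51, "m": 0.49},    # assumed population shares
    "age":    {"young": 0.40, "old": 0.60},
}

weights = [1.0] * len(respondents)
for _ in range(50):                       # iterate until margins converge
    for var, shares in targets.items():
        total = sum(weights)
        for level, share in shares.items():
            idx = [i for i, r in enumerate(respondents) if r[var] == level]
            current = sum(weights[i] for i in idx)
            for i in idx:                 # scale this group to its target share
                weights[i] *= (share * total) / current

total = sum(weights)
f_share = sum(w for w, r in zip(weights, respondents) if r["gender"] == "f") / total
print(round(f_share, 4))  # weighted female share now matches the 0.51 target
```

Each pass forces one variable's weighted margins to the targets; cycling the passes converges when the targets are mutually consistent.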

There is some sampling variability from month-to-month, but overall, the trend in the percentage of respondents declaring vegetarian/vegan status has remained relatively constant, and if anything, has trended slightly downward over time.

So, how do self-declared vegetarians/vegans differ from meat eaters?  The following table shows differences/similarities in socio-economic and demographic characteristics.

Some of the biggest differences appear for age, race, overweight status, and politics.  Vegetarians tend to be younger, less white, skinnier, and more liberal than meat eaters.  Two unexpected results are that vegetarians indicate a much higher rate of food stamp participation (which is a bit surprising since the share of households with >$100K in income is higher for vegetarians than meat eaters) and a much, much higher rate of food-borne illness.  

In our survey, we also measure respondents' "food values" (for detail on the approach, see this academic paper we published).  The approach requires people to make trade-offs (they cannot say all issues are most important).  Respondents are shown a set of 12 issues and are asked to place 4 (and only 4) of them in a box indicating they are the most important issues when buying food, and 4 (and only 4) in a box indicating they are the least important.  We measure relative importance by subtracting the share of times an item appears in the least-important box from the share of times it appears in the most-important box.  Thus, relative importance is on a scale of -1 to +1, and the average scores across all 12 items must sum to zero.  
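
The scoring rule is easy to make concrete.  Here is a minimal sketch with two invented respondents; the issue names are placeholders, not necessarily the survey's actual 12 items:

```python
# Best-worst scoring sketch with invented data: each respondent places 4 of
# 12 issues in a "most important" box and 4 in a "least important" box.
# Relative importance = share of "most" picks minus share of "least" picks.

issues = ["taste", "price", "safety", "nutrition", "animal welfare",
          "environment", "naturalness", "origin", "fairness",
          "appearance", "convenience", "novelty"]

# Each tuple: (set of 4 "most important", set of 4 "least important").
responses = [
    ({"taste", "price", "safety", "nutrition"},
     {"novelty", "appearance", "origin", "fairness"}),
    ({"taste", "price", "animal welfare", "environment"},
     {"novelty", "convenience", "appearance", "origin"}),
]

n = len(responses)
score = {i: (sum(i in most for most, _ in responses)
             - sum(i in least for _, least in responses)) / n
         for i in issues}

# Scores lie between -1 and +1, and they sum to zero by construction:
# every respondent contributes exactly 4 "most" picks and 4 "least" picks.
print(score["taste"], round(sum(score.values()), 10))
```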

Meat eaters tend to rate taste and price as relatively more important food values than vegetarians do.  Vegetarians tend to rate animal welfare and the environment as more important food values than meat eaters do.  Even so, vegetarians rate nutrition, taste, price, and safety as more important food values than animal welfare or the environment.  

The survey also shows people a list of 16 issues and respondents indicate how concerned they are about each issue (1=very unconcerned to 5=very concerned).  As the table below shows, vegetarians are more concerned about all issues than are meat eaters, even an issue like GMOs which is (at present) primarily a plant issue.  The difference in level of concern between vegetarians and meat eaters is particularly large for gestation crates, battery cages, and farm animal welfare.  

Given some previous discussion on the economics of Meatless Monday, I ran some statistical models to determine whether vegetarians tend to spend more or less on food than meat eaters.  

Without controlling for any differences in income, age, etc. that were found in the initial table above, I do not find any statistically significant differences in spending patterns.  Meat eaters report spending about $94/week on food eaten at home and vegetarians report spending about $3 less (a difference that isn't statistically significant); meat eaters report spending about $46/week on food eaten away from home (e.g., at restaurants) and vegetarians spend about $9.80 more (again, not statistically significant).  Even after I control for differences in income, age, etc., I do not find any significant differences in food expenditures between vegetarians and meat eaters.  The biggest determinants of food spending are income (high-income individuals (>$100K in income) spend $35/week more away from home than low-income individuals (<$40K in income)) and household size (larger households spend more).  Younger people spend about the same as older people on food at home, but spend more eating out.  
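
The flavor of the uncontrolled comparison can be sketched with a two-sample t-test on invented data (the sample sizes, means, and spread below are made up to roughly match the magnitudes in the text, not the actual survey data):

```python
# Invented-data sketch: is a ~$3/week difference in at-home food spending
# between a large meat-eater group and a small vegetarian group significant?
# Welch two-sample t-test computed by hand.
import math
import random

random.seed(0)
meat = [random.gauss(94, 40) for _ in range(11500)]  # illustrative draws
veg = [random.gauss(91, 40) for _ in range(500)]

def mean(x):
    return sum(x) / len(x)

def var(x):
    m = mean(x)
    return sum((v - m) ** 2 for v in x) / (len(x) - 1)

diff = mean(meat) - mean(veg)
se = math.sqrt(var(meat) / len(meat) + var(veg) / len(veg))
t = diff / se
print(round(diff, 1), round(se, 2), round(t, 2))
```

With only ~500 vegetarians and noisy weekly spending, the standard error on the difference is large, so a $3 gap is hard to distinguish from zero.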

When fat taxes meet the supply side

Last week at the European Association of Agricultural Economists' meetings, I saw Louis-George Soler present a keynote talk on food and nutrition policies.  The paper version of the talk, written with Vincent Réquillart, is being published in the European Journal of Agricultural Economics.

One of the key points of his talk was that much of the policy analysis on effects of fat taxes, soda taxes, veggie subsidies, etc. only consider consumer responses and ignore how firms will react to the policies.  It is often the case that such supply-side responses will substantively reduce the health impacts of the policies.

For example, suppose Congress passed a law banning advertising of sugar-sweetened cereal to children.  How might Kellogg's or General Mills respond?  Given that the firms can no longer spend their revenue on promotion and advertising, they might instead re-direct those funds to cost-cutting efforts that reduce the cereals' prices.  Competition moves from who has the most compelling ad to who has the lowest price.  Lower prices will encourage more consumption: exactly the opposite of what the ban intended.

Another point they raise relates to the "pass-through" effect of taxes on firms' profits and retail prices.  Depending on the nature of competition between firms and the type of tax (excise or ad valorem), a tax can be "over-shifted" or "under-shifted" to consumers.  Thus, tax policies might cause a larger or smaller reduction in consumption than anticipated.
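
The direction of pass-through can be illustrated with two textbook monopoly cases (a standard result, not a calculation from the Réquillart-Soler paper): with linear demand an excise tax is under-shifted, while with constant-elasticity demand it is over-shifted.

```python
# How an excise tax t shifts the profit-maximizing price under two demand curves.

def linear_demand_price(a, b, c, t):
    """Monopoly price with inverse demand P = a - b*Q and unit cost c + t."""
    return (a + c + t) / 2.0

def isoelastic_price(eps, c, t):
    """Monopoly price with constant demand elasticity eps > 1 and unit cost c + t."""
    return eps / (eps - 1.0) * (c + t)

# Linear demand: a $1 tax raises price by only $0.50 ("under-shifted").
dp_linear = linear_demand_price(10, 1, 2, 1) - linear_demand_price(10, 1, 2, 0)

# Constant elasticity (eps = 3): a $1 tax raises price by $1.50 ("over-shifted").
dp_iso = isoelastic_price(3, 2, 1) - isoelastic_price(3, 2, 0)

print(dp_linear, dp_iso)  # 0.5 and 1.5
```

In the over-shifted case the quantity response, and hence the health effect, is larger than a naive "consumers pay the full tax" analysis would predict; in the under-shifted case it is smaller.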

Take another example.  Suppose the government requires firms to add "high fat" labels to certain products.  The research cited in the Réquillart-Soler paper suggests that firms may respond by lowering the price of the high-fat items and increasing the price of the low-fat items.  While the "high fat" label will tend to discourage consumption, the now-lower relative price for high-fat items will tend to encourage it.  

None of this is to say that food policies won't have any impact on health, only that studies which ignore food companies' responses to the new policy environment will often overestimate the health impacts of food policies.   

A problem with cost-benefit analysis?

I'm a fan of cost-benefit analysis.  The approach provides a systematic way to think through the consequences of public policies and provides a reasonable approach to debate merits and demerits of a policy.  

Cost-benefit analysis shouldn't be the final word on a policy because there are some "rules" we may care about regardless of immediate short-run consequences.  For example, even if a cost-benefit analysis found that the benefits to TV thieves outweighed the costs to prior TV owners, few would support a policy of decriminalizing TV theft, in part because a society with such little respect for property rights is not likely one that would be prosperous in the long run (or enjoyable to live in, for that matter).   All this is a way of saying that our moral intuitions often conflict (sometimes rightfully so) with the short-term utilitarian premise implied by cost-benefit analysis (the trolley problem is a common example).

In the realm of food and public health policy, the way benefits and costs are calculated is sometimes myopic: it fails to account for dynamic market responses to policies and rests on shaky methodological assumptions.  Moreover, when we find that benefits exceed costs, one should also ask: what is preventing the market from capitalizing on this arbitrage opportunity?  Stated differently, there would need to be solid evidence of market failure (or some government failure) in addition to a positive cost-benefit test to justify a public policy.          

Despite these qualms, I see cost-benefit analysis as a useful tool, and it provides one input into the decision making process.

Lately, I've been thinking about what happens to a cost-benefit analysis when one considers multiple policies - particularly in an environment where there are increasing calls for new regulations.

Suppose one did a cost-benefit analysis (CBA) on mandatory country-of-origin labeling for meat.  Then, a CBA on a ban on the use of subtherapeutic antibiotics in meat production.  Then, a CBA on a ban on growth hormones.  Then, a CBA on banning gestation crates in pork production.  Then, a CBA on banning trans fats.  Then, a CBA on new water regulations for confined animal feeding operations.  Then, a CBA on a carbon tax on methane emissions from cows.  (I could go on - these represent but a few of the policies commonly batted around that have some impact on meat and livestock markets.)  

Is it possible that each of these policies - in isolation - could pass a cost-benefit test, and yet, when considered jointly, fail?  Stated differently, is it possible to strictly follow a cost-benefit rule when adopting public policies (enacting only policies that pass a CBA) and wind up with a world we find less desirable than the one we started with?

I think the answer may be "yes."  For example, each CBA in isolation will assume that the status quo prevails with regard to every other policy.  But, the general equilibrium effects could differ from these individual partial-equilibrium analyses, particularly if there are nonlinearities.
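
A toy numerical example (numbers invented) of how stacked policies can each pass in isolation yet fail jointly, once compliance costs interact:

```python
# Invented numbers: each policy yields a benefit of 10 at a direct cost of 8,
# but every pair of stacked policies adds an extra 5 in interacting
# compliance costs - the nonlinearity a stand-alone CBA never sees.

def net_benefit(policies, benefit=10.0, cost=8.0, interaction=5.0):
    """Benefits are additive; costs compound pairwise when policies are stacked."""
    k = len(policies)
    total_benefit = benefit * k
    total_cost = cost * k + interaction * k * (k - 1) / 2.0
    return total_benefit - total_cost

print(net_benefit(["labeling"]))                       # 2.0  -> passes alone
print(net_benefit(["antibiotics ban"]))                # 2.0  -> passes alone
print(net_benefit(["labeling", "antibiotics ban"]))    # -1.0 -> fails jointly
```

Each policy clears its own test, but a CBA of the pair, holding nothing at the status quo, comes out negative.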

Tyler Cowen recently linked to a new paper by Ian Martin and Robert Pindyck on policies related to catastrophic events that also seems relevant to this discussion.

How should we evaluate public policies or projects to avert or reduce the likelihood of a catastrophic event? Examples might include a greenhouse gas abatement policy to avert a climate change catastrophe, investments in vaccine technologies that would help respond to a “mega-virus,” or the construction of levees to avert major flooding. A policy to avert a particular catastrophe considered in isolation might be evaluated in a cost-benefit framework. But because society faces multiple potential catastrophes, simple cost-benefit analysis breaks down: Even if the benefit of averting each one exceeds the cost, we should not avert all of them.

Cowen summarized the paper as follows: 

The main point is simply that the shadow price of all these small anti-catastrophe investments goes up, the more of them we do, and thus we cannot do them all, even if every single investment appears to make sense on its own terms.

Typical CBAs often ignore the hundreds (if not thousands) of laws that already affect farmers' and food purveyors' ability to operate.  It does make one wonder whether diminishing returns shouldn't feature more prominently in CBA.

A short lesson on experimental auctions

One of the most robust findings from research on what consumers are willing to pay for non-market goods (for example, foods made with new technologies that are not yet on the market) is that people tell researchers they are willing to pay more than they actually do when real money is on the line.  One review showed, for example, that people tend to overstate how much they are willing to pay in hypothetical settings by a factor of about three.  That means if someone tells you on a survey that they're willing to pay $15, they'd probably only actually pay about $5.

One way to deal with this problem of hypothetical bias is to construct experimental markets where real money and real products are exchanged.  The key is to use market institutions that give consumers an incentive to truthfully reveal their values for the good up for sale.  I wrote a whole book with Jason Shogren on the subject of using experimental auctions for this purpose a few years back.
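
The workhorse incentive-compatible institution is the second-price (Vickrey) auction: the highest bidder wins but pays the second-highest bid, which makes bidding one's true value a weakly dominant strategy.  A small sketch verifying this for one bidder against fixed rival bids (the values are invented):

```python
# Why a second-price auction elicits true values: the winner pays the
# second-highest bid, so shading or inflating your own bid can never raise
# your payoff.  Brute-force check for one bidder against fixed rivals.

def payoff(my_bid, my_value, rival_bids):
    """Surplus to a bidder in a sealed-bid second-price auction."""
    top_rival = max(rival_bids)
    if my_bid > top_rival:      # win, pay the second-highest bid
        return my_value - top_rival
    return 0.0                  # lose, pay nothing

my_value = 5.0
rivals = [3.0, 4.5, 2.0]
truthful = payoff(my_value, my_value, rivals)

# No alternative bid (0.0 to 10.0 in steps of 0.1) does better than truth-telling.
best_deviation = max(payoff(b / 10.0, my_value, rivals) for b in range(0, 101))
print(truthful, best_deviation)  # 0.5 0.5
```

Because the price paid doesn't depend on the winner's own bid, there is no gain from misreporting - which is exactly the property needed to measure real willingness to pay.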

I recently filmed a short primer on the consumer research method for an on-line course being created by my colleague Bailey Norwood.  He graciously put it up online for anyone's viewing pleasure.  

How has medical spending changed and why

Last week, I gave a plenary address to the annual meeting of the American Association of Clinical Endocrinologists on the topic of obesity and the government's role in addressing the issue.  

In my talk, I showed the following graph illustrating the change in spending on medical care expressed as a percentage of GDP from 1960 to 2012 (I created the graph using data from here).

People often use this sort of data to try to illustrate the adverse consequences of obesity and other dietary-related diseases that have risen over time.  That is part of the story.  But, it is also a complicated story, and a lot has changed over time.  

One partial explanation for the change is that Medicaid and Medicare didn't exist in 1960; some of the spending by these programs in 2012 would have occurred anyway, but some probably wouldn't have (i.e., some people would have delayed or foregone treatments if they weren't covered by these programs), so that's part of the story.  But, it can't be a huge part, as these two programs make up less than half of total spending in 2012.

Another reason we likely spend more of our GDP on medical care today than we did in 1960 is that we are richer today.  Health care is a normal good, meaning that we buy more of it as we become wealthier.  Here, for example, is a recent cross-sectional comparison of how countries that differ in per-capita GDP spend money on health care.

Clearly, the US is an outlier.  But, don't let that distract from the main message of the graph.  Richer countries spend more on health care.  It is almost a perfectly linear trend except for the US and Luxembourg.  

So, let's do a little thought experiment.  In real terms, per-capita GDP in the US in 1960 was around $15,000, whereas today it is around $45,000.  Looking at the graph above, countries with around $15,000 in per-capita GDP spend about $1,000/person/year on health care, while countries with around $45,000 in per-capita GDP spend about $5,000/person/year.  Extrapolating from these data suggests that we're spending $4,000 more per person on medical care in the US today than in the 1960s simply because we're richer today than in 1960.  

If I take 2012 cross-sectional WHO data (173 countries) from here and here, I find the following relationship from a simple linear regression: (spending on medical care as a % of GDP) = 6.47 + 0.033*(GDP per capita in thousands of $).  P-values for both coefficients are well below 0.01.  As previously stated, US GDP per capita has gone up by about $30,000 since 1960.  This means we would expect the % of our GDP spent on health care to be 30*0.033 = 0.99 percentage points higher simply as a result of income changes.
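
The back-of-envelope can be written out explicitly (the coefficients are those from the regression reported above):

```python
# Predicted health spending share of GDP from the fitted cross-country line:
# share = 6.47 + 0.033 * (GDP per capita in thousands of dollars).

intercept, slope = 6.47, 0.033

def predicted_share(gdp_pc_thousands):
    """Predicted medical spending as a % of GDP at a given income level."""
    return intercept + slope * gdp_pc_thousands

# US real per-capita GDP rose roughly from $15k (1960) to $45k (today).
change = predicted_share(45) - predicted_share(15)
print(round(change, 2))  # 0.99 percentage points attributable to income growth
```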

One final thought experiment.  We are a lot older today than in the 1960s.  For example, 35.9% of the population was under the age of 18 in 1960.  Today that figure is only 24%.  Older people spend more on health care than younger people.  Thus, we'd expect more spending on medical care today than in 1960 because we have more older people today.

Thus, I thought I'd do a crude age-adjusted calculation of medical spending as a % of GDP.    

I pulled data on per-capita spending by age category from the Centers for Medicare and Medicaid Services, Office of the Actuary, National Health Statistics Group and data from the Census Bureau on distribution of age in 2010 and 1960.

Here is the data and my calculations.

The last two columns construct a counter-factual.  The second to last column multiplies the 1960 age distribution by the total population in 2010; it imagines a world as populated as our current one but with ages distributed like 1960.  The last column calculates expected spending on health care with this 1960 age distribution by multiplying per-capita spending by the counter-factual age distribution.

The data suggest we actually spent $2,192 billion on medical care in 2010.  However, if our nation had been younger, as it was in 1960, we would have spent only $1,922 billion.  Thus, we're spending 14% more in total on health care in 2010 than we would if the population still had its 1960 age distribution (of course, we're also spending more because there are more of us).  If I express these figures as a percentage of 2010 US GDP, I find that current medical spending (as determined from this particular set of data) is 14.7% of GDP.  However, with the 1960 age distribution, medical spending would be only 12.8% of 2010 GDP.
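
The structure of that counterfactual is simple to sketch.  The age groups, per-capita spending figures, and shares below are invented for illustration (the actual calculation uses the CMS and Census data described above), chosen so the gap lands in the same ballpark:

```python
# Counterfactual structure: hold the 2010 population size fixed, swap in the
# 1960 age distribution, and re-total spending.  All numbers are illustrative.

spend_per_capita = {"0-18": 3000, "19-64": 6000, "65+": 18000}  # $/person/year

shares_2010 = {"0-18": 0.24, "19-64": 0.63, "65+": 0.13}  # 2010 skews older
shares_1960 = {"0-18": 0.36, "19-64": 0.55, "65+": 0.09}

pop_2010 = 309e6  # approximate total 2010 US population

def total_spending(shares):
    """Total spending if pop_2010 people were spread across ages as given."""
    return sum(spend_per_capita[g] * shares[g] * pop_2010 for g in shares)

actual = total_spending(shares_2010)
counterfactual = total_spending(shares_1960)  # 2010 population, 1960 age mix

# The older 2010 age mix alone raises total spending.
print(round(actual / counterfactual, 2))  # ~1.14 with these illustrative numbers
```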

In summary, increasing medical expenditures might indeed be a cause for alarm.  But, that rise is also partially explained by the fact that we are today richer and living longer.  I'd say that's a good thing.