The story is a powerful lesson about externalities that can arise with herbicide resistant genetically engineered crops (this one is largely negative, but note that GE Bt crops can create positive externalities). Who's to blame in this case? Monsanto for releasing GE Dicamba-resistant seed before a new version of Dicamba was released? Regulators for their slowness in approving the new Dicamba? Farmers who improperly used and applied the old version of Dicamba? You'll have to listen and form your own judgement.
A new article by Kristin Runge et al. in the journal Public Opinion Quarterly pulls together polling results over the past few decades in an attempt to ascertain changes in public opinion about biotechnology and GMOs. Here's the abstract.
It is an interesting article focusing on more than just biotechnology, but it misses some of the other attempts to aggregate polling results on these issues over the years from, for example, Pew and IFIC. Also, one shouldn't discount the many meta-analyses that have been done on this topic relying on the academic literature (e.g., here, here, or here), which don't show much trend toward increasing concern about biotechnology or GMOs. The results from my Food Demand Survey (FooDS) also show very little evidence of changes in awareness or concern about GMOs over the past four years.
AgriPulse recently ran an article about a new Congressionally mandated effort to educate consumers about biotechnology. According to the article:
The article includes several quotes from yours truly. I was asked whether the spending will make any real difference in consumer attitudes and whether the effort could harm FDA’s credibility as a regulator. Here are the (slightly edited) responses I gave to the article's author.
On the first question: can information affect public perceptions? The answer is yes - at least a bit. Most of our research shows consumers remain highly uninformed (and often misinformed) about the technology. As a result, subtle changes in wording, descriptions of the technology's benefits, etc. can be persuasive. I think this can be seen most directly in the various state ballot initiatives on mandatory GE labeling. Early polling in all the states showed that voters approved of the laws by a wide margin. But as the vote neared and biotech companies and others started running ads, support eroded to such a point that the mandatory GE labeling laws failed in every state where they were put on the ballot. This is fairly strong evidence that information mattered in the "real world." That said, the USDA and FDA have communicated on these issues in the past, and it is unclear what effects they had.
All this suggests that the form of the communication matters. Information that is scientifically accurate but focused on the perspective of the farmers/consumers who benefit is likely to be most persuasive.
Could credibility be harmed? Well, I don't believe the government should promote a particular company or industry per se (though of course it already does that in a variety of direct and indirect ways, such as encouraging conversion to organic, facilitating labeling programs and marketing orders, etc.), but providing the public with accurate, scientific information on matters of public concern seems a legitimate role for government. Focusing on the wide range of applications in the private, public, and nonprofit sectors is one way of perhaps avoiding perceptions of impropriety. It's also important to be honest about possible downsides and trade-offs, and not to oversell - biotech is a tool, but it's not a universal savior.
Posted college tuition has been on the rise in recent decades. There is a lot of debate about the cause of rising sticker prices, but one factor that is often blamed is the increasing number of administrators. Less well appreciated are the factors causing the increasing number of administrators. One driver is the fact that more staff are needed to comply with increasing regulations and accountability imposed on universities by state and federal governments.
One example that affects my area of research most directly is human subjects research committees (or the so-called institutional review board, IRB). Early in my career, if I wanted to survey food consumers or run an experiment in a grocery store, I just did it. I didn't have to ask permission or get approval from a university administrator. But, somewhere along the way, the federal government required universities wishing to receive federal monies to have projects approved by local IRBs. Now, all major universities have their own IRBs with staffs of various sizes, and with faculty spending time chairing and serving on IRB committees (full disclosure: I've served as an alternate member of OSU's IRB for several years).
Some of the basic ideas behind the IRB approval process are worthwhile: making sure people aren't being unduly coerced and are participating in research voluntarily, making sure research participants' information isn't being used in a way that embarrasses them or can be used against them in some way (i.e., protecting confidentiality), and making sure the research doesn't expose participants to undue risks that are incommensurate with the benefits.
All that said, from a researcher's perspective, all this can be a real pain even for the most minor of surveys. Surveys that are anonymous are technically exempt from IRB approval, but the researcher doesn't have the ability to make that determination: they have to fill out a long form, send the research instrument (including information on participant recruiting, etc.) to an IRB committee, and wait for them to make the determination (before all this, the researcher has to undergo training on human subjects research and pass several tests). And wait you will. I've heard stories from colleagues having to wait several months for an IRB determination. And when you hear back, you often are asked to make changes to your research design that have little to do with the aforementioned purposes of the IRBs. If you're trying to do a survey on a current policy issue, you've now waited weeks or months for approval, and even if a project is approved, if you want to reword a question or add a new one to address evolving events, now you have to submit a change modification form that also has to be approved. Given these timing issues, it has become next to impossible, for example, to have graduate students do publishable-quality surveys/experiments for class projects.
I've largely had positive experiences with IRBs (I've had a couple bad ones too), but one shouldn't overlook the costs this system imposes on researchers, on the university, and ultimately on the taxpayer and student. Whether the benefits of the system exceed these costs is a question I've never seen seriously addressed.
Change is afoot. This is from an article by Richard Shweder and Richard Nisbett in the Chronicle of Higher Education back in March:
I suggest reading the whole thing. The authors provide some history of these programs and passionately convey their frustration with the present system. They also note that universities have till next year to figure out how to address the changes in federal regulations.
Another article in the New York Times is more critical of the changes and is less optimistic that real changes will occur for human subjects research. However, I'm beginning to hear rumblings at a few universities that they will no longer require prior IRB approval for certain types of human subjects research. The end of the NYT article suggests what some of this is about: absent federal guidelines, universities may still want to review research to reduce the risk of controversy, embarrassment, or lawsuits. Those are legitimate concerns, but they are likely to run up against issues of academic freedom and freedom of inquiry. And, they are concerns that are distinct from protecting the human subjects themselves.
That's the title of a new working paper co-authored with Ph.D. student Kelsey Conley. There is a lot of talk about how millennials' food preferences may differ from those of previous generations, but much less is available in terms of hard evidence. Here's what we write as the challenge with a lot of the previous research in this area (this criticism is also true of previous blog posts I've written on the subject, such as this one and this one):
Some summary statistics and preliminary analysis:
Our main findings are likely to be somewhat unexpected. We find that the "millennial effect" is positive on food expenditure shares for three meat categories (beef, pork, and poultry), eggs, cereal, and fresh fruit. A statistically significant negative "millennial" effect is found for non-alcoholic beverages and food away from home. This doesn't mean millennials are spending less of their food budget eating out (or more of it on meat) than young people did in the 1980s; it only means that, compared to older folks today, they're devoting less of their food budget to eating out (and more to meat) than young people did relative to older folks in the past.
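The cohort-versus-age distinction in that last paragraph is easy to trip over, so here is a toy simulation (entirely made-up numbers, not the paper's data or estimates) showing how it can happen: millennials can eat out more than young people did in the 1980s in raw terms, yet a regression controlling for age and survey year can still recover a negative "millennial" coefficient, because the cohort effect is measured relative to other cohorts observed at the same time.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_wave(year, n=4000):
    """One hypothetical cross-sectional survey wave of eating-out budget shares."""
    age = rng.integers(20, 70, n)
    birth_year = year - age
    millennial = (birth_year >= 1981).astype(float)
    # Assumed data-generating process: the share falls with age, eating out
    # rose over time for everyone (period effect), and the millennial cohort
    # eats out slightly *less* than earlier cohorts did, all else equal.
    share = (0.25 - 0.002 * age + 0.003 * (year - 1985)
             - 0.03 * millennial + rng.normal(0, 0.02, n))
    return age, millennial, np.full(n, float(year)), share

# Pool an early wave and a recent wave, as repeated cross-sections
age, millennial, year, share = [
    np.concatenate(z) for z in zip(simulate_wave(1985), simulate_wave(2015))
]

# Raw comparison: millennials today vs. young people (20-34) in 1985
young_1985 = share[(year == 1985) & (age <= 34)].mean()
mill_2015 = share[(year == 2015) & (millennial == 1)].mean()
print(f"young in 1985: {young_1985:.3f}, millennials in 2015: {mill_2015:.3f}")

# OLS: intercept, age, a 2015 period dummy, and a millennial cohort dummy
X = np.column_stack([np.ones_like(share), age, year == 2015, millennial]).astype(float)
beta, *_ = np.linalg.lstsq(X, share, rcond=None)
print(f"estimated millennial effect: {beta[3]:+.3f}")
```

In this made-up example the raw eating-out share is higher for millennials in 2015 than it was for young people in 1985 (the period effect dominates), yet the estimated "millennial effect" comes back negative (around -0.03), because it compares millennials to older cohorts surveyed at the same time, net of age.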