Respectable research reporting

It’s harder than you might expect to report accurately on the latest agronomy research

I’m a fan of ag research. When I’m reporting on new ag research, I do my best to figure out how the study was set up and whether there are any major flaws. I try to communicate this to readers. I have one of Les Henry’s columns on farm research taped next to my computer, as a reminder of what to watch for.

But figuring out how well the study was set up is not always easy. I have a graduate degree in communications, but it focused on qualitative research, which is very different from the quantitative stuff you’ll find in agronomic experiments. I’m sure quite a few of my readers have a stronger science background than I do.

Still, I muddle my way through farm shows every winter, trying to figure out what readers need to know about the latest agronomic experiments. Here are a few things I think about.

Are the results statistically significant?

Generally, researchers are up-front about whether or not their results were statistically significant. This is important because if the results are not significant, they could easily be due to chance. And if the results were due to chance, the treatment didn’t make a difference.
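To make that concrete, here’s a toy sketch in Python. The plot yields below are invented for illustration, and 0.05 is just the usual significance cutoff, not a law of nature:

```python
# Toy example: comparing invented plot yields (bu/ac) between
# untreated and fungicide-treated plots with a two-sample t-test.
from scipy import stats

control = [52.1, 49.8, 51.3, 50.5, 48.9, 51.7]  # untreated plots
treated = [53.0, 51.2, 54.1, 50.8, 52.6, 53.3]  # fungicide plots

t_stat, p_value = stats.ttest_ind(treated, control)
print(f"p-value: {p_value:.3f}")

# Common convention: a p-value of 0.05 or more means the yield
# difference could easily be chance, so don't take it at face value.
if p_value < 0.05:
    print("Statistically significant at the five per cent level.")
else:
    print("Not significant; the difference may just be noise.")
```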

Be suspicious of anyone who tells you the results weren’t statistically significant, but then analyzes the economics of the different treatments. As Les Henry wrote in March 2014, doing an economic analysis on results that are due to chance is “completely inappropriate.” The economic analysis might reveal big differences between the treatments, but those differences will still be flukes.

Was the data “confounded”?

As a general rule, the only difference between the control and the treated crop should be the treatment itself. For example, let’s say you wanted to test a new fungicide. If you added more nitrogen to the fungicide-treated crop for some reason, you’d have no way of knowing whether it was the fungicide or the nitrogen that bumped yield.
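Here’s a rough simulation of that problem, again in Python. The effect sizes are made up, but they show why a confounded design can’t tell you which input did the work:

```python
# Illustrative sketch of confounding, with invented effect sizes.
import random
random.seed(1)

def plot_yield(extra_n, fungicide):
    # Assume (for illustration) +3 bu/ac from extra nitrogen and
    # +2 bu/ac from fungicide, plus random field noise.
    return 50 + 3 * extra_n + 2 * fungicide + random.gauss(0, 1)

control = [plot_yield(extra_n=0, fungicide=0) for _ in range(20)]
treated = [plot_yield(extra_n=1, fungicide=1) for _ in range(20)]

gap = sum(treated) / len(treated) - sum(control) / len(control)
print(f"Observed yield gap: {gap:.1f} bu/ac")
# The roughly 5 bu/ac gap is real, but this design can't say how
# much came from the fungicide and how much from the nitrogen:
# the two factors are confounded because they always change together.
```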

In our February 23 issue, you’ll find an article about plant growth regulator trials. In one trial, researcher Sheri Strydhorst gave the treated plots extra nitrogen, a fungicide application, plus an application of Manipulator. She then measured height reductions and standability improvements. But Strydhorst didn’t report yield in that trial, as the yield data would have been confounded by the extra inputs. And I think it’s safe to assume the extra nitrogen and fungicide didn’t shorten the wheat varieties, so the height results still stand.

Was the trial repeated?

A single field experiment only tells you what happened in that field that year. One result, no matter how astounding, could be a fluke. It could be completely irrelevant to your soil type or region. It could change with the next year’s weather.

The more often a trial is repeated, in different years, the more confident you can be in the results. Typically, the researchers I talk to have several sites over two or more years. Sometimes they’ll cautiously talk about preliminary results, but they’re generally reluctant if they only have one year of data. I think that caution speaks well to their credibility.
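A quick simulation shows why replication helps. The effect size and noise level below are invented, but the pattern holds: the more site-years you average over, the less the estimate bounces around:

```python
# Illustrative sketch: why repeated trials build confidence.
import random
import statistics
random.seed(2)

TRUE_EFFECT = 2.0       # assumed treatment effect, bu/ac
SITE_YEAR_NOISE = 3.0   # assumed site/season variability, bu/ac

def observed_effect(n_site_years):
    # Average the measured effect across n site-years.
    return statistics.mean(
        TRUE_EFFECT + random.gauss(0, SITE_YEAR_NOISE)
        for _ in range(n_site_years)
    )

for n in (1, 4, 12):
    estimates = [observed_effect(n) for _ in range(1000)]
    print(f"{n:2d} site-years: estimates range roughly "
          f"{min(estimates):+.1f} to {max(estimates):+.1f} bu/ac")
```

With a single site-year, the estimate can even come out negative when the true effect is positive; with a dozen, it settles close to the truth.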

Research doesn’t have to be original

I mention this because I’ve seen social media critiques of research for not being original. This isn’t necessarily a valid criticism.

During grad school, one of my professors emphasized the value of meta reviews. In a meta review, or meta-analysis, researchers go through the completed studies on a topic and try to draw an overall conclusion. This is valuable because on any given topic, you’ll likely find some differing results. By looking at all the studies together, you might be able to figure out whether there’s general agreement among most researchers. Depending on the approach, you might even be able to combine results from smaller studies to come up with a statistically significant result. Meta reviews are relevant to both quantitative and qualitative fields.
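As a rough illustration of that pooling idea, here’s a sketch of inverse-variance weighting, one common meta-analysis approach. The three “studies” and their numbers are invented:

```python
# Illustrative sketch of pooling results, meta-analysis style.
import math

# (effect estimate in bu/ac, standard error) for each small study.
# Individually, none of these clears the usual significance bar.
studies = [(1.8, 1.5), (2.0, 1.2), (1.1, 1.8)]

# Inverse-variance weighting: more precise studies count for more.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled:.2f} +/- {pooled_se:.2f} bu/ac")
# The pooled estimate is tighter than any single study's, which is
# how combining small studies can reach statistical significance.
```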

The caveat is that a meta review has to be well designed, just like any other research project. There has to be a body of previous studies to draw on. And it’s not easy to decide which studies to include: do you take in weaker studies to create a bigger pool of data, or only results from well-designed, published studies? Wikipedia tells me there are problems with both approaches.

But even with these potential pitfalls, meta reviews are very useful. If you get frustrated by the media reporting on every single study with a sensational claim about food or health, meta reviews are part of the answer. One study claiming that your daily dose of java will kill you should be eyed with extreme scepticism, if not vitriol.

Despite all this, I have to admit I’m far from an expert at spotting problematic studies. I’m not so naïve as to think I can’t be fooled, or that honest people don’t make mistakes during research. I still have to rely on my sources to a large extent, and try to ask the right questions. Having editors who understand agriculture helps immensely, I think.

About the author


Lisa Guenther is field editor for Grainews based at Livelong, Sask. You can follow her on Twitter @LtoG.
