A common practice among monocrop growers, especially in the Corn Belt, is to apply far more nitrogen-based fertilizer than crops can actually use.
This can have all sorts of negative environmental effects, including water pollution and greenhouse gas emissions. But figuring out exactly how much fertilizer to add to a field isn’t easy, and many farmers don’t bother trying. It’s estimated that we now use around 40 times as much nitrogen as we did 75 years ago, far out of proportion to population growth over that period.
There are ways to use the data we have to figure out how much nitrogen fertilizer should be used and what kind of yield and environmental effects can come from changing these amounts. But those models aren’t always accessible. New work from researchers at the University of Minnesota may have a solution.
This work involves what’s called a process-based crop model—a simulation that combines huge amounts of data, such as weather, climate, soil quality, nutrients, crop variety and other inputs, to predict yields and analyze productivity. These models have been gaining popularity in recent years, but they’re incredibly expensive to run. “Their applications are prohibited by expensive computational and data storage costs,” write the Minnesota researchers. That makes them inaccessible to those outside of research or governmental applications.
What the researchers did was create something called a metamodel. This can be a little Inception-like to wrap your head around: a metamodel is a model of a model. The researchers took the original model, called ecosys, and used machine learning to learn how that model behaves—how it responds to various inputs and what kinds of results it spits out. They built, basically, a simplified stand-in for the original model, without needing to go through the entire slow, expensive process of actually running it.
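The basic idea can be sketched in a few lines of code. This is a toy illustration, not the researchers' actual method: the "expensive" model here is a made-up diminishing-returns yield curve, and the metamodel is a simple quadratic fit (assuming NumPy is available), rather than the machine-learning surrogate the team trained on ecosys.

```python
import numpy as np

def process_model(n_rate):
    """Stand-in for an expensive process-based simulator: a made-up
    diminishing-returns yield response (t/ha) to nitrogen (kg N/ha)."""
    return 12.0 * (1.0 - np.exp(-0.015 * n_rate))

# 1. Run the "expensive" model at a limited number of nitrogen rates.
train_x = np.linspace(0, 200, 21)
train_y = process_model(train_x)

# 2. Fit a cheap metamodel (here, just a quadratic) to those runs.
surrogate = np.poly1d(np.polyfit(train_x, train_y, 2))

# 3. The metamodel can now answer queries instantly; check how well it
#    tracks the original model on inputs it never saw.
test_x = np.linspace(0, 200, 101)
truth, pred = process_model(test_x), surrogate(test_x)
r2 = 1 - np.sum((truth - pred) ** 2) / np.sum((truth - np.mean(truth)) ** 2)
print(f"variance explained by the metamodel: {r2:.3f}")
```

The key trade here is the same one the researchers made: you pay for a limited number of runs of the real model up front, then query the cheap stand-in as often as you like.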
You might expect that this metamodel would be much less accurate than the original model, given that it’s sort of a photocopy of a photocopy, but, in fact, when running it for some randomly selected farms in the Midwest, it captured 98 percent of the variability in the original model’s outputs—while taking seconds, instead of days, to calculate.
There are still downsides; the metamodel doesn’t account for a number of variables that could potentially screw things up, such as the effects of cover cropping or the (low, but still there) possibility of irrigation rather than rainfall. But this is still a really interesting construction; it enables quick, wide-ranging analysis of huge areas of farmland. The researchers applied it to 99 counties across the Corn Belt and worked out a fertilizer strategy that could create nearly $400 million in benefits. Those benefits came from a combination of reduced pollution and savings from using less fertilizer, and they held up despite a loss in yield.
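This kind of regional analysis is exactly what a fast metamodel unlocks: once each query costs milliseconds, you can sweep over candidate fertilizer rates and score each one. The sketch below is purely illustrative—the yield curve, prices, and pollution cost are invented numbers, not figures from the study.

```python
import math

# All constants below are made-up assumptions for illustration.
CORN_PRICE = 180.0   # $/t of grain
N_COST = 1.1         # $/kg of nitrogen fertilizer
LEACH_COST = 0.8     # assumed societal cost, $/kg of nitrogen lost

def cheap_yield(n_rate):
    """Made-up diminishing-returns yield response (t/ha), standing in
    for a fast metamodel query."""
    return 12.0 * (1.0 - math.exp(-0.015 * n_rate))

def net_benefit(n_rate):
    """Revenue minus fertilizer cost and an assumed pollution penalty
    (here, 30% of applied nitrogen is assumed lost to the environment)."""
    revenue = CORN_PRICE * cheap_yield(n_rate)
    costs = N_COST * n_rate + LEACH_COST * 0.3 * n_rate
    return revenue - costs

# Sweep candidate rates and pick the one with the highest net benefit.
rates = range(0, 301, 5)
best = max(rates, key=net_benefit)
print(f"best rate: {best} kg N/ha, net benefit ${net_benefit(best):.0f}/ha")
```

With a process-based model that takes days per run, a sweep like this over thousands of fields would be hopeless; with a metamodel, it’s a loop.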
The researchers do say that this probably shouldn’t be used by individual farmers just yet; it needs more work to incorporate additional variables and streamline the system before it’s ready. But it does have the potential to allow an absolutely insane amount of data to be interpreted at unheard-of speeds.