Note: This is the second of a three-column series on the 2007 farm bill. The first article is available here; the third here.
Last week, I argued that it makes sense for society to support farming. Everybody needs to eat, and most would prefer to do so without devastating the environment or exploiting labor.
Well, no one can accuse the United States of failing to commit significant resources to agriculture. Between 1995 and 2005, the Environmental Working Group calculates, the government paid farmers $164.7 billion. That averages out to about $15 billion per year — substantially more, for example, than the government spends annually on direct financial aid to college students via Pell Grants.
Given that level of commitment, it’s worth asking what taxpayers are gaining in return. A sound national farm policy might be expected to promote an economically vibrant farm sector that produces a bounty of nutritious food while carefully managing natural resources. But despite lavish cash outlays, U.S. farm policy fails on all of those fronts.
Indeed, existing farm policy, embedded in the soon-to-expire 2002 farm bill, actively works against all of those goals.
For example, nearly a third ($51.3 billion) of total farm payments from 1995 to 2005 went to corn, the most prolific U.S. crop. As grown by U.S. farmers, corn seems an unlikely candidate for massive public support. It causes significant environmental damage in the growing phase, and more than half of U.S. corn production is used as feed for concentrated animal-feeding operations (CAFOs) — an industry whose environmental and social depredations are breathtaking.
As discussion begins on the 2007 farm bill, a hard look at how our subsidy system got the way it is seems in order. By tracing the roots of current dysfunction, we can perhaps find a way toward a farm policy that serves the public interest.
A (New) Deal Gone Sour
In last week’s column, I described a problem faced by farmers: the tendency of prices for their goods to fall over time. In short, U.S. farmers — helped along by the huge agribusiness industry that has arisen to sell them new technologies — tend to churn out food faster than Americans can consume it, and this leads to ever-lower prices.
That situation came to a head during the Great Depression, when an extended bout of overproduction led to falling prices and a severe farm crisis. While millions of Americans went hungry due to lack of funds, farmers were stuck with huge food surpluses. Overall farm income fell by half during the early years of the Depression, and thousands of farmers defaulted on loans.
In response, the Roosevelt administration made farm support a linchpin of the New Deal. To keep prices at a reasonable level, the government tried to manage farm output. The program worked like this: When farmers began to produce too much and prices began to fall, the government would pay farmers to leave some land fallow, with the goal of pushing prices up the following season.
There was an additional mechanism that sought to stabilize prices. In bumper-crop years, rather than allowing the market to be flooded with grain, the government would buy excess grain from farmers and store it. In lean years — say, when drought struck — the government would release some of that stored grain, mitigating sudden price hikes. The overall goal was to stop prices from falling too low (hurting farmers) or jumping too high (squeezing consumers).
Now, you don’t have to be a free-market zealot to identify some problems with this setup. First of all, it concentrated a tremendous amount of authority within the USDA, whose secretary came to occupy a kind of Agriculture Czar position, wielding power over food production roughly comparable to the Federal Reserve chair’s sway over interest rates.
But there was a deeper problem: Even all of that federal power was not equal to the task of keeping prices at levels that could sustain farmers’ livelihoods. The reason was simple. The technological revolution in farming that blossomed in the second half of the 20th century — the cascades of synthetic fertilizers and pesticides, the explosion in petroleum-powered heavy machinery, the advent of hybridized and genetically modified seeds — overwhelmed the government’s ability to limit supply.
In short, the government could pay farmers to take acres out of production, but it couldn’t stop farmers from milking every last drop from each remaining acre. In 1935, U.S. farmers devoted 100 million acres to corn, yielding 2 billion bushels. By 1975, farmers were squeezing 5.8 billion bushels out of just 78 million acres — per-acre yields, in other words, had nearly quadrupled, from about 20 bushels to roughly 75. Given such leaps in productivity, it was inevitable that the New Deal paradigm would break down.
And break down it did.
Did We Say Supply? We Meant Demand
In the early 1970s, with grain prices low, farmers growing restive, and a presidential election looming, the USDA took a new approach. Rather than focus on supply management to boost prices, the agency moved boldly to jack up demand. This was a radical idea, because as I mentioned in my last column, demand for food doesn’t tend to rise very fast; people don’t normally eat more when food prices drop.
Since farmers were producing more food than Americans could possibly eat, the USDA aggressively sought overseas markets. Seeking, in part, to consolidate Midwestern support for President Richard Nixon in the 1972 election, Agriculture Secretary Earl “Rusty” Butz engineered a 30 million ton grain sale to the Soviet Union, financed with $700 million in export credits. Yep, you got it: the U.S. government loaned the Soviet Union the money to buy the grain.
As a price-boosting strategy, the Soviet grain sale succeeded dramatically. The Soviets bought a full quarter of the 1972 wheat crop. Wheat and corn prices surged, wheat by a factor of four. A Midwest drought the following year exerted further upward pressure on prices. Inflated grain prices rippled through the food system, driving the price of meat nearly beyond the reach of middle-class U.S. families. Along with the 1973 OPEC oil embargo, the Soviet grain sale helped spark the “stagflation” that gripped the U.S. economy into the next decade.
Butz responded to the crisis he had helped set in motion by attacking the philosophy behind post-war U.S. agriculture policy. Rather than urge farmers to hold land fallow or store excess grain, he famously exhorted them to plant “fencerow to fencerow” and flood the market with the whole harvest.
No longer would the government purchase grain from farmers in bountiful years and store it for release in years of poor harvests. The way forward, he thundered from his bully pulpit, was to produce as much as possible.
And if such a strategy were to lead to overproduction and low prices, Butz offered two solutions. In the short term, the government would support farmers with direct payments when prices dipped below the cost of production. In the long term, if U.S. consumers couldn’t eat their way through the produce of U.S. farmers, new markets would be opened overseas.
In the decades since, these reforms have proved a failure. The U.S. has moved aggressively to open overseas markets for U.S. goods, but export growth, too, has failed to keep up with ever-rising yields. And what Butz saw as a short-term solution, direct payments to farmers, has morphed into an institution.
The time has come for new models of federal farm policy — the topic of next week’s column.