May 2, 2011
Demand and supply certainly matter. But there’s another reason why food across the world has become so expensive: Wall Street greed.
It took the brilliant minds of Goldman Sachs to realize the simple truth that nothing is more valuable than our daily bread. And where there’s value, there’s money to be made. In 1991, Goldman bankers, led by their prescient president Gary Cohn, came up with a new kind of investment product, a derivative that tracked 24 raw materials, from precious metals and energy to coffee, cocoa, cattle, corn, hogs, soy, and wheat. They weighted the investment value of each element, blended and commingled the parts into sums, then reduced what had been a complicated collection of real things into a mathematical formula that could be expressed as a single manifestation, to be known henceforth as the Goldman Sachs Commodity Index (GSCI).
For just under a decade, the GSCI remained a relatively static investment vehicle, as bankers remained more interested in risk and collateralized debt than in anything that could literally be sown or reaped. Then, in 1999, the Commodity Futures Trading Commission deregulated futures markets. All of a sudden, bankers could take as large a position in grains as they liked, an opportunity that had, since the Great Depression, been available only to those who actually had something to do with the production of our food.
Change was coming to the great grain exchanges of Chicago, Minneapolis, and Kansas City – which for 150 years had helped to moderate the peaks and valleys of global food prices. Farming may seem bucolic, but it is an inherently volatile industry, subject to the vicissitudes of weather, disease, and disaster. The grain futures trading system pioneered after the American Civil War by the founders of Archer Daniels Midland, General Mills, and Pillsbury helped to establish America as a financial juggernaut to rival and eventually surpass Europe. The grain markets also insulated American farmers and millers from the inherent risks of their profession. The basic idea was the “forward contract,” an agreement between sellers and buyers of wheat for a reasonable bushel price — even before that bushel had been grown. Not only did a grain “future” help to keep the price of a loaf of bread at the bakery – or later, the supermarket — stable, but the market allowed farmers to hedge against lean times, and to invest in their farms and businesses. The result: Over the course of the 20th century, the real price of wheat decreased (despite a hiccup or two, particularly during the 1970s inflationary spiral), spurring the development of American agribusiness. After World War II, the United States was routinely producing a grain surplus, which became an essential element of its Cold War political, economic, and humanitarian strategies — not to mention the fact that American grain fed millions of hungry people across the world.
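The hedging logic described above can be sketched in a few lines. The prices and quantities below are invented for illustration; the point is only that a forward contract locks in the farmer's revenue no matter where the market ends up at harvest.

```python
# Hypothetical illustration of a forward-contract hedge.
# All prices and quantities are invented for the example.

def farmer_revenue(spot_at_harvest, forward_price, bushels, hedged):
    """Revenue at harvest: locked in if hedged, market-exposed if not."""
    price = forward_price if hedged else spot_at_harvest
    return price * bushels

forward_price = 6.00          # $/bushel agreed at planting time
bushels = 10_000

for spot in (4.50, 6.00, 7.50):   # possible market prices at harvest
    unhedged = farmer_revenue(spot, forward_price, bushels, hedged=False)
    hedged = farmer_revenue(spot, forward_price, bushels, hedged=True)
    print(f"spot ${spot:.2f}: unhedged ${unhedged:,.0f}, hedged ${hedged:,.0f}")
```

Whatever the spot price does, the hedged farmer receives $60,000; only the unhedged farmer rides the swings. That certainty is what let farmers borrow and invest against future crops.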
Futures markets traditionally included two kinds of players. On one side were the farmers, the millers, and the warehousemen, market players who have a real, physical stake in wheat. This group includes not only corn growers in Iowa or wheat farmers in Nebraska, but also major multinational corporations like Pizza Hut, Kraft, Nestlé, Sara Lee, Tyson Foods, and McDonald’s – whose New York Stock Exchange shares rise and fall on their ability to bring food to people’s car windows, doorsteps, and supermarket shelves at competitive prices. These market participants are called “bona fide” hedgers, because they actually need to buy and sell cereals.
On the other side is the speculator. The speculator neither produces nor consumes corn or soy or wheat, and wouldn’t have a place to put the 20 tons of cereal he might buy at any given moment if ever it were delivered. Speculators make money through traditional market behavior, the arbitrage of buying low and selling high. And the physical stakeholders in grain futures have as a general rule welcomed traditional speculators to their market, for their endless stream of buy and sell orders gives the market its liquidity and provides bona fide hedgers a way to manage risk by allowing them to sell and buy just as they pleased.
But Goldman’s index perverted the symmetry of this system. The structure of the GSCI paid no heed to the centuries-old buy-sell/sell-buy patterns. This newfangled derivative product was “long only,” which meant the product was constructed to buy commodities, and only buy. At the bottom of this “long-only” strategy lay an intent to transform an investment in commodities (previously the purview of specialists) into something that looked a great deal like an investment in a stock – the kind of asset class wherein anyone could park their money and let it accrue for decades (along the lines of General Electric or Apple). Once the commodity market had been made to look more like the stock market, bankers could expect new influxes of ready cash. But the long-only strategy possessed a flaw, at least for those of us who eat. The GSCI did not include a mechanism to sell or “short” a commodity.
This imbalance undermined the innate structure of the commodities markets, requiring bankers to buy and keep buying – no matter what the price. Every time the due date of a long-only commodity index futures contract neared, bankers were required to “roll” their multi-billion dollar backlog of buy orders over into the next futures contract, two or three months down the line. And since the deflationary impact of shorting a position simply wasn’t part of the GSCI, professional grain traders could make a killing by anticipating the market fluctuations these “rolls” would inevitably cause. “I make a living off the dumb money,” commodity trader Emil van Essen told Businessweek last year. Commodity traders employed by the banks that had created the commodity index funds in the first place rode the tides of profit…
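The mechanics of the "roll" can be made concrete with a toy calculation. The numbers below are invented: when the next contract trades above the expiring one (a market in "contango"), a fund that must sell the old contract and buy the new one every cycle gives up a slice of its exposure each time, and that predictable forced buying is what traders like van Essen could position against.

```python
# Hypothetical sketch of a long-only index fund's "roll": sell the
# expiring futures contract, reinvest the proceeds in the next one.
# Prices are invented for illustration.

def roll(units, expiring_price, next_price):
    """Sell the position at the expiring price and reinvest the
    proceeds at the next contract's price; return the new unit count."""
    proceeds = units * expiring_price
    return proceeds / next_price

exposure = 1_000_000.0            # synthetic units of the commodity held
for _ in range(4):                # four consecutive rolls
    # next contract priced above the expiring one: contango
    exposure = roll(exposure, expiring_price=7.00, next_price=7.20)
print(f"exposure after four rolls: {exposure:,.0f} units")
```

Each roll shrinks the position by the ratio of old price to new; compounded over repeated rolls, the fund bleeds value even if the spot price of the commodity never moves.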
The Royal Institution of Great Britain has stood on the same site since 1799, and on most days it would seem one of the older and fustier buildings in central London. But on April 6, time did a funny thing: The institution’s 212 years of existence suddenly contracted, and went from seeming unimaginably long to unimaginably short.
“Our sun formed 4.5 billion years ago, but it’s got 6 billion more before the fuel runs out,” Sir Martin Rees, the Astronomer Royal, told the audience seated among the busts and weathered books of the institution’s second-story library. “It won’t be humans who witness the sun’s demise: It will be entities as different from us as we are from a bug.”
The occasion for Rees’s mind-bending assertion was his acceptance of the 2011 Templeton Prize, an annual cash award of $1.7 million, payable to individuals who have made “an exceptional contribution to affirming life’s spiritual dimension” — in Rees’s case, by looking millions of years into the future and venturing a guess as to what might be waiting.
Humans have been interested in the future for millennia, mostly as a subject for theologians. But theologians were, along with everyone else, thinking small. Most humans who have ever lived have died in conditions almost exactly like the ones into which they were born, and without written history had no way to grasp that the future might be different at all. Only now have we gained the scientific knowledge necessary to appreciate exactly how deep a rabbit hole the future really is: not just long enough to see empires rise and crumble, but long enough to make all human history so far seem like a sneeze of the gods.
This newfound appreciation for the depths of time has led a handful of thinkers like Rees, a theoretical cosmologist by training, to begin venturing some of humanity’s first real educated guesses about what may lie far, far, far ahead. Serious futurologists are not a large group yet. “It’s a fairly new area of inquiry,” says Nick Bostrom, an Oxford University philosophy professor who heads the school’s Future of Humanity Institute. But they are trying to give a first draft of a map of the future, using the kinds of rigor that theologians and uneducated guessers from previous generations didn’t have at their disposal.
In the history of prediction, there are a few examples of rigorous attempts to look far into the future — long-term climate-change modelers, say, or radiophysicists who consider where to stash nuclear waste. But more often, Bostrom says, speculation about the future has been “a projection screen, on which we display our hopes and fears.” Think of Karl Marx, laying out a path for history based mostly on his own aspirations, rather than on anything that would today qualify as science. “Even just trying to get it right is something that distinguishes [us] as a small subset,” Bostrom says…
Milwaukee’s Best No Longer: A brewing ethical brouhaha at the Milwaukee Journal Sentinel and the hazards of politicized science reporting
May 2, 2011
In an era of partisan journalism, some have presumed that at least one area of reporting, science, was insulated from blatant bias. After all, there are facts, and it’s presumably easy to identify when data is being cooked. But that’s naive, and a brewing ethical brouhaha at the Milwaukee Journal Sentinel underscores how the public can be short-changed when ideology, ambition, or hubris takes precedence over a news organization’s public responsibility to report controversies in context.
This incident erupted after a comprehensive review of plastic additive bisphenol A (BPA) by the German Society of Toxicology was published two weeks ago in Critical Reviews in Toxicology, a prestigious international journal. BPA is used to add strength and flexibility to many plastic products, from the protective lining of metal cans to bottles to dental sealants.
Over the past few years, the dominant narrative among select publications—the Journal Sentinel, most notably—has been that BPA is dangerous to humans, infants and pregnant women in particular, because it distorts development. Because of this, some have labeled it an “endocrine disruptor.” Indeed, it does subtly alter the way hormones in our endocrine system work, as do many foods, including soy, nuts, wheat, and berries. The “BPA is harmful” thesis never gained mainstream acceptance among scientists—no regulatory panel in the world has recommended restricting BPA based on the evidence, although political bodies have imposed restrictions, partly because of public perceptions stirred by articles in the Journal Sentinel and other publications.
Regulatory Agencies Weigh In
In January 2010, the Food and Drug Administration (FDA) released its second review of BPA in two years, reiterating past conclusions that BPA “is not proven to harm children or adults” and that studies to date support “the safety of current low levels of human exposure to BPA.” It noted that some animal tests had shown biological activity. Under intense public pressure, the FDA said it would join other agencies in reviewing BPA’s effects on fetuses and children. But it expressed skepticism of the “novel” endocrine disruptor hypothesis, stating that rodent studies suggesting some problems were not “experimentally consistent.”
The FDA did not elevate any of its levels of concern, continuing to express “some concerns” over animal tests, which is government “regulatory speak” for “we need more studies.” When asked if children faced health dangers, Joshua Sharfstein, MD, the agency’s principal deputy commissioner, minced no words: “The FDA is not saying that it’s unsafe to use a baby bottle with BPA … FDA does support the use of bottles with BPA because the benefit of nutrition outweighs the potential risk of BPA … If we thought it was unsafe, we would be taking strong regulatory action.”
Many news organizations reported it straight. Leftist site Tree Hugger headlined: “FDA on BPA: It has ‘Some Concern.’ But Not Much.” The Journal Sentinel characterized the FDA’s affirmation of its current regulations as an “about face,” which it clearly was not.
Then last summer, the European Food Safety Authority (EFSA) reviewed more than 800 new studies and rejected the contention that BPA causes human neurological damage, one of the Journal Sentinel’s primary contentions. Still, some scientists, and the media outlets that had relied on their work almost exclusively, were reluctant to reexamine their “BPA is harmful” dogma. A slew of new findings were either played down or ignored entirely by the Journal Sentinel, which had won a passel of awards in 2009 and 2010 and was a Pulitzer finalist for reporting, in starkly black and white terms, that BPA harms humans and should be heavily restricted.
It’s in that context that this new review by a special advisory committee of German toxicologists was widely awaited by regulators around the world. The study embraced the findings of the FDA and the EFSA, declaring that BPA posed no substantive risks. It explained in comprehensive detail how the endocrine disruptor notion managed to convince so many journalists and even some scientists for so many years…