January 1, 2012
On November 30, 2006, executives at Pfizer—the largest pharmaceutical company in the world—held a meeting with investors at the firm’s research center in Groton, Connecticut. Jeff Kindler, then CEO of Pfizer, began the presentation with an upbeat assessment of the company’s efforts to bring new drugs to market. He cited “exciting approaches” to the treatment of Alzheimer’s disease, fibromyalgia, and arthritis. But that news was just a warm-up. Kindler was most excited about a new drug called torcetrapib, which had recently entered Phase III clinical trials, the last step before filing for FDA approval. He confidently declared that torcetrapib would be “one of the most important compounds of our generation.”
Kindler’s enthusiasm was understandable: The potential market for the drug was enormous. Like Pfizer’s blockbuster medication, Lipitor—the most widely prescribed branded pharmaceutical in America—torcetrapib was designed to tweak the cholesterol pathway. Although cholesterol is an essential component of cellular membranes, high levels of the compound have been consistently associated with heart disease. The accumulation of the pale yellow substance in arterial walls leads to inflammation. Clusters of white blood cells then gather around these “plaques,” which leads to even more inflammation. The end result is a blood vessel clogged with clumps of fat.
Lipitor works by inhibiting an enzyme that plays a key role in the production of cholesterol in the liver. In particular, the drug lowers the level of low-density lipoprotein (LDL), or so-called bad cholesterol. In recent years, however, scientists have begun to focus on a separate part of the cholesterol pathway, the one that produces high-density lipoproteins. One function of HDL is to transport excess LDL back to the liver, where it is broken down. In essence, HDL is a janitor of fat, cleaning up the greasy mess of the modern diet, which is why it’s often referred to as “good cholesterol.”
And this returns us to torcetrapib. It was designed to block a protein that converts HDL cholesterol into its more sinister sibling, LDL. In theory, this would cure our cholesterol problems, creating a surplus of the good stuff and a shortage of the bad. In his presentation, Kindler noted that torcetrapib had the potential to “redefine cardiovascular treatment.”
There was a vast amount of research behind Kindler’s bold proclamations. The cholesterol pathway is one of the best-understood biological feedback systems in the human body. Since 1913, when Russian pathologist Nikolai Anichkov first experimentally linked cholesterol to the buildup of plaque in arteries, scientists have mapped out the metabolism and transport of these compounds in exquisite detail. They’ve documented the interactions of nearly every molecule, the way hydroxymethylglutaryl-coenzyme A reductase catalyzes the production of mevalonate, which gets phosphorylated and condensed before undergoing a sequence of electron shifts until it becomes lanosterol and then, after another 19 chemical reactions, finally morphs into cholesterol. Furthermore, torcetrapib had already undergone a small clinical trial, which showed that the drug could increase HDL and decrease LDL. Kindler told his investors that, by the second half of 2007, Pfizer would begin applying for approval from the FDA. The success of the drug seemed like a sure thing.
And then, just two days later, on December 2, 2006, Pfizer issued a stunning announcement: The torcetrapib Phase III clinical trial was being terminated. Although the compound was supposed to prevent heart disease, it was actually triggering higher rates of chest pain and heart failure and a 60 percent increase in overall mortality. The drug appeared to be killing people.
That week, Pfizer’s value plummeted by $21 billion.
The story of torcetrapib is a tale of mistaken causation. Pfizer was operating on the assumption that raising levels of HDL cholesterol and lowering LDL would lead to a predictable outcome: Improved cardiovascular health. Less arterial plaque. Cleaner pipes. But that didn’t happen.
Such failures occur all the time in the drug industry. (According to one recent analysis, more than 40 percent of drugs fail Phase III clinical trials.) And yet there is something particularly disturbing about the failure of torcetrapib. After all, a bet on this compound wasn’t supposed to be risky. For Pfizer, torcetrapib was the payoff for decades of research. Little wonder that the company was so confident about its clinical trials, which involved a total of 25,000 volunteers. Pfizer invested more than $1 billion in the development of the drug and $90 million to expand the factory that would manufacture the compound. Because scientists understood the individual steps of the cholesterol pathway at such a precise level, they assumed they also understood how it worked as a whole.
This assumption—that understanding a system’s constituent parts means we also understand the causes within the system—is not limited to the pharmaceutical industry or even to biology. It defines modern science. In general, we believe that the so-called problem of causation can be cured by more information, by our ceaseless accumulation of facts. Scientists refer to this process as reductionism. By breaking down a process, we can see how everything fits together; the complex mystery is distilled into a list of ingredients. And so the question of cholesterol—what is its relationship to heart disease?—becomes a predictable loop of proteins tweaking proteins, acronyms altering one another. Modern medicine is particularly reliant on this approach. Every year, nearly $100 billion is invested in biomedical research in the US, all of it aimed at teasing apart the invisible bits of the body. We assume that these new details will finally reveal the causes of illness, pinning our maladies on small molecules and errant snippets of DNA. Once we find the cause, of course, we can begin working on a cure.
The problem with this assumption, however, is that causes are a strange kind of knowledge. This was first pointed out by David Hume, the 18th-century Scottish philosopher. Hume realized that, although people talk about causes as if they are real facts—tangible things that can be discovered—they’re actually not at all factual. Instead, Hume said, every cause is just a slippery story, a catchy conjecture, a “lively conception produced by habit.” When an apple falls from a tree, the cause is obvious: gravity. Hume’s skeptical insight was that we don’t see gravity—we see only an object tugged toward the earth. We look at X and then at Y, and invent a story about what happened in between. We can measure facts, but a cause is not a fact—it’s a fiction that helps us make sense of facts.
The truth is, our stories about causation are shadowed by all sorts of mental shortcuts. Most of the time, these shortcuts work well enough. They allow us to hit fastballs, discover the law of gravity, and design wondrous technologies. However, when it comes to reasoning about complex systems—say, the human body—these shortcuts go from being slickly efficient to outright misleading.
Consider a set of classic experiments designed by Belgian psychologist Albert Michotte, first conducted in the 1940s. The research featured a series of short films about a blue ball and a red ball. In the first film, the red ball races across the screen, touches the blue ball, and then stops. The blue ball, meanwhile, begins moving in the same basic direction as the red ball. When Michotte asked people to describe the film, they automatically lapsed into the language of causation. The red ball hit the blue ball, which caused it to move.
This is known as the launching effect, and it’s a universal property of visual perception. Although there was nothing about causation in the two-second film—it was just a montage of animated images—people couldn’t help but tell a story about what had happened. They translated their perceptions into causal beliefs.
Michotte then began subtly manipulating the films, asking the subjects how the new footage changed their description of events. For instance, when he introduced a one-second pause between the movement of the balls, the impression of causality disappeared. The red ball no longer appeared to trigger the movement of the blue ball. Rather, the two balls were moving for inexplicable reasons.
Michotte would go on to conduct more than 100 of these studies. Sometimes he would have a small blue ball move in front of a big red ball. When he asked subjects what was going on, they insisted that the red ball was “chasing” the blue ball. However, if a big red ball was moving in front of a little blue ball, the opposite occurred: The blue ball was “following” the red ball.
There are two lessons to be learned from these experiments. The first is that our theories about a particular cause and effect are inherently perceptual, infected by all the sensory cheats of vision. (Michotte compared causal beliefs to color perception: We apprehend a cause as automatically as we identify that a ball is red.) While Hume was right that causes are never seen, only inferred, the blunt truth is that we can’t tell the difference. And so we look at moving balls and automatically see causes, a melodrama of taps and collisions, chasing and fleeing.
The second lesson is that causal explanations are oversimplifications. This is what makes them useful—they help us grasp the world at a glance. For instance, after watching the short films, people immediately settled on the most straightforward explanation for the ricocheting objects. Although this account felt true, the brain wasn’t seeking the literal truth—it just wanted a plausible story that didn’t contradict observation…
January 1, 2012
In the middle of 2001, I predicted in my book, The Coming Collapse of China, that the Communist Party would fall from power in a decade, in large measure because of the changes that accession to the World Trade Organization (WTO) would cause. A decade has passed; the Communist Party is still in power. But don’t think I’m taking my prediction back.
Why has China as we know it survived? First and foremost, the Chinese central government has avoided honoring many of the market-opening obligations it accepted when it joined the WTO in 2001, and the international community has maintained a generally tolerant attitude toward this noncompliance. As a result, Beijing has been able to protect much of its home market from foreign competitors while ramping up exports.
By any measure, China has been phenomenally successful in developing its economy after WTO accession — returning to the almost double-digit growth it had enjoyed before the near-recession suffered at the end of the 1990s. Many analysts assume this growth streak can continue indefinitely. For instance, Justin Yifu Lin, the World Bank’s chief economist, believes the country can grow for at least two more decades at 8 percent, and the International Monetary Fund predicts China’s economy will surpass America’s in size by 2016.
Don’t believe any of this. China outperformed other countries because it was in a three-decade upward supercycle, principally for three reasons. First, there were Deng Xiaoping’s transformational “reform and opening up” policies, first implemented in the late 1970s. Second, Deng’s era of change coincided with the end of the Cold War, which brought about the elimination of political barriers to international commerce. Third, all of this took place while China was benefiting from its “demographic dividend,” an extraordinary bulge in the workforce.
Yet China’s “sweet spot” is over because, in recent years, the conditions that created it either disappeared or will soon. First, the Communist Party has turned its back on Deng’s progressive policies. Hu Jintao, the current leader, is presiding over an era marked by, on balance, the reversal of reform. There has been, especially since 2008, a partial renationalization of the economy and a marked narrowing of opportunities for foreign business. For example, Beijing blocked acquisitions by foreigners, erected new barriers like the “indigenous innovation” rules, and harassed market-leading companies like Google. Strengthening “national champion” state enterprises at the expense of others, Hu has abandoned the economic paradigm that made his country successful.
Second, the global boom of the last two decades ended in 2008 when markets around the world crashed. The tumultuous events of that year brought to a close an unusually benign period during which countries attempted to integrate China into the international system and therefore tolerated its mercantilist policies. Now, however, every nation wants to export more and, in an era of protectionism or of managed trade, China will not be able to export its way to prosperity like it did during the Asian financial crisis in the late 1990s. China is more dependent on international commerce than almost any other nation, so trade friction — or even declining global demand — will hurt it more than others. The country, for instance, could be the biggest victim of the eurozone crisis.
Third, China, which during its reform era had one of the best demographic profiles of any nation, will soon have one of the worst. The Chinese workforce will level off in about 2013, perhaps 2014, according to both Chinese and foreign demographers, but the effect is already being felt as wages rise, a trend that will eventually make the country’s factories uncompetitive. China, strangely enough, is running out of people to move to cities, work in factories, and power its economy. Demography may not be destiny, but it will now create high barriers for growth.
At the same time that China’s economy no longer benefits from these three favorable conditions, it must recover from the dislocations — asset bubbles and inflation — caused by Beijing’s excessive pump priming in 2008 and 2009, the biggest economic stimulus program in world history (including $1 trillion-plus in 2009 alone). Since late September, economic indicators — electricity consumption, industrial orders, export growth, car sales, property prices, you name it — are pointing toward either a flatlining or contracting economy. Money started to leave the country in October, and Beijing’s foreign reserves have been shrinking since September…
Nudge thyself: Economists have more to learn from the natural sciences if they are to claim a realistic model of human behaviour
January 1, 2012
You’ve come to a canteen for lunch: at one end of the counter, you see juicy fat burgers sizzling on a grill and, at the other end, healthy-looking salads. After a little hesitation, you choose the burger. “Cheese and bacon with that?” Well, why not?
Classical economists, perhaps uniquely among members of the human race, would assume you made your decision fully aware of the implications of your actions, that you weighed up those implications and came to the conclusion that, all things considered, the cheese and bacon burger is the better choice. But I for one am rarely so rational and frequently rue my failure to take the healthy option. Considering there are more than a billion people worldwide who are overweight, I’m guessing I’m not alone.
Some economists have realised this and, given the failure of classical models to predict the financial crisis, their young discipline of behavioural economics is now enjoying something of a heyday. They are convinced that accurate models and good policymaking require accurate approximations of real-life human behaviour. They therefore try to take into account our most predictable foibles, such as a tendency to short-term thinking. Economists’ knowledge of these foibles comes from other disciplines – psychology mostly – so they are always playing catch-up. But ever more research on the depths of our irrationality suggests they are still way, way behind.
Take nudging. The idea is that when economists and policymakers understand the ways in which we are predictably irrational, they can tweak policies so that we make the right choices despite ourselves. So, for example, we could make sure the salads are presented at eye-level at the entrance to the canteen, and the burgers are tucked away somewhere at the back.
Since the publication of the book Nudge in 2008, this art of gentle manipulation without restricting freedom of choice has become the new big idea in politics; one of the authors, Richard Thaler, is now a consultant for David Cameron, while the other, Cass Sunstein, has been given a senior position in Barack Obama’s White House. It is an approach to market intervention acceptable to both left and right, promising cheap, practical, “third-way” solutions to many of our pressing social and economic ills.
But what if a lot of people still buy burgers in artery-clogging quantities even if they have to walk past the health food to get to them? I know that every time I go to a certain Swedish furniture megastore, that part of me that speaks up for the salmon salads (displayed first and at eye-level) is easily bludgeoned into silence by the part of me that wants those fatty little meatballs in creamy sauce. If the depth of our irrationality is such that we will hunt down the burgers (or their equivalent) wherever they are hidden, then nudging will not be enough.
And this is just what the latest research is showing. Three new books bring the evidence together from a wide range of disciplines: in Deceit and Self-Deception: Fooling Yourself the Better to Fool Others, the eminent biologist Robert Trivers argues that we have evolved to distort our image of reality systematically. Dean Buonomano makes similar claims in Brain Bugs: How the Brain’s Flaws Shape Our Lives, but focuses on his own field, neuroscience, for evidence. And in I’ll Have What She’s Having: Mapping Social Behaviour, two anthropologists and a marketing guru argue that behavioural economics and Nudge theory do not take adequate account of our social nature.
Trivers’ Deceit and Self-Deception is the most original and important of these works. In it, he attempts to construct a grand theory of deception, arguing that we continually paint a distorted picture of the world so that we might more easily get our way with others. So we inflate our achievements, play down our failings and rationalise away our mistakes.
Widely regarded as one of the most important evolutionary theorists of our time, Trivers argues that we have evolved these tendencies over hundreds of thousands of years – and, indeed, share many of them with monkeys.
Deceit and Self-Deception is a remarkable book, thick with ideas, yet relaxed and conversational in tone. Perhaps most remarkable is how ruthlessly Trivers confronts his own self-delusions: among other disarming confessions, you can read about how he was easily conned in Jamaica because he was so confident he understood the local culture, and how he nearly died while showing off in the hope of diverting a young lady’s attention away from his “more muscular nephew”. If we all examined our faults and foibles as honestly as Trivers does, the world probably would, as he hopes, be a more decent place.
The book is vast in scope, covering every aspect of our lives from sex to religion, family to war. But Trivers reserves particular ire for the failings of economic theory: it “acts like a science and quacks like one,” he writes, but it is not one. Its key ideas are naive and circular: it assumes we make our choices as rational utility maximisers, for example. And what is utility? It is whatever we, in fact, choose. There is no room in such a theory for me to plan to buy a salad, then persuade myself when faced with the cheeseburger that it is the superior option (“just this once”), only to regret it later. “Yet,” he rages, “such is the detachment of this ‘science’ from reality that these contradictions arouse notice only when the entire world is hurtling into an economic depression based on corporate greed wedded to false economic theory.”
Trivers would like to see economics rebuilt on new foundations: those of evolutionary biology. But many of his specific claims are far from solid. They suffer from the broader weakness of his discipline: that claims about the evolutionary usefulness of this or that trait are notoriously difficult to test and relatively little is known with certainty about our prehistoric past. Some of Trivers’ theories, therefore, go far beyond the evidence – such as his claim that the rate at which new religions emerge is a function of the number of diseases in a given area. Perhaps Trivers, a grand old man of his field, can indulge himself in such speculations, safe in the knowledge that a generation of graduate students will earn their spurs trying to fill in the gaps…