The Once and Future Liberalism: We need to get beyond the dysfunctional and outdated ideas of 20th-century liberalism
February 29, 2012
Writing about the onset of the Great Depression, John Kenneth Galbraith famously said that the end had come but was not yet in sight. The past was crumbling under their feet, but people could not imagine how the future would play out. Their social imagination had hit a wall.
The same thing is happening today: The core institutions, ideas and expectations that shaped American life for the sixty years after the New Deal don’t work anymore. The gaps between the social system we inhabit and the one we now need are becoming so wide that we can no longer paper over them. But even as the failures of the old system become more inescapable and more damaging, our national discourse remains stuck in a bygone age. The end is here, but we can’t quite take it in.
In the old system, most blue-collar and white-collar workers held stable, lifetime jobs with defined benefit pensions, and a career civil service administered a growing state as living standards for all social classes steadily rose. Gaps between the classes remained fairly consistent in an industrial economy characterized by strong unions in stable, government-brokered arrangements with large corporations—what Galbraith and others referred to as the Iron Triangle. High school graduates were pretty much guaranteed lifetime employment in a job that provided a comfortable lower middle-class lifestyle; college graduates could expect a better paid and equally secure future. An increasing “social dividend”, meanwhile, accrued in various forms: longer vacations, more and cheaper state-supported education, earlier retirement, shorter work weeks, more social and literal mobility, and more diverse forms of affordable entertainment. Call all this, taken together, the blue model.
In the heyday of the blue model, economists and social scientists assumed that from generation to generation Americans would live a life of incremental improvements. The details of life would keep getting better even as the broad outlines of society stayed the same. The advanced industrial democracies, of which the United States was the largest, wealthiest and strongest, had reached the apex of social achievement. The United States had, in other words, defined and was in the process of perfecting political and social “best practice.” America was what “developed” human society looked like and no more radical changes were in the offing. Amid the hubris that such conceptions encouraged, Professor (later Ambassador) Galbraith was moved to state, in 1952, that “most of the cheap and simple inventions have been made.” If only the United States and its allies could best the Soviet Union and its counter-model, then indeed—as a later writer would put it—History would end in the philosophical sense that only one set of universally acknowledged best practices would be left standing.
Life isn’t this simple anymore. The blue social model is in the process of breaking down, and the chief question in American politics today is what should come next.
One large group, mainly “blue state” self-labeled liberals who think the blue model is the only possible, or at least the best feasible, way to organize a modern society, wants to shore it up and defend it. This group sees the gradual breakup of the blue social model as an avoidable historical tragedy caused by specific and reversible policy errors. Supporters of the model point to the rising inequality and financial instability in contemporary American life as signs that we need to defend the blue system and enlarge it.
Others, generally called conservatives and often hailing from the “red states”, think the model, whatever its past benefits or general desirability, is no longer sustainable and must give way to an earlier, more austere but also more economically efficient pre-“big government” model. Often, backers of this view see the New Deal state as a great wrong turn. Their goal is to repair the errors of the 1930s and return to the more restrictive constitutional limits on Federal power from an earlier time.
But even as the red-blue division grows more entrenched and bitter, it is becoming less relevant. The blue model is breaking down so fast and so far that not even its supporters can ignore the disintegration and disaster it now presages. Liberal Democrats in states like Rhode Island and cities like Chicago are cutting pensions and benefits and laying off workers out of financial necessity rather than ideological zeal. The blue model can no longer pay its bills, and not even its friends can keep it alive.
Our real choice, however, is not between blue or pre-blue. We can’t get back to the 1890s or 1920s any more than we can go back to the 1950s and 1960s. We may not yet be able to imagine what a post-blue future looks like, but that is what we will have to build. Until we remove the scales from our eyes and launch our discourse toward the future, our politics will remain sterile, and our economy will fail to provide the growth and higher living standards Americans continue to seek. That neither we nor the world can afford.
The blue social model rested on a novel post-World War II industrial and economic system. The “commanding heights” of American business were controlled by a small number of sometimes monopolistic, usually oligopolistic firms. AT&T, for example, was the only serious telephone company in the country, and both the services it offered and the prices it charged were tightly regulated by the government. The Big Three automakers had a lock on the car market; in the halcyon days of the blue model there was virtually no foreign competition. A handful of airlines divided up the routes and the market; airlines could not compete by offering lower prices or by opening new routes without government permission. Banks, utilities, insurance companies and trucking companies had their rates and, essentially, their profit levels set by Federal regulators. This stable economic structure allowed a consistent division of the pie. Unionized workers, then a far larger percentage of laborers than is the case today, got steady raises in steady jobs. The government got a steady flow of tax revenues. Shareholders got reasonably steady dividends.
There were problems with the blue model. It abided systematic discrimination against women and minorities, and a case can be made that it depended on that discrimination to some degree. Consumers had little leverage: If you didn’t like the way the phone company treated you, you were free to do without phone service, and if you didn’t like poorly made Detroit gas guzzlers that fell apart in a few years, you could get a horse. The system slowed innovation, too; AT&T discouraged investments in new telecommunications technologies. Rival companies and upstart firms were barred from controlled markets by explicit laws and regulations intended to stabilize the position of leading companies. By some accounts, too, the quarter century after World War II was a period of stultifying cultural conformity. In this prologue to the end of History, some “last men”, from the Beatniks to Lenny Bruce to Andy Warhol to Lou Reed, were already bored, resenting the pressure to conform that the mass consumption, Fordist era entailed.
The blue model began to decay in the 1970s. Foreign manufacturers recovered from the devastation of World War II and in many cases had more efficient and advanced factories than lazy, sclerotic American firms. German and Japanese goods challenged American automobile and electronic companies. The growth of offshore financial markets forced the U.S. financial services industry to become more flexible as both borrowers and lenders were increasingly able to work around the regulations and the oligopolies of the domestic market. Demand for new communications services created an appetite for competition against Ma Bell. The consumer movement attacked regulations designed to protect big companies. As a sign of the times, Ted Kennedy, of all people, cosponsored a bill to deregulate the airlines. Anti-corporate liberals rebelled at the way government power and regulation allowed corporations to give consumers the shaft. The new environmental movement pointed to the problem of privately caused but publicly paid-for externalities like air and water pollution…
February 29, 2012
“It’s worth it to come up here to drink a cafecito and meditate on the world, maybe write a poem,” Evenor Malespín told me on top of San Pedro de Carazo, Nicaragua’s highest hill. “Or even eat a carne asada.” Malespín has three bony, chestnut-colored milk cows, but subsistence farmers such as him can rarely afford to eat beef.
An extinct volcano called Mombacho loomed above us, its forested dome lost in the clouds billowing like a duvet over the relentlessly green earth. Along the volcano’s eastern flank, the blue sheet of Lake Nicaragua stretched toward the horizon. Wind stirred the towering guanacaste and ceiba trees at the base of the hill.
“This is tourism,” Malespín said with a smile as he took off his royal blue baseball cap and wiped his brow with it. His words gave me pause for a moment, since Malespín has lived nearly all of his 61 years around the village of San Pedro, where more than 150 of the 500-plus residents live in extreme poverty, lacking basic necessities such as adequate housing, sanitation, water, and employment. But for small-scale farmers, almost any outing not related to the work of survival counts as sightseeing.
“When there’s too much rain, the beans rot,” Malespín told me. “When there’s a drought, you get a few more beans, because beans need less water. But you don’t get corn, you don’t get rice. So, if it’s not one thing, it’s another.” Malespín let out a full-throated laugh that faded into a whisper: “This is the problem. This is the problem.”
In recent years, alternating extreme drought and heavy rains have been punishing the crops Malespín grows on his eight and a half acres. “Global warming is making the dry season here more intense,” he said, “and rain is very strong early in the rainy season.”
In my seven recent visits to Central America as a writer, teacher, and volunteer, I’ve met farmer after farmer who echoes Malespín. Their stories show how climate change is gradually pushing more people toward poverty and worsening the food insecurity of already-vulnerable people.
In addition to producing new hardships, climate change is making inequalities more extreme by the year. Its effects are being felt acutely in Central America, even though the people there are among those least responsible for emissions. Around the world, demonstrators are protesting the unfairness of the global economic system. Here is exhibit A.
It was a storm that “doesn’t have a name,” El Salvador’s President Mauricio Funes said of ten consecutive days of rain last October. They were not part of a hurricane or a tropical storm, and therefore didn’t register as extreme weather in the global media. But the storm was a disaster all the same.
When the rains finally slackened, almost 10 percent of both Nicaragua and El Salvador was underwater. El Salvador received nearly five feet of rain, the average yearly total and more than it received during 1998’s record-breaking Hurricane Mitch. In Central America as a whole, October’s rains resulted in at least 123 deaths and more than 300,000 displaced people. Nicaragua, El Salvador, Honduras, and Guatemala each declared states of emergency.
The record intensity was produced by a stalled low-pressure system enhanced by a tropical depression and water temperatures off the coast of El Salvador 0.5–1°C above average. This allowed “more water vapor than usual to evaporate into the air,” according to Climate Progress editor Joe Romm.
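Romm’s point has a simple quantitative basis: by the Clausius–Clapeyron relation, the atmosphere’s capacity to hold water vapor rises by roughly 7 percent per degree Celsius of warming. A back-of-the-envelope sketch (my own illustration; the 7 percent rate is a standard approximation, and the figures are rough, not from the article):

```python
# Rough Clausius-Clapeyron estimate: saturation vapor pressure rises about
# 7% per degree C of warming, so warmer sea-surface water lets the air
# carry more moisture into a storm system.

def extra_vapor_capacity(delta_t_celsius, rate=0.07):
    """Fractional increase in saturation vapor pressure for a warming of delta_t."""
    return (1 + rate) ** delta_t_celsius - 1

# Sea-surface temperatures 0.5-1.0 degrees C above average, as reported:
for dt in (0.5, 1.0):
    print(f"+{dt} C -> about {extra_vapor_capacity(dt) * 100:.1f}% more water vapor")
```

Even a modest half-degree anomaly adds a few percent more water vapor, which a stalled system can wring out over the same ground for days.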
For the region’s rural poor, October’s rains mean an even leaner-than-usual “hungry season,” currently underway and lasting about six months. According to a November 2011 report by the Risk, Emergency, and Disaster Task Force Inter-Agency Workgroup for Latin America & The Caribbean (REDLAC), 200,000–300,000 Central American farming families lost 30–100 percent of their crops, with a total value of at least $300 million. The Nicaraguan Ministry of Agriculture and Forestry estimates that more than 17,000 acres of crops were destroyed. More than 20,000 farmers and their families, many of them among the approximately 1.5 million Nicaraguans who are already undernourished, lost their food and seed supplies for the next four to ten months.
The rains damaged 13 percent of Nicaragua’s cropland, leaving most farmers feeling lucky. “If it would have rained five days more, all of the crops would have been lost,” Nicaraguan farmer Wilmer Alvarez told me. Still, as much damage as it caused, the October deluge was not unique.
“This was the latest in a long series of annual crises, with a cumulative effect,” Catherine Bragg, Deputy Emergency Relief Coordinator in the UN Office for the Coordination of Humanitarian Affairs, said after her November 2011 tour of flood-ravaged Nicaragua and El Salvador.
In September 2010 weeks of torrential rains drowned or rotted much of Nicaragua’s bean crop, which provides Nicaraguans’ main source of protein. Severe drought in 2009 affected 8.5 million people in Nicaragua, Guatemala, Honduras, and El Salvador. In October 2008 a tropical depression brought floods and landslides that washed away entire fields throughout Central America. And in 2007 a combination of early-season drought and late-season flooding caused poor harvests.
Such weather events, compounded one after the other, have made farmers’ lives, and the livelihoods of the public they feed, increasingly precarious. Malespín, like many farmers around San Pedro, has lost his bean crop for the last four years.
“In ’94, ’95, ’96, I harvested up to 9,000 pounds of beans, and I had my food in abundance,” Malespín said. “Now, this is impossible. Losing is not a joke, because you have to pay for it . . . . You have to spend double on food because everything you lost you have to go out and buy. With the few resources you have from working, instead of buying a pair of pants or a nice pair of shoes, a nice shirt, you buy a few cheap things in order to buy food.”
In rural Nicaragua feeding a family of four a basic diet of rice and beans requires about $22 a week, more than the average farmer earns. So, most farmers grow as much of their own food as possible. In addition to buying food when a crop fails, farmers have to buy seeds for the next year’s crop; with no harvest the previous year, there are no seeds for the future. In 2011 Malespín had to spend more than $43 (roughly two-thirds of a month’s income) to buy enough bean seed to hopefully have enough beans to feed his family in 2012. He has largely given up on growing enough to sell.
“There’s no stability like there used to be,” he explained. “The weather isn’t like it was before.”…
Thomas Sargent’s Rational Expectations: A Nobel Prize winner discovered a way to put actual human beings back into economic theory
February 29, 2012
All scholars strive to make important contributions to their discipline. Thomas J. Sargent irrevocably transformed his.
In the early 1970s, inspired by the groundbreaking work of Robert Lucas, Sargent and colleagues at the University of Minnesota rebuilt macroeconomic theory from its basic assumptions and micro-level foundations to its broadest predictions and policy prescriptions.
This “rational expectations revolution,” as it was later termed, fundamentally changed the theory and practice of macroeconomics. Prior models had assumed that people respond passively to changes in fiscal and monetary policy; in rational-expectations models, people behave strategically, not robotically. The new theory recognized that people look to the future, anticipate how governments and markets will act, and then behave accordingly in ways they believe will improve their lives.
Therefore, the theory showed, policy makers can’t manipulate the economy by systematically “tricking” people with policy surprises. Central banks, for example, can’t permanently lower unemployment by easing monetary policy, as Sargent demonstrated with Neil Wallace, because people will (rationally) anticipate higher future inflation and will (strategically) insist on higher wages for their labor and higher interest rates for their capital.
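The Sargent–Wallace logic can be illustrated with a toy expectations-augmented Phillips curve (my own sketch; the functional form and parameter values are illustrative, not theirs): unemployment deviates from its natural rate only when inflation differs from what people expected, so a systematic, fully anticipated easing moves prices but not jobs.

```python
import random

random.seed(0)

U_NATURAL = 5.0  # natural rate of unemployment (percent), assumed
ALPHA = 0.5      # sensitivity of unemployment to inflation surprises, assumed

def unemployment(actual_inflation, expected_inflation):
    """Lucas-style supply curve: only inflation *surprises* move unemployment."""
    shock = random.gauss(0, 0.1)  # small real-side noise
    return U_NATURAL - ALPHA * (actual_inflation - expected_inflation) + shock

# A systematic policy: the central bank always runs 6% inflation.
# Under rational expectations the public anticipates it, so expected = actual.
anticipated = [unemployment(6.0, 6.0) for _ in range(10_000)]
avg = sum(anticipated) / len(anticipated)

# A one-off surprise: inflation hits 6% while people still expect 2%.
surprise = unemployment(6.0, 2.0)

print(f"avg unemployment under anticipated easing: {avg:.2f}%")
print(f"unemployment after a pure surprise:        {surprise:.2f}%")
```

The anticipated policy leaves average unemployment at the natural rate; only the surprise pushes it below, and a surprise repeated systematically stops being a surprise.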
This perspective of a dynamic, random macroeconomy demanded deeper analysis and more sophisticated mathematics. Sargent pioneered the development and application of new techniques, creating precise econometric methods to test and refine rational expectations theory.
But by no means has Sargent limited himself to rational expectations. Among his dozen books and profusion of research articles are key contributions to learning theory (the study of the foundations and limits of rationality) and economic history, including influential work on monetary standards and international episodes of inflation.
Sargent, a Hoover senior fellow, was awarded the Nobel Prize in economics in 2011, along with Christopher Sims, a professor at Princeton University. Here are excerpts of an interview conducted before they were awarded their shared Nobel.
MODERN MACROECONOMICS UNDER ATTACK
Art Rolnick: You have devoted your professional life to helping construct and teach modern macroeconomics. After the financial crisis that started in 2007, modern macro has been widely attacked as deficient and wrongheaded.
Thomas J. Sargent: I know that I’m the one who is supposed to be answering questions, but perhaps you can tell me what popular criticisms of modern macro you have in mind.
Rolnick: OK, here goes. Examples of such criticisms are that modern macroeconomics makes too much use of sophisticated mathematics to model people and markets; that it incorrectly relies on the assumption that asset markets are efficient in the sense that asset prices aggregate information of all individuals; that the faith in good outcomes always emerging from competitive markets is misplaced; that the assumption of rational expectations is wrongheaded because it attributes too much knowledge and forecasting ability to people; that the recent financial crisis took modern macro by surprise; and that macroeconomics should be based less on formal decision theory and more on the findings of “behavioral economics.” Shouldn’t these be taken seriously?
Sargent: Sorry, Art, but aside from the foolish and intellectually lazy remark about mathematics, all of the criticisms that you have listed reflect either woeful ignorance or intentional disregard for what much of modern macroeconomics is about and what it has accomplished. That said, it is true that modern macroeconomics uses mathematics and statistics to understand behavior in situations where there is uncertainty about how the future will unfold from the past. But a rule of thumb is that the more dynamic, uncertain, and ambiguous is the economic environment that you seek to model, the more you are going to have to roll up your sleeves, and learn and use some math. That’s life.
Rolnick: Putting aside fear and ignorance of math, please say more about the other criticisms.
Sargent: I have two responses to your citation of criticisms of rational expectations. First, note that rational expectations continues to be a workhorse assumption for policy analysis by macroeconomists of all political persuasions. To take one good example, in the spring of 2009, Joseph Stiglitz and Jeffrey Sachs independently wrote op-ed pieces incisively criticizing the Obama administration’s proposed PPIP (Public-Private Investment Program) for jump-starting private sector purchases of toxic assets. Both Stiglitz and Sachs executed a rational-expectations calculation to compute the rewards to prospective buyers. Those calculations vividly showed that the administration’s proposal represented a large transfer of taxpayer funds to owners of toxic assets. That analysis threw a floodlight onto the PPIP that some of its authors did not welcome.
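The style of calculation Sargent describes can be sketched in a few lines (a simplified toy version with hypothetical numbers, not Stiglitz’s or Sachs’s actual figures; the real PPIP also involved matching Treasury equity, omitted here): because the government loan is non-recourse, the buyer keeps the upside but caps his downside at his equity stake, so he can rationally overpay for the asset while taxpayers absorb the expected loss.

```python
# Illustrative rational-expectations payoff calculation for a PPIP-style
# purchase. All numbers are hypothetical, chosen only to show the mechanism.
#
# Structure: a buyer pays PRICE for a toxic asset, financing most of it with
# a non-recourse government loan. If the asset pays off, the buyer repays the
# loan and keeps the rest; if it defaults, the buyer walks away from the loan.

PRICE = 100.0          # purchase price of the asset
EQUITY = 15.0          # private buyer's own money at stake
LOAN = PRICE - EQUITY  # non-recourse government loan

GOOD_VALUE, BAD_VALUE = 150.0, 20.0  # asset value in the two scenarios
P_GOOD = 0.5                         # probability of the good outcome

# Expected value of the asset itself (what a rational unlevered buyer would pay):
asset_ev = P_GOOD * GOOD_VALUE + (1 - P_GOOD) * BAD_VALUE

# Buyer keeps max(value - loan, 0) in each state, having put up EQUITY:
buyer_ev = (P_GOOD * max(GOOD_VALUE - LOAN, 0)
            + (1 - P_GOOD) * max(BAD_VALUE - LOAN, 0)
            - EQUITY)

# Taxpayer's expected loss: in the bad state the loan is not repaid in full.
taxpayer_ev = (1 - P_GOOD) * (BAD_VALUE - LOAN)

print(f"expected asset value:   {asset_ev:.1f}  (buyer paid {PRICE:.1f})")
print(f"buyer expected profit:  {buyer_ev:+.1f}")
print(f"taxpayer expected loss: {taxpayer_ev:+.1f}")
```

With these numbers the buyer pays 100 for an asset worth 85 in expectation, yet still expects a profit, because the downside lands on the loan guarantor. That gap is the transfer the op-eds identified.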
And second, economists have been working hard to refine rational-expectations theory. For instance, macroeconomists have done creative work that modifies and extends rational expectations in ways that allow us to understand bubbles and crashes in terms of optimism and pessimism that emerge from small deviations from rational expectations.
Rolnick: What about the most serious criticism—that the recent financial crisis caught modern macroeconomics by surprise?
Sargent: Art, it is just wrong to say that this financial crisis caught modern macroeconomists by surprise. That statement does a disservice to an important body of research to which responsible economists ought to be directing public attention. Researchers have systematically organized empirical evidence about past financial and exchange crises in the United States and abroad. Enlightened by those data, researchers have constructed first-rate dynamic models of the causes of financial crises and government policies that can arrest them or ignite them. The evidence and some of the models are well summarized and extended, for example, in Franklin Allen and Douglas Gale’s 2007 book Understanding Financial Crises. Please note that this work was available well before the U.S. financial crisis began in 2007…
February 29, 2012
In 2010, Der Spiegel published a glowing profile of Steve Jobs, then at the helm of Apple. Jobs’s products are venerated in Germany, especially by young bohemian types. Recently, the Museum of Arts and Crafts in Hamburg presented an exhibition of Apple’s products, with the grandiloquent subtitle “On Electro-Design that Makes History”—a good indication of the country’s infatuation with the company. Jobs and Jony Ive, Apple’s extraordinary chief of design, have always acknowledged their debt to Braun, a once-mighty German manufacturer of radios, record players, and coffeemakers. The similarity between Braun’s gadgets from the 1960s and Apple’s gadgets is quite uncanny. It took a Syrian-American college dropout—a self-proclaimed devotee of India, Japan, and Buddhism—to make the world appreciate the virtues of sleek and solid German design. (Braun itself was not so lucky: in 1967 it was absorbed into the Gillette Group, and ended up manufacturing toothbrushes.)
The piece about Jobs in Der Spiegel shed no light on his personality, but it stood out for two reasons. The first was its title: “Der Philosoph des 21. Jahrhunderts,” or “The Philosopher of the Twenty-First Century.” The second was the paucity of evidence to back up such an astonishing claim. Jobs’s status as a philosopher seems to have been self-evident. It is hard to think of any other big-name CEO who could win such an accolade, and from an earnest German magazine that used to publish long interviews with Heidegger. So was Steve Jobs a philosopher who strove to change the world rather than merely interpret it? Or was he a marketing genius who turned an ordinary company into a mythical cult, while he himself was busy settling old scores and serving the demands of his titanic ego?
There are few traces of Jobs the philosopher in Walter Isaacson’s immensely detailed and pedestrian biography of the man. Isaacson draws liberally on previously published biographies, and on dozens of interviews that Jobs gave to the national media since the early 1980s. He himself conducted many interviews with Jobs (who proposed the project to Isaacson), and with his numerous colleagues, enemies, and disciples, but as one nears the end of this large book it’s hard not to wonder what it was that Isaacson and Jobs actually talked about on those walks around Palo Alto. Small anecdotes abound, but weren’t there big themes to discuss?
That the book contains few earth-shattering revelations is not necessarily Isaacson’s fault. Apple-watching is an industry: there exists an apparently insatiable demand for books and articles about the company. Apple-focused blogs regularly brim with rumors and speculation. Ever since its founding—but especially in the last decade, when Apple-worship reached its apogee—Apple has been living under the kind of intense public scrutiny that is usually reserved for presidents. Jobs relished such attention, but only if it came on his own terms. He did his best to manage Apple’s media coverage, and was not above calling influential tech reporters and convincing them to write what he wanted the world to hear. Not only did Jobs build a cult around his company, he also ensured that it had its own print outlets: Apple’s generous subsidy allowed Macworld—the first magazine to cover all things Apple—to come into being and eventually spawn a genre of its own.
As Isaacson makes clear, Jobs was not a particularly nice man, nor did he want to be one. The more diplomatic of Apple’s followers might say that Steve Jobs—bloodthirsty vegetarian, combative Buddhist—lived a life of paradoxes. A less generous assessment would be that he was an unprincipled opportunist, a brilliant but restless chameleon. For Jobs, consistency was truly the hobgoblin of little minds (he saw little minds everywhere he looked) and he did his best to prove Emerson’s maxim in his own life. He hung a pirate flag on the top of his team’s building, proclaiming that “it is better to be a pirate than to join the Navy,” only to condemn Internet piracy as theft several decades later. He waxed lyrical about his love for calligraphy, only to destroy the stylus as an input device. He talked up the virtues of contemplation and meditation, but did everything he could to shorten the time it takes to boot an Apple computer. (For a Buddhist, what’s the rush?) He sought to liberate individual users from the thrall of big businesses such as IBM, and then partnered with IBM and expressed his desire to work only with “corporate America.” A simplifier with ascetic tendencies, he demanded that Apple’s board give him a personal jet so that he could take his family to Hawaii. He claimed he was not in it for the money and asked for a salary of just $1, but he got into trouble with the Securities and Exchange Commission for having his stock options—in a move that gave him millions—backdated. He tried to convince his girlfriend that “it was important to avoid attachment to material objects,” but he built a company that created a fetish out of material objects. He considered going to a monastery in Japan, but declared that, were it not for computers, he would be a poet in the exceedingly unmonastic city of Paris.
How serious was he about that monastery thing? Isaacson recounts well-known anecdotes of Jobs’s quest for spirituality and seems to take them all (and many other things) at face value. The story of Jobs’s youth—his pilgrimage to India, the time he spent living on a farm commune, his fascination with primal scream therapy—does suggest that his interest in spirituality was more than a passing fad. But how long did it last, exactly? Did the more mature Jobs, the ruthless capitalist, feel as strongly about spirituality as his younger self did? Surely there were good reasons for the mature Jobs to cultivate the image of a deeply spiritual person: Buddhism is more than just a religion in America, it is also a brand. And one of Apple’s great accomplishments was to confer upon its devices a kind of spiritual veneer.
Jobs was quite candid about his vanishing interest in matters of spirituality as early as 1985. When a Newsweek reporter inquired if it was true that he had considered going to a monastery in Japan, Jobs gave a frank answer: “I’m glad I didn’t do that. I know this is going to sound really, really corny. But I feel like I’m an American, and I was born here. And the fate of the world is in America’s hands right now. I really feel that. And you know I’m going to live my life here and do what I can to help.” In a more recent interview with Esquire he claimed that he did not pursue the monastery route in part because he saw fewer and fewer differences between living in the East and working at Apple: “Ultimately, it was the same thing.”
Jobs’s engagement with politics was quite marginal—so marginal that, except for him lecturing Obama on how to reset the country, there are few glimpses of politics in this book. He did not hold politicians in anything like awe. We see him trying to sell a computer to the king of Spain at a party, and asking Bill Clinton if he could put in a word with Tom Hanks to get him to do some work for Jobs. (Clinton declined.) When he was ousted from Apple, Jobs may have flirted with the idea of running for office but was probably discouraged by all the pandering it required. “Do we have to go through that political bullshit to get elected governor?” he reportedly asked his publicist. In an interview with Business Week in 1984, he confessed that “I’m not political. I’m not party-oriented, I’m people-oriented.”
But “not political” may be the wrong term to describe him. There is a curious passage in his interview with Wired, in 1996, where he notes:
When you’re young, you look at television and think, There’s a conspiracy. The networks have conspired to dumb us down. But when you get a little older, you realize that’s not true. The networks are in business to give people exactly what they want. That’s a far more depressing thought. Conspiracy is optimistic! You can shoot the bastards! We can have a revolution! But the networks are really in business to give people what they want. It’s the truth…
Brain in a box: Henry Markram wants €1 billion to model the entire human brain. Sceptics don’t think he should get it.
February 28, 2012
It wasn’t quite the lynching that Henry Markram had expected. But the barrage of sceptical comments from his fellow neuroscientists — “It’s crap,” said one — definitely made the day feel like a tribunal.
Officially, the Swiss Academy of Sciences meeting in Bern on 20 January was an overview of large-scale computer modelling in neuroscience. Unofficially, it was neuroscientists’ first real chance to get answers about Markram’s controversial proposal for the Human Brain Project (HBP) — an effort to build a supercomputer simulation that integrates everything known about the human brain, from the structures of ion channels in neural cell membranes up to mechanisms behind conscious decision-making.
Markram, a South African-born brain electrophysiologist who joined the Swiss Federal Institute of Technology in Lausanne (EPFL) a decade ago, may soon see his ambition fulfilled. The project is one of six finalists vying to win €1 billion (US$1.3 billion) as one of the European Union’s two new decade-long Flagship initiatives.
“Brain researchers are generating 60,000 papers per year,” said Markram as he explained the concept in Bern. “They’re all beautiful, fantastic studies — but all focused on their one little corner: this molecule, this brain region, this function, this map.” The HBP would integrate these discoveries, he said, and create models to explore how neural circuits are organized, and how they give rise to behaviour and cognition — among the deepest mysteries in neuroscience. Ultimately, said Markram, the HBP would even help researchers to grapple with disorders such as Alzheimer’s disease. “If we don’t have an integrated view, we won’t understand these diseases,” he declared.
As the response at the meeting made clear, however, there is deep unease about Markram’s vision. Many neuroscientists think it is ill-conceived, not least because Markram’s idiosyncratic approach to brain simulation strikes them as grotesquely cumbersome and over-detailed. They see the HBP as overhyped, thanks to breathless media reports about what it will accomplish. And they’re not at all sure that they can trust Markram to run a project that is truly open to other ideas.
“We need variance in neuroscience,” declared Rodney Douglas, co-director of the Institute for Neuroinformatics (INI), a joint initiative of the University of Zurich and the Swiss Federal Institute of Technology in Zurich (ETH Zurich). Given how little is known about the brain, he said, “we need as many different people expressing as many different ideas as possible” — a diversity that would be threatened if so much scarce neuroscience research money were to be diverted into a single endeavour.
Markram was undeterred. Right now, he argued, neuroscientists have no plan for achieving a comprehensive understanding of the brain. “So this is the plan,” he said. “Build unifying models.”
Markram’s big idea
Markram has been on a quest for unity since at least 1980, when he began undergraduate studies at the University of Cape Town in South Africa. He abandoned his first field of study, psychiatry, when he decided that it was mainly about putting people into diagnostic pigeonholes and medicating them accordingly. “This was never going to tell us how the brain worked,” he recalled in Bern.
His search for a new direction led Markram to the laboratory of Douglas, then a young neuroscientist at Cape Town. Markram was enthralled. “I said, ‘That’s it! For the rest of my life, I’m going to dig into the brain and understand how it works, down to the smallest detail we can possibly find.’”
That enthusiasm carried Markram to a PhD at the Weizmann Institute of Science in Rehovot, Israel; to postdoctoral stints at the US National Institutes of Health in Bethesda, Maryland, and at the Max Planck Institute for Medical Research in Heidelberg, Germany; and, in 1995, to a faculty position at Weizmann. He earned a formidable reputation as an experimenter, notably demonstrating spike-timing-dependent plasticity — in which the strength of neural connections changes according to when impulses arrive and leave.
By the mid-1990s, individual discoveries were leaving him dissatisfied. “I realized I could be doing this for the next 25, 30 years of my career, and it was still not going to help me understand how the brain works,” he said.
To do better, he reasoned, neuroscientists would have to pool their discoveries systematically. Every experiment at least tacitly involves a model, whether it is the molecular structure of an ion channel or the dynamics of a cortical circuit. With computers, Markram realized, you could encode all of those models explicitly and get them to work together. That would help researchers to find the gaps and contradictions in their knowledge and identify the experiments needed to resolve them…
February 28, 2012
This weekend, the fracas over foreigners in Cairo is set to escalate when hearings begin against 43 workers (including 16 U.S. citizens) charged with operating without a license, receiving unauthorized foreign funds, and engaging in political activity. The drama is seven months in the making. Last July, Egypt’s Ministry of Justice opened an investigation into the activities and funding of numerous (possibly as many as 400) nongovernmental organizations (NGOs). The move came at the behest of Fayza Abul Naga, minister of planning and international cooperation. In the months that followed, her department refused to officially state or confirm any details of the wide-ranging probe. Then, in late December, Egyptian security forces raided the offices of several of the NGOs under investigation. And what began as an effort by one Egyptian minister to assert her control has turned into a game of international brinkmanship that has the potential to upend the security calculus of the Middle East.
After tensions escalated in December, numerous members of Congress made clear that the actions of the Egyptian government could jeopardize the annual $1.55 billion aid package to Egypt — the United States’ second largest, after the $3.1 billion it gives Israel annually. Senator John Kerry (D-Mass.) introduced a resolution calling for an immediate end to the harassment and prosecution of NGO staff. Senator Rand Paul (R-Ky.) went much further when he introduced legislation that would suspend all U.S. aid to Egypt until the matter is resolved. On Monday, a group of U.S. senators including John McCain (R-Ariz.) and Lindsey Graham (R-S.C.) visited Cairo to meet with Field Marshal Mohamed Hussein Tantawi and other Egyptian leaders, trying to relieve some of the tension. They returned home with optimistic messages, suggesting that the Egyptian brass offered strong assurances of a swift resolution to the impasse.
How much faith Washington can put in those assurances, however, remains to be seen. Fueling the United States’ impatience have been Cairo’s confusing, and often conflicting, messages. Unlike the Mubarak era, when there were relatively clear lines of command, the past year in Egypt has been marked by the rapid emergence of multiple centers of power competing for political control. Egypt’s actual foreign policy has been almost indecipherable. For example, after security forces raided the NGO offices, top Egyptian officials, including Tantawi, Prime Minister Kamal Ganzouri, and Foreign Minister Mohamed Amr, were quick to assure the United States that the maltreatment of U.S. citizens would cease, that all seized materials would be immediately returned, and that the offices would be able to reopen. Six weeks later, those have proved to be empty promises.
Conversations in Washington reveal that U.S. officials, by and large, do not believe that their counterparts in Cairo are being intentionally deceptive — they assume that the Egyptians have simply promised more than they can deliver. For years, Abul Naga, who is one of the few top officials remaining from the days of Mubarak, has been opposed to any foreign funding that bypassed her ministry. And now she seems to be targeting U.S. influence specifically. Last week, the Egyptian press quoted Abul Naga as having portrayed the U.S. as trying to hijack Egypt’s revolution. “The United States decided to use all its resources and instruments to contain [the January 25 revolution],” the government’s official news agency, MENA, quoted her as saying, “and push it in a direction that promotes American and also Israeli interests.”
But her charges against the NGOs ring hollow. For instance, the National Democratic Institute and the International Republican Institute have made more than reasonable efforts to comply with Egyptian law. Both groups applied for registration with the Ministry of Foreign Affairs in 2005 and have communicated regularly with the authorities about their activities and programs ever since. Both groups were told repeatedly that their registration would be granted, but it never was and no explanation was given. This experience is characteristic of that of many other organizations that have focused on politically sensitive issues, while groups with more innocuous goals have had their registration granted promptly. It is disingenuous for the Egyptian government to refuse to grant U.S. NGOs registration on political grounds and then claim that the investigation against them is an apolitical matter for the judiciary. Moreover, that many other international organizations operate in Egypt today without official registration underscores the selective, political nature of these attacks.
Members of Egypt’s ruling military council have generally avoided the issue in public, perhaps in order to give them plausible deniability with Washington. Privately, they consistently argue to U.S. officials that they cannot intervene in independent judicial processes. But even if the generals are not the driving force behind the crackdown, it is quite unlikely that the investigation could have moved forward without their support. The military has held executive authority and ultimate decision-making power for the past year — all cabinet ministers were appointed by the generals and report to them…
February 27, 2012
If there’s one thing about which Americans agree these days, it’s that we can’t agree. Gridlock is the name of our game. We have no common ground.
There seems, however, to be at least one area of cordial consensus—and I don’t mean bipartisan approval of the killing of Osama bin Laden or admiration for former Rep. Gabrielle Giffords’s courage and grace.
I mean the public discourse on education. On that subject, Republicans and Democrats speak the same language—and so, with striking uniformity, do more and more college and university leaders. “Education is how to make sure we’ve got a work force that’s productive and competitive,” said President Bush in 2004. “Countries that outteach us today,” as President Obama put it in 2009, “will outcompete us tomorrow.”
What those statements have in common—and there is truth in both—is an instrumental view of education. Such a view has urgent pertinence today as the global “knowledge economy” demands marketable skills that even the best secondary schools no longer adequately provide. Recent books, such as Academically Adrift: Limited Learning on College Campuses, by Richard Arum and Josipa Roksa, and We’re Losing Our Minds: Rethinking American Higher Education, by Richard P. Keeling and Richard H.H. Hersh, marshal disturbing evidence that our colleges and universities are not providing those skills, either—at least not well or widely enough. But that view of teaching and learning as an economic driver is also a limited one, which puts at risk America’s most distinctive contribution to the history and, we should hope, to the future of higher education. That distinctiveness is embodied, above all, in the American college, whose mission goes far beyond creating a competent work force through training brains for this or that functional task.
College, of course, is hardly an American invention. In ancient Greece and Rome, young men attended lectures that resembled our notion of a college course, and gatherings of students instructed by settled teachers took on some of the attributes we associate with modern colleges (libraries, fraternities, organized sports). By the Middle Ages, efforts were under way to regulate the right to teach by issuing licenses, presaging the modern idea of a faculty with exclusive authority to grant degrees. In that broad sense, college as a place where young people encounter ideas and ideals from teachers, and debate them with peers, has a history that exceeds two millennia.
But in several important respects, the American college is a unique institution. In most of the world, students who continue their education beyond secondary school are expected to choose their field of specialization before they arrive at university. In America there has been an impulse to slow things down, to extend the time for second chances and defer the day when determinative choices must be made. When, in 1851, Herman Melville wrote in his great American novel Moby-Dick that “a whaleship was my Yale College and my Harvard,” he used the word “college” as a metaphor for the place where, as we would say today, he “found himself.” In our own time, a former president of Amherst College writes of a young man experiencing in college the “stirring and shaping, perhaps for the first time in his life, [of] actual convictions—not just gut feelings—among his friends and, more important, further down, in his own soul.”
In principle, if not always in practice, this transformative ideal has entailed the hope of reaching as many citizens as possible. In ancient Greece and Rome, where women were considered inferior and slavery was an accepted feature of society, the study of artes liberales was reserved for free men with leisure and means. Conserved by medieval scholastics, renewed in the scholarly resurgence we call the Renaissance and again in the Enlightenment, the tradition of liberal learning survived in the Old World but remained largely the possession of ruling elites.
But in the New World, beginning in the Colonial era with church-sponsored scholarships for promising schoolboys, the story of higher education has been one of increasing inclusion. That story continued in the early national period through the founding of state colleges, and later through the land-grant colleges created by the federal government during the Civil War. In the 20th century, it accelerated with the GI Bill, the “California plan” (a tiered system designed to provide virtually universal postsecondary education), the inclusion of women and minorities in previously all-male or all-white institutions, the growth of community colleges, and the adoption of “need-based” financial-aid policies. American higher education has been built on the premise that human capital is widely distributed among social classes and does not correlate with conditions of birth or social status…
February 27, 2012
For all the talk of American decline, there’s one thing we still make better than anyone on the planet: movies.
In the frenzied final weeks before the Feb. 26 Academy Awards, a curious behind-the-scenes battle was taking place to persuade Hollywood that the leading Oscar contender, The Artist, was an American film — even though a Frenchman wrote and directed it, another Frenchman produced it with French money, and a Frenchman and Frenchwoman are the two leads. Harvey Weinstein, head of The Weinstein Company, which is distributing the acclaimed movie, even persuaded the City of Los Angeles to proclaim January 31 “The Artist Day,” arguing that the movie was shot there. Indeed, The Artist, a black-and-white tale about a silent star’s fall from grace and subsequent return to fame, has a chance to become the first non-Anglo-Saxon film ever to win the Best Picture Oscar, even though it has made just $29 million in the United States since it went into general release on January 20.
Foreign films simply don’t play with American audiences. On average, foreign-language movies make up less than 1 percent of the U.S. box office, says Paul Dergarabedian, president of the box office division of Hollywood.com. In fact, compared to Hollywood productions, foreign films don’t even play that well in their home markets. Despite the relative decline of America and a huge spurt of filmmaking in countries such as Brazil, China, and South Korea, Hollywood still dominates in box offices across the world. James Cameron’s Avatar remains the top-grossing film ever, and when Chinese authorities attempted to remove it from theaters, their actions caused protests. Although some of the world’s top-grossing films, like Rio, The Last Samurai, and The Mummy: Tomb of the Dragon Emperor, were shot outside the United States or focus on other countries, all of the world’s top 100 grossing films were Hollywood productions.
Dire predictions about Hollywood’s demise have cropped up almost as frequently as blockbusters. Ever since the silent era and then the advent of television, naysayers have spoken of its impending collapse. In the early 1980s, Chariots of Fire producer David Puttnam said he believed Hollywood’s future would lie in small-budget films that could compete with the rest of the world — only to find that the opposite happened. Despite globalization’s deleterious effect on the U.S. textile, automotive, and computer industries, for movies it’s still very much America’s world.
That’s especially true in America itself, where a resistance to foreign film has been helped by Americans’ dislike of subtitles and lack of familiarity with dubbing — unlike such countries as Germany, where dubbing is routine, or France, where locals have a choice between watching a dubbed version or a “version originale.” In the decades prior to 1947, when the Supreme Court told the studios they had to divest themselves of their theater chains, it was against their interests to do anything that might encourage foreign filmmaking — hence sophisticated dubbing technology never caught on. “We’ve tried to dub, but then the critics kill you — and these films play to audiences that pay a lot of attention to reviews,” says Mark Gill, the former president of Warner Independent Pictures.
Because investors don’t expect foreign films to play well in the United States, still by far the world’s largest and most important film market (China and Japan are vying for second place, but each brings in about one-tenth the combined U.S. and Canada box office), they don’t get the same production and advertising budgets that Americans do. At the same time, broadcast television networks refuse to buy foreign-language products, leaving a crucial player in film financing absent when it comes to assembling the kind of multi-source deals that get most non-studio pictures made these days.
“We have a Lebanese film opening in the spring, Where Do We Go Now?, and in Lebanon it’s about to become the top-grossing film ever, beating Titanic,” says Tom Bernard, co-president of Sony Pictures Classics, one of the few companies that continue to back foreign releases in the U.S. Despite this, it will only open on 10 screens here, he says — compared with 3,000-4,000 for major studio releases.
Production values for American films are vastly superior to foreign ones, helped by budgets that can exceed $200 million (100 times the price of many foreign films, and at least 30 times the estimated $6 million-plus budget of Where Do We Go Now?). And the marketing costs of movies have swollen so much that even if a foreign film is less expensive than an American one, it is almost impossible to find a wide audience for it in the United States without spending millions of dollars.
There are exceptions, most notably 2000’s Crouching Tiger, Hidden Dragon, which earned $128 million “domestically” — as Hollywood executives like to describe North America — but even that was written and produced by American James Schamus and directed by Taiwanese-American Ang Lee.
Most foreign films remain box-office busts. Schamus and Lee’s subsequent Chinese-language Lust, Caution earned a paltry $4.6 million in the United States, compared to $62.4 million internationally. Iran’s A Separation, the biggest earner so far among the films nominated for best foreign-language picture this year, has earned just $1.6 million domestically.
“For every Crouching Tiger, there are hundreds of foreign films that don’t make any money here,” says Dergarabedian. “In order to make films palatable to an American audience, they have to be in English. That’s why you see American versions of films like The Girl With the Dragon Tattoo. The Scandinavian version was perfectly good, but nobody saw it in the U.S.”…
February 27, 2012
Once a mark of the cultured, language-learning is in retreat among English speakers. It’s never too late, but where to start?
For language lovers, the facts are grim: Anglophones simply aren’t learning foreign languages any more. In Britain, despite four decades in the European Union, the number of A-levels taken in French and German has fallen by half in the past 20 years, while what was a growing trend of Spanish-learning has stalled. In America, the numbers are equally sorry. One factor behind the 9/11 attacks was the fact that the CIA lacked the Arabic-speakers who might have translated available intelligence. But ten years on, “English only” campaigns appeal more successfully to American patriotism than campaigns that try to promote language-learning, as if the most successful language in history were threatened.
Why learn a foreign language? After all, the one you already speak if you read this magazine is the world’s most useful and important language. English is not only the first language of the obvious countries, it is now the rest of the world’s second language: a Japanese tourist in Sweden or a Turk landing a plane in Spain will almost always speak English.
Nonetheless, compelling reasons remain for learning other languages. They range from the intellectual to the economic to the practical. First of all, learning any foreign language helps you understand all language better — many Anglophones first encounter the words “past participle” not in an English class, but in French. Second, there is the cultural broadening. Literature is always best read in the original. Poetry and lyrics suffer particularly badly in translation. And learning another tongue helps the student grasp another way of thinking. Though the notion that speakers of different languages think differently has been vastly exaggerated and misunderstood, there is a great deal to be learned from discovering what the different cultures call this, that or das oder.
The practical reasons are just as compelling. In business, if the team on the other side of the table knows your language but you don’t know theirs, they almost certainly know more about you and your company than you do about them and theirs—a bad position to negotiate from. Many investors in China have made fatally stupid decisions about companies they could not understand. Diplomacy, war-waging and intelligence work are all weakened by a lack of capable linguists. Virtually any career, public or private, is given a boost with knowledge of a foreign language.
So which one should you, or your children, learn? If you take a glance at advertisements in New York or A-level options in Britain, an answer seems to leap out: Mandarin. China’s economy continues to grow at a pace that will make it bigger than America’s within two decades at most. China’s political clout is growing accordingly. Its businessmen are buying up everything from American brands to African minerals to Russian oil rights. If China is the country of the future, is Chinese the language of the future?
Probably not. Remember Japan’s rise? Just as spectacular as China’s, if on a smaller scale, Japan’s economic growth led many to think it would take over the world. It was the world’s second-largest economy for decades (before falling to third, recently, behind China). So is Japanese the world’s third-most useful language? Not even close. If you were to learn ten languages ranked by general usefulness, Japanese would probably not make the list. And the key reason for Japanese’s limited spread will also put the brakes on Chinese.
This factor is the Chinese writing system (which Japan borrowed and adapted centuries ago). The learner needs to know at least 3,000-4,000 characters to make sense of written Chinese, and thousands more to have a real feel for it. Chinese, with all its tones, is hard enough to speak. But the mammoth feat of memory required to be literate in Mandarin is harder still. It deters most foreigners from ever mastering the system—and increasingly trips up Chinese natives.
A recent survey reported in the People’s Daily found 84% of respondents agreeing that skill in Chinese is declining. If such gripes are common to most languages, there is something more to it in Chinese. Fewer and fewer native speakers learn to produce characters in traditional calligraphy. Instead, they write their language the same way we do—with a computer. And not only that, but they use the Roman alphabet to produce Chinese characters: type in wo and Chinese language-support software will offer a menu of characters pronounced wo; the user selects the one desired. (Or if the user types in wo shi zhongguo ren, “I am Chinese”, the software detects the meaning and picks the right characters.) With less and less need to recall the characters cold, the Chinese are forgetting them. David Moser, a Sinologist, recalls asking three native Chinese graduate students at Peking University how to write “sneeze”:
To my surprise, all three of them simply shrugged in sheepish embarrassment. Not one of them could correctly produce the character. Now, Peking University is usually considered the “Harvard of China”. Can you imagine three PhD students in English at Harvard forgetting how to write the English word “sneeze”? Yet this state of affairs is by no means uncommon in China.
As long as China keeps the character-based system—which will probably be a long time, thanks to cultural attachment and practical concerns alike—Chinese is very unlikely to become a true world language, an auxiliary language like English, the language a Brazilian chemist will publish papers in, hoping that they will be read in Finland and Canada. By all means, if China is your main interest, for business or pleasure, learn Chinese. It is fascinating, and learnable—though Moser’s online essay, “Why Chinese is so damn hard,” might discourage the faint of heart and the short of time.
But if I were asked what foreign language is the most useful, and given no more parameters (where? for what purpose?), my answer would be French. Whatever you think of France, the language is much less limited than many people realise…