January 31, 2012
Randy Lavallee is a proud member of the American working class. A New Hampshire resident, he works as a calibration inspector for a jet-engine plant just across the state line in Maine. Four years ago, the plant went through a downsizing that resulted in the layoffs of one-sixth of its 1,600 workers. After the cuts, Lavallee told me, the “CEO and management got big bonuses.”
I met Lavallee, 58, recently in Rochester, New Hampshire, where he lives. A registered Democrat who sometimes votes Republican in presidential races, he is exactly the kind of swing voter who will decide this November’s election. Moreover, given his recent workplace experience, he is exactly the kind of voter who should be receptive to attacks on Mitt Romney’s business history—namely, the layoffs he presided over at Bain Capital. So what, I wondered, did Lavallee think of Romney’s business record? When I asked, he just laughed. “I’d like to have his success,” he said. “I’m just not as ambitious as he is.” But what about the layoffs that followed many of Bain’s deals, even as Romney and his colleagues made big profits? “That’s just how business works. Would I like it if my business shut down? No. But that’s what businesses sometimes need to do.”
A few days later, I called Lavallee back and pressed further, but he held firm. “He’s a businessman, and, if I was a businessman, that’s what I’d do. I’d be in business to make money for me and my company,” he said. I was curious if he knew what Romney’s business involved. “Just from watching TV, it looks like his company would purchase other companies that were going bankrupt, reorganize them, get rid of what’s not needed, and keep the good part of the business,” he replied. But what about when the businesses failed and Bain made money anyway? “They did make an investment and had to recoup the money they could,” he said. If Romney didn’t snare profits like that for himself, Lavallee said sympathetically, he “wouldn’t be in business. He’d be working in a factory like me.”
Again and again on the campaign trail in recent weeks, I spoke to voters whose positive attitudes toward Romney’s wealth and business background surprised me—people who had every reason to resent his success but in fact were inclined to celebrate it. In Council Bluffs, Iowa, I met Dan Strietback, 32, a manager at a Panera café, who told me that, while he knew Bain had laid off plenty of workers, he saw the economic model it was part of as “the best method for the United States of America.” “Everyone’s going to fail now and then,” he said. “It’s when you pick yourself up and carry on.” But what about the sheer scale of Romney’s wealth? “I look at Romney as someone who got some help from his family but worked his butt off for it,” he told me. “It makes me want to work hard, too, and maybe get some money for myself, too.”
Then there was Anne Field of Concord, New Hampshire, a Barack Obama voter who is planning on switching to Romney in November. Field, 64, told me that business has been pretty good at the small plumbing firm where both she and her husband work and that, in any case, she did not blame Obama for the slow recovery. But she is worried about her retirement savings and her daughter’s difficulty finding a job, and she thinks Romney’s business experience is well-suited to a moment when “we need to be tough” and “cut things.” When I asked what she made of the less appealing side of Bain’s work, she shrugged. “That’s what his business was,” she said. “More power to him.”
As the campaign has unfolded, I’ve kept returning to these conversations in my mind. For weeks now, Newt Gingrich has been hammering Romney for having presided over leveraged buyouts at Bain. And, while it remains to be seen whether Gingrich can sustain the momentum from his South Carolina upset, it’s already clear that Obama’s reelection strategy will pursue a similar tack, assuming Romney eventually wins the nomination. The president, it appears, will seek to portray Romney as a plutocrat who made tens of millions of dollars by slicing and dicing companies, regardless of the collateral damage—depicting him, in Mike Huckabee’s memorable 2008 description of his then-rival, as “the guy who fires you.” Obama political guru David Axelrod recently provided a preview of this strategy when he told CNN: “Saving an industry, as the president did, is different than strip-mining companies in order to—in order to profit off of them, which is, in many cases, what Mr. Romney did. … The question is: Is that the philosophy that you want in the White House?”
Among Beltway liberals, there is currently an unquestioned assumption that these attacks will work in the general election: that Romney’s wealth and business background are indeed major political liabilities. But talking to people like Lavallee, Strietback, and Field made me wonder: Would such attacks really stick? Is it actually possible to win an election by portraying your opponent as a plutocrat? Or will many American voters respond to Romney’s financial success with a simple shrug?
AMERICANS are of famously mixed minds when it comes to matters of wealth and fairness. We venerate economic freedom and the self-made man, yet we also harbor populist suspicions of wealthy elites and big business. In a volatile time, our ambivalence on this point has remained remarkably steady: A Pew study released in early January that found a sharp increase in the perception of class conflict also found that respondents had barely budged from four years earlier on the question of whether the rich “are wealthy mainly because they know the right people or were born into wealthy families” or “mainly because of their own hard work, ambition or education.” Forty-six percent chose the former, 43 percent the latter. “What this adds up to, to me, is a public that’s paying attention to these issues and is cross-pressured on these issues,” says Pew’s Paul Taylor. “There is some part of the American public that loves the free enterprise system and believes that the ability to get rich is part of the American Dream, and there is a part of the American public that cares about issues of fairness and that believes that, when things get out of whack, it’s time to say, ‘Whoa, that’s too much.’”
Research on Americans’ instincts about money and class has mostly been left to social psychologists, but political scientists have started branching into this area, hoping to better understand how wealth and class inform voting. Recently, Meredith Sadin, a doctoral candidate in politics at Princeton, set out to gauge how voters respond to candidates’ class backgrounds. She asked voters to rate some imagined congressional candidates, each of whom had been assigned different origins (son of a factory worker or son of a surgeon) and different adult backgrounds (works as an ambulance driver or works as a cardiologist). Not surprisingly, she found that Democratic voters were more likely than Republican voters to attribute negative characteristics to a GOP candidate’s privileged origins or current upper-class status. But her most interesting finding was that independents—those crucial voters who invariably seem to determine the outcome in close races—tended to act like Republicans when it came to candidates’ origins and current wealth.
Both independents and Republicans, for instance, perceived a GOP candidate with current upper-class status as slightly more intelligent than a Republican of unknown status. Independents and Republicans also did not seem to hold an upper-class candidate’s wealthy origins against him. That is, if they knew both the origins and current status of a Republican or Democratic candidate, they didn’t see many differences between one who’d worked his way up and one who’d been born rich—whereas Democratic voters strongly favored one who’d climbed the ladder.
There was one caveat to this last finding: When independents were only given information about a Republican candidate’s origins, and not current class, they felt more warmly toward a candidate with humble roots. But presidential elections are “high information”—voters learn a lot about the candidates—and, in the upcoming general election, voters will know both where the candidates started and where they ended up…
The ability to speak multiple unrelated foreign languages fluently counts among a short list of showstopping talents, like the ability to play a Bach fugue or fly a helicopter (assuming one isn’t a harpsichordist or pilot by profession). It impresses in part because it suggests discipline, time, and effort — and, perhaps, other hidden skills.
But what if the languages came effortlessly? There are, in the history of polyglottism, a few examples of people who seem to have found a way to cheat the system and acquire languages so easily and quickly that what would normally appear a feat of discipline and erudition looks instead like savantism. These hyperpolyglots chitchat fluently in dozens of dialects, and they pick up new ones literally between meals. For the rest of us who have to slave over our verb tables, their talent resembles sorcery.
Michael Erard’s Babel No More is about these hyperpolyglots. It is not about concierges or maître d’s who can charm guests in Japanese, English, and French, or about diplomats who get along without a translator in Moscow, Cairo, and Shanghai. Such people are strictly amateur compared to, say, Harold Williams, a New Zealander who attended the League of Nations and is said to have spoken comfortably to each delegate in the delegate’s native tongue, or the American Kenneth Hale, who learned passable Finnish (one of about fifty languages he was reputed to speak convincingly) on a flight to Helsinki and allegedly learned Japanese after a single viewing of the Shogun miniseries.
The most famous hyperpolyglot is Giuseppe Mezzofanti, the nineteenth-century Bolognese cardinal who was reputed to speak between thirty and seventy languages, ranging from Chaldaean to Algonquin. He spoke them so well, and with such a feather-light foreign accent, according to his Irish biographer, that English visitors mistook him for their countryman Cardinal Charles Acton. (They also said he spoke as if reading from The Spectator.) His ability to learn a language in a matter of days or hours was so devilishly impressive that one suspects Mezzofanti pursued the cardinalate in part to shelter himself from accusations that he had bought the talent from Satan himself.
Babel No More takes Erard (who has only modest linguistic ability of his own) to Mezzofanti’s library in Bologna, and then on the trail of modern Mezzofantis. Not one can match the ability of the cardinal himself. Many alleged hyperpolyglots turn out to be braggarts — one of them, Ziad Fazah, is now best known for appearing on a Chilean TV show and failing to respond coherently to speakers of half a dozen languages he claimed to know — and the rest are impressive but tend to need practice to keep up their skills. Their languages recede with disuse, and no one succeeds in switching from Abkhaz to Quechua to Javanese in the way Mezzofanti was said to.
Among the more impressive workhorses is Alexander Arguelles, who, at the time of his first meeting with Erard, is an unemployed academic and jogging enthusiast living in California. Arguelles reads novels in Dutch, writes and reads classical Arabic, and translates Korean for cash on the side. But he also spends twelve hours every day learning languages and obsessively cataloguing his progress. In his case, the hyperpolyglottism appears to be simply compulsive behavior.
And so it goes with virtually every hyperpolyglot Erard meets. His book ends up being less an exploration of modern Mezzofantis than a fairly convincing (if uninspiring) brief denying their existence, at least in the mythologized form that their reputations have assumed. Literally thousands of people tested Mezzofanti’s abilities and came away satisfied, so it might seem improbable that he was anything less than a linguistic monster. And yet earwitness testimony is notoriously unreliable, and many people set an absurdly low bar for fluency. (I was once accused of speaking Russian, on the basis of successfully having read a train schedule and bought a ticket in Irkutsk.)
All this is not to say that hyperpolyglots are all frauds. Both Mezzofanti and Kenneth Hale were reluctant to enumerate their languages, and although both conversed happily with many visitors — who were gratified and enchanted by the gesture of linguistic respect — they denied that they were doing anything remarkable or praiseworthy. Hyperpolyglots argue that what they do is not fluent speaking but instead a sort of mechanical reproduction, a robotic trick rather than a human skill. Hale, an MIT professor who died in 2001, is quoted as disputing the idea that he “spoke” fifty languages, limiting his claim to only three, one of them being the Australian Aboriginal language Warlpiri. He distinguished “saying things” from speaking a language and really understanding it. The ability to pretend to converse in a language, and get by, isn’t the same as speaking it fluently…
Charles Murray is back, and the debate about wealth and inequality will never be the same. Readers of the political scientist’s earlier work, especially The Bell Curve and Losing Ground, might assume that with his new book he is returning to the vexed subject of race. He is, but with a twist: Murray’s area of intensive focus (and data mining) is “the state of white America”—and it’s not what you might think.
According to Murray, the last 50 years have seen the emergence of a “new upper class.” By this he means something quite different from the 1 percent that makes the Occupy Wall Streeters shake their pitchforks. He refers, rather, to the cognitive elite that he and his coauthor Richard Herrnstein warned about in The Bell Curve. This elite is blessed with diplomas from top colleges and with jobs that allow them to afford homes in Nassau County, New York, and Fairfax County, Virginia. They’ve earned these things not through trust funds, Murray explains, but because of the high IQs that the postindustrial economy so richly rewards.
Murray creates a fictional town, Belmont, to illustrate the demographics and culture of the new upper class. Belmont looks nothing like the well-heeled but corrupt, godless enclave of the populist imagination. On the contrary: the top 20 percent of citizens in income and education exemplify the core founding virtues Murray defines as industriousness, honesty, marriage, and religious observance. Yes, the elites rebelled against bourgeois America in the late 1960s and 1970s, but it wasn’t long before they put away their counterculture garb. Today, they work long hours and raise their doted-upon offspring in stable homes. One of the most ignored facts about American social life is that the divorce rate among the college-educated has been declining since the early 1980s, while their illegitimate children (as they used to be called) remain as rare as pickup trucks in their garages. Murray deems some of the Belmontians’ financial excesses “unseemly,” but for the most part, he finds them law-abiding and civically engaged—taking their children to church or synagogue, organizing petitions for new stoplights or parks, running Little League teams and PTA fundraisers.
The American virtues are not doing so well in Fishtown, Murray’s fictional working-class counterpart to Belmont. In fact, Fishtown is home to a “new lower class” whose lifestyle resembles The Wire more than Roseanne. Murray uncovers a five-fold increase in the percentage of white male workers on disability insurance since 1960, a tripling of prime-age men out of the labor force—almost all with a high school degree or less—and a doubling in the percentage of Fishtown men working less than full-time. Time-use studies show that these men are not using their newly found leisure to fix the dishwasher or take care of the kids. Mostly, they’re watching more television, getting more sleep—and finding trouble. The percentage of Fishtown men in prison quadrupled after 1974, and though crime rates declined there in the mid-1990s, mirroring national trends, they’re still markedly higher than they were in 1970. (Belmont, on the other hand, never experienced significant changes in crime or incarceration rates.) Fishtown folks cannot be said to be clinging to their religion: Murray finds a rise in the percentage of nonbelievers there. In fact, he found the same in Belmont. The difference is that Belmonters continue to join religious institutions and enjoy the benefits of their social capital. About 59 percent of Fishtowners now have no religious affiliation, compared with 41 percent of Belmonters.
Most disastrous for Fishtown residents has been the collapse of the family, which Murray believes is now “approaching a point of no return.” For a while after the 1960s, the working class hung on to its traditional ways. That changed dramatically by the 1990s. Today, under 50 percent of Fishtown 30- to 49-year-olds are married; in Belmont, the number is 84 percent. About a third of Fishtowners of that age are divorced, compared with 10 percent of Belmonters. Murray estimates that 45 percent of Fishtown babies are born to unmarried mothers, versus 6 to 8 percent of those in Belmont.
And so it follows: Fishtown kids are far less likely to be living with their two biological parents. One survey of mothers who turned 40 in the late nineties and early 2000s suggests the number to be only about 30 percent in Fishtown. In Belmont? Ninety percent—yes, ninety—were living with both mother and father. Many experts would define the cause as a dearth of “marriageable” men (see above). The causation goes the other way as well. Men who don’t marry don’t work—or at least, they work less hard. Severed from family life, they don’t attach themselves to community organizations, including churches, and in greatly disproportionate numbers they engage in antisocial, even criminal, behavior…
January 31, 2012
Jonathan Haidt is occupying Wall Street. Sort of. It’s a damp and bone-chilling January night in lower Manhattan’s Zuccotti Park. The 48-year-old psychologist, tall and youthful-looking despite his silvered hair, is lecturing the occupiers about how conservatives would view their ideas.
“Conservatives believe in equality before the law,” he tells the young activists, who are here in the “canyons of wealth” to talk people power over vegan stew. “They just don’t care about equality of outcome.”
Explaining conservatism at a left-wing occupation? The moment tells you a lot about the evolution of Jonathan Haidt, moral psychologist, happiness guru, and liberal scold.
Haidt (pronounced like “height”) made his name arguing that intuition, not reason, drives moral judgments. People are more like lawyers building a case for their gut feelings than judges reasoning toward truth. He later theorized a series of innate moral foundations that evolution etched into our brains like the taste buds on our tongues—psychological bases that underlie both the individual-protecting qualities that liberals value, like care and fairness, as well as the group-binding virtues favored by conservatives, like loyalty and authority.
“He, over the last decade or so, has substantially changed how people think about moral psychology,” says Paul Bloom, a psychologist at Yale University.
Now Haidt wants to change how people think about the culture wars. He first plunged into political research out of frustration with John Kerry’s failure to connect with voters in 2004. A partisan liberal, the University of Virginia professor hoped a better grasp of moral psychology could help Democrats sharpen their knives. But a funny thing happened. Haidt, now a visiting professor at New York University, emerged as a centrist who believes that “conservatives have a more accurate understanding of human nature than do liberals.”
In March, Haidt will publish The Righteous Mind: Why Good People Are Divided by Politics and Religion (Pantheon). By laying out the science of morality—how it binds people into “groupish righteousness” and blinds them to their own biases—he hopes to drain some vitriol from public debate and enable conversations across ideological divides.
Practically speaking, that often means needling liberals while explaining conservatives and religious people, and treading a fine line between provocation and treason. Haidt works in a field so left-wing that, when he once polled roughly 1,000 colleagues at a social-psychology conference, 80 to 90 percent classified themselves as liberal. Only three people identified as conservative. So hanging out in his lab can jar you at first. You’ll be listening to his team talk shop over boar burgers and organic ketchup in Greenwich Village, and then you think—Wait, did Haidt just praise Sarah Palin?
Indeed. “She’s right,” he says, that “it’s not left-right so much as it is the big powerful interests who control everything versus the little people.” And National Review? “The most important thing I read” to get new ideas. And Glenn Beck? “A demonizer,” says Haidt, but one who has “a great sense of humor, so I enjoy listening to him.”
Meanwhile, though Haidt still supports President Obama, he chides Democrats for a moral vision that alienates many working-class, rural, and religious voters. Though he’s an atheist, he lambasts the liberal scientists of New Atheism for focusing on what religious people believe rather than how religion binds them into communities. And he rakes his own social-psychology colleagues over the coals for being “a tribal moral community that actively discourages conservatives from entering” and for making the field’s nonliberal members feel like closeted homosexuals. (See related article, Page B8.)
“Liberals need to be shaken,” Haidt tells me. They “simply misunderstand conservatives far more than the other way around.”
But even as Haidt shakes liberals, some thinkers argue that many of his own beliefs don’t withstand scrutiny. Haidt’s intuitionism overlooks the crucial role reasoning plays in our daily lives, says Bloom. Haidt’s map of innate moral values risks putting “a smiley face on authoritarianism,” says John T. Jost, a political psychologist at NYU. Haidt’s “relentlessly self-deceived” understanding of faith makes it seem as if God and revelation were somehow peripheral issues in religion, fumes Sam Harris, one of “the Four Horsemen” of New Atheism and author of The End of Faith.
“This is rather like saying that uncontrolled cell growth is a peripheral issue in cancer biology,” Harris e-mails me. “Haidt’s analysis of cancer could go something like this: ‘Sure, uncontrolled cell growth is a big concern, but there’s so much more to cancer! There’s chemotherapy and diagnostic imaging and hospice care and drug design. There are all the changes for good and ill that happen in families when someone gets diagnosed with a terminal illness. … ‘ Yes, there are all these things, but what makes cancer cancer?”
Other questions: What made Haidt go from a religion-loathing liberal to a faith-respecting centrist? And as the 2012 election approaches, will anybody listen?
Researchers have found that conservatives tend to be more sensitive to threats and liberals more open to new experiences. By biology and biography, Haidt seemed destined for the liberal tribe. He grew up in suburban New York as a secular Jew whose mother worshiped FDR. He attended Yale in an era when President Ronald Reagan was routinely mocked on campus. He relishes new adventures like interviewing Hindu priests and laypeople in India, a project that stripped away his hostility to faith and exposed him to a broader palette of moral concerns, such as community and divinity…
January 30, 2012
For the past 30 years, redistricting in Texas has provided great theater. As the state has gone from one-party Democratic to a Republican stronghold to renewed stirrings of bipartisan competition, the controlling party has exploited the decennial line drawing to lock in gains. And just as certainly, the courts have provided refuge for those on the outs.
The Supreme Court has recognized the problem on a national scale but has been unable to see a solution. The justices have failed to find an easy definition of what is fair, what level of manipulation is permissible, how much greed is tolerable, how many districts should be assigned to this group or that group.
Unfortunately our democracy has done little to bring order to the self-serving spectacle of political insiders trying to cement their advantage, the voters be damned. Fifty years ago the Supreme Court decreed that it would strike down unequal population in districts, but other than translating that into a one-person, one-vote requirement, the Court has done little else. We are told that gerrymandering offends the Constitution, but that nothing can be done about it.
So, following the logic of going where the getting might be good, litigants have learned that partisan grievances only get traction if adorned in the inflammatory garb of racial claims.
Of course, race and politics are difficult to separate. The polarization of the parties nationally yields a heavily minority Democratic party and an overwhelmingly white Republican party. The richest partisan gains follow the lines of race and ethnicity.
Which brings us to the current Texas showdown. Since the last redistricting a decade ago, the state gained nearly four million residents, mostly the result of surges in the minority population. In turn, Texas received an additional four congressional districts. As a general rule, states more easily distribute population gains than losses. But with a divided Congress, every seat has become part of the national battleground. With Republicans in control of the Texas Legislature, the state was carved up to create four districts that they would likely control. So, off to litigation we go, where the story becomes inordinately complicated.
Texas is a “covered jurisdiction” under Section 5 of the Voting Rights Act, which means that it cannot put its plan into effect unless it is “precleared” by either the Department of Justice or a special three-judge court in Washington, D.C. This year, for the first time since the VRA was passed in 1965, the Justice Department is headed by Democrats at the time of redistricting. Texas decided to try the D.C. court instead, and the state is now about to go to trial to prove that the new plan is not discriminatory in either its effect or its intent.
Meanwhile, suit was also filed in Texas before a special three-judge federal court claiming that the new plan could not be implemented before it was precleared, that the pre-2010 Census plan on which the lines were based could no longer be used because it failed to account properly for the population of Texas, and that the new plan was in fact discriminatory. That case, too, was scheduled for a quick trial.
In the meantime, some plan had to be in place for the 2012 elections, so the Texas court properly took the reins and then ordered its own plan. The court redrew the state lines, handing a victory to the Democrats, who were now set to control three of the new seats. That result led to a rushed appeal to the Supreme Court, which last week declared the new plan improper, because it was insufficiently respectful of the state’s redistricting objectives.
As byzantine as this contest may sound, the legal result was more or less in line with prior law. Texas was both prohibited from proceeding with its plan because it had not dispelled the presumption of discrimination, and yet entitled to have courts defer to its policy objectives in redistricting. The real difficulty was to come.
In order to create a new plan, the Supreme Court held, the Texas court would have to be deferential only to the extent that the state’s objectives were presumptively legitimate. That in turn would require an investigation in Texas into the motives behind the new plan and an assessment of its impact on minority voters. All this will happen quickly, and the Republican gain from the Supreme Court victory may well evaporate in the process.
From the judicial perspective, this is chaos. There are now two three-judge courts—one in Texas, one in D.C.—heading into trial on the same issues in rapid succession. Each court will likely hear the same evidence and, even if there is agreement on the Texas state plan, the resulting waste and disorder call out for change.
Debate on the VRA tends to focus on whether the intrusion into state processes continues to pass constitutional scrutiny, but this is not the root of the problem. Race is still a defining issue, but Texas history shows that race and politics form a combustible mix in redistricting. So it proved in the 1980s, when a Republican effort to gerrymander Democrat Martin Frost out of his Dallas-area district was struck down as racially discriminatory by Democratic-leaning judges—even though the challenged districting increased minority representation. So it was again in the 1990s, when a Democratic plan that gave all three new seats to minorities was challenged under the VRA by Republicans in Texas and in the Department of Justice. (I helped represent the state of Texas in that round of litigation.) Then in the 2000s, after multiple efforts, the DeLay gerrymander passed despite Democratic legislators fleeing the state to stop legislative business by preventing a quorum. A disbelieving Supreme Court finally had to resort to the VRA to find that a contorted district running from Austin to the Mexican border was offensive. The reason was not that it was designed to remove Democrat Lloyd Doggett from Congress, but, oddly, that the plan was thought to unite a Hispanic population of the Austin suburbs with insufficiently culturally aligned brethren from the border valley…
January 30, 2012
By now most everyone has heard about an experiment that goes something like this: Students dressed in black or white bounce a ball back and forth, and observers are asked to keep track of the bounces to team members in white shirts. While that’s happening, another student dressed in a gorilla suit wanders into their midst, looks around, thumps his chest, then walks off, apparently unseen by most observers because they were so focused on the bouncing ball. Voilà: attention blindness.
The invisible-gorilla experiment is featured in Cathy Davidson’s new book, Now You See It: How the Brain Science of Attention Will Transform the Way We Live, Work, and Learn (Viking, 2011). Davidson is a founder of a nearly 7,000-member organization called Hastac, or the Humanities, Arts, Sciences, and Technology Advanced Collaboratory, that was started in 2002 to promote the use of digital technology in academe. It is closely affiliated with the digital humanities and reflects that movement’s emphasis on collaboration among academics, technologists, publishers, and librarians. Last month I attended Hastac’s fifth conference, held at the University of Michigan at Ann Arbor.
Davidson’s keynote lecture emphasized that many of our educational practices are not supported by what we know about human cognition. At one point, she asked members of the audience to answer a question: “What three things do students need to know in this century?” Without further prompting, everyone started writing down answers, as if taking a test. While we listed familiar concepts such as “information literacy” and “creativity,” no one questioned the process of working silently and alone. And noticing that invisible gorilla was the real point of the exercise.
Most of us are, presumably, the products of compulsory educational practices that were developed during the Industrial Revolution. And the way most of us teach is a relic of the steam age; it is designed to support a factory system by cultivating “attention, timeliness, standardization, hierarchy, specialization, and metrics,” Davidson said. One could say it was based on the best research of the time, but the studies of Frederick Winslow Taylor, among others, that undergird the current educational regime (according to Davidson) depend upon faked data supporting the preconceptions of the managerial class. Human beings don’t function like machines, and it takes a lot of discipline—what we call “classroom management”—to make them conform. Crucial perspectives are devalued and rejected, stifling innovation, collaboration, and diversity.
It wasn’t always that way. Educational practices that seem eternal, such as letter grades, started hardly more than a century ago; they paralleled a system imposed on the American Meat Packers Association in the era of The Jungle. (At first the meatpackers objected because, they argued, meat is too complex to be judged by letter grades.) The factory assembly line provided inspiration for the standardized bubble test, which was adopted as a means of sorting students for admission to college. Such practices helped to make education seem efficient, measurable, and meritocratic, but they tended to screen out collaborative approaches to problem-solving.
Drawing on her scholarly work in American literary history, Davidson argued that resistance to technology in education is not new. Every new technology takes time to become accepted by institutional cultures. Writing, for example, was once considered a degenerate, impoverished form of communication; it’s why we know about the teachings of Socrates only from the writings of Plato. When the print revolution produced cheap novels for a mass audience, popular works were regarded as bad for young people, especially women, who secreted books in their skirt “offices.” Following the long trajectory of the Protestant Reformation, you no longer needed someone to tell you what to think: You could read for yourself, draw your own conclusions, and possibly select your own society. Now the Internet offers a radical expansion of that process of liberation: It challenges institutional authority, it’s uncontrolled, and it has the potential to disrupt existing hierarchies, opening up new fields of vision, and enabling us to see things that we habitually overlook.
Browsing the 2012 conference program of the Modern Language Association, which includes nearly 60 sessions involving the digital humanities, Stanley Fish recently observed that “I remember, with no little nostalgia, the days when postmodernism in all its versions was the rage and every other session at the MLA convention announced that in theory’s wake everything would have to change.” Now the isms of prior decades—“multiculturalism, postmodernism, deconstruction, postcolonialism, neocolonialism, racism, racialism, feminism, queer theory”—seem to have retreated. But the ethos and disciplinary range of the digital humanities on display at Hastac suggest that this movement is not a replacement for the old order of “Theory” that reigned in the 80s and 90s so much as it is a practical fulfillment of that movement’s vision of a more inclusive, egalitarian, and decentralized educational culture.
Providing examples of how people have worked collaboratively, using the Internet, to develop effective responses to real-world problems, Davidson made a compelling argument for significant reforms in higher education (many examples are provided in her book). Too many of our vestigial practices, such as the tenure monograph and the large-room lecture, have become impediments to innovative scholarship. Students often learn in spite of our practices, learning more outside of the structured classroom than in it. Google is not making the rising generations stupid, Davidson argued; on the contrary, they rely on it to teach themselves, and that experience is making students aware that invisible gorillas are everywhere—and that one of them is higher education as most of us know it.
I might add, as the cost of traditional education increases beyond affordability for more and more students, that they (and their employers) may increasingly decide that they don’t need us. We need to find more ways to expand and diversify higher education beyond traditional degrees earned in late adolescence. Without abandoning the value of preparing students for citizenship and a rewarding mental life, we need to develop more-flexible systems of transparent long-term and just-in-time credentialing, earned over the course of one’s life in response to changing needs and aspirations. Apparently to that end, Hastac is now supporting the exploration of digital “badges” signifying the mastery of specific skills, experiences, and knowledge.
Whatever the means, there is an emerging consensus that higher education has to change significantly, and Davidson makes a compelling case for the ways in which digital technology, allied with neuroscience, will play a leading role in that change.
Nevertheless, graduate students on Hastac panels—and especially in conversation—complain bitterly that their departments are not receptive to collaborative, digital projects. In most cases, their dissertation committees expect a written, 200-page proto-monograph; that’s nonnegotiable. Meanwhile, assistant professors complain that they can earn tenure only by producing one or perhaps two university press books that, in all likelihood, few people will read, when their energies might be more effectively directed toward online projects with, potentially, far greater impact.
In the context of a talk at Hastac on publishing, one graduate student observed that digital humanists—for some time, at least—must expect to perform double labor: digital projects accompanied by traditional written publications about those projects. The MLA and the American Historical Association have established guidelines for evaluating digital projects, but most faculty members are not yet prepared to put those guidelines into effect. It requires a radical change of perspective for scholars who have invested so much of their lives in written criticism as the gold standard. “The associate professors, especially,” one panelist noted, “judge the next generation by the standards they were expected to meet.” Senior professors seem more prepared to “let the kids do their thing.”…
January 30, 2012
Central plazas were key sites of political action in 2011, but historian Jeffrey Wasserstrom argues that the Town Square Test fails as a method for assessing the divide between democracy and authoritarianism.
Many of last year’s most dramatic photographs showed people packing public places to sound off. We saw memorable images of crowds gathering at Tahrir Square to lambast one government and then castigate its successor, of protesters occupying Zuccotti Park to voice outrage at Wall Street, and of villagers assembling on the grounds of the Mazu Temple in the South China village of Wukan in December to denounce government land grabs. We saw gatherings in Syria, in Tunisia, in Greece, even in North Korea.
If, as TIME magazine declared, 2011’s Person of the Year was “The Protester,” then 2011’s Place of the Year was the town square. This makes the start of 2012 an ideal time to revisit the “Town Square Test,” which was first spelled out by the former Soviet dissident turned Israeli politician Natan Sharansky in his 2004 book, The Case for Democracy.
Soviet specialist Condoleezza Rice gave the test a boost in 2005 when she praised it in her opening statement during her Senate confirmation hearings to be U.S. Secretary of State; her boss, George W. Bush, extolled it as well.
At the heart of the Town Square Test is the notion that the difference between living in a “free” state and living in a “fear” state is clear and comes down to whether a person can go to the town square and “express his or her views without fear of arrest, imprisonment, or physical harm.”
At first glance, it would seem both an attractive idea and one whose value and wisdom were confirmed by the dramatic events of 2011. Sharansky is clearly onto something when he says we can learn a lot about any country by what people are, and are not, allowed to say and do in public spaces.
On closer inspection, however, a survey of last year’s gatherings in public places around the world actually reveals the fundamental problems with the Town Square Test — despite its superficial appeal, it’s always been far too blunt an instrument to be very useful. And 2011’s events remind us that embracing the test’s simple vision of a world divided neatly into “fear” states and “free” states can lead to a distorted view of political life.
For Bush, Rice, and Sharansky, the Town Square Test fits in with a specific vision of human nature and a specific vision of recent history. They assume that there is a universal desire among people living in “fear” countries for their nations to become “free” ones. They celebrate the European revolutions of 1989, which often involved mass gatherings in town squares, as having transformed totalitarian countries into democracies.
Washington, Rice said, should use the test to increase the odds that the first decades of the 21st century would see a return of the 1989 tide, changing more “fear” states into “free” ones. The White House should identify nations that fail the Town Square Test, then encourage and support efforts by the citizens of those countries to liberate themselves.
How do the events of 2011 fit into this picture? News stories from that dramatic year provided plenty of fresh evidence that people in many parts of the world thirst for a greater degree of freedom and often are willing to take great risks in pursuit of this goal, but in many other ways, the year’s events challenged, rather than reinforced, the Town Square Test worldview.
Consider these five points:
1) The year reminded us that even in liberal democratic states, limits always exist on what one can say and do in the town square. Thanks to American laws against hate speech, for example, and German ones that make expressing pro-Nazi sentiments a crime, there are no countries where people are completely free to say anything they want in public without fear of negative consequences. In addition, as the Occupy Wall Street movement showed, there are often limits to how long one can stay in the town square of a “free” state to express one’s opinion. The best-known proponents of the Town Square Test have always taken it for granted that the United States passes it with flying colors; but in 2011, when those in authority thought specific Occupiers tarried a bit too long, force of varying kinds, including most infamously pepper spraying (which became to Occupy what fire hoses had been to civil rights protests), was used to get people out of public spaces, from New York’s Zuccotti Park to University of California campuses at Berkeley and Davis. This was done even though the people cleared from those locales were not engaging in taboo forms of speech.
2) Town Square Test thinking tends to assume that within any country all public spaces are created equal, with similar rules governing their use. This makes it a fairly simple matter to say which nations pass and which fail the test. But in 2011 as always, it was much safer speaking out in some regions than others. In his December 17 New Yorker report on Russian protests in Moscow public spaces, for example, David Remnick makes it clear from interviews with human rights activists and crusading journalists that doing anything seen as challenging the authorities is riskier in Chechnya than in Russia’s capital city, suggesting that there is not just one kind of town square in that country.
This is definitely the case in the People’s Republic of China. For example, it is possible to gather in a Hong Kong park to mourn the victims of the June 4 Massacre of 1989 (that took place near and put an end to the protests in Tiananmen Square) without risking arrest, but arrest is certain if you do the same thing in Tiananmen Square or indeed any public space in Beijing or Shanghai. Yet it is possible to go to parks in Beijing or Shanghai and talk loudly about your disgust with local officials and not get into trouble, while doing the exact same thing in a park in Xinjiang or Tibet would be exponentially riskier.
3) Just as not all town squares in a country are necessarily the same, different rules of town-square freedom may apply to different residents thanks to variables such as race, class, and gender. Historical examples abound, including the limited access to town-square rights that African Americans had in the American South in the Jim Crow era. That the issue is not just of historical significance was driven home by the changing nature of Tahrir Square protests, which by late in the year focused at times on the danger that women faced in expressing opinions in public in a post-Mubarak Egypt. The same country provides evidence of religion as a variable, since the ease with which Egyptian Christians could express grievances without fear in public spaces changed dramatically between early in 2011 and October of that year….
January 30, 2012
[Editorial cartoon originally published at Town Hall; reproduced with permission.]
Race in Brazil: Black Brazilians are much worse off than they should be. But what is the best way to remedy that?
January 29, 2012
In April 2010, as part of a scheme to beautify the rundown port near the centre of Rio de Janeiro for the 2016 Olympic games, workers were replacing the drainage system in a shabby square when they found some old cans. The city called in archaeologists, whose excavations unearthed the ruins of Valongo, once Brazil’s main landing stage for African slaves.
From 1811 to 1843 around 500,000 slaves arrived there, according to Tânia Andrade Lima, the head archaeologist. Valongo was a complex, including warehouses where slaves were sold and a cemetery. Hundreds of plastic bags, stored in shipping containers parked on a corner of the site, hold personal objects lost or hidden by the slaves, or taken from them. They include delicate bracelets and rings woven from vegetable fibre; lumps of amethyst and stones used in African worship; and cowrie shells, a common currency in Africa.
It is a poignant reminder of the scale and duration of the slave trade to Brazil. Of the 10.7m African slaves shipped across the Atlantic between the 16th and 19th centuries, 4.9m landed there. Fewer than 400,000 went to the United States. Brazil was the last country in the Americas to abolish slavery, in 1888.
Brazil has long seemed to want to forget this history. In 1843 Valongo was paved over by a grander dock to welcome a Bourbon princess who came to marry Pedro II, the country’s 19th-century emperor. The stone column rising from the square commemorates the empress, not the slaves. Now the city plans to make Valongo an open-air museum of slavery and the African diaspora. “Our work is to give greater visibility to the black community and its ancestors,” says Ms Andrade Lima.
This project is a small example of a much broader re-evaluation of race in Brazil. The pervasiveness of slavery, the lateness of its abolition, and the fact that nothing was done to turn former slaves into citizens all combined to have a profound impact on Brazilian society. They are reasons for the extreme socioeconomic inequality that still scars the country today.
Neither separate nor equal
In the 2010 census some 51% of Brazilians defined themselves as black or brown. On average, the income of whites is slightly more than double that of black or brown Brazilians, according to IPEA, a government-linked think-tank. It finds that blacks are relatively disadvantaged in their level of education and in their access to health and other services. For example, more than half the people in Rio de Janeiro’s favelas (slums) are black. The comparable figure in the city’s richer districts is just 7%.
Brazilians have long argued that blacks are poor only because they are at the bottom of the social pyramid—in other words, that society is stratified by class, not race. But a growing number disagree. These “clamorous” differences can only be explained by racism, according to Mário Theodoro of the federal government’s secretariat for racial equality. In a passionate and sometimes angry debate, black Brazilian activists insist that slavery’s legacy of injustice and inequality can only be reversed by affirmative-action policies, of the kind found in the United States.
Their opponents argue that the history of race relations in Brazil is very different, and that such policies risk creating new racial problems. Unlike in the United States, slavery in Brazil never meant segregation. Mixing was the norm, and Brazil had many more free blacks. The result is a spectrum of skin colour rather than a dichotomy.
Few these days still call Brazil a “racial democracy”. As Antonio Riserio, a sociologist from Bahia, put it in a recent book: “It’s clear that racism exists in the US. It’s clear that racism exists in Brazil. But they are different kinds of racism.” In Brazil, he argues, racism is veiled and shamefaced, not open or institutional. Brazil has never had anything like the Ku Klux Klan, or the ban on interracial marriage imposed in 17 American states until 1967.
Importing American-style affirmative action risks forcing Brazilians to place themselves in strict racial categories rather than somewhere along a spectrum, says Peter Fry, a British-born, naturalised-Brazilian anthropologist. Having worked in southern Africa, he says that Brazil’s avoidance of “the crystallising of race as a marker of identity” is a big advantage in creating a democratic society.
But for the proponents of affirmative action, the veiled quality of Brazilian racism explains why racial stratification has been ignored for so long. “In Brazil you have an invisible enemy. Nobody’s racist. But when your daughter goes out with a black, things change,” says Ivanir dos Santos, a black activist in Rio de Janeiro. If black and white youths with equal qualifications apply to be a shop assistant in a Rio mall, the white will get the job, he adds.
The debate over affirmative action splits both left and right. The governments of Dilma Rousseff, the president, and of her two predecessors, Luiz Inácio Lula da Silva and Fernando Henrique Cardoso, have all supported such policies. But they have moved cautiously. So far the main battleground has been in universities. Since 2001 more than 70 public universities have introduced racial admissions quotas. In Rio de Janeiro’s state universities, 20% of places are set aside for black students who pass the entrance exam. Another 25% are reserved for a “social quota” of pupils from state schools whose parents’ income is less than twice the minimum wage—who are often black. A big federal programme awards grants to black and brown students at private universities.
These measures are starting to make a difference. Although only 6.3% of black 18- to 24-year-olds were in higher education in 2006, that was double the proportion in 2001, according to IPEA. (The figures for whites were 19.2% in 2006, compared with 14.1% in 2001). “We’re very happy, because in the past five years we’ve placed more blacks in universities than in the previous 500 years,” says Frei David Raimundo dos Santos, a Franciscan friar who runs Educafro, a charity that holds university-entrance classes in poor areas. “Today there’s a revolution in Brazil.”…
Outrage has predictably followed Twitter’s announcement yesterday that it has developed a system to block (or, as the company euphemistically puts it, “withhold”) specific tweets in specific countries if they violate local law, while keeping the content available for the rest of the world. The hashtag #TwitterBlackout is bursting with calls for a boycott of the microblogging service on Saturday, and headlines like “Twitter caves to global censorship” abound.
But the indignation may be overwrought. The Next Web’s Anna Heim points out that Twitter users who want to see a blocked tweet can simply change their country setting. In fact, Twitter’s decision to link to instructions on how to change that setting as part of its announcement has some speculating that the company is actually feigning respect for local laws while winking at its users.
“Chances are that Twitter perfectly knows about this workaround,” Heim writes. “Users won’t need to hide their IP address with a proxy: Twitter lets them change it manually, despite the potential loss in hyperlocal ad dollars for the platform.” Indeed, in an email exchange with Foreign Policy, Twitter spokeswoman Rachel Bremer emphasized user control. “Because geo-location by IP address is an imperfect science,” she explained, “we allow users to manually set their country.”
What’s more, Twitter has promised to disclose any information it withholds through a system that looks a lot like Google’s Transparency Report, which tracks requests by government agencies and courts around the world for Google to hand over user data or remove content from its services. Twitter pledges to alert users when their tweets or accounts have been removed, clearly mark withheld content, and post notices on the website Chilling Effects. The company will only remove content in reaction to “valid legal process — we don’t do anything proactively,” Bremer explained. She insisted that Twitter’s commitment to free speech, which “has been demonstrated in our actions since the company was founded,” is “not changing.”
But that’s just the problem. Twitter has long built its brand around free expression. While the company has never joined tech giants such as Google and Microsoft in supporting the Global Network Initiative, which seeks to protect online privacy and free speech, Twitter has championed those values in other ways. CEO Dick Costolo likes to say that Twitter is the “free speech wing of the free speech party,” while former CEO Evan Williams once described the company’s goal as reaching the “weakest signals all over the world,” citing protests in Iran and Moldova as examples. Not only did Twitter famously postpone a planned outage at the height of the Iranian protests in 2009, but when the Egyptian government shut down social networks last year at the start of the revolution, Twitter teamed up with Google to develop a “speak-to-tweet” service. While “Google only promises not to be evil,” Jeff Bercovici writes at Forbes, “Twitter’s devotees have built it up into something much more exalted: a force for global progress and human enlightenment.”
And, so far, Twitter has not done a particularly good job of explaining how this week’s changes will alter its process for removing content and why the company is willing to imperil its brand by implementing the new rules. In announcing the policy, Twitter explained that it will need to “enter countries that have different ideas about the contours of freedom of expression” as it grows. But what does “enter countries” mean for a website theoretically available from anywhere? Spokespeople have since added that there are still countries where Twitter will not operate as a business (read: China, where Twitter is blocked) and that the changes have nothing to do with Saudi Prince AlWaleed bin Talal investing $300 million in the company. But when asked by Foreign Policy for an explanation of how notices under the new system might differ from the copyright complaints currently clogging Twitter’s section on Chilling Effects, Bremer declined to comment on “hypothetical situations about when or how we might have to remove content in the future.”…
January 29, 2012
Terrorism isn’t a 20th-century phenomenon, but the circumstances of September 11—the way al Qaeda organised and funded itself and conducted its operations—could only have come out of the globalising world of the 1990s.
That decade began with one of the 20th century’s most unanticipated events, the collapse of the Soviet Union and the crumbling of the Cold War structures that had shaped global politics.
This is a good place to begin because, however we weigh up the reasons for the fall of the Soviet empire, a central element was economic: the incipient impact of emerging technologies that revolutionised the cost of transferring information and distributing goods. By driving the opening of markets and the rapid dissemination of data, these changes starkly exposed the inability of autarkic economies to compete with their rivals and made closed political systems much harder to sustain.
Such changes—the growth of the internet and mobile communications especially—dominated the 1990s. They led to a greater integration of international and domestic economic policy than the world had seen before, a fusing of the external and internal economies.
The disappearance of the barriers between the Cold War blocs of East and West gave a new vitality to regionalism, enabling the EU to expand, ASEAN to embrace Indochina and APEC to be formed.
The emerging economies of Asia set themselves up best to benefit from this globalising world. It was the “Time of the Tigers”, and the Asian miracle accelerated, as rapid growth spread from North to Southeast Asia.
The first big challenge to this emerging story came with the Asian financial crisis from July 1997 onwards. In Thailand, South Korea, Malaysia, Singapore, the Philippines and Indonesia, currency links with the US dollar broke under pressure, sharemarkets collapsed and foreign capital fled. In a single year we saw a reversal of capital inflows to Korea and the ASEAN countries on the order of $100 billion.
Asia experienced the worst slowdown in the developing world for thirty years. But it was the political rather than the economic results of that crisis that mattered most for Australia, and shaped our contemporary environment more directly than September 11.
In May 1998, Soeharto’s New Order government fell. But contrary to the pundits who predicted the imminent break-up of Indonesia, a more diffuse and decentralised power structure emerged, shaped by an authentically Indonesian democracy.
This reform and democratisation in turn transformed Southeast Asian regional politics and made possible the development of a deeper and more politically sustainable Australia-Indonesia relationship. And in the aftermath of September 11 and the Bali bombings, it would underpin one of the world’s most successful efforts against terrorism.
And the crisis led directly to the independence of East Timor, the final act in the long process of decolonisation in Asia, which had defined so much of Australia’s foreign policy since the end of World War II; and provided Australia with pre-September 11 experience in stabilising fragile states.
But perhaps most importantly, the crisis consolidated the rise of China. A lesson Beijing had drawn from the fall of the Soviet Union in 1991 was that economic growth was essential for the party’s continuing hold on power. So, during the 1990s, Jiang Zemin and Zhu Rongji drove greater economic openness and pushed forward market-oriented reforms. In 1997 the loss-making network of state-owned enterprises was opened to diversified ownership, and reform of the bloated public service began. In the same year, the private sector was officially described as “an important part of China’s socialist market economy”.
But China’s capital account remained resolutely closed. So when the Asian crisis came, China was insulated from exposure to international financial markets. Its share of Asia’s foreign direct investment had begun to edge up from 1992, but after the crisis, more foreign investors asked themselves why they should fiddle around on the edges when they could go directly to the world’s most populous market. Investment stampeded northwards from Southeast Asia and China’s total stock of foreign direct investment grew to become second only to that of the United States.
By 2001, China was ready to take the most important step towards its integration with the global economy by joining the World Trade Organisation. Just months before its accession, and facilitated by the same technology which made China’s growth possible, al Qaeda terrorists attacked the World Trade Centre. So globalisation’s two sides—dark and light—were brought into sharp focus.
The 9/11 attacks shook to its core the confidence the US felt in its traditional security between the moats of two great oceans. The world that changed after September 11 was principally the world as the United States felt it to be; and to that perception in varying degrees the world had to adjust.
Threats from radical Islamist terrorist groups against American targets, and even on American soil, weren’t new. Important sections of the national security administration of the United States had been focusing on them during the 1990s. But the World Trade Centre and Pentagon atrocities moved the issue to the very centre of American thought.
For the first time since the Cuban missile crisis, there was an urgent weighing-up of the possibility of some nightmare scenario—not just the prospect of more big terrorist attacks, but the potential use of weapons of mass destruction, even of crude nuclear-explosive devices.
The sense of vulnerability and anger sparked by September 11 made the US much more ready to act against threats to the US homeland and to US global interests. For many Americans, this was worse than Pearl Harbour: an attack on the core symbols of the US economy and government.
The US response answered dramatically the question which had been unresolved since the end of the Cold War of what American grand strategy would be. The “post-Cold War” hiatus was over. The war on terror had begun.
In September 2002, President Bush outlined a National Security Strategy of exceptional ambition. It brought together a series of themes he had spoken about since the attacks on the American homeland, beginning with the belief that “the 20th century ended with a single surviving model of human progress”: freedom, democracy and free enterprise. America’s actions in confronting terrorists and tyrants were not to be undertaken for unilateral gain, but to create “a balance of power that favors freedom”. The terrorist attacks of September 11 had provided a moment of opportunity for the US to “extend the benefits of freedom across the globe”.
That required US power, and it was the goal of the United States, in the president’s words, “to build and maintain our defences beyond challenge … Our forces will be strong enough to dissuade potential adversaries from pursuing a military build-up in hopes of surpassing, or equalling, the power of the United States”.
Pre-emptive action would be necessary to prevent the use of WMD by terrorists, as would national missile defence and cooperation with other states on non-proliferation. A common threat would help to bind all the great powers of the world for the first time.
Australia’s response to September 11 was heavily shaped by the coincidental presence in Washington on that day of the prime minister, John Howard. The force of his response to the disaster, and the first-hand understanding it gave senior Australian policymakers of the impact of the catastrophe, set the scene for a deeper political, security and economic relationship with our major ally.
The ANZUS alliance was invoked for the first time—and remains the basis for our engagement in Afghanistan. Australia was also an early participant in Iraq in 2003. These military commitments, helped by the personal closeness of Bush and Howard, intensified Australia-United States security integration. There is no doubt they also helped congressional passage of the US-Australia Free Trade Agreement.
After the 2002 Bali bombings—killing 202 people, 88 of them Australian—Australia’s regional relationships took on more of the dimensions of the emerging, post-September 11 transnational agenda.
It was understood by then that borders that were increasingly permeable to trade and tourism were also open to threats ranging from terrorists to money launderers, from nuclear proliferators like Pakistan’s Abdul Qadeer Khan to pathogens like the SARS virus.
Counter-terrorism co-operation was an important element in bringing Australia closer to newly democratic Indonesia. Indonesia’s successful prosecution of more than 470 terrorism suspects in public and transparent trials was a major achievement of the counterterrorism effort, and has been a key factor in turning Indonesian public sentiment against terrorism. But other aspects of the transnational agenda such as the Bali processes on people smuggling were also engaged.
One lesson we drew from September 11 and from al Qaeda’s operations in Afghanistan was that if you let problems fester they could become major international security threats. The mood of the time was interventionist.
If the 10 years before September 11 were marked by the merging of domestic and foreign economic policy, in the decade afterwards it was the amalgamating of domestic and foreign security policy that mattered.
We’ve lived through what will be seen in retrospect as the “national security” decade—a period in which most governments, including Australia’s, revised fundamentally their ideas of what national security is, who is responsible for it, and how it is done…