January 28, 2012
Interrogation. Surveillance. Ethnic profiling. Censorship. The words come from 21st-century headlines, but they have an ancient pedigree. Cullen Murphy on how the Inquisition ignited the modern police state
On a hot autumn day in Rome not long ago, I crossed the vast expanse of St Peter’s Square, paused momentarily in the shade beneath a curving flank of Bernini’s colonnade and continued a little way beyond to a Swiss Guard standing impassively at a wrought-iron gate. He examined my credentials, handed them back and saluted smartly. I hadn’t expected the gesture and almost returned the salute instinctively, but then realised it was intended for a cardinal waddling into the Vatican from behind me.
Just inside the gate, at Piazza del Sant’Uffizio 11, stands a Renaissance palazzo with a ruddy ochre-and-cream complexion. This is the headquarters of the Congregation for the Doctrine of the Faith, whose job, in the words of the Apostolic Constitution Pastor bonus, promulgated in 1988 by Pope John Paul II, is “to promote and safeguard the doctrine on faith and morals throughout the Catholic world”. Pastor bonus goes on: “For this reason, everything which in any way touches such matter falls within its competence.” It is an expansive charge. Every significant document or decision emanating from anywhere inside the Vatican must get a sign-off from the CDF. The Congregation has been around for a very long time, although until the Second Vatican Council it was called something else: the Sacred Congregation of the Holy Office. From the lips of old Vatican hands, one still hears shorthand references to “the Holy Office”, much as one hears “Whitehall”, “Foggy Bottom” or “the Kremlin”.
But before the Congregation became the Holy Office, it went by yet another name: as late as 1908, it was known as the Sacred Congregation of the Universal Inquisition. Lenny Bruce once joked that there was only one “the Church”. The Sacred Congregation of the Universal Inquisition was the headquarters of the Inquisition – the centuries-long effort by the Church to deal with its perceived enemies, within and without, by whatever means necessary, including the most brutal ones available.
The palazzo that today houses the Congregation was originally built to lodge the Inquisition when the papacy, in 1542, amid the onslaught of Protestantism and other noxious ideas, decided that the Church’s intermittent and far-flung inquisitorial investigations needed to be brought under some sort of centralised control – a spiritual Department of Homeland Security, as it were. The Inquisition had begun in the Middle Ages, to deal with Christian heresies, and been revived in Iberia, under state control, to deal with Jews and Moors. Pope Paul III considered the task of his new papal Inquisition so urgent that construction on the basilica of St Peter’s was suspended and the labourers diverted so that work could be completed on its headquarters. At one time the palazzo held not only clerical offices but also prison cells.
The Congregation for the Doctrine of the Faith inherited more than the Inquisition’s DNA and its place on the organisational charts. It also inherited much of the paper trail. The Inquisition records are kept mainly in the palazzo itself, and for four and a half centuries that archive was closed to outsiders. Then, in 1998, to the surprise of many, the Vatican decided to make the archive available to scholars.
Any archive is a repository of what some sliver of civilisation has wrought, for good or ill. This one is no exception. The archive may owe its existence to the Inquisition, but it helps explain the world that exists today. In our imaginations, we offhandedly associate the term “inquisition” with the term “Dark Ages”. But consider what an inquisition – any inquisition – really is: a set of disciplinary procedures targeting specific groups, codified in law, organised systematically, enforced by surveillance, exemplified by severity, sustained over time, backed by institutional power and justified by a vision of the one true path. Considered that way, the Inquisition is more accurately seen not as a relic but as a harbinger.
The opening of the archive at the Vatican is one more development in what has, during the past several decades, become a golden age of Inquisition scholarship. Until the appearance of Henry Charles Lea’s magisterial A History of the Inquisition of the Middle Ages, in the late 19th century, most writing about the Inquisition had consisted of bitter polemics by one side or another. In recent years, using materials newly available in repositories outside the Vatican, and now including those of the Holy See itself, historians throughout Europe and the Americas have produced hundreds of studies that, taken together, revise some traditional views of the Inquisition.
To begin with, the notion of “the Inquisition” as a monolithic force with a directed intelligence – “an eye that never slumbered”, as the historian William H Prescott once phrased it – is no longer tenable. Rather, it was an enterprise that varied in virulence and competence from place to place and era to era. “The Inquisition” remains a convenient shorthand term, but there were many inquisitions. Another finding of modern research is that, insofar as their procedures were concerned, Inquisition tribunals often proved more scrupulous and consistent than the various secular courts of the time. Of course, the bar here is low. Modern scholarship has also revised the casualty figures. Some older estimates of the number of people burned at the stake by the Inquisition range to upwards of a million; the actual number may be closer to ten thousand – perhaps two per cent of those who came before the Inquisition’s tribunals for any reason. Whatever the number killed, the Inquisition levied penalties on hundreds of thousands of people, and the fear and shame instilled by any individual case rippled outward to affect a wide social circle. Little wonder that the Inquisition has left such a lasting imprint.
But from between the lines the new scholarship has some larger lessons to offer. The Inquisition can be viewed as something greater and more insidious than an effort pursued over centuries by a single religious institution. It was enabled by the broader forces that brought the modern world into existence, and that make inquisitions of various kinds an inescapable feature of modern life. Inquisitions advance hand-in-hand with civilisation itself.
It’s a troubling conclusion but an inescapable one. Here’s the central question: why did the Inquisition come into being when it did? Intolerance, hatred and suspicion of one group by another had always existed. Throughout history, these realities had led to persecution and violence. But the ability to sustain a persecution – to give it staying power by giving it an institutional life – did not appear until the Middle Ages. Until then, the tools to stoke and manage those embers of hatred did not exist. Once the tools do exist, inquisitions become a fact of life. They are not confined to religion; they are political as well. The targets can be large or small. An inquisition impulse can quietly take root in the very systems of government and civil society that order our lives.
The tools are these: there needs to be a system of law, and the means to administer it with a certain amount of uniformity. Techniques must be developed for conducting interrogations and extracting information. Procedures must exist for record-keeping, and for retrieving information after records have been compiled and stored. An administrative mechanism – a bureaucracy – is required, along with a cadre of trained people to staff it. There must be an ability to send messages across significant distances, and also an ability to restrict the communications of others – in a word, censorship…
Egypt’s future rests with two familiar powers playing very unfamiliar roles: The military and the Muslim Brotherhood. Prepare for another year of struggle.
January 28, 2012
January 25th and the Revolution Egypt has made.
Ain Sukhna is stunningly beautiful. After a two-hour drive east from Cairo through the featureless desert, the road rolls toward the steel blue waters of the Gulf of Suez. Nestled beneath ocher-colored hills, the town is a string of industrial buildings, ramshackle half-built structures, and the weekend villas of Cairo’s well-heeled. This is where the falool — the former officials, businessmen, and intellectuals who, for almost three decades, rationalized the Mubarak regime — fled when their leader fell. With its manicured lawns, pristine infinity pools, and towpaths to the beach, Ain Sukhna couldn’t be more different from the threadbare and creaking Egypt that former President Hosni Mubarak bequeathed to his people.
The falool remain convinced that Mubarak’s fall was a tragic error that will bring lasting ruin to their country. They still believe the refrain that was so familiar on the eve of the uprising — that Egypt was an emerging democracy with an emerging economy. They cannot understand how their fellow Egyptians failed to grasp how good Mubarak was. According to their circular logic, Mubarak’s progressive politics brought about his demise: had Mubarak not been a modernizer and democratizer, the protests never would have been permitted in the first place. Hence Suzanne Mubarak’s furtive phone calls to her courtiers, reportedly asking, “Doesn’t anyone see the good we did?”
Indeed, the Egyptian people do not. But the despot’s wife might be forgiven for thinking that the numbers were on her side. Between October 14, 1981, when Mubarak first assumed the Egyptian presidency, and February 11, 2011, when he stepped down, the country ostensibly made progress. Foreign direct investment increased. Gross domestic product grew. According to the World Bank, life expectancy, child immunizations, household expenditures, and the number of telephones per household all rose, suggesting that Mubarak’s reign made Egyptians healthier and wealthier.
Any vindication the former first lady might find in the raw numbers, however, would be profoundly hollow. The World Bank’s surveys used data provided by Egyptian officials, whose methods and rigor were subject to politics. There have long been rumors that the World Bank kept two sets of books on Egypt — one for public consumption, statistics that backed claims that Egypt was at the economic takeoff stage, and another that revealed a far more complicated and challenged country.
That was the heart of the problem: the gap between Mubarak’s manufactured reality and the real Egypt. What did it matter when Egyptian officials touted 2008 as a banner year for foreign direct investment if, at the same time, Egyptians were forced to stand in long lines for bread? Mubarak’s patronage machine could hold conference after conference trumpeting reforms and the coming transition to democracy. But when the People’s Assembly (the lower house of Egypt’s parliament) repeatedly renewed the country’s decades-old emergency law, bloggers, journalists, politicians, judges, and activists of all stripes rushed to tell the tale of an Egypt in which life was far more circumscribed by the iron grip of a national security state. That story resonated. Few, if any, believed the regime’s happy talk. And those who pointed out its contradictions were subject to brutality.
Mubarak, for his part, pushed back hard. Harking back to October 1973 and the heroic crossing of the Suez Canal, he said that he would propel Egypt’s “crossing into the future.” But his rhetoric stood in stark contrast to the rattan canes and metal truncheons he unleashed on his critics. Isolated at the presidential compound in Heliopolis, or at his retreat in Sharm el-Sheikh, Mubarak never appreciated the irony that his repression only reinforced the arguments of his critics. With each crackdown, he only widened the gap between principle and practice.
This week, a democratically elected parliament chose its first speaker, Mohamed Saad el-Katatni, the leader of the Muslim Brotherhood’s Freedom and Justice Party, opening a new chapter in the country’s history. But a year after the uprising began, distortions from the past haunt the future. Egyptians are learning what social scientists have long understood: uprisings can bring down leaders, but changing institutions is hard. Doing so requires not just redrafting laws and regulations but also reforming the uncodified norms derived from decades of practice. For instance, in Egypt there is neither a constitutional article nor an official decree that links the armed forces to the presidency, yet that office has always been in the hands of the officers. For all the change that has come to Egypt in the last year, the people vying for leadership are all too familiar, and many of the restrictive laws constraining NGOs and the press remain firmly in place.
Egypt’s activists are certainly correct in saying that their revolution remains unfinished. Even as Mubarak, his sons Gamal and Alaa, and a raft of lieutenants, including the former interior minister, Habib al-Adly, are all on trial, others are on the run in London, Dubai, and Beirut. This perverse political order in which institutions are rigged to serve the elite remains intact.
Yet how to finally finish the job? The instigators of the uprising have taken a principled stand against the ruling Supreme Council of the Armed Forces and its leader, Field Marshal Hussein Tantawi, because they believe the military is a counterrevolutionary force. But the activists’ permanent revolution has had diminishing returns. They may have started the revolt, but as the first phase of Egypt’s transition comes to a close they are finding themselves marginalized….
January 28, 2012
Of all the challenges faced by college and high school students, few inspire as much angst, profanity, procrastination and caffeine consumption as the academic paper. The format — meant to force students to make a point, explain it, defend it, repeat it (whether in 20 pages or 5 paragraphs) — feels to many like an exercise in rigidity and boredom, like practicing piano scales in a minor key.
And so there may be rejoicing among legions of students who have struggled to write a lucid argument about Sherman’s March, the disputed authorship of “Romeo and Juliet,” or anything antediluvian. They have a champion: Cathy N. Davidson, an English professor at Duke, wants to eradicate the term paper and replace it with the blog.
Her provocative positions have lent kindling to an intensifying debate about how best to teach writing in the digital era.
“This mechanistic writing is a real disincentive to creative but untrained writers,” says Professor Davidson, who rails against the form in her new book, “Now You See It: How the Brain Science of Attention Will Transform the Way We Live, Work, and Learn.”
“As a writer, it offends me deeply.”
Professor Davidson makes heavy use of the blog and the ethos it represents of public, interactive discourse. Instead of writing a quarterly term paper, students now regularly publish 500- to 1,500-word entries on an internal class blog about the issues and readings they are studying in class, along with essays for public consumption.
She’s in good company. Across the country, blog writing has become a basic requirement in everything from M.B.A. to literature courses. On its face, who could disagree with the transformation? Why not replace a staid writing exercise with a medium that gives the writer the immediacy of an audience, a feeling of relevancy, instant feedback from classmates or readers, and a practical connection to contemporary communications? Pointedly, why punish with a paper when a blog is, relatively, fun?
Because, say defenders of rigorous writing, the brief, sometimes personally expressive blog post fails sorely to teach key aspects of thinking and writing. They argue that the old format was less about how Sherman got to the sea and more about how the writer organized the points, fashioned an argument, showed grasp of substance and proof of its origin. Its rigidity wasn’t punishment but pedagogy.
Their reductio ad absurdum: why not just bypass the blog, too, and move right on to 140 characters about Shermn’s Mrch?
“Writing term papers is a dying art, but those who do write them have a dramatic leg up in terms of critical thinking, argumentation and the sort of expression required not only in college, but in the job market,” says Douglas B. Reeves, a columnist for the American School Board Journal and founder of the Leadership and Learning Center, the school-consulting division of Houghton Mifflin Harcourt. “It doesn’t mean there aren’t interesting blogs. But nobody would conflate interesting writing with premise, evidence, argument and conclusion.”
The National Survey of Student Engagement found that in 2011, 82 percent of first-year college students and more than half of seniors weren’t asked to do a single paper of 20 pages or more, while the bulk of writing assignments were for papers of one to five pages.
The term paper has been falling from favor for some time. A study in 2002 estimated that about 80 percent of high school students were not asked to write a history term paper of more than 15 pages. William H. Fitzhugh, the study’s author and founder of The Concord Review, a journal that publishes high school students’ research papers, says that, more broadly, educators shy away from rigorous academic writing, giving students the relative ease of writing short essays. He argues that part of the problem is that teachers are asking students to read less, which means less substance — whether historical, political or literary — to focus a term paper on.
“She’s right,” Mr. Fitzhugh says of Professor Davidson. “Writing is being murdered. But the solution isn’t blogs, the solution is more reading. We don’t pay taxes so kids can talk about themselves and their home lives.”
He proposes what he calls the “page a year” solution: in first grade, a one-page paper using one source; by fifth grade, five pages and five sources.
The debate about academic writing has given rise to new terminology: “old literacy” refers to more traditional forms of discourse and training; “new literacy” stretches from the blog and tweet to multimedia presentation with PowerPoint and audio essay.
“We’re at a crux right now of where we have to figure out as teachers what part of the old literacy is worth preserving,” says Andrea A. Lunsford, a professor of English at Stanford. “We’re trying to figure out how to preserve sustained, logical, carefully articulated arguments while engaging with the most exciting and promising new literacies.”…
January 28, 2012
Mixed Portrait of Freshman Political Views: Their beliefs may lean liberal, but their politics tell a different story
January 27, 2012
New research reveals that college freshmen hold increasingly liberal views on key social issues like same-sex marriage and rights for illegal immigrants. But the progressive viewpoints haven’t translated into significantly greater levels of activism or heightened enthusiasm for national politics.
Those findings, published Thursday in an annual survey from the University of California at Los Angeles, paint a complicated election-year portrait of the country’s newest prospective voters. Are they progressive-minded and eager to embrace more-tolerant social views? Are they cynical products of a sour economy and a fractious political era, bent on punishing the establishment by staying home on Election Day? Or are they simply more inclined to favor civic engagement on a local level—volunteering in their communities, say—over national politics?
Or are they all of the above?
The research, done each year by the Cooperative Institutional Research Program at UCLA’s Higher Education Research Institute, along with other recent reports, provides some clarity, but only to a point. Consider these trends: In 1997, the year that UCLA researchers first began asking freshmen for their views on same-sex marriage, slightly more than half of all respondents said they supported it. In the latest survey, that percentage had reached an all-time high of 71 percent. (For more on how students’ views on social issues have changed over time, see related charts.)
Other findings from this year’s survey point to whether students act on those political beliefs.
Ten percent of respondents said they had worked on a local, state, or national campaign during the past year, placing them on the low end of a figure that has fluctuated between 8 and 15 percent over the past four decades.
At a time when angst over student debt and demonstrations linked to the Occupy movement have ignited some campuses, only 6 percent of respondents said they anticipated taking part in student protests while in college. (In the late 1960s, those numbers were, perhaps surprisingly, even lower: In 1968, 5 percent of respondents said they planned to take part in protests. The figure has never topped 9 percent.)
Numbers, of course, tell only part of the story. For every statistic that portends an apathetic future for today’s young voters, there is a student whose behavior augurs something quite different.
“I used to hate politics like crazy,” says Kavita Singh, the founder and lone member—so far—of the Youth for Ron Paul chapter at Southwestern University, in Texas. Growing up in a conservative Indian family in California’s left-leaning Bay Area, Ms. Singh said her view on politics during her early high-school years was simple: “What does it matter?”
“But eventually I got into it,” she recalls. By the time she arrived at Southwestern in the fall of 2010, Ms. Singh, who is now 19 and majoring in economics, was a self-proclaimed libertarian.
She soon joined the campus’s libertarian group, did a marketing internship for a school-choice organization, and last month worked remotely to register voters for Ron Paul’s campaign in Louisiana. This semester, she is attempting to drum up support for Representative Paul on Southwestern’s tightly knit campus of 1,400 or so students.
In doing so, Ms. Singh has unwittingly acquired a reputation on campus as “the Libertarian.”
“People have been just very curious about me as a person,” she says. “They come up to me and they say, ‘You’re a woman and you’re not white and you’re not a racist or a bigot, so why are you a libertarian? Why do you believe what you believe?’”
‘Politics Is Personal’
It’s a question that presidential candidates might well consider as they battle their way toward November. The recent research on freshmen, for starters, could provide hints on how to recapture the youthful vigor that defined the 2008 race.
Most freshmen responding to the UCLA survey will be eligible to vote for the first time in the forthcoming election. And they appear to have different views from arriving students in the past, says John H. Pryor, the report’s lead author.
“What might be a more polarizing issue among the general population might not be polarizing for this population,” he says. “So even though more people are espousing these liberal views, they’re not necessarily thinking, ‘OK, I have this liberal view, therefore I’m a liberal.’”
Indeed, many of the hot-button social and political issues the survey asked students about yielded responses that lean liberal. Yet there have been no major shifts in the percentages of students who identify themselves as liberal or conservative.
The proportion who viewed themselves as “liberal” has varied from a high of 38 percent in 1971 to a low of 19 percent in 1981; in the newest survey, it was about 28 percent. “Conservative” students, who constitute about 21 percent of the 2011 respondents, have seen their representation fluctuate from 14 percent in the early 1970s to 24 percent in 2006.
Most students, it is clear, see themselves as someplace in between: In the survey’s 45-year history, the largest proportion of students have consistently characterized their views as “middle of the road.” (In the latest survey, 47 percent do.)…
January 27, 2012
In this moment of economic challenge, it can be difficult to keep our problems in perspective. The scale of the financial crisis and the subsequent recession, the weakness of the recovery, the persistence of high unemployment, and the possibility of yet another shock — this time originating in Europe — have left Americans feeling deeply insecure about their economic prospects.

Unfortunately, too many politicians, activists, analysts, and journalists (largely, but not exclusively, on the left) seem determined to feed that insecurity in order to advance an economic agenda badly suited to our actual circumstances. They argue not that a financial crisis pulled the rug out from under our enviably comfortable lives, but rather that our lives were not all that comfortable to begin with. A signal feature of our economy in recent decades, they contend, has been pervasive economic risk — a function not of the ups and downs of the business cycle, but of the very structure of our economic system. According to this view, no American is immune to dreadful economic calamities like income loss, chronic joblessness, unaffordable medical bills, inadequate retirement savings, or crippling debt. Most of us — “the 99%,” to borrow the slogan of the Occupy Wall Street protestors — cannot escape the insecurity fomented by an economy geared to the needs of the wealthy few. Misery is not a marginal risk on the horizon: It is an ever-present danger, and was even before the recession.

But compelling though this narrative may be to headline writers, it is fundamentally wrong as a description of America’s economy both before and after the recession. When analyzed correctly, the available data belie the notions that this degree of economic risk pervades American life and that our circumstances today are significantly more precarious than they were in the past.
Even as we slog through what are likely to be years of lower-than-normal growth and higher-than-normal unemployment, most Americans will be only marginally worse off than they were in past downturns. The story of pervasive and overwhelming risk is not just inaccurate, it is dangerous to our actual economic prospects. This systematic exaggeration of our economic insecurity saps the confidence of consumers, businesses, and investors — hindering an already sluggish recovery from the Great Recession. It also leads to misdirected policies that are too zealous and too broad, overextending our political and economic systems. The result is that it has become much more difficult to solve the specific problems that do cry out for resolution, and to help those Americans who really have fallen behind.

Only by moving beyond this misleading exaggeration, carefully reviewing the realities of economic risk in America, and restoring a sense of calm and perspective to our approach to economic policymaking can we find constructive solutions to our real economic problems.

INCOME AND EMPLOYMENT

Perhaps the broadest measure of economic insecurity is the risk of losing a job or experiencing a significant drop in income. And the idea that this risk has been increasing dramatically in America over the past few decades has been absolutely central to the narrative of insecurity. It has fed into a false nostalgia for a bygone age of stability, one allegedly supplanted (since at least the 1980s) by an era of uncertainty and displacement. President Obama offered a version of this story in his 2011 State of the Union address:
Many people watching tonight can probably remember a time when finding a good job meant showing up at a nearby factory or a business downtown. You didn’t always need a degree, and your competition was pretty much limited to your neighbors. If you worked hard, chances are you’d have a job for life, with a decent paycheck and good benefits and the occasional promotion. Maybe you’d even have the pride of seeing your kids work at the same company. That world has changed. And for many, the change has been painful. I’ve seen it in the shuttered windows of once booming factories, and the vacant storefronts on once busy Main Streets. I’ve heard it in the frustrations of Americans who’ve seen their paychecks dwindle or their jobs disappear — proud men and women who feel like the rules have been changed in the middle of the game.
The tales of both this fabled golden age and the dramatic rise in the risk of declining incomes and job loss are, to put it mildly, overstated.

But this popular story did not originate with President Obama: It has been a common theme of left-leaning scholars and activists for many years. The clearest recent example of this trope may be found in the popular 2006 book The Great Risk Shift, by Yale political scientist Jacob Hacker. Hacker’s fundamental argument was that economic uncertainty has been growing dramatically since the 1970s, leaving America’s broad middle class subject to enormous risk. Using models based on income data, he argued that income volatility tripled between 1974 and 2002; the rise, he claimed, was particularly dramatic during the early part of this period, as volatility in the early 1990s was 3.5 times higher than it had been in the early 1970s.

Hacker’s conclusion — that the middle class has, in recent decades, been subjected to horrendous risks and pressures — quickly became the conventional wisdom among many politicians, activists, and commentators. As a result, it has come to define the way many people understand the American economy.

But that conclusion turned out to be the product of a serious technical error. Attempting to replicate Hacker’s work in the course of my own research, I discovered that his initial results were highly sensitive to year-to-year changes in the small number of families reporting very low incomes (annual incomes of under $1,000, which must be considered highly suspect). Hacker was forced to revise the figures in the paperback edition of his book. Nevertheless, he again overstated the increase over time by reporting his results as a percentage change in dollars squared (that is, raised to the second power) rather than in dollars and by displaying his results in a chart that stretched out the rise over time.
While he still showed an increase in volatility of 95%, my results using the same basic methodology indicated an increase of about 10%. In July 2010, a group led by Hacker published new estimates purporting to show that the fraction of Americans experiencing a large drop in income rose from about 10% in 1985 to 18% in 2009. But the 2009 estimate was a rough projection, and would have been a large increase — up from less than 12% in 2007. Then, in November 2011, an updated report (produced by a reconfigured team of researchers led by Hacker) abandoned the previous year’s estimates and argued that the risk of a large income shock rose from 13% or 14% in 1986 to 19% in 2009.

Where did these assertions come from? The team’s 2010 claim was again based on a failure to adequately address the problem of unreported income in the data. Their November 2011 claim, meanwhile, used a different data set that was much less appropriate for looking at income loss (because it does not identify the same person in different years, does not follow people who move from their homes, and suffers greatly from the problem of unreported income).

In fact, the most reliable data regarding income volatility in recent decades (including the data used by Hacker in his early work and in 2010) suggest a great deal of stability when analyzed correctly. The chart below shows the portion of working-age adults who, in any given year, experienced a 25% decline in inflation-adjusted household income (a common definition of a large income drop, and the basis for Hacker’s recent estimates). Viewed over the past four decades, this portion has increased only slightly, even though it has risen and fallen within that period in response to the business cycle…
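To see why the choice of units matters so much in the dollars-squared point above, consider a simple arithmetic sketch. The figures here are invented purely for illustration (they are not the actual estimates in dispute): a modest increase in volatility measured in dollars looks far larger when the same change is reported in dollars squared, because squaring roughly doubles small percentage changes.

```python
# Illustrative arithmetic only: the same change in income volatility,
# reported in dollars (standard deviation) versus dollars squared
# (variance). All dollar figures are invented for this sketch.

def pct_increase(old, new):
    """Percentage increase from old to new."""
    return (new - old) / old * 100

# Suppose volatility, measured in dollars, rises from $5,000 to $7,000.
sd_old, sd_new = 5_000, 7_000

# In dollars: a 40% increase.
rise_in_dollars = pct_increase(sd_old, sd_new)

# In dollars squared: 25,000,000 -> 49,000,000, a 96% increase,
# even though the underlying change is the same.
rise_in_dollars_squared = pct_increase(sd_old**2, sd_new**2)

print(f"increase in dollars:         {rise_in_dollars:.0f}%")   # 40%
print(f"increase in dollars squared: {rise_in_dollars_squared:.0f}%")  # 96%
```

The general pattern: if volatility in dollars rises by a factor of r, the dollars-squared figure rises by a factor of r², so a headline number expressed in squared units will always dramatize the change.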
January 27, 2012
The massive victory of the Islamist parties in the Egyptian general elections received its official imprimatur last weekend, and the country appeared headed for a major constitutional tussle between the ruling Supreme Military Council and the emergent parliament.
Egypt announced that, after three rounds at the polls and a number of individual run-off elections, the main 498-member lower house of parliament, the People’s Assembly, which convened this week, will have 235 representatives of the Muslim Brotherhood and 121 from the Salafist al-Nour party and its affiliates. Together they will hold 71 percent of the seats (47.18 percent for the Brotherhood and 24.29 percent for al-Nour). The house will contain another ten “moderate” Islamists from the New Center Party. The centrist and traditional al-Wafd Party will have thirty-six members, and the liberal bloc will have thirty-three seats. The “Revolution Continues” party, representing the leaders of the Facebook and Twitter generation that featured so prominently in the demonstrations that ultimately toppled the old regime, won only 2 percent of the vote.
Given the nature of the gradual democratic takeover of the state by the Muslim Brothers, many observers see the victory of Hamas, the Palestinian offshoot of the Brotherhood, in the 2006 Palestinian general elections as the true herald of the revolutionary change in the Egyptian polity (and perhaps of the so-called Arab Spring in general, given its evident Islamist trajectory).
Fresh mass demonstrations are scheduled this week in Cairo’s Tahrir Square, marking the one-year anniversary of the demonstrations that overthrew the regime of Hosni Mubarak, who had ruled Egypt since 1981. The demonstrators likely will press the army to relinquish its hold on power and subordinate itself to the popular will, that is, to accept parliamentary oversight and control of its budget and operations. But many liberal Egyptians suspect that the Brotherhood and the army have already secretly struck a power-sharing deal that will sideline both the secularist liberals and the al-Nour Salafists. If so, the protests will be symbolic and pro forma and will pass quietly.
At the end of this week, Egypt will hold its first elections for parliament’s upper house, the Shura Council. After these are completed, the two houses are scheduled to set up a committee to formulate the country’s new constitution. The military, headed by Field Marshal Mohamed Hussein Tantawi, will likely seek to retain its independence from civilian control and possibly its actual control of the state. Elections for the presidency are scheduled for June. The Brotherhood announced months ago that it will not field a candidate from the party ranks—but, given its electoral success, there can be little doubt that it will either eventually put forward a candidate of its own or advance the cause of a proxy candidate of its choosing.
Observers expect the Muslim Brotherhood, which is likely to form a coalition government with the small centrist-secular parties rather than with its Islamist competitors from al-Nour, to focus in the coming months and years on sorting out Egypt’s internal problems—consolidating its hold on power, battling the flight of foreign investors, reducing unemployment, shoring up crumbling infrastructure and reviving foreign tourism. Thus, it probably will forgo its traditional foreign-policy agenda of breaking with the West and annulling the 1979 peace treaty with Israel. The Egyptian economy can ill afford the loss of the annual American foreign-aid subsidy of $1.5 billion…
Adrift on the Nile: The recent revolution that began in Tahrir Square has taken Egypt into uncharted waters
January 26, 2012
I WAS PRESENT in Warsaw, Berlin, Budapest, and Prague in 1989 when non-violent revolutions swept the Communists from power, creating a brand new model of regime change. I stood in Wenceslas Square as hundreds of thousands of people rattled their keys, unleashing an eerie, shimmering sound into the air, chanting, “Your time is up!” I had lived among the Czechs for a decade in the 1970s, and I felt the power of their relief as the hated regime slipped into history.
So, not surprisingly, I was intrigued by the instant media punditry comparing the bloodless revolutions in central Europe with the recent wave of Arab uprisings in the Middle East. Even on television, I could see similarities between Prague 1989 and Cairo 2011: the peacefulness of protesters; the prominent role played by young people; the sparkling displays of public eloquence and wit; the sudden release from fear and the rebirth of civic pride; the infectious jubilation when the regime was finally brought down. But I saw big differences as well.
In 1989, British historian Timothy Garton Ash, having a celebratory beer with Václav Havel, observed that in Poland it had taken ten years to overthrow the system, in Hungary ten months, and in East Germany ten weeks; Czechoslovakia would perhaps take ten days. He was simplifying, of course, yet his remark captured something of the truth of the moment: Soviet-style Communism was a unified system run, with some minor local variations, from Moscow, and its collapse overturned the old Cold War domino theory — the belief that if Communism were not contained militarily it would spread to other countries. The revolutions of 1989 marked the end of an era, and provided an occasion for joy and optimism to everyone who had lived so long in the shadow of nuclear Armageddon.
Even from my armchair in front of the television, I could see that the events in Tahrir Square were charged with a different energy and a different meaning. Without knowing much about the misery Hosni Mubarak had inflicted on his country, I could still feel the enormous, pent-up frustration of protesters who, day after day, pushed back against the police, braving tear gas, truncheons, armoured cars, rubber bullets, and buckshot, not to mention the stones, Molotov cocktails, and bullets unleashed against them by the regime’s thugs and sharpshooters. Hundreds died and many more were injured. The battle of Tahrir Square looked and felt like a real revolution.
Yet the outcome remained far from clear. Mubarak was gone, but he was instantly replaced by an interim military junta that promised to step down after elections later in the year. The military had allowed the revolution to take its course — one of the slogans in Tahrir Square was “The people and the army are one hand!” — but as a governing body it was ham-fisted and slow, and the popular trust it enjoyed at first soon began to fray. The 1989 revolutions had been swift and decisive, their outcomes never really in doubt; Egypt’s revolution appeared to be bogging down, and had succeeded only in comparison with those in Libya, Syria, Yemen, and Bahrain, where the violence continued unabated.
Meanwhile, less optimistic analogies had begun to surface. Drawing parallels to abortive revolutions that swept through Europe in 1848 implied that the Arab revolts were vulnerable to suppression, at least in the short run. Comparisons with the 1979 revolution in Iran suggested that they could lead to nasty Islamic theocracies across the region. The Communist countries of central Europe all had unified opposition movements that were almost like governments-in-waiting and enjoyed Western support, whereas the Arab Awakening had no such coherence and seemed to make many neighbouring countries wary, even fearful. I could understand why an absolute monarchy like Saudi Arabia might feel threatened, or why Israel might worry about the future of its relationship with a democratic Egypt. But why were so many pundits outside the Middle East worried? And why in Prague, of all places, were those who had been on the front lines in 1989 asking whether the Arabs were ready for democracy? Didn’t we believe, in general, that even an imperfect democracy was better than none? Or had that belief now become so battered that we no longer trusted it?
I wanted to learn more, which is how I found myself in Cairo in March, six weeks to the day after the fall of Mubarak.
TO A NEWCOMER, the Egyptian capital can feel overwhelming — overwhelmingly brown, overwhelmingly dusty, overwhelmingly noisy, and overwhelmingly crowded. During the day, the major roads and elevated highways are jammed with bleating, blaring bumper-to-bumper traffic that appears to obey no known rules. And yet, except in rush hour, vehicles move efficiently. Walking is an adventure, and merely crossing the road (there are no crosswalks and few traffic lights) can seem like an extreme sport. The secret, I discovered, is to be bold: make your intentions clear, step out into the flow of traffic, and wait for the cars to stop, slow down, or flow harmlessly around you as you make your way to the other side alive. This experience holds a lesson: In Egypt, not everything that appears chaotic or dangerous is necessarily chaotic or dangerous. Even in matters as basic as driving habits, there is an unwritten social contract everyone understands.
Two-thirds of Greater Cairo’s population, which is approaching 20 million, live in what are euphemistically called “informal areas,” tracts of densely crowded concrete and brick buildings, some many storeys high, tightly clustered along narrow streets and laneways without regard for plans or building codes or zoning bylaws, often without access to utilities or policing. The people who live and work in these areas are mostly poor, getting by on the equivalent of a few dollars a day. And yet these are not, strictly speaking, slums or ghettos, and the streets feel relatively safe.
In downtown Cairo, which is almost European in spirit and design, the main streets teem with life, especially after dark. Clusters of boisterous young men hang out on the sidewalks, while young women walk by, arm in arm, ignoring them, or pretending to. Most women cover themselves in public, usually with a hijab or head scarf — one of many signs that Islam has made inroads into what was once a more secular society. The amplified calls to prayer that punctuate the city’s din five times a day reinforce this impression. But, as their driving habits demonstrate, Egyptians have an ambiguous relationship with rules, both religious and secular. Many young women wear colourful, outrageously flamboyant hijabs, almost pharaonic in their puffed-up splendour, which seem intended to attract rather than discourage male attention. And while they also observe the diktat against visible flesh, they frequently wear tight-fitting jeans and long-sleeved sweaters that leave little to the imagination. (Sexual harassment is a serious problem in Egypt; I was told that as more women cover themselves, the incidence of assaults has actually increased.)
I heard a joke in Cairo that encapsulated the Egyptian habit of flouting the law: “We pretend to obey the rules, and they pretend to enforce them.” It reminded me of one they told in central Europe before the fall of Communism: “We pretend to work, and they pretend to pay us.” Put side by side, the two jokes help to explain the differences between the two societies on the cusp of revolution: the anarchic vibrancy of Egypt versus the homogeneous monotony of central Europe.
When the former Polish dissident Adam Michnik contemplated the devastation that remained after decades of Communism, he came up with a memorable metaphor: Communism turned an aquarium of living fish into fish soup, he said. Our challenge is to turn the fish soup back into an aquarium of living fish.
When the Communists took power in Eastern Europe after World War II, they adopted the Soviet model and set about destroying the traditional institutions of civil society. When they were done, virtually nothing was left standing: no private property, no market economy, no independent businesses; the media entirely under state control; the churches, Catholic and Protestant, eviscerated. A single political party called the shots, and a massive security apparatus backed it all up. This was Michnik’s fish soup, and the problem confronting the new leaders after the revolution was how to bring their societies back to life. Yet they faced the future with some important assets: a high literacy rate, no real poverty, and ex-leaders who had not robbed the country blind, mainly because the centrally controlled economy produced little worth stealing (“We pretend to work, and they pretend to pay us”).
Egypt was still a colourful aquarium, despite the efforts of Gamal Abdel Nasser, the country’s first modern military dictator, to make Soviet-style fish soup of it. Anwar al-Sadat, his successor, attempted to remedy Nasser’s excesses by opening up the economy. So, in turn, did Hosni Mubarak, and today the results can be seen everywhere. Upscale Cairo neighbourhoods boast opulent neon malls selling Western clothing, cars, and services; international corporations like FedEx and Vodafone have put down roots; and Tahrir Square’s most prominent commercial landmark is a KFC outlet.
Cairo’s traditional economy seemed vigorous as well. In the narrow streets beyond the downtown core, I saw block after block of tiny workshops and wholesale outlets producing and selling plastic piping, car repair tools, packing materials, belt buckles, shoe parts, picture frames, bolts of cloth, bales of raw cotton, and on and on — all of it supporting a cottage industry economy that apparently operates beyond regulation (“We pretend to obey the rules, and they pretend to enforce them”). Judging from the number of newspapers and magazines, a lively press exists in Cairo, livelier now that censorship has been relaxed and pro-Mubarak editors have been let go. The judiciary, I was told, remains relatively independent, and the universities — once strictly monitored by the government — show signs of rousing themselves to a new, autonomous life: the American University in Cairo has just launched a new periodical called the Cairo Review of Global Affairs, devoting its inaugural issue to “The Arab Revolution.” Scholars at Al-Azhar University, whose pronouncements carry an almost papal authority in the Sunni Muslim world, have been calling on Egypt to establish “a democratic state based on a constitution that satisfies all Egyptians.”…
Rediscovering Justice: “If conservatives are to speak to the nation’s longing for a fuller notion of justice, they will have to offer a better and truer understanding of man”
January 26, 2012
Americans are in a disagreeable mood. Polls show pessimism about the country’s future at record highs, trust in government at record lows, and a deep distaste for political incumbents of both parties. It is tempting to attribute this discontent to the economy, and surely the jobless rate has much to do with Americans’ disquiet. But more than unemployment troubles America. Voters have been telling pollsters for years, well before the epic economic collapse, that they believe the country is far off track. It is not just that middle- and working-class Americans cannot seem to move ahead or that too many schools are failing. It is not only that we seem persistently unable to face our ruinous budget deficit or reform our ill-designed entitlement system.
Americans increasingly feel there is a profound and widening distance between our most cherished ideals and the reality of our national life. In some fundamental way, Americans believe, the nation is disordered. Barack Obama’s promise to address that disorder — to practice a reformist, even transformative politics — is what got him elected three years ago. Instead, Obama pursued an agenda of government aggrandizement. Americans want that aggrandizement reversed, but they want more. They want to put their country back in order and make society reflect again their deepest moral commitments, to recover a shared sense of belonging and purpose.
We used to have a word to describe the order we long for: justice. The West’s greatest thinkers, no less than its major religious traditions, have insisted again and again on the centrality of justice. “Justice is the end of government,” James Madison wrote in Federalist No. 51. “It is the end of civil society.” Madison was echoing Aristotle, who argued that justice is the purpose of political community. Though today we often think of justice only in reference to crime and punishment, Aristotle understood that there is far more to justice than that: He contended that justice means arranging society in the right way, in accord with how humans are made and meant to live. The just society is one that permits its citizens to exercise their noblest gifts, to reach their highest potentials, to flourish. Thus while all partnerships aim at some good, Aristotle taught, the political partnership “aims at the most authoritative good of all,” at justice.
We no longer think of justice in this manner, partly because for the better part of a century the term has been hijacked by the left. In the last hundred years, justice became oddly synonymous with labor unions and planned economies and then the anti-American radicalism of the 1960s. It is now too often taken to describe egalitarian economics. But the left’s notion of justice has turned out to be both shallow and calamitous. The left’s agenda has not delivered justice, and indeed, it has blinded us to the fact that justice is what we lack.
While liberals advocated their distorted notion of justice, conservatives abandoned the concept altogether, instead emphasizing freedom and independence in contrast to the left’s egalitarianism. Freedom and independence are valuable things, indispensable in fact, but they are worthwhile precisely because they are just — they are right for the human person. There can be no true freedom apart from a just society. And it will no longer do for conservatives to advocate the former without the latter.
Conservatives must do more than promise to downsize government and let each individual go his own way. They must offer a better vision of a better society, a vision of political justice, with an agenda to match. This is how conservatives can speak to the country’s deepest needs, and this is how conservatives can summon the nation again to its highest potential. For if justice is the supreme achievement of a free people, to call Americans to justice is to call them to greatness.
JUSTICE AND GOVERNMENT
To reclaim the quest for justice, conservatives must first clarify for themselves what justice really means. They can start by rejecting the left’s wrongheaded view.
The liberal vision of justice can be traced to the French Revolution, with its cry of liberté, égalité, fraternité. The middle term was the decisive one: The revolutionaries insisted on the absolute equality of citizens as the touchstone of a just society. No distinctions of rank or wealth were to be permitted, in theory anyway, because no such distinctions were natural to man. The revolutionaries distrusted civil society, with its myriad little groups and private associations, as a redoubt of inequality and “unnatural” distinction. Fraternité — brotherhood — was to be achieved instead through the state, which would put every citizen on equal footing and provide a source of common identity. Only then would liberty, too, be possible.
A century later, Karl Marx gave the Jacobins’ égalité a distinctly materialist turn. Human beings are the products of their material conditions, he said; human identity is determined by the means of production. For Marx, an equality of goods and things was the key to bettering mankind.
American liberals are neither Jacobins nor Marxists, but some of the claims of both figure prominently in contemporary liberal thought. For the modern American left, justice is indeed most basically about equality. And equality is about material things. In his famous book A Theory of Justice, Harvard philosophy professor John Rawls contended that each individual is entitled to the same basic goods as every other. New York University philosophy professor Ronald Dworkin, another liberal icon, has similarly argued that a just society will afford its citizens “equality of resources.” When, during the 2008 presidential campaign, Barack Obama told Joe the Plumber that part of government’s job was to “spread the wealth around,” he was reaching for this same egalitarian idea.
But why exactly is every citizen entitled to the same basic level of material well-being? Here modern liberals offer a conventionally 21st-century answer: Every individual, they say, has the right to be happy. This equal right to happiness, where happiness is understood as individual satisfaction, is the ultimate source of modern liberalism’s commitment to individual equality. Because every person has a right to pursue what brings him pleasure, every person deserves the resources to make that pursuit possible. The business of government, therefore, is to deliver material equality. Liberals champion the state as the agent of equality, the state as the source of community, and the state as the sponsor of individual happiness.
This leftist vision of justice has proved enormously influential — but ultimately empty. Enshrining individual satisfaction as the end goal of life has left our public dialogue myopic and self-centered. It has impoverished our understanding of the common good by suggesting that all we as citizens have in common is the right to pursue our individual ends. In the name of guaranteeing equality, it has fostered dependency. In the name of individual choice, it has hollowed out civil society, replacing voluntary associations with the state.
In short, the left’s view of justice has led directly to our present crisis. Edmund Burke’s verdict on the French revolutionaries in 1790 is a fitting epithet for modern Progressives and liberals: They “are so taken up with their theories of the rights of man, that they have totally forgot his nature.”
If conservatives are to speak to the nation’s longing for a fuller notion of justice, they will have to offer a better and truer understanding of man. They will need to remember the ancients’ dictum that the just society is one in accord with human nature. The liberal account of justice pays virtually no attention to individuals’ uniquely human talents and capacities, but these are precisely the key to justice. Despite the innumerable differences between one individual and another, there is a fairly definite set of activities in which most people say they find deep fulfillment: working, inventing, creating, building, serving, teaching, raising a family. All these pursuits have something in common. They all involve the application of human effort to a sphere of the world in order to improve it. The Biblical tradition calls this “exercising dominion,” as in the opening of Genesis, when God gives humans authority over the created order with the responsibility to tend and care for it. In more secular terms, we might call it governing.
To govern is to exert a guiding influence on something or someone else, to manage or direct or shape things. We usually think of it in a political context, but there is nothing inherently political about governing. It can describe any responsible, constructive exercise of care or authority. And understood in this way, it fairly describes many of man’s highest capacities. When an entrepreneur takes an idea and turns it into a business, he is marshaling his talents to build something new; he is governing. When a composer drafts a concerto, he is applying his gifts to the world to create beauty where it did not exist before; he is governing. When a teacher trains a student or a parent rears a child, he directs the child for the child’s improvement — he governs…
January 26, 2012
Walking into McCormick Place, Chicago’s half-hangar, half-labyrinth convention center, I looked at the schedule to find that I had just missed “Canadians Do Cremation Right.” The 130th National Funeral Directors Conference was underway; held each year in a different city, the conference brings together funeral directors from across the country for three days of presentations, trade talk, awards and camaraderie. After shaking off my initial disappointment at having missed the Canadian talk, I scanned the remaining workshops. Passing on “Marketing Your Cemetery: Connecting With Your Community” and “Managing Mass Fatality Situations,” I circled “The Difference Is In The Details,” an embalming workshop.
The small film company I sometimes work for was planning a feature on new trends in funerals, and I had flown out for the weekend to try to meet some of the younger, hipper funeral directors at the conference. One of these was Ryan, a round man with a wide smile and an impeccable hair-part, whose car, he told me, has a bumper sticker that says “Let’s Put The ‘Fun’ Back In Funeral.” He started his career as a funeral director, but had since moved into the lucrative field of “death care industry” consultation, where he works with funeral directors on ways to expand their businesses. In one of our conversations, he told me, “The worst thing I’ve heard a funeral director say is ‘we’ve always done it this way.’” Later, I told him my plan to attend that evening’s “Funeral Directors Under 40: A Night on the Town” event. Without missing a beat, he lowered his voice and said, “Funeral directors are notoriously heavy drinkers. There will definitely be some hook-ups.”
The funeral industry is in the midst of a transition of titanic proportions. America is secularizing at a rapid pace, with almost 25% of the country describing itself as unchurched. Americans, embracing a less religious view of the afterlife, are now asking for a “spiritual” funeral instead of a religious one. And cremation numbers are up. Way up. In liberal, secular states, particularly in the Pacific Northwest, cremation rates have steadily increased to more than half of dispositions, up from the low single digits in 1990. The rest of the nation has also experienced steady gains in cremation since 2000 (except in the Bible Belt, where cremation rates remain relatively low). The rate of cremation has skyrocketed as Americans back away from the idea that Jesus will resurrect them straight from the grave. And so in the past twenty years, funeral directors have had to transform from presenters of a failed organism, where the sensation of closure is manifest in the presence of the deceased body, into arbiters of the meaning of a secular life that has just been reduced to ash. Reflecting this trend, this year’s NFDA conference was, for the first time in its history, held jointly with the Cremation Association of North America (CANA).
Talking with funeral directors at the conference, I began to realize the scope of the crisis spurred by the rise of cremation. As one former funeral director put it, “If the family wanted a cremation, we’d say ‘That’ll be $595,’ hand them the urn and show them the door. Not anymore though.” The industry is scrambling for ways to attach value-added services to cremation and remain solvent.
This tension about how best to innovate was in evidence at the first presentation I attended, titled “How To Step Up Your Game.” The presenter worked for a consulting firm that specialized in business strategies and management—the funeral industry was his particular area of expertise. He launched into his talk with a story about a recent trip to Disney World with his daughter. While walking through the park, he realized how much the funeral industry could learn from the place. At Disney World, every interaction had been scripted and rehearsed, down to the greetings from the custodians. Experiences were controlled. Likewise, he said, every funeral should offer the same quality of experience for everyone, whether the deceased was cremated or displayed in an open casket. If, say, one customer was having an open-casket service with a priest and an organist, there should be a corresponding service for the family of someone, possibly secular, who had been cremated. If priests are no longer always on hand to say platitudes over the dead, funeral directors will have to develop a corresponding basic, secular service to stand in as a reverent farewell. Thus they would take a much larger role in the memorial, acting more like mainstream event planners and offering such amenities as video tributes, arranged music, and other trappings of the new-age send-off.
“No more outsourcing the healing to ministers, because that isn’t really going to work anymore,” the presenter continued. Religion answered the question of authenticity, the sense that the memorial was genuine and prescribed. The minister, God’s shepherd, was on hand to see the soul to heaven. But in a society that has grown suspicious of religion and distant from it, this is no longer sufficient. Now it’s up to the funeral directors to provide that sense of authenticity, of closure, a way to deal with the impossibility of understanding death. The presenter continued with a slideshow of forward-looking funeral homes: huge windows with sunlight streaming in, glossy ceramic tables holding both the urn and catered health food—they looked not unlike high-end yoga studios. As he clicked back and forth between a picture of an old-fashioned, stuffy, sunless viewing room, replete with heavy velvet curtains and faux-gold candelabras, and the new, health-club-reminiscent Remembrance Room, it became clear: the funeral industry is being gentrified. I looked around the room. The audience was incredibly diverse, which was true of the conference overall, and which makes sense: every community has its own funeral home, each with its own loyal following, its own special services that a cross-town rival doesn’t offer. But here they were, being told to act more like Disney World, and everyone was taking notes…