August 18, 2011
Let’s go back to the beginning — all the way to Adam and Eve, and to the question: Did they exist, and did all of humanity descend from that single pair?
According to the Bible (Genesis 2:7), this is how humanity began: “The Lord God formed man of the dust of the ground, and breathed into his nostrils the breath of life; and man became a living soul.” God then called the man Adam, and later created Eve from Adam’s rib.
Polls by Gallup and the Pew Research Center find that four out of 10 Americans believe this account. It’s a central tenet for much of conservative Christianity, from evangelicals to confessional churches such as the Christian Reformed Church.
But now some conservative scholars are saying publicly that they can no longer believe the Genesis account. Asked how likely it is that we all descended from Adam and Eve, Dennis Venema, a biologist at Trinity Western University, replies: “That would be against all the genomic evidence that we’ve assembled over the last 20 years, so not likely at all.”
Researching The Human Genome
Venema says there is no way we can be traced back to a single couple. He says with the mapping of the human genome, it’s clear that modern humans emerged from other primates as a large population — long before the Genesis time frame of a few thousand years ago. And given the genetic variation of people today, he says scientists can’t get that population size below 10,000 people at any time in our evolutionary history.
To get down to just two ancestors, Venema says, “You would have to postulate that there’s been this absolutely astronomical mutation rate that has produced all these new variants in an incredibly short period of time. Those types of mutation rates are just not possible. It would mutate us out of existence.”
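(A standard population-genetics back-of-envelope — not part of the NPR piece — shows the kind of arithmetic Venema is invoking. Under the usual coalescent model, expected nucleotide diversity in a diploid population scales with effective population size and mutation rate; the figures plugged in below are rough textbook values, not numbers from the article.)

% Coalescent expectation: nucleotide diversity pi ~ 4 * N_e * mu,
% so the effective population size can be estimated as N_e ~ pi / (4 * mu).
\[
N_e \;\approx\; \frac{\pi}{4\mu} \;\approx\; \frac{10^{-3}}{4 \times \left(2.5 \times 10^{-8}\right)} \;=\; 10{,}000
\]
% Assumed values (textbook approximations, not from the article):
% pi  ~ 0.001          -- humans differ at roughly 1 base in 1,000
% mu  ~ 2.5e-8         -- mutations per base pair per generation

Under those rough inputs, the observed diversity of living humans is what a long-term breeding population on the order of 10,000 would produce; squeezing the same diversity out of a single founding couple would require the "astronomical" mutation rates Venema describes.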
Venema is a senior fellow at the BioLogos Foundation, a Christian group that tries to reconcile faith and science. The group was founded by Francis Collins, an evangelical and the current head of the National Institutes of Health, who, because of his position, declined an interview.
And Venema is part of a growing cadre of Christian scholars who say they want their faith to come into the 21st century. Another one is John Schneider, who taught theology at Calvin College in Michigan until recently. He says it’s time to face facts: There was no historical Adam and Eve, no serpent, no apple, no fall that toppled man from a state of innocence.
“Evolution makes it pretty clear that in nature, and in the moral experience of human beings, there never was any such paradise to be lost,” Schneider says. “So Christians, I think, have a challenge, have a job on their hands to reformulate some of their tradition about human beginnings.”
‘Fundamental Doctrines Of The Christian Faith’
To many evangelicals, this is heresy.
“From my viewpoint, a historical Adam and Eve is absolutely central to the truth claims of the Christian faith,” says Fazale Rana, vice president of Reasons To Believe, an evangelical think tank that questions evolution. Rana, who has a Ph.D. in biochemistry from Ohio University, readily admits that small details of Scripture could be wrong.
“But if the parts of Scripture that you are claiming to be false, in effect, are responsible for creating the fundamental doctrines of the Christian faith, then you’ve got a problem,” Rana says.
Rana and others believe in a literal, historical Adam and Eve for many reasons. One is that the Genesis account makes man unique, created in the image of God — not a descendant of lower primates. Second, it tells a story of how evil came into the world, and it’s not a story in which God introduced evil through the process of evolution, but one in which Adam and Eve decided to disobey God and eat the forbidden fruit…
August 18, 2011
I have some good news—kick back, relax, enjoy the rest of the summer, stop worrying about where your life is and isn’t heading. What news? Well, on 24th September, we can officially and definitively declare that postmodernism is dead. Finished. History. A difficult period in human thought over and done with. How do I know this? Because that is the date when the Victoria and Albert Museum opens what it calls “the first comprehensive retrospective” in the world: “Postmodernism—Style and Subversion 1970-1990.”
Wait, I hear you cry. How do they know? And what was it? Postmodernism—I didn’t understand it. I never understood it. How can it be over?
You are not alone. If there’s one word that confuses, upsets, angers, beleaguers, exhausts and contaminates us all, then it is postmodernism. And yet, properly understood, postmodernism is playful, intelligent, funny and fascinating. From Grace Jones to Lady Gaga, from Andy Warhol to Gilbert and George, from Paul Auster to David Foster Wallace, its influence has been everywhere and continues. It has been the dominant idea of our age.
So what was it? Well, the best way to begin to understand postmodernism is with reference to what went before: modernism. Unlike, say, the Enlightenment or Romanticism, postmodernism (even as a word) summons up the movement it intends to overturn. In this way, postmodernism might be seen as the delayed germination of an older seed, planted by artists like Marcel Duchamp, during modernism’s high noon of the 1920s and 1930s. (Seen in this light, the start-date that the V&A offers for postmodernism—1970—is quite late.)
Thus, if modernists like Picasso and Cézanne focused on design, hierarchy, mastery, the one-off, then postmodernists, such as Andy Warhol and Willem de Kooning, were concerned with collage, chance, anarchy, repetition. If modernists such as Virginia Woolf relished depth and metaphysics, then postmodernists such as Martin Amis favoured surface and irony. As for composers, modernists like Béla Bartók were hieratic and formalist, while postmodernists like John Adams were playful and interested in deconstruction. In other words, modernism preferred connoisseurship, tended to be European and dealt in universals. Postmodernism preferred commodity and America, and embraced as many circumstances as the world contained.
In the beginning, postmodernism was not merely ironical, merely gesture, some kind of clever sham, a hotchpotch for the sake of it. It became these things later in lesser works by lesser artists: Michael Nyman, Takashi Murakami, Tracey Emin and Jonathan Safran Foer. Rather, in the beginning artists, philosophers, linguists, writers and musicians were bound up in a movement of great force that sought to break with the past, and which did so with great energy. A new and radical permissiveness was the result. Postmodernism was a high-energy revolt, an attack, a strategy for destruction. It was a set of critical and rhetorical practices that sought to destabilise the modernist touchstones of identity, historical progress and epistemic certainty.
Above all, it was a way of thinking and making that sought to strip privilege from any one ethos and to deny the consensus of taste. Like all the big ideas, it was an artistic tendency that grew to take on social and political significance. As Ihab Hassan, the Egyptian-American philosopher, has said, there moved through this (our) period “a vast will to un-making, affecting the body politic, the body cognitive, the erotic body, the individual psyche, the entire realm of discourse in the west.”
Architecture is perhaps the easiest way to see some of these ideas in practice. In London, the Sainsbury Wing of the National Gallery (1991) is typical: the classical facets all stand in counterpoint to one another, offsetting and undermining and re-emphasising other more vernacular features like the gaping warehouse-door style entrances and the high non-windows; some of the columns are visible from one direction only; there’s redundancy; everything is over-determined and mannered; styles clash, mix, mingle.
The most contentious example of postmodern design, however, is the AT&T building in New York, completed in 1984. The story of its reception is symbolic. In essence, the AT&T was considered a betrayal of everything positive and progressive that had been achieved since the war. It was a dissent from the implicit modernist notion that we would all march forward together into those bright and boxy skyscrapers glinting so functionally in the sun. What was this classical pediment with a circle cut out of the centre? What were the vast arched entryways and the pink granite detailing? The architect in question was the great Philip Johnson, the same Philip Johnson, it should be remembered, who had previously been America’s most celebrated champion of modernism. Johnson died in 2005, but I met his artistic collaborator, Judith Grinberg, the woman who worked on the original drawings with him, and she recounted the impact of the building as we walked through its mighty halls.
“The terrible roar of objection centred on the top—the broken pediment,” she explained. “They hated it. There were people fighting each other in the pages of the press: aggressive, personal, vindictive, often nothing to do with architecture. Some people petitioned. Others denounced us. A lot of people attacked the authorities that had allowed construction… it went on and on…”
August 18, 2011
The Great Recession has accelerated the hollowing-out of the American middle class. And it has illuminated the widening divide between most of America and the super-rich. Both developments herald grave consequences. Here is how we can bridge the gap between us.
IN OCTOBER 2005, three Citigroup analysts released a report describing the pattern of growth in the U.S. economy. To really understand the future of the economy and the stock market, they wrote, you first needed to recognize that there was “no such animal as the U.S. consumer,” and that concepts such as “average” consumer debt and “average” consumer spending were highly misleading.
In fact, they said, America was composed of two distinct groups: the rich and the rest. And for the purposes of investment decisions, the second group didn’t matter; tracking its spending habits or worrying over its savings rate was a waste of time. All the action in the American economy was at the top: the richest 1 percent of households earned as much each year as the bottom 60 percent put together; they possessed as much wealth as the bottom 90 percent; and with each passing year, a greater share of the nation’s treasure was flowing through their hands and into their pockets. It was this segment of the population, almost exclusively, that held the key to future growth and future returns. The analysts, Ajay Kapur, Niall Macleod, and Narendra Singh, had coined a term for this state of affairs: plutonomy.
In a plutonomy, Kapur and his co-authors wrote, “economic growth is powered by and largely consumed by the wealthy few.” America had been in this state twice before, they noted—during the Gilded Age and the Roaring Twenties. In each case, the concentration of wealth was the result of rapid technological change, global integration, laissez-faire government policy, and “creative financial innovation.” In 2005, the rich were nearing the heights they’d reached in those previous eras, and Citigroup saw no good reason to think that, this time around, they wouldn’t keep on climbing. “The earth is being held up by the muscular arms of its entrepreneur-plutocrats,” the report said. The “great complexity” of a global economy in rapid transformation would be “exploited best by the rich and educated” of our time.
Kapur and his co-authors were wrong in some of their specific predictions about the plutonomy’s ramifications—they argued, for instance, that since spending was dominated by the rich, and since the rich had very healthy balance sheets, the odds of a stock-market downturn were slight, despite the rising indebtedness of the “average” U.S. consumer. And their division of America into only two classes is ultimately too simple. Nonetheless, their overall characterization of the economy remains resonant. According to Gallup, from May 2009 to May 2011, daily consumer spending rose by 16 percent among Americans earning more than $90,000 a year; among all other Americans, spending was completely flat. The consumer recovery, such as it is, appears to be driven by the affluent, not by the masses. Three years after the crash of 2008, the rich and well educated are putting the recession behind them. The rest of America is stuck in neutral or reverse.
Income inequality usually shrinks during a recession, but in the Great Recession, it didn’t. From 2007 to 2009, the most-recent years for which data are available, it widened a little. The top 1 percent of earners did see their incomes drop more than those of other Americans in 2008. But that fall was due almost entirely to the stock-market crash, and with it a 50 percent reduction in realized capital gains. Excluding capital gains, top earners saw their share of national income rise even in 2008. And in any case, the stock market has since rallied. Corporate profits have marched smartly upward, quarter after quarter, since the beginning of 2009.
Even in the financial sector, high earners have come back strong. In 2009, the country’s top 25 hedge-fund managers earned $25 billion among them—more than they had made in 2007, before the crash. And while the crisis may have begun with mass layoffs on Wall Street, the financial industry has remained well shielded compared with other sectors; from the first quarter of 2007 to the first quarter of 2010, finance shed 8 percent of its jobs, compared with 27 percent in construction and 17 percent in manufacturing. Throughout the recession, the unemployment rate in finance and insurance has been substantially below that of the nation overall.
It’s hard to miss just how unevenly the Great Recession has affected different classes of people in different places. From 2009 to 2010, wages were essentially flat nationwide—but they grew by 11.9 percent in Manhattan and 8.7 percent in Silicon Valley. In the Washington, D.C., and San Jose (Silicon Valley) metro areas—both primary habitats for America’s meritocratic winners—job postings in February of this year were almost as numerous as job candidates. In Miami and Detroit, by contrast, for every job posting, six people were unemployed. In March, the national unemployment rate was 12 percent for people with only a high-school diploma, 4.5 percent for college grads, and 2 percent for those with a professional degree.
Housing crashed hardest in the exurbs and in more-affordable, once fast-growing areas like Phoenix, Las Vegas, and much of Florida—all meccas for aspiring middle-class families with limited savings and education. The professional class, clustered most densely in the closer suburbs of expensive but resilient cities like San Francisco, Seattle, Boston, and Chicago, has lost little in comparison. And indeed, because the stock market has rebounded while housing values have not, the middle class as a whole has seen more of its wealth erased than the rich, who hold more-diverse portfolios. A 2010 Pew study showed that the typical middle-class family had lost 23 percent of its wealth since the recession began, versus just 12 percent in the upper class.
The ease with which the rich and well educated have shrugged off the recession shouldn’t be surprising; strong winds have been at their backs for many years. The recession, meanwhile, has restrained wage growth and enabled faster restructuring and offshoring, leaving many corporations with lower production costs and higher profits—and their executives with higher pay.
Anthony Atkinson, an economist at Oxford University, has studied how several recent financial crises affected income distribution—and found that in their wake, the rich have usually strengthened their economic position. Atkinson examined the financial crises that swept Asia in the 1990s as well as those that afflicted several Nordic countries in the same decade. In most cases, he says, the middle class suffered depressed income for a long time after the crisis, while the top 1 percent were able to protect themselves—using their cash reserves to buy up assets very cheaply once the market crashed, and emerging from crisis with a significantly higher share of assets and income than they’d had before. “I think we’ve seen the same thing, to some extent, in the United States” since the 2008 crash, he told me. “Mr. Buffett has been investing.”
“The rich seem to be on the road to recovery,” says Emmanuel Saez, an economist at Berkeley, while those in the middle, especially those who’ve lost their jobs, “might be permanently hit.” Coming out of the deep recession of the early 1980s, Saez notes, “you saw an increase in inequality … as the rich bounced back, and unionized labor never again found jobs that paid as well as the ones they’d had. And now I fear we’re going to see the same phenomenon, but more dramatic.” Middle-paying jobs in the U.S., in which some workers have been overpaid relative to the cost of labor overseas or technological substitution, “are being wiped out. And what will be left is a hard and a pure market,” with the many paid less than before, and the few paid even better—a plutonomy strengthened in the crucible of the post-crash years…
Read it all.
August 18, 2011
This cartoon, originally published at Town Hall, has been posted with express written permission.