November 30, 2010
This weekend we got another fat load of WikiLeaks, based on purloined diplomatic cables to and from the U.S. State Department. As happened when Julian Assange’s muckraking endeavor leaked U.S. military data from Iraq earlier this year, conservatives are outraged, and some call, as before, for the expeditious arrest of Assange, or fantasize about his assassination.
Rightbloggers generally take a two-pronged approach to the leaks: They believe the new document dump is an unpardonable breach of U.S. security — except to the extent that it may be used to denigrate the Obama Administration, in which case they feel it deserves wider dissemination.
It’s not as if rightbloggers have been alone in denouncing Wikileaks, as mainstream media outlets from the New York Times on down have attacked Assange from all directions — while sopping up his revelations on the basis of their newsworthiness.
But that is an old, time-honored form of journalistic hypocrisy: Using hot news to draw readers with one hand, and tut-tutting its shameful provenance with the other. Rightbloggers have added a few new wrinkles to the game.
Back when Assange leaked the Iraq War data, for example, they dismissed the revelations of bad behavior by our Iraqi allies (“they appear to illustrate the inherent — and foreseeable — problems with the nation-building strategy we pursued in Iraq and are still pursuing in Afghanistan,” soothed The American Spectator), and cheerfully plucked the bits that supported their own interests.
The documents suggested to them that a previous, speculative accounting by The Lancet had overestimated real Iraqi casualties of the war, and that the discovery of some old chemical weapons proved that Saddam had WMDs after all. Counter-arguments could be made that The Lancet was measuring different kinds of casualties than the leaked documents addressed, and that the discovered chemical weapons did not constitute a real threat to the United States (“Later investigation revealed those contents to be vitamins”). But for rightbloggers the message was clear: “… the two biggest scoops from the latest document dump are that the infamous Lancet study was bogus, and that WMDs were found in Iraq in quantity.”
They apparently thought Assange had made these revelations by accident or out of self-sabotage, as he was of the “Left” and thus was leaking on his own cause. “I delight in the unintended consequence Assange’s revelations has produced,” said Melanie Morgan. “It seems to be the Left contradicting itself in the propaganda arena,” said Right Pundits. “The WikiLeaksters seem to have inadvertently done history a bit of a favor in their obsession,” said NewsBusters, in dispelling “leftist folklore.”
None of this altered their feeling that by leaking this info Assange was aiding the enemy, and possibly guilty of murder.
“Gosh, isn’t it nice that the enemy will be able to identify Iraqis who died by name and whose side they were fighting on, so they can go after their families, either to kill them or recruit them, depending on the circumstances?” said BizzyBlog. “What a guy this Mr. Assange is.” “Julian Assange: Jerkoff troop killer,” wrote The North Star National.
National Review‘s Jonah Goldberg asked, “Why wasn’t Assange garroted in his hotel room years ago?” Goldberg asserted that the leaks were “going to get people killed, including brave Iraqis and Afghans who’ve risked their lives and the lives of their families to help us.” Nonetheless, he lamented, “Even if the CIA wanted to take him out, they couldn’t without massive controversy. That’s because assassinating a hipster Australian Web guru as opposed to a Muslim terrorist is the kind of controversy no official dares invite.”
(Goldberg tried to hop out of his own overheated logic train at the end — “Ultimately, I don’t expect the U.S. government to kill Assange, but I do expect them to try to stop him” — and complained, when called out on his homicidal fantasy, that “there’s nothing in the quote at Balloon Juice to justify the claim I call for [Assange's] murder.” To shore up his position, he challenged a writer at Gawker to a fistfight.)
Last weekend the diplomatic leaks were released, and with them came the usual calls for Assange’s death and/or detention. “Julian Assange, Why is He Still Breathing?” asked Paladin’s Page. “Assange should be looking at the inside of a container on a ship doing lazy racetracks around the Indian Ocean,” said Blackfive. “I won’t think twice if Julian Assange meets the cold blade of an assassin,” said Donald Douglas. Etc.
The Obama Administration denounced the leaks but, having not the stones to send a cold-bladed assassin to preempt Assange, failed to prevent them, which rightbloggers declared proof of the Kenyan Pretender’s malfeasance or worse…
November 30, 2010
For the millions of Americans who opposed the war in Iraq, including Barack Obama, Afghanistan was the good war—“The War We Need to Win,” as candidate Obama titled a key foreign-policy speech he gave in August 2007. Iraq, Obama said, was a sickeningly misguided gift to Osama bin Laden: “a U.S. occupation of undetermined length, at undetermined cost, with undetermined consequences.” Iraq posed no threat to American national security; Afghanistan and Pakistan did. Obama vowed to wind down the war we didn’t need to win in order to ramp up the one we did. He was elected president for many reasons, but that pledge was among the most important.
Today, with the twelve-month review of Obama’s strategy scheduled for the coming weeks, the war in Afghanistan is his—and it doesn’t feel very good. What it feels like, increasingly, is Vietnam, especially to people who formed their views of American military power, and indeed of America itself, in opposition to the Vietnam War. Obama has poured in 50,000 more troops, at a cost of about $100 billion a year. Half of the 1,400 American combat deaths in Afghanistan have occurred since Obama became president. American lives and treasure seem to be disappearing into the quicksand of a country governed by a corrupt regime whose indifference to the public good fuels the insurgency the U.S. is seeking to repress. Bulletins from an optimistic commanding general about enemy body counts and liberated villages fill us with hope for a moment—until we read the dismal news accounts from the front. The loudest voice in favor of pushing on, mocking the advocates of phased withdrawal, belongs to John McCain, the gung ho Vietnam vet.
The comparison is so painfully obvious and has been made so persistently, at least on the left, that we owe it to ourselves to think hard about the stakes in Afghanistan. The central lesson many Americans took away from Vietnam, and from the proxy wars the U.S. fought all over the world during the Cold War, was that our political leaders exaggerated, or even fabricated, the stakes. LBJ said that if Vietnam fell, the rest of Asia would fall with it. President Reagan said that if we didn’t stand up to the Sandinistas in Nicaragua, Central America would go Communist. They were wrong; the domino theory was a red herring. But what about Afghanistan? In the West Point speech in December 2009 in which he announced his plans to send 30,000 more troops, Obama asserted that the border region of Afghanistan and Pakistan “is the epicenter of the violent extremism practiced by Al Qaeda,” and thus the threat to American national security “will only grow if the region slides backward and Al Qaeda can operate with impunity.” White House officials say that the upcoming review will not lead to any change of course. U.S. and NATO officials have agreed that troops will fully hand off combat duties to Afghan forces only by the end of 2014, and even that is not a hard date. If Obama is right about the stakes, then he may be right about the strategy. Or is he hyping the danger, the way LBJ and Reagan—and George W. Bush—once did? And even if the fight really does matter, is victory, or however we choose to define success, even possible?
Let’s back off for a moment to consider how liberals in America have come to think about war. Woodrow Wilson was the first American president faced with the challenge of persuading the American people to fight a war against an enemy that did not directly menace our territory. At the time, pacifism was virtually the default position of American liberals. Blood-and-thunder Republicans like Teddy Roosevelt were eager to fight the Hun; Wilson’s “base,” as we would say today, was not. Before a joint session of Congress on April 2, 1917—in one of the great speeches of American history—Wilson argued that German submarine warfare had thrown down a challenge “to all mankind,” and that the U.S. must respond not out of “revenge or the victorious assertion of the physical might of the nation, but only the vindication of right.” Wilson appealed to idealism as well as to an idealistic conception of America’s national security: German autocracy and bellicosity, he said, undermined the world order America was seeking to build. Thus Wilson’s ringing declaration: “The world must be made safe for democracy.”
It took another generation for U.S. interests to become so global, and for advances in technology to bring the world so close together, that another war halfway across the world could be seen as an immediate and dire threat to American national security. After Pearl Harbor, FDR didn’t have to reissue Wilson’s appeal to high-mindedness; Americans left and right united behind a threat to their way of life…
Why was Germany being so intractable? Dan Fried even traveled to Berlin to hand deliver proposals from Washington — and was snubbed. Every attempt by the US special envoy to coerce Germany into taking Guantanamo detainees seemed predestined to fail. German Foreign Minister Wolfgang Schäuble was “very skeptical,” US Ambassador Philip Murphy cabled back home in frustration.
The Americans had similar problems with several countries. In September 2009, US President Barack Obama was keen to finally fulfill his promise to close the Guantanamo detention center in Cuba and send all the remaining prisoners to destinations around the globe. But nobody wanted them — neither his countrymen nor his allies. And least of all the Germans.
Fried’s position was not unlike that of a merchant in a bazaar, forced to haggle over the conditions under which countries would take prisoners initially considered extremely dangerous but now deemed harmless. He promised a range of attractive enticements: money, development aid and even political capital like a visit by Obama himself — or at least an invitation to the White House.
The negotiations were correspondingly lively. Potential recipient countries feigned doubt and provided detailed descriptions of the potential dangers they could face by accepting Islamists. The primary aim, it becomes clear from the US dispatches, was that of driving the price up as high as possible.
‘Negative Reaction of the Chinese Government’
Even the Germans joined in the haggling, though Berlin had been particularly strident in calling for the closure of Guantanamo. Wolfgang Schäuble, a member of Chancellor Angela Merkel’s Christian Democrats (CDU) and the country’s interior minister until late October 2009, repeatedly rejected American overtures.
Berlin was particularly reluctant to take 17 Uighurs, originally from China, despite the fact that 500 of their ethnic brethren already lived in Munich, the largest such community in Europe. The Uighur community in Munich expressed a willingness to accept them into its midst. But Germany wouldn’t allow it. Islamists from Guantanamo are too dangerous, Schäuble insisted. In fact, Washington suspected there was another reason: Germany’s fear of China, which wanted the men back itself so it could pursue terrorism charges against them. One US dispatch contains the analysis that Germany’s “reluctance about Uighurs is due to the expected negative reaction of the Chinese government.”
Chinese diplomats told the US State Department in no uncertain terms that Beijing would consider the sending of Uighurs to Germany “a slap in the face.” The balance of power had shifted so markedly that the German government would rather risk snubbing its long-established ally in Washington than suffer the wrath of the Communist regime in Beijing.
In December 2009, Fried expressed his sympathy for Berlin’s plight and proposed a different deal: What about a humanitarian case? Could Germany at least take one mentally disturbed Uighur and his caretaker brother?
Fried hoped for a breakthrough — and hoped it could be provided by Germany’s new interior minister, Thomas de Maizière, likewise of the CDU, who took office following German general elections in September 2009. “In contrast to former Interior Minister Schäuble,” a dispatch from last December reads, “current Interior Minister de Maizière has not (and is unlikely to) flouted security concerns about cases in the press.” So encouraged was Fried by the new minister that he proposed even more candidates in addition to the Uighur brothers. The new candidates included a Syrian and a Palestinian, the only two Guantanamo detainees ultimately accepted by Berlin, more than six months later.
Even so, Fried’s visit to the German Interior Ministry was initially disappointing. Although de Maizière briefly dropped in on Fried’s negotiations with an undersecretary, no progress was made on the Uighur brothers. Instead, the report says the Germans merely stressed the importance of “keeping the current discussions and review of the detainees confidential.”
Officially, Berlin still had security concerns.
The envoy President Obama sent to the German Chancellery had even less success at his meeting with Christoph Heusgen, Chancellor Angela Merkel’s security advisor. “Heusgen was not optimistic that China would demonstrate any understanding for the two humanitarian cases,” the relevant dispatch reads. Germany was not eager to “irritate” China by being the only country that takes Uighurs.
‘Productive Internal Meetings’
Fried returned home empty-handed. Two months later the Americans made another attempt. On February 8, 2010, Ambassador Murphy asked the German Interior Ministry whether any progress had been made on the matter. “The US request is still being reviewed,” de Maizière wrote back formally. “The ministry is having productive internal meetings on the issue.” The decision would take a couple more weeks.
Luckily for the sick Uighur, his brother and the US, not all of Washington’s allies were pursuing the same obstructionist strategy. Despite being in the midst of trade negotiations with China, tiny Switzerland expressed its willingness to take the two brothers in March. Still, Switzerland had a good reason to be friendly toward Washington: The US was unhappy about the fact that major Swiss banks had helped rich Americans evade taxes.
Other countries were also cooperative — sometimes even just offering suggestions. King Abdullah of Saudi Arabia, for example, related a brainstorm of his to John Brennan, Obama’s chief counter-terrorism advisor. One could implant chips into the former detainees containing information about them and allowing them to be tracked. The system worked with horses and falcons, the king was quoted as saying in a dispatch. Brennan indicated that such a procedure would likely encounter legal difficulties in the US. “Horses don’t have good lawyers,” Brennan told him.
US envoy Fried openly reported back to his government about which countries were willing to take former Guantanamo detainees — and, more importantly, at what price.
Bulgaria, for example: The Interior Ministry in Sofia expressed willingness to accept two men, albeit on condition that the US got rid of visa requirements for Bulgarian tourists and businessmen and helped with relocation expenses. Fried proposed “a symbolic amount in the neighborhood of $50,000 – $80,000 per detainee…”
November 30, 2010
November 29, 2010
IT HIT just over a year ago, as ambassadors, ministers and heads of state were preparing to descend on Copenhagen for a climate summit years in the making. The blogosphere, American cable news and, in time, the rest of the media lit up with discussions of a swathe of e-mails from the moderately obscure Climatic Research Unit (CRU) of the University of East Anglia. A person or persons still unknown had posted this e-mail archive, as well as other computer files from CRU, on to a server in Russia, and sent messages to various climate sceptic blogs designed to tip them off to the treasures therein.
A year on, the shadow of climategate, as it was unhelpfully but inevitably named, remains palpable. Governor Arnold Schwarzenegger clearly had it in mind when he recently said “Last year we had a tremendous setback because some of the science and some of the numbers were manipulated and that is very damaging because it gives the other side a way in.” This is a climategate narrative that seems quite popular among many people who, like Schwarzenegger, remain committed to the need for action against global warming—and very popular among people who take the opposite view: that a significant chunk of science had been frankly fraudulent, and that the discovery of this fraud had had a very bad impact on the fight against global warming. Its popularity, though, does not make this story right. Climategate was not about the manipulation of numbers: and the setback for the green cause Mr Schwarzenegger espouses was not climategate, but Copenhagen.
The climategate e-mails led to three inquiries in the United Kingdom. All of them were flawed in different ways. None of them, though, gave credence to the idea that “science and numbers were manipulated”. In a report into those inquiries for Britain’s Global Warming Policy Foundation, an organisation opposed to action on climate change and critical of the quality of the science behind that case, Andrew Montford, a blogger with the same predispositions as the Foundation, sums up the principal climategate allegations in a way that shows them to be much more about process than about manipulated findings. He cites an exclusion of sceptical views from the literature; a misrepresentation of primary research, and its uncertainties, in some secondary presentations; a lack of openness to requests for information and a willingness to contravene Britain’s freedom of information act; a discordance between what the scientists said in private and what they said in public. Fraud in basic science and primary data of the sort Schwarzenegger spoke of, and which is commonly said to have been revealed, does not make the list.
Alleged flaws—in one case, an expressly alleged fraud—in the scientific work of the CRU researchers and some of those they corresponded with were common currency among critical bloggers well before the emails were leaked. Questions about the validity of reconstructions of mediaeval climate based on tree rings, about why some tree rings are taken to be good records of temperature at some points in history but not in the recent past, about cherry-picking of data, about the traceability or otherwise of Chinese weather station data and so on had all been aired long before. The climategate e-mails offered little if any new information that might move these debates on in either direction.
What they offered was colour—catchphrases like “hide the decline”—and context. There was clear evidence of circled wagons, shared distaste for the scientists’ critics, and unwillingness to conform to the quite high standards of openness that the freedom of information act—and the ideals of their calling—seek to impose on scientists. A lot—most, indeed—of science would look just the same if its privacy were similarly breached (and many other areas of human endeavour would look as bad or worse); but to accept that this is the way of the world does little to minimise the damage. People do not want to believe that scientific knowledge of high and lasting value is messy and human in the making; scientific culture does its best to insulate them from that belief. The middle of a media storm is not the place to wheel out sociologists and historians who might educate them on the subject…
November 29, 2010
THE LEGACY of colonialism still casts a long shadow over the world. In the United States, the right-wing Tea Party takes its name from an anticolonial revolt against British taxation on a chilly December day in Boston in 1773. Beginning in 2001, American soldiers occupied Afghanistan, and two years later Iraq, using “hearts and minds” and counterinsurgency strategies adapted from those tried by imperial powers in colonies like Kenya and Algeria. In Kenya, citizens went to the ballot box in August to demand a new constitution, rejecting a political system handed down by the British that strangled democracy and nurtured ethnic violence.
Nations like America and Kenya share the scars of colonialism. They also share a man named Barack Obama—president to one, distant kinsmen to the other. Since Obama became president, a lot of noise has been made regarding his global connections. We all know the story by now: born to a white American from Kansas and a black Luo from Kenya, raised in Hawaii, traveled to Indonesia for four years at the age of six, and schooled at Columbia and Harvard. It is an exceptional biography, but one used by critics to label him a secret Muslim, an unabashed socialist, and worst of all, a global citizen. But recently, it’s Obama’s heritage in Kenya, a place he has only visited three times, that has provided fodder for disgraceful distortions about his colonial past and present politics.
In September, Newt Gingrich wondered aloud to the National Review Online whether Obama might be “so outside our comprehension, that only if you understand Kenyan, anti-colonial behavior, can you begin to piece together [his actions]?” His comments ignited a mini media firestorm. The Los Angeles Times op-ed staff described his comments as “factually insane.” Conservative David Frum was less subtle, calling it “a brazen outburst of race-baiting in the service of partisan politics.” The ensuing hullaballoo certainly gave Gingrich some street-cred among the far Right of the Republican Party and its appendage, the Tea Party. But his comments were part of a broader effort to paint the president as exotic, non-American, and Other. In fact, Gingrich’s comments were drawn from and in hearty support of an article written days earlier by Dinesh D’Souza for Forbes.
D’SOUZA CLAIMS that “anticolonialism” defines the president and guides his decisions like a divining rod. Anticolonialism is a specialty of D’Souza’s. “I know a great deal about anticolonialism,” he tells us, “because I am a native of Mumbai, India. I am part of the first generation to be born after my country’s independence from the British.” Born fourteen years after Indian independence, he certainly never had any first-hand experience overthrowing colonial rule. But having lived in India for his first seventeen years, D’Souza certainly witnessed his nation struggle as the seeds of conflict sown over the course of British colonial rule came to fruition. Members of his family even spoke to him of their efforts to “resist and overthrow the oppressors.” Rather than embrace this heritage, D’Souza has rejected it outright and transformed it into a partisan political weapon.
So what does a man who claims to have experienced anticolonialism think it looks like? D’Souza offers us a definition: “Anticolonialism is the doctrine that rich countries of the West got rich by invading, occupying and looting poor countries of Asia, Africa and South America.” And even when former colonies “secure political independence,” anticolonialists believe, according to D’Souza, that “they remain economically dependent on their former captors.” It is not a totally inaccurate definition, but it is a rather gross, racialized oversimplification. D’Souza divides the world into two warring factions: the greedy white West and the exploited nonwhite Rest. It is a racist formula, drawing a line in the minds of his audience, and asking them to pick a side: us versus them.
D’Souza’s thesis on Obama falters, mainly because he does not actually have any evidence to support his claims. According to D’Souza, examples of Obama’s anticolonialist mindset abound: seeking the expiration of the Bush tax cuts for those earning $250,000 or more a year, supporting the religious freedom of a Muslim group to build a cultural center near Ground Zero, and attempting to end America’s dependence on foreign oil. D’Souza supports the claim that these are “anticolonial” policies with amateurishly associative arguments, lining up two things and imagining a connection. The same is true of D’Souza’s explanation for why anticolonialism guides Obama: because the father was an anticolonialist, so too must be the son. Apparently, an aversion to colonialism is genetic. It makes you wonder: if our worldviews were all drawn from stuff our dads say, then what would people think of us? If someone asked D’Souza’s family members for their views on British colonial rule in India, would we learn that D’Souza is in fact a closet anticolonialist?
D’Souza claims that the proof of Obama Senior’s, and therefore Obama Junior’s, anticolonialism lies in an article by the former written for the East Africa Journal in July 1965, entitled “Problems Facing Our Socialism.” The article is not particularly exciting, at least from the perspective of a historian studying Kenya. Obama’s father was not a major player in the politics of a newly independent Kenya. Rather the Harvard educated, absentee-father of our president is a straw man, stuffed with all manner of odious attributes that men like Gingrich and D’Souza associate with the son. But maybe we can excuse D’Souza. He is not a serious student of history, whether of Kenya, Africa, European imperialism, or colonialism. Newt Gingrich, however, cannot be excused so easily.
WHEN NEWT GINGRICH, a serious contender for the 2012 Republican presidential nomination, wholeheartedly endorsed such a bizarre and erroneous characterization of his would-be opponent, it took some inside the beltway by surprise. Marc Ambinder of the Atlantic lamented how an “intensely smart man” could wallow so thoroughly in such muckraking. Gingrich is certainly an intelligent man; he earned a PhD in history at Tulane University. So how are we to understand Gingrich’s endorsement of D’Souza’s characterization? It is not simply the case of a canny politician looking to secure his base or appeal to the baser instincts of a lunatic fringe. Ideas about colonialism and Africa are central to Gingrich’s own intellectual past and present political vision. He was, at one time, well-versed in the history of Africa and European colonial rule—an expert by academic standards. While we might dismiss D’Souza as a conservative rabble-rouser talking about race, without using the word “race,” we cannot do the same for Gingrich.
In the waning years of the 1960s, a young Newton Leroy Gingrich began conducting research for his doctoral dissertation at Tulane. His topic: a history of Belgian colonial rule and education policy in the Congo. In the introduction to his dissertation, Gingrich explains that he wants to explore the effects of European colonialism on Africa: “what kind of exploitation, for what reasons, and at what price.” Yet PhD Gingrich is not interested in the form, function, or cost of exploitation; rather he seeks to unearth the purposefully buried benefits of Belgian colonial rule in the Congo. He feels that the Congolese need to know their own past, especially the good aspects of colonialism, not just the bad. He worries that the racial, radical politics of the 1960s have distorted the role of colonialism in Africa and its impact…
November 29, 2010
November 28, 2010
I used to wake up in the middle of the night, here in Istanbul, wondering how I’d pay my bills. As I’ve noted in City Journal, the demand for foreign news is shrinking. The wire services provide coverage from Turkey at low operating costs. To be honest, I also spend a lot of money on things I can’t afford, like my cleaning lady. She’s been working for me for five years and has three kids, so I can’t fire her. If I go down, she’ll go down, and so will my landlord, the guy who sells cleaning supplies to my cleaning lady, and the Iranian refugee who does my odd jobs. The ripple effect on the local economy, in other words, would be calamitous.
Then I saw the great news about GM’s success and I stopped worrying. Because GM and I are in the same position, and things seem to be working out splendidly for them.
You see, about a month ago, I asked my mother to bail me out. I knew she’d do it. She’s done it before. She sent me money she’s been saving toward my retirement. I resolved to stop spending money on stupid things. (There was really no excuse for that lamp, Mom, I know. Sorry! In my defense, I was sure there was a genie in it.)
With my mom paying my rent, I’ve been able to charge less for what I write and stay in the black. Voilà, I’m selling a cheaper product (for now) than Reuters and AP. That will teach them where to stuff their “good investment decisions” and their “economies of scale.” I fired the guy who does my odd jobs—it was painful, but it had to be done. So, congratulations to me! I’m making it in this tough business climate, with a little help from Mom. America’s back! And if I’m broke again in a year, I’ll hit her up again. (Don’t forget, Mom, that you really have no choice: no matter what you do, I’m still going to be a huge financial drag on you. If I fail, I’ll end up coming home with all my cats. You don’t want me sleeping on your couch, do you? And you sure don’t want to see what my cats would do to that couch. Antique, I believe it is?)
All of this is, alas, a perfectly accurate description of my financial life. The reader may wonder about my mom’s wisdom in going along with this plan. That’s between me and her—she loves me, and it’s her money, not yours. The money that went to GM was yours, however. And I suppose you must love GM as if it’s your profligate kid, because surely you could not be so credulous as to believe these reports about the spectacular success of the bailout.
There was a rush to buy GM shares last Thursday, when the company, which emerged from bankruptcy restructuring last summer, held an IPO. The company has been drowned in taxpayer cash. It’s going to be fine in the short term. No one should be surprised by this. Anyone—and any company—can get back in the black in the short term if someone gives it a ton of money. And who wouldn’t want to invest in a company that everyone knows won’t be allowed to go down? All the merchants in my neighborhood would lend me money, too, if I asked, confident that their loan would be repaid. A “generous American mom” sounds pretty good to them.
Of course, GM is paying back its new loans, though this doesn’t help investors who hold old GM stock; that’s worthless. By the way, I’m also considering stiffing my creditors. The GM example proves that it will result in an immediate improvement of my balance sheet. GM’s production numbers have been increasing, and mine have, too: it’s a lot easier to write when you’ve got peace of mind. Whether anyone will buy the stuff I’m writing, God knows, but my word count is definitely up, and that, apparently, is the number that matters.
Note that GM is still producing those gas-guzzling pickups and SUVs that no one seemed to want before. Great news for me: I’ll just keep writing about the arcana of Turkish constitutional politics. It’s what the market should want. Turkish politics are fascinating. I don’t know what’s wrong with Americans. If they understood what was good for them, they’d want to be better informed about Turkey. (They’d want that Volt electric car, too. I hear it’s much better for the environment.)…
November 28, 2010
Nudity in contemporary art is increasingly tolerated in countries where it wasn’t in the past.
Although the nude is widely considered a foundation of Western art history, this is clearly not the case in many other cultures around the world, where religious and social traditions often prohibit depictions of the body. Nevertheless, contemporary artists from many countries in the Middle East and Asia are now exploring nudity, sometimes to connect with erotic themes in pre-Islamic periods and sometimes as an act of open rebellion against social and political conditions.
There has recently been a growing acceptance of work involving nudity, as many of these countries have developed their own contemporary-art markets. But in some places, especially Iran, the penalties can be formidable and frightening.
Challenging the censors is Ramin Haerizadeh, who, in his digital photo series “Men of Allah,” casts himself as a performer in a harem, cavorting naked in configurations reminiscent of Persian tapestries. Until 2009, he was able to make these works while living in his native Tehran. But when they were featured in “Unveiled: New Art from the Middle East” at the Saatchi Gallery in London that year, Iran’s Ministry of Intelligence and National Security began harassing local galleries to determine the artist’s whereabouts. They even raided a collector’s home, seizing several of Haerizadeh’s works and threatening the collector with four months in prison. Friends warned the artist, who was in Paris with his brother Rokni, a painter, for the opening of their show at Galerie Thaddeus Ropac. They never returned to Iran, fleeing to Dubai, where they now live and show with Gallery Isabelle van den Eynde.
To most viewers, Ramin Haerizadeh’s images would seem more whimsical and lyrical than provocative. In today’s Iran, however, where a strict reading of Islamic law forbids depictions of the body, an artist can face imprisonment or even execution for making such bold statements. But that doesn’t mean there aren’t artists in Iran, or other Islamic countries in the region, who incorporate nudes into their work. In fact, there are many, drawing upon influences ranging from Persian miniatures to Jeff Koons.
“There is a strong erotic tradition in Iranian art, such as art from the Safavid empire in the early 17th century that is full of erotic images, not all of them heterosexual,” says art historian Edward Lucie-Smith, who, along with dealer Janet Rady, curated “Iranian Bodies” at the Werkstattgalerie in Berlin this year. “This was the point of doing the show—to demonstrate that there was a real continuity based on erotic feeling in Iranian culture,” he says. “Also to show that women artists in Iran are often bolder than the men.” The exhibition—which featured works by Haerizadeh, as well as psychedelic photo collages by Fereydoun Ave, paintings of people submerged in bathtubs by Mitra Farahani, mannequins pierced by and balancing on a bar by Narmine Sadeg, and surrealistic self-portraits by Nikoo Tarkhani—provoked outrage back home. Gholam-Ali Taheri, the head of Tehran’s Museum of Contemporary Art, denounced the work as “decadent,” and a story decrying the artists spread throughout the media, but the artists themselves were not harassed by the authorities.
Tarkhani, in her bold paintings, portrays herself as bald with her body fragmented against a backdrop of blue tiles. “I am not talking about Islam or any other religion,” she says. “I am only talking about social conditions which I have experienced up close. I think of this nudity as a feminist cry of Iranian art; it is a way of expressing freedom from traditions and rules that kept us women indoors.” At the same time, she elaborates, “I put ancient Persian patterns on the tiles as a way to localize the figure in my paintings…”
November 28, 2010
November 27, 2010
Of the newly elected Tea Party senators, Mike Lee, a 39-year-old Republican from Utah, has the most impeccable establishment legal credentials: the son of Rex Lee, a solicitor general under President Reagan, he attended law school at Brigham Young and later clerked for Samuel Alito on the U.S. Court of Appeals and then the Supreme Court. But on the campaign trail, especially during his heated primary battle with the three-term Republican incumbent Bob Bennett, Lee offered glimpses of a truly radical vision of the U.S. Constitution, one that sees the document as divinely inspired and views much of what the federal government currently does as unconstitutional.
Lee proposed to dismantle, on constitutional grounds, the federal Departments of Education, and Housing and Urban Development. He insisted that “the Constitution doesn’t give Congress the power to redistribute our wealth” and vowed to phase out Social Security. He proposed repealing the 16th Amendment, which authorizes the progressive federal income tax, and called the 17th Amendment, which allows senators to be elected by popular vote rather than by state legislatures, a “mistake.” He pledged to end “the unauthorized federal occupation” of Utah land, insisting that Congress lacks the constitutional power to designate federally protected wilderness unless the relevant state legislature approves. He embraced “nullification,” the idea that states have the right — and indeed the duty — to disregard federal laws, like the new health-care-reform bill, that they say are unconstitutional. Lee, who is a Mormon and a social conservative, also has equated the founding fathers’ invocations of a deist God with the moral values of the Mormon Church. “As your U.S. senator,” he promised during the campaign, “I will not vote for a single bill that I can’t justify based on the text and the original understanding of the Constitution, no matter what the court says you can do.”
Like the Tea Party movement itself, Lee’s constitutional vision may appear to be an incohesive mixture of libertarianism and social conservatism, of opposition to federal power and support for tearing down the wall of separation between church and state. In fact, however, it represents an exotic but, in its own way, coherent idea of the Constitution, one that is consistent with certain familiar strains of legal conservatism and constitutional scholarship but at the same time is genuinely eccentric and extreme. Much of the Tea Party movement’s more-strident rhetoric, seen in light of this constitutional vision, may be best understood not as scattershot right-wing hostility to government but as a comprehensive, if startling, worldview about the proper roles of government and faith in American life.
Many of the positions Lee outlined on the campaign trail appear to be inspired by the constitutional guru of the Tea Party movement, W. Cleon Skousen, whose 1981 book, “The 5,000-Year Leap,” argued that the founding fathers rejected collectivist “European” philosophies and instead derived their divinely inspired principles of limited government from fifth-century Anglo-Saxon chieftains, who in turn modeled themselves on the Biblical tribes of ancient Israel. Skousen, a Mormon who died in 2006 at 92, was for years dismissed by many mainstream conservatives, including William F. Buckley Jr., as a conspiracy-mongering extremist; he was also eventually criticized by the Mormon Church. A vocal supporter of the John Birch Society, Skousen argued that a dynastic cabal, including international bankers like the Rockefellers and J. P. Morgan, conspired to manipulate both Communism and Fascism to promote a one-world government.
Skousen’s vision of the Constitution was no less extreme. Starting more than 60 years ago with his first book, “Prophecy and Modern Times,” he wrote several volumes about the providential view of the U.S. Constitution set out in Mormon scripture, which sees the Constitution as divinely inspired and on the verge of destruction and the Mormon Church as its salvation. Skousen saw limited government not only as an ethnic idea, rooted in the Anglo-Saxons, but also as a Christian one, embodied in the idea of unalienable rights and duties that derive from God, and he insisted that the founders’ “religious precepts turned out to be the heart and soul of the entire American political philosophy.”
In 2009, after years of obscurity, Skousen’s ideas were unexpectedly rediscovered by Glenn Beck, who was given a copy of “The 5,000-Year Leap” by a friend. As a result of Beck’s endorsement, the book became a best seller and a Tea Party favorite. Beck’s endorsement also revitalized the National Center for Constitutional Studies, which Skousen founded under another name in 1971 and which offered seminars on his books. During the 1990s, the center typically offered no more than a dozen seminars a year; this past year, it offered more than 200 to Tea Party groups across the country…
November 27, 2010
Do you think that there is a computer screen sitting in front of you right now?
It would certainly seem so if you are reading these words online, but in fact you are not actually “seeing” the computer screen in front of you. What you see are photons of light bouncing off the screen (and generated by the internal electronics of the screen itself), which pass through the hole in the iris of your eye, through the liquid medium inside your eye, wending their way through the bipolar and ganglion cells to strike the rods and cones at the back of your retina. These photons of light carry just enough energy to bend the molecules inside the rods and cones to change the electrochemical balance inside these cells, causing them to fire, or have what neuroscientists call an “action potential.”
From there the nerve impulse races along the neural pathway from the retina to the back of the brain, leaping from neuron to neuron across tiny gaps called synaptic clefts by means of neurotransmitter substances that flow across those gaps. Finally, they encounter the visual cortex, where other neurons record the signals that have been transduced from those photons of light, and reconstruct the image that is out there in the world.
Out of an incomprehensible number of data signals pouring in from the senses, the brain forms models of faces, tables, cars, trees, and every conceivable known (and even unknown — imagined) object and event. It does this through something called neural binding. A “red circle” would be an example of two neural network inputs (“red” and “circle”) bound into one percept of a red circle. Downstream neural inputs, such as those closer to muscles and sensory organs, converge as they move upstream through convergence zones, which are brain regions that integrate information coming from various neural inputs (eyes, ears, touch, etc.). You end up perceiving a whole object instead of countless fragments of an image. This is why you are seeing an entire computer screen with a meaningful block of text in front of you right now, and not just a jumble of data.
At any given moment there are, in fact, hundreds of percepts streaming into the brain from the various senses. All of them must be bound together for higher brain regions to make sense of it all. Large brain areas such as the cerebral cortex coordinate inputs from smaller brain areas such as the temporal lobes, which themselves collate neural events from still smaller brain modules such as the fusiform gyrus (for facial recognition). This reduction continues all the way down to the single neuron level, where highly selective neurons — sometimes described as “grandmother” neurons — fire only when subjects see someone familiar. Other neurons only fire when an object moves left to right across one’s visual field. Still other neurons only fire when an object moves right to left across the visual field. And so on, up the networks, goes the binding process. Caltech neuroscientists Christof Koch and Gabriel Kreiman, in conjunction with UCLA neurosurgeon Itzhak Fried, for example, have even found a single neuron that fires when the subject is shown a photograph of Bill Clinton and no one else!
The models generated by biochemical processes in our brains constitute “reality.” None of us can ever be completely sure that the world really is as it appears, or if our minds have unconsciously imposed a misleading pattern on the data. I call this belief-dependent realism. In my forthcoming book, The Believing Brain, I demonstrate the myriad ways that our beliefs shape, influence, and even control everything we think, do, and say about the world. The power of belief is so strong that we typically form our beliefs first, then construct a rationale for holding those beliefs after the fact. I claim that the only escape from this epistemological trap is science. Flawed as it may be because it is conducted by scientists who have their own set of beliefs determining their reality, science itself has a set of methods to bypass the cognitive biases that so cripple our grasp of the reality that really does exist out there.
According to the University of Cambridge cosmologist Stephen Hawking, however, not even science can pull us out of such belief dependency. In his new book, The Grand Design, co-authored with the Caltech mathematician Leonard Mlodinow, Hawking presents a philosophy of science he calls “model-dependent realism,” which is based on the assumption that our brains form models of the world from sensory input, that we use the model most successful at explaining events and assume that the models match reality (even if they do not), and that when more than one model makes accurate predictions “we are free to use whichever model is most convenient.” Employing this method, Hawking and Mlodinow claim that “it is pointless to ask whether a model is real, only whether it agrees with observation…”
November 27, 2010
It’s been a challenging time for the climate change story on just about every front. A year ago, the unauthorized release of a cache of controversial e-mails written by prominent climate scientists created a media firestorm just before the United Nations climate-change summit in Copenhagen. The international effort to strike a treaty that would limit greenhouse-gas emissions went down in flames. It’s been a slow burn ever since, for scientists and journalists alike.
After the intense media attention to Copenhagen in late 2009, the amount of climate-change coverage in 2010 declined significantly in some major American newspapers—to a four-year low—with the focus increasingly on domestic and foreign politics, according to a recent survey using Lexis-Nexis. The U.S. Senate tossed climate-change legislation onto the pyre, and recent mid-term elections brought a slew of Republicans to town who don’t believe the climate science and are likely to fight federal action. Meanwhile, the Gulf oil spill made up the bulk of environmental coverage and consumed the time of many reporters who also cover climate science and policy.
With a new UN climate meeting starting Monday in Cancun, environment reporters and climate scientists alike are regrouping, lowering expectations for the Mexico meeting and figuring out how to cover climate change going forward.
“There’s a tremendous difference,” says Juliet Eilperin, The Washington Post’s chief environment reporter. Copenhagen was a “cliff-hanger,” with a “sense of anticipation and excitement,” she recalled: “While there was uncertainty about what Copenhagen would produce, people thought something significant was going to happen.” But going into the two-week Cancun deliberations, “it feels like there is absolutely no momentum…. What will there even be to cover in Cancun in terms of public policy or reader interest?”
Like many of her colleagues, Eilperin has scaled back her own coverage of not only the Cancun meeting—she’s only going for the second week and may be joined by a Mexico City correspondent—but of climate-change policy in general. With climate legislation dead for now in Washington, D.C., “there’s a little more room for covering other environmental issues,” she said, citing plans to expand her reportage in areas like oceans and wilderness.
At The New York Times, Erica Goode, editor of the paper’s seven-person environment cluster, says coverage of Cancun will certainly be scaled way back from that of Copenhagen; the paper is sending Washington correspondent John Broder to Mexico as its primary person covering the proceedings. “Obviously, the situation has changed dramatically from a year ago. A year ago the issue was still front and center on the administration agenda, and there was a lot of expectation for what might happen…. There is not a lot expected at Cancun.”
But, says Goode, the larger climate-change story is still high on the Times’s agenda, as evidenced by a new series, “Temperature Rising,” which will “focus on the central arguments in the climate debate and examine the evidence for global warming and its consequences.” The series launched on November 13 with a massive front-page Sunday package (and multimedia online graphics) on the state of the science and impact of sea level rise from melting glaciers. It was a return to days of yore, with an enterprising Justin Gillis, who replaced Andrew Revkin as the paper’s chief environmental science reporter in May, reporting from a helicopter flying over Greenland.
Goode says that the “back-to-basics” series was intended “as a huge service to readers to step back and do richer explanatory pieces that take a hard look at the evidence…. Some readers don’t understand what the whole debate is about.” At least two more pieces are expected this year in the Gillis series, with more to come in 2011. According to Goode, the series had been put on hold because of the Gulf oil spill, among other things, which gobbled up space in the paper and reporting time that might have otherwise gone to climate change.
The Times’s new series was cited by Harvard climate scientist Dr. James J. McCarthy as a good example of putting important climate science in perspective—an approach he said has been missing in recent climate coverage. “Over the past few years, coverage of climate science in the U.S. media has been disappointing,” he said in an interview. Stories tended to inflate “juicy quips from stolen private e-mail exchanges,” but barely mentioned the “subsequent, thorough investigations by universities and academies that found no evidence of wrongdoing.”
The challenge ahead, of course, is finding new angles to freshen up the climate story after a tough year in which the amount of climate coverage showed a steep slide after Copenhagen in major newspapers like the Times and the Post. The number of stories mentioning climate change or global warming dropped to a four-year low in both papers in the third quarter of 2010. While this certainly reflected a diversion of resources to the oil spill, the amount of climate-change coverage has been declining all year long, according to a Lexis-Nexis search by Carolyn McGourty, a fellow at Harvard University’s Belfer Center for Science and International Affairs working with this correspondent.
The amount of climate-change coverage first shot up in the spring of 2006, following the release of Al Gore’s film, “An Inconvenient Truth.” Peak coverage occurred in early 2007, accompanying the release of the Intergovernmental Panel on Climate Change’s (IPCC) Fourth Assessment report documenting scientific knowledge about the widespread hazards that rising greenhouse gas emissions pose to the planet. Coverage remained steadily high throughout 2008, fluctuated in mid-2009, and jumped up again at the end of last year when “Climategate” and the Copenhagen conference collided. The one-year anniversary of those two pivotal events has, not surprisingly, produced a bumper crop of articles reflecting on the lessons learned by journalists and scientists involved in climate change communication and coverage…
November 27, 2010