In December 2011 a tiny but wondrous Chicago program of the Illinois Humanities Council (IHC) launched an online auction to raise needed cash. The Public Square, which promotes dialogue about political, social, and cultural issues, was celebrating its tenth anniversary, and my wife, Bernardine Dohrn, and I offered our own prize to a winning bidder: a lavish dinner for six.
We’ve done the dinner thing two dozen times over the years—for a local baseball camp, a law students’ public interest group, immigrant-rights organizing, and a lot of other worthy work—and we’ve typically raised a few hundred dollars. There were many more attractive items on the auction list: Alex Kotlowitz was available to edit twenty pages of a non-fiction manuscript, Gordon Quinn to discuss documentary film projects over dinner, and Kevin Coval to write and spit an original poem.
We paid little attention as the online auction launched and then inched onward—a hundred dollars, two hundred, and then three—even when a right-wing blogger picked it up and began flogging the Illinois Humanities Council for “supporting terrorism” by giving taxpayer money to my wife and me, two founding members of the Weather Underground. He was a little off on the concept because we were actually donating money and services to them, not the other way around, but this was a typical turn for the fact-free, faith-based blogosphere, so we paid it no mind.
There was a little “Buy Instantly” button on our dinner item that someone could select for $2,500, which seemed absurdly high. But in early December TV celebrity and conservative bad boy Tucker Carlson clicked his mouse, and we were his.
I loved it immediately. Surely he had some frat boy prank up his sleeve—a kind of smug and superior practical joke or an ad hominem put-down—but so what? We’d just raised more for the Public Square in one bid than anyone thought would be raised from the entire auction. We won!
Well, not so fast—this did mean we had to prepare dinner for Carlson plus five, and that could become messy. But maybe it wouldn’t, and anyway, we argued, it’s just a couple of distasteful hours at most, and then, bingo! Cash the check.
Right-wing blogs erupted, with some writers tickled by Carlson’s sense of humor and others earnestly saluting his courage and daring in service to “the cause” for his willingness to sit in close quarters with us—radical leftists and enemies of the state. But others took a grimmer view: “Don’t do it, Tucker,” they pled. “This will legitimize and humanize two of America’s greatest traitors.”
Carlson got a congratulatory letter from the IHC that offered ten potential dates for dinner and noted that “all auction items were donated to the IHC [which] makes no warranties or representations with respect to any item or service sold” and that “views and opinions expressed by individuals attending the dinner do not reflect those of the Illinois Humanities Council, the National Endowment for the Humanities, or the Illinois General Assembly.” I imagined the exhausted scrivener bent over his table copying that carefully crafted, litigation-proof language—does it go far enough?
Carlson chose February 5, Super Bowl Sunday.
We were besieged by friends clamoring to come to dinner. “I’ll serve drinks,” wrote one prominent Chicago lawyer, “or, if you like, I’ll wear a little tuxedo and park the cars. Please let me come!”
All our friends saw the event as theater, but not everyone was delighted with the show. A few called Carlson and company “vipers” and argued that we should never talk to people like them. We disagreed; talk can be good. Others began distancing themselves from us, wringing their hands the moment they saw themselves mentioned on the right-wing blogs and instantly, almost instinctively, assuming a defensive crouch.
Things quickly got weirder. Two IHC board members resigned, complaining that the organization was now affiliated with people who “advocate violence”—presumably Bernardine and me, not Carlson or his friends. The paid stenographers at the Chicago Tribune duly reported the two resignations by quoting the outraged quitters and leaving it at that.
(Regarding the art and science of fact-checking: had the Tribune in fact checked the facts, the fact-checker would have checked the fact that the quitters used the phrase “advocates violence.” Check. Had he or she dug a little deeper, the fact-checker might have discovered that, yes, we’d been described that way before, even in the pages of the Tribune. Check. And so it goes in the hermetically sealed, narcissistic echo chamber—a characterization becomes a fact with enough repetition. Oh, and for the record, we don’t advocate violence—we’re not with NATO or G8. Check.)
Some winced and stooped; no one was moved publicly to defend the idea that dialogue, controversy, and conversation are essential to the culture of democracy and to the vitality of the humanities, and no one condemned this most knee-jerk instance of demonization and far-fetched guilt-by-association.
Dinner with Carlson seemed cheery and worthwhile compared to counseling a bunch of cringing liberals. Where is the backbone or the principle? No wonder the cadre of right-wing keyboard flamethrowers feels so disproportionately powerful. Liberals seem forever willing to police themselves into an orderly line right next to the slaughterhouse…
March 31, 2012
In a dimly lighted conference room in the Palo Alto, Calif., offices of Smule, a maker of music apps, Ge Wang was sitting in a meeting with his colleagues, humming, singing and making odd whooshing noises into the microphone of an iPad, checking the screen, and then pounding fugues of code into an attached laptop. Poking at his devices, he reminded me of a child obliviously amusing himself while the grown-ups natter on around him. Nobody else in the meeting seemed to notice Wang’s behavior as they listened to a debriefing about recent updates to Smule’s Mini Magic Piano app.
When the guy at the head of the table mentioned that the graphics on the welcome page now subtly pulse, Wang looked up. “Yeahhhh,” he said. “Classic Smule,” he added in a mutter to nobody in particular. “Everything needs to pulse.” Then he blew into his iPad mic and banged some more code.
Wang, who is 34 and a founder of the company, often leaves an impression of childlike distractedness. But in fact he’s distressingly productive. He was coding in someone else’s meeting in July because he had just two hours to prepare for a presentation on a new Smule product, code-named “Project Oke.” His company has been remarkably successful, but the app-o-sphere is more competitive than it used to be, and there was a lot riding on his coming up with another hit — ideally by year’s end.
Wang likes to say that he has two full-time jobs, and they seem wholly distinct. At Stanford University, where he is an assistant professor, he teaches a full course load through the Center for Computer Research in Music and Acoustics (usually referred to as CCRMA, pronounced “karma”), presiding over a highly experimental “orchestra” that performs with cleverly customized laptops, cellphones and other electronics. It’s very cutting edge and, in terms of audience, very rarefied. At Smule, a profit-driven, private company that recently raised its second round of venture-capital financing, he devises applications bought by millions.
Founded in 2008, Smule released several apps in rapid succession, but its breakthrough was the Ocarina. Exploiting the iPhone’s microphone as well as its touch-screen interface, Wang converted the device into an easy-to-play flute-like instrument. In what has become a Smule signature, the app also included a representation of the globe, with little dots that light up to show where in the world someone is playing the app at that moment. With a tap, you can listen. It’s also possible to arrange a duet with an Ocarina user thousands of miles away, whom you’ve never met. The Ocarina was downloaded half a million times, at 99 cents a pop, in its first couple of months, making it the top-selling app for three straight weeks; a new artist selling that many downloads of a single today would probably end up on the cover of Rolling Stone.
The common aim of Smule’s products is to prod nonmusicians into making music and into interacting with others doing the same. There are singing apps like I Am T-Pain and Glee Karaoke, and digital versions of instruments like Magic Piano and Magic Fiddle. What connects these easy-to-use diversions to Wang’s more abstruse gear-tinkering is the exploration of expressive sound via technology: everyone can make music, he believes, and everyone should.
It’s hard to overestimate how much Smule’s strategy revolves around Wang himself. Before the first Project Oke demo, I asked another Smule employee what the app would consist of, how it would work. He shrugged. “Right now,” he said cheerfully, “it’s all in Ge’s brain.”
What marched out of Wang’s brain at that first Project Oke demo in July was a cute robot, singing and dancing. The app, now known as Sing, Robot, Sing!, is likely to be in Apple’s App Store early next year, depending on how quickly the final version moves through the approval process.
There it will join what has become a bewildering array of products in the “music” category. This includes services like Spotify and Pandora that are analogous to radio, and games like Tap Tap Revenge, which involve tapping dots on your phone’s screen in sync with songs. Artists routinely release phone and tablet applications that include remix-it-yourself options. Reality Jockey, based in London, has created “reactive music” apps that respond to sounds in the listener’s environment as well as user actions. There are sophisticated instrumentlike apps that require technical skill or musical knowledge to master, and apps that recreate that ultimate amateur form, karaoke.
You could think about these apps on a continuum from the enduring (making something that aspires to art) to the ephemeral (a time-killing game). Smule sits somewhere in the middle. (“Smule” is a shortened version of Sonic Mule, a reference to a character in Isaac Asimov’s “Foundation Trilogy” who influences others without their knowledge, disrupts existing power structures and builds an empire.) Smule’s apps have instrumentlike functions, meaning they can be used to create new, expressive sounds, but they also feel like games. Wang is essentially trying to trick users into making music without quite realizing it. “He’s always had this notion that everybody is musical but they’re just too embarrassed to do anything about it,” says Perry Cook, a computer-music pioneer who was Wang’s adviser at Princeton and today consults for Smule. “Of course, the karaoke solution to that is to get everybody drunk,” he adds…
There are many reasons for believing the brain is the seat of consciousness. Damage to the brain disrupts our mental processes; specific parts of the brain seem connected to specific mental capacities; and the nervous system, to which we owe movement, perception, sensation and bodily awareness, is a tangled mass of pathways, all of which end in the brain. This much was obvious to Hippocrates. Even Descartes, who believed in a radical divide between soul and body, acknowledged the special role of the brain in tying them together.
The discovery of brain imaging techniques has given rise to the belief that we can look at people’s thoughts and feelings, and see how ‘information’ is ‘processed’ in the head. The brain is seen as a computer, ‘hardwired’ by evolution to deal with the long vanished problems of our hunter-gatherer ancestors, and operating in ways that are more transparent to the person with the scanner than to the person being scanned. Our own way of understanding ourselves must therefore be replaced by neuroscience, which rejects the whole enterprise of a specifically ‘humane’ understanding of the human condition.
In 1986 Patricia Churchland published Neurophilosophy, arguing that the questions that had been discussed to no effect by philosophers over many centuries would be solved once they were rephrased as questions of neuroscience. This was the first major outbreak of a new academic disease, which one might call ‘neuroenvy’. If philosophy could be replaced by neuroscience, why not the rest of the humanities, which had been wallowing in a methodless swamp for far too long? Old disciplines that relied on critical judgment and cultural immersion could be given a scientific gloss when rebranded as ‘neuroethics’, ‘neuroaesthetics’, ‘neuromusicology’, ‘neurotheology’, or ‘neuroarthistory’ (subject of a book by John Onians). Michael Gazzaniga’s influential 2005 study, The Ethical Brain, has given rise to ‘Law and Neuroscience’ as an academic discipline, combining legal reasoning and brain imaging, largely to the detriment of our old ideas of responsibility. One by one, real but non-scientific disciplines are being rebranded as infant sciences, even though the only science involved has as yet little or nothing to say about them.
It seems to me that aesthetics, criticism, musicology and law are real disciplines, but not sciences. They are not concerned with explaining some aspect of the human condition but with understanding it, according to its own internal procedures. Rebrand them as branches of neuroscience and you don’t necessarily increase knowledge: in fact you might lose it. Brain imaging won’t help you to analyse Bach’s Art of Fugue or to interpret King Lear any more than it will unravel the concept of legal responsibility or deliver a proof of Goldbach’s conjecture; it won’t help you to understand the concept of God or to evaluate the proofs for His existence, nor will it show you why justice is a virtue and cowardice a vice. And it cannot fail to encourage the superstition which says that I am not a whole human being with mental and physical powers, but merely a brain in a box.
The new sciences in fact have a tendency to divide neatly into two parts. On the one hand there is an analysis of some feature of our mental or social life and an attempt to show its importance and the principles of its organisation. On the other hand, there is a set of brain scans. Every now and then there is a cry of ‘Eureka!’ — for example when Joshua Greene showed that dilemmas involving personal confrontation arouse different brain areas from those aroused by detached moral calculations. But since Greene gave no coherent description of the question, to which the datum was supposed to suggest an answer, the cry dwindled into silence. The example typifies the results of neuroenvy, which consist of a vast collection of answers, with no memory of the questions. And the answers are encased in neurononsense of the following kind:
‘The brains of social animals are wired to feel pleasure in the exercise of social dispositions such as grooming and co-operation, and to feel pain when shunned, scolded, or excluded. Neurochemicals such as vasopressin and oxytocin mediate pair-bonding, parent-offspring bonding, and probably also bonding to kith and kin…’ (Patricia Churchland).
As though we didn’t know already that people feel pleasure in grooming and co-operating, and as though it adds anything to say that their brains are ‘wired’ to this effect, or that ‘neurochemicals’ might possibly be involved in producing it. This is pseudoscience of the first order, and owes what scant plausibility it possesses to the fact that it simply repeats the matter that it fails to explain. It perfectly illustrates the prevailing academic disorder, which is the loss of questions.
Traditional attempts to understand consciousness were bedevilled by the ‘homunculus fallacy’, according to which consciousness is the work of the soul, the mind, the self, the inner entity that thinks and sees and feels and which is the real me inside. We cast no light on the consciousness of a human being simply by redescribing it as the consciousness of some inner homunculus. On the contrary, by placing that homunculus in some private, inaccessible and possibly immaterial realm, we merely compound the mystery.
As Max Bennett and Peter Hacker have argued (Philosophical Foundations of Neuroscience, 2003), this homunculus fallacy keeps coming back in another form. The homunculus is no longer a soul, but a brain, which ‘processes information’, ‘maps the world’, ‘constructs a picture’ of reality, and so on — all expressions that we understand, only because they describe conscious processes with which we are familiar. To describe the resulting ‘science’ as an explanation of consciousness, when it merely reads back into the explanation the feature that needs to be explained, is not just unjustified — it is profoundly misleading, in creating the impression that consciousness is a feature of the brain, and not of the person.
Perhaps no instance of neurononsense has been more influential than Benjamin Libet’s ingenious experiments which allegedly ‘prove’ that actions which we experience as voluntary are in fact ‘initiated’ by brain events occurring a short while before we have the ‘feeling’ of deciding on them. The brain ‘decides’ to do x, and the conscious mind records this decision some time later. Libet’s experiments have produced reams of neurobabble. But the conclusion depends on forgetting what the question might have been. It looks significant only if we assume that an event in a brain is identical with a decision of a person, that an action is voluntary if and only if preceded by a mental episode of the right kind, that intentions and volitions are ‘felt’ episodes of a subject which can be precisely dated. All such assumptions are incoherent, for reasons that philosophers have made abundantly clear…
March 30, 2012
The originality of the species: Any breakthrough depends on the efforts of countless predecessors. Reflections on originality and collaboration
March 30, 2012
In June 1858 a slender package from Ternate, an island in the Dutch East Indies, arrived for Charles Darwin at his country home in Down, Kent. He may well have recognised the handwriting as that of Alfred Wallace, with whom he had been in correspondence and from whom he was hoping to receive some specimens. But what Darwin found in the package along with a covering letter was a short essay. And this essay was to transform Darwin’s life.
Wallace’s 20 pages, so it seemed to their reader on that momentous morning, covered all the principal ideas of evolution by natural selection that Darwin had been working on for more than two decades and which he thought were his exclusive possession – and which he had yet to publish. Wallace, working alone, with very little in the way of encouragement or money, drew from his extensive experience of natural history, gathered while sending back specimens for collectors. He articulated concisely the elements as well as the sources familiar to Darwin: artificial selection, the struggle for survival, competition and extinction, the way species changed into different forms by an impersonal, describable process, by a logic that did not need the intervention of a deity. Wallace, like Darwin, had been influenced by the geological speculations of Charles Lyell, and the population theories of Thomas Malthus.
In a covering letter Wallace politely asked Darwin to forward the essay to Lyell. Now, Darwin could have quietly destroyed Wallace’s package and no one would have known a thing – it had taken months to arrive, and the mail from the Dutch East Indies could hardly have been reliable in the mid-19th century. But Darwin was an honourable man, and knew that he could never live with himself if he behaved scurrilously. And yet he was in anguish. In his own letter to Lyell, which accompanied Wallace’s essay and which Darwin forwarded that same day, he lamented: “So all my originality, whatever it may amount to, will be smashed.” He was surprised at the depth of his own feelings about priority, about being first. As Janet Browne notes in her biography of Darwin, the excitement of discovery in his work had been replaced by profound anxieties about possession and ownership. He was ambushed by low emotions – mortification, irritation, rancour. In a much-quoted phrase, he was “full of trumpery feelings”.
He had held off publishing his own work in a desire to perfect it, to amass instances, to make it as immune to disproof as he could. And, of course, he was aware of his work’s theological implications – and that had made him cautious too. But he had been “forestalled”. That day he decided he must yield priority to Wallace. He must, he wrote, “resign myself to my fate”.
Within a day, he had even more pressing concerns. His 15-year-old daughter, Henrietta, fell ill and there was fear that she had diphtheria. The next day the baby, Charles, his and Emma’s 10th and last child, developed a fever. Meanwhile, Lyell was urging Darwin to concede nothing and to publish a “sketch”, which would conclusively prove Darwin’s priority over Wallace.
Taking his turn to nurse the sick baby, Darwin could decide nothing, and left the matter to his close friend Joseph Hooker, and to Lyell. They discussed the matter and proposed that Darwin’s “sketch” should be read along with Wallace’s essay at a meeting of the Linnean Society, and the two pieces would be published in the society’s journal. Speed was important. Wallace might have sent his essay to a magazine, in which case, Darwin’s priority would be sunk, or at least compromised. There was no time to ask Wallace’s permission to have his essay read.
But before Darwin could consider the proposal, the baby died. In his grief, Darwin hastily made a compilation for Hooker to edit. An 1844 set of notes, though out of date, seemed to make a conclusive case for priority, for they bore Hooker’s pencilled marks. A more recent 1857 letter to Asa Gray, the professor of botany at Harvard, set out concisely Darwin’s thoughts on evolution by natural selection.
Lyell, Hooker and Darwin were eminent insiders in the closed world of Victorian metropolitan science. Wallace was the outsider. He came from a far humbler background, and if he was known at all, it was as a provider of material for gentlemen experts. It was customary at the Linnean Society for double contributions to be read in alphabetical order. And so, in Darwin’s absence – he and Emma buried their baby that day – his 1844 notes were followed by his detailed 1857 letter, and then, almost as a footnote, came Wallace’s 1858 essay.
Darwin had delved far deeper over many years and certainly deserved priority. Wallace found it difficult to think through the implications of natural selection, and was reluctant in later years to allow that humans too were subject to evolutionary change. The point, however, is Darwin’s mortification about losing possession. As he wrote later to Hooker, “I always thought it very possible that I might be forestalled, but I fancied that I had a grand enough soul not to care.”
Hooker began to press his friend to write a proper scientific paper on natural selection. Darwin protested. He needed to set out all the facts, and they could not be accommodated within a single paper. Hooker persisted, and so Darwin began his essay, which in time grew to become On the Origin of Species. In Browne’s description, what was suddenly released were “years of pent-up caution”. Back at Down House, Darwin did not use a desk, but sat in an armchair with a board across his knees and wrote like a fiend. “All the years of thought,” writes Browne, “climaxed in these months of final insight … the fire within came from Wallace.”
The Origin, written in 13 months, represents an extraordinary intellectual feat: mature insight, deep knowledge and observational powers, the marshalling of facts, the elucidation of near-irrefutable arguments in the service of a profound insight into natural processes. The reluctance to upset his wife Emma’s religious devotion, or to contradict the theological certainties of his scientific colleagues, or to find himself in the unlikely role of iconoclast, a radical dissenter in Victorian society, all were swept aside for fear of another man taking possession of and getting credit for the ideas he believed to be his…
A new report from the Manhattan Institute, a conservative think tank, has declared “The End of the Segregated Century.” Unfortunately, this is an overstatement: segregation has declined, but it is not at an end. And the significance of the decline is up for debate.
The report, by economists Edward Glaeser and Jacob Vigdor, has garnered substantial media attention, including a write-up in The New York Times. It rightly claims—as is widely known, in large part thanks to Glaeser and Vigdor’s work—that the segregation of African Americans in the United States is down from its all-time peak in 1970.
But segregation remains remarkably present. Calling the decline a “success story,” as Glaeser has elsewhere, implies a tragically low standard for success. As Jonathan Rothwell of the Brookings Institution has reported, a majority of African Americans still live in “hypersegregated” metropolitan areas (such as Detroit), where at least 60 percent of the African American population would have to move in order to be evenly spread in the metropolitan area. Ninety-five percent of African Americans live in at least a moderately segregated metropolitan area (such as Kansas City), where 40 percent of blacks would have to move to achieve integration.
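These “share who would have to move” figures are typically reported with the index of dissimilarity; as a rough sketch of that standard measure (an assumption here, not necessarily the exact calculation behind Rothwell’s numbers), for a metropolitan area divided into n census tracts,

D = \frac{1}{2} \sum_{i=1}^{n} \left| \frac{b_i}{B} - \frac{w_i}{W} \right|

where b_i and w_i are the black and white populations of tract i, B and W are the metro-wide totals, and D is the fraction of either group that would have to change tracts to produce an even distribution. On this reading, the thresholds above correspond to D of at least 0.6 for a hypersegregated metro and at least 0.4 for a moderately segregated one.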
So we are not at the “end of the segregated century,” even though segregation has substantially declined.
What is more, the changes that have occurred likely seem of little consequence to blacks living in ghettos that, while smaller than 40 years ago, are still massive.
In evaluating the level of segregation, we need to think about why we sought to end segregation in the first place—that is, we have to consider whether the reduction is having an impact on the negative outcomes of segregation. And it turns out segregation has declined in a manner that is unlikely to reduce its pernicious effects.
There are at least three reasons why the United States, as a matter of public policy, was and should continue to be committed to ending segregation.
First, segregation was built by denying a group of people the fundamental freedom to choose where to live. In this respect, the decline in segregation is at least a partial success story. The state-sanctioned discrimination that helped construct the American ghetto has largely been eliminated. Restrictive covenants are long dead and strict government oversight of the real-estate industry has curtailed practices reinforcing segregation. Moreover, as Glaeser and Vigdor argue, it appears that the economic forces that contributed to segregation, such as unequal access to credit, have diminished as well.
While the isolation of African Americans has declined since its peak, very little of the decline has been caused by integration with whites.
Second, diversity builds a better society and rears better individuals. There is a good deal of evidence that people raised in diverse, tolerant communities are more likely to be tolerant when they grow up. And, importantly, organizations such as schools and businesses are more efficient, innovative, and profitable when they comprise a diverse body of individuals. Neighborhoods and cities, too, benefit from a diverse population, as Glaeser has persuasively argued elsewhere. Segregation denies cities the advantages of diversity. In this respect, the decline in segregation is far from a success. As I noted last fall, the exposure of whites to African Americans in their own neighborhoods still lags far behind the population share of African Americans in the nation as a whole. The same is true for the exposure of African Americans to whites—especially for the large portion of African Americans living in hypersegregated cities…
The Turkish Airlines flight to Tashkent, Uzbekistan, was scheduled to leave at 9:25 on an October night, and dozens of people, nearly all of whom were holding Uzbek passports, stood at the gate. Gripping the handles of bulging plastic bags filled with candy and gifts, they stared at an electronic board announcing a Moscow-bound flight that had been unexpectedly assigned the same gate.
“LAST CALL” for Moscow, the board flashed in Turkish and English at 30-second intervals. As time passed, the announcement began to seem less urgent. Finally, the last passenger got on the Moscow airplane, and officials began ushering the Uzbek travelers through.
For weeks leading up to the trip, I’d had restless nights full of frightening dreams. For Uzbeks, however, real life can be as haunting as any nightmare.
President Islam Karimov runs the country, a sprawling parcel of the former Soviet empire, like a fiefdom. In 2002, two religious dissidents were boiled to death, according to a State Department report. In May 2005, Uzbek troops shot and killed hundreds of protestors in the eastern city of Andijan. A witness named Juravoi Abdulaev showed a Radio Free Europe reporter, Gafurjan Yuldashev, one of the resulting mass graves. After Yuldashev filed a story about the mass graves, Abdulaev was stabbed to death. A human rights investigator told me that sources he interviewed following the Andijan massacre were later tracked down by authorities, imprisoned, and tortured.
Some Uzbeks, such as Abdulaev and those interviewed by the human rights investigator, have been brave enough to speak openly about their experiences, and they paid a price. Others prefer to speak with journalists discreetly. As I stood at the gate, I held my passport and a notebook filled with the names of both kinds: dissidents who had been outspoken about human rights abuses, along with others who were willing to talk as long as they could remain anonymous. The list included activists, economists, a former government official who had resigned in protest after the Andijan massacre, one woman whose relative, a political leader, had been assassinated, and four journalists.
The Turkish official at the gate held up my passport. He had dirty blond hair and glasses, and a yellow cord dangling around his neck. He put the passport down and looked at me. “I hope you have another visa,” he said. “This one isn’t any good.” He pointed to a smudged date in my passport and asked me to stand to the side. Businessmen and mothers clutching small children filed past me, showing him their passports as I waited.
Abruptly he turned toward me. “Do you have a letter of invitation?” he asked.
I thought about what would happen if he and his colleagues in Uzbekistan knew I was a journalist entering their country on a tourist visa, about what would happen to the people on my list. “No,” I lied.
He stepped over to me. “If you get on that plane and go to Tashkent,” he said, waving his hand toward the jet, “they will deport you. They will send you back on this plane.”
I might have argued, but I lost my nerve that night. I would have been fine, but what about the names in my notebook?
Things were supposed to be getting better in Central Asia, and Americans, in part, were supposed to be responsible.
Since late 2008 the United States has been developing the Northern Distribution Network, a transportation grid running through Uzbekistan, neighboring Kyrgyzstan, and other nations. The military uses the route to transport food and supplies to troops in Afghanistan. In the process the U.S. military has been buying bottled water, plastic forks, and other items made in factories along the way, spending more than $62 million in fiscal year 2010.
The northern supply route is vital to the U.S. war effort in Afghanistan because the southern one, which runs through Pakistan, is frequently bombed and closed down. Central Asia offers a more stable alternative. Roughly 60 percent of goods transported to the troops in Afghanistan come via the northern route, and military officials say that share will increase over the next two years.
U.S. officials have argued that investments in the transportation grid will improve the lives of people in Central Asia. Officials have even suggested that the new partnerships with Central Asian leaders could help improve their records on human rights. “Closer cooperation” might force “progress on human rights” and allow “the regime to loosen its vise on civil society,” Richard B. Norland, the ambassador to Uzbekistan, wrote in a January 2010 diplomatic cable obtained by WikiLeaks.
But Americans have long struggled with competing impulses when dealing with autocratic leaders in strategically important regions, trying to balance the desire to promote democracy and human rights with the need to maintain security and access to bases.
In Central Asia, at least, the military imperatives seem to have won out. When Secretary of State Hillary Clinton gave an International Women of Courage award to an Uzbek human rights activist in 2009, the Uzbek foreign minister made what Norland called an “implicit threat” to suspend deliveries along the supply route if Americans continued to raise the issue of human rights. Afterward Norland told his colleagues in Washington to curb their complaints.
Since then Americans have had much less to say about human rights in Central Asia, while investing even more heavily in the region. U.S. investment in Central Asian business, as part of the commitment to the supply route, jumped from $2.7 million in fiscal year 2009 to $90.6 million in fiscal year 2011, according to Navy Rear Admiral Ron MacLaren, who directs the Defense Logistics Agency’s Joint Contingency Acquisition Support Office, which helps to secure supplies for troops in Afghanistan.
Despite the promise of “heretofore unimagined economic advances” for the people of Central Asia, as scholars at the Center for Strategic and International Studies put it in a December 2009 report, little of the investment has gone to local business people.
“Who benefits? Of course, it’s corrupted elites,” says Baktybek Beshimov, a former member of parliament in Kyrgyzstan. He acknowledges that some of the capital has gone to local businesses but argues that “a huge amount of this money goes to corrupt leaders” and that the funding “leads to the escalation of corruption in Central Asian countries.”
These countries’ leaders have been cracking down harder on human rights and democracy advocates, while Americans have done little to stop them. “In a time of crisis, the American administration can be blind to human rights abuses,” Beshimov says, “and instead they are thinking more about the military or security priorities.”
When it comes to official abuses, Beshimov speaks from experience. As a parliamentarian in the late 2000s, he investigated human rights violations, including the torture of dissidents. Eventually he was placed under state surveillance, and then, he says, “They decided just to kill me.” On March 3, 2009, Beshimov was driving in a chauffeured official vehicle on his way to the capital city of Bishkek. Traffic police stopped the car, claimed the chauffeur was speeding, and asked him to sign papers acknowledging that he had. The police then stopped the car two more times on the same road. Beshimov was annoyed—and suspicious.
As they approached a tunnel on a mountain road, Beshimov saw one of his assistants flagging them down from the side of the road. They pulled over, and the assistant told him that two trucks were idling on the other side of the tunnel. One of the drivers planned to block off the road, his assistant explained, while the other would force Beshimov’s car into a ravine. The chauffeur’s admission that he had been speeding would ensure that blame for the incident fell to him—just another out-of-control driver. Beshimov took another route.
The setup was familiar. “Staged car accidents,” Ambassador Tatiana Gfoeller called them in a March 2009 diplomatic cable. In the cable, she described speculation about the “political assassination” of Presidential Chief of Staff Medet Sadyrkulov, an opposition leader who had been killed in a car crash under mysterious circumstances less than two weeks after the police stopped Beshimov…
March 30, 2012
Blurred Lines: If this administration won’t tackle the vexing problems of America’s vast intelligence gathering apparatus, we’re all in danger.
March 29, 2012
When he was at the helm of the Central Intelligence Agency, Michael Hayden was fond of comparing the laws that limit agency operations to the white sidelines of a football field. CIA agents should operate so close to legal boundaries, he remarked, that they get “chalk on their cleats.”
Unfortunately, those chalk lines today are too faint for either intelligence officers or the public to see. Although Congress instituted intelligence reform in 2004, and a hallmark of President Barack Obama’s first term has been his aggressive approach to fighting terrorism, there has never been a real debate in Congress or in the public square about the intersection of our values and our requirements for gathering intelligence.
The result is a hodgepodge of internally inconsistent policies, an outsized role for the courts in interpreting and, in some cases, striking down those policies, and huge gaps in what the public knows and has been told. Recent questions raised about the nature of the New York Police Department’s surveillance of mosques are but one example.
In the absence of clear legal policies, those expected to implement them either become risk averse or feel enabled to commit abuses. Abu Ghraib and the more recent Quran burnings in Kabul are unfortunate cases in point. (While the awful Quran episode may have had more to do with cultural insensitivity than intelligence gathering, have we really learned nothing in ten years in Afghanistan?)
One of the biggest reasons for this lack of progress is Congress’s ongoing and exquisite dysfunction. The toxic paradigm of finger-pointing instead of bipartisan problem-solving has created almost total legislative gridlock. What passes for serious debate occurs within a tiny bandwidth, leaving scant chance to raise the tough issues — let alone resolve them during this heated election year.
Discussion of these issues must be high on the agenda for the next president, no matter who he (gender seems the only given at this point) may be. America’s leaders have an obligation — indeed, a very heavy burden — to tackle them.
Here are four that should get top priority:
1. The prison at Guantánamo Bay.
The Guantánamo Bay prison, where people were initially placed in wire cages resembling large chicken coops, has evolved into a state-of-the-art facility — at a cost of $150 million per year. It’s ironic that much of the inmate hierarchy and command structure developed when barbed wire cages permitted free communication — and that none of the subsequent “improvements” has been able to disrupt that.
Although President Obama signed Executive Order 13492 to close the prison within his first year in office, the issue proved to be much tougher than he and his team anticipated. Files on individual inmates were incomplete and in many cases the “evidence tree” could not be rebuilt and was therefore inadmissible in federal court.
The House and Senate also stymied the president’s original intent by blocking transfer of any of the 171 remaining prisoners to the United States for civilian trials. Congress first spooked itself and then launched a politically expedient campaign to scare the American people by invoking visions of grisly terrorist killers wandering around their neighborhoods. It’s the Willie Horton ad campaign all over again.
This ironically bipartisan misbehavior leaves military justice as the only way to clear the backlog of prisoners. Yet military courts have secured only a handful of convictions since 9/11. In contrast, more than 400 terrorists have been convicted in Article III federal courts and are now serving long — sometimes life — sentences in federal supermax prisons.
Still, the tough questions remain on hold. For Gitmo’s so-called “Final 15” detainees, where there is inadequate evidence to charge and try them but real concerns about the danger of releasing them — even to other countries willing to accept them — is the answer to let them go free? And, if not, does “preventive detention” square with the Constitution and American values? Should the Geneva Conventions — which specify procedures for capture and imprisonment of enemy combatants — be updated?…
On the morning of July 14, 1967, Thurgood Marshall began his second day of Supreme Court confirmation hearings by preparing to confront questions posed by Senator Sam J. Ervin Jr. of North Carolina. This prospect seems unlikely to have been a pleasant one. After thirteen years in Washington, Ervin’s foremost achievement remained his role in drafting the document that had formally been styled a Declaration of Constitutional Principles, but that almost instantly became known as the Southern Manifesto. That document, which drew support from the overwhelming majority of Southern congressmen and senators, denounced the Supreme Court’s decision in Brown v. Board of Education as an abuse of judicial authority. “This unwarranted exercise of power by the Court, contrary to the Constitution, is creating chaos and confusion in the States principally affected,” the politicians complained. “It is destroying the amicable relations between the white and Negro races that have been created through ninety years of patient effort by the good people of both races. It has planted hatred and suspicion where there has been heretofore friendship and understanding.”
As the attorney who led the winning legal team in Brown, Marshall shouldered no small amount of the burden for this precipitous decline in race relations. It must have come as little surprise, then, that Ervin’s questioning demonstrated marked hostility toward Marshall’s nomination. But by 1967 Brown was sufficiently well on its way toward canonization that Ervin avoided directly asking Marshall about segregation in public schools, and instead concentrated his attention on the Warren Court’s decisions protecting criminal defendants.
But lurking not very far beneath the surface of Ervin’s questioning of Marshall was the Southern Manifesto’s primary objection to Brown: that the decision defied constitutional originalism. “The original Constitution does not mention education,” the Southern Manifesto noted. “Neither does the Fourteenth Amendment nor any other amendment. The debates preceding the submission of the Fourteenth Amendment clearly show that there was no intent that it should affect the system of education maintained by the States.” One need not listen especially hard to hear echoes of this notion in a question that Ervin pitched to Marshall at the hearings: “Is not the role of the Supreme Court simply to ascertain and give effect to the intent of the framers of this Constitution and the people who ratified the Constitution?” Although Ervin’s query was freighted with jurisprudential implications, Marshall’s response deftly sidestepped the danger. “Yes, Senator,” Marshall replied, “with the understanding that the Constitution was meant to be a living document.”
This long-forgotten riposte merits renewed attention, as Marshall managed forty-five years ago to approximate a constitutional theory that has recently become ascendant within liberal legal circles. After decades of attempts to slay originalism, some prominent scholars on the legal left have now begun to embrace the notion—or at least their particular conceptions of it. In so doing, these law professors typically make an intellectual move similar to Marshall’s, suggesting that the bitter dispute between originalists and living constitutionalists fundamentally rests on a false antithesis. In this vein, the title of Jack M. Balkin’s new book, Living Originalism, draws its punch by combining the two ostensibly oxymoronic terms. Balkin contends that “we do not face a choice between living constitutionalism and fidelity to the original meaning of the text. They are two sides of the same coin.” That is so, Balkin insists, because “properly understood, these two views of the Constitution are compatible rather than opposed.”
Owing to his close association with the American Constitution Society and his lengthy track record of producing consistently provocative scholarship that is also consistently left-leaning, Balkin possesses unimpeachable liberal credentials. Accordingly, his declaration in a law review article five years ago that he had—seemingly overnight—converted to originalism created quite a stir within the corridors of the legal academy. From the left, liberal critics accused Balkin of apostasy. From the right, conservative critics accused him of creating a false conversion narrative, asserting that Balkin actually aimed to co-opt originalism, not subscribe to it.
During the last five years, Balkin has dedicated much of his intellectual energy to a series of articles elaborating and refining his own account of the originalist enterprise. Living Originalism, the culmination of this work, succeeds in providing an endlessly engaging theory of constitutional law that wrestles with the field’s most urgent concerns in a way that accounts for nuance without sacrificing clarity. That is no meager achievement. Balkin’s book will likely serve as a focal point for constitutional theorists of various stripes for years to come. The volume’s prominence seems assured because it presents in an unusually acute form the fundamental question of whether any variety of originalism can provide what liberals want—and, significantly, what liberals in future generations will want—in a theory of constitutional interpretation.
FOR THE last few decades, of course, much of the legal left has derided originalism, contending that the method reduces the complex task of judging to an overly simplistic and faux-historical inquiry that would lead to intolerably retrograde decisions. Liberal scholars had their sights firmly trained on originalism even before Attorney General Edwin Meese III brought the issue out of the law reviews and into the national spotlight in 1985 by calling for a “jurisprudence of original intention.” But originalism has proved an elusive target, not least because it has often been on the move. After initially professing that their guiding light was the framers’ “original intent,” originalists next suggested that they were actually concerned with discerning the ratifiers’ “original understanding,” before settling, finally, on the constitutional text’s “original meaning” among the public. Primary among the manifold problems with the initial two formulations is the apparent requirement that constitutional interpreters peer into the minds of various historical actors, and the possibility that those actors may well have held competing rather than complementary conceptions—assuming the actors had formed any conceptions at all. Such inquiries invited an interpretive subjectivity—but that was precisely what originalism sought to diminish, if not to eliminate. By elevating original meaning as the hallmark, originalists meant to shift the focus away from what the framers and ratifiers thought and toward what they actually did, in the form of the constitutional text.
This elevation of the Constitution’s original meaning had the effect of inadvertently creating the potential for space between the expansive language that the framers often used and the specific results that the framers anticipated that their language would initially yield. Traditionally, conservative originalists have aimed to keep those two concepts yoked together as tightly as possible, suggesting that the framers’ “original expected applications” serve—in a very real sense—to define broad constitutional language. Thus Balkin explains that “even though conservative originalists may distinguish between the ideas of original meaning and original expected applications in theory, they often conflate them in practice.” In other words, Justice Scalia talks the original meaning talk, but he walks the original expected application walk.
Except, of course, when he does neither. Scalia tempers his brand of originalism with a heavy dose of stare decisis, the judicial principle that counsels respect for prior decisions. Even though Scalia’s legal philosophy would have precluded him from joining many revered judicial opinions were he deciding the cases in the first instance, stare decisis enables him to avoid demanding that these decisions now be overruled. But as Balkin perceptively observes, this reliance on stare decisis places Scalia in the unenviable position of viewing some of the Supreme Court’s most inspiring decisions as “unfortunate blunder[s] that we are now simply stuck with because of respect for precedent.” Balkin correctly contends that perceiving modern constitutional law “as a series of errors that … would now be too embarrassing to correct” is itself an embarrassment for Scalia’s theory, as it “confuses achievements with mistakes.”…
Married, With Websites: Leaving newsrooms behind, journalist couples from Maine to Alaska are setting up their own shops, online
March 29, 2012
In romantic relationships, it’s often the small courtesies that express love best: doing the dishes, picking up the kids, making the coffee, passing the remote. When you’re a couple running a news outlet together, such small kindnesses can take unique forms. For John Christie and Naomi Schalit, it’s the order of their names on the stories that they write together: Each insists that the other’s name appear first in the byline. They’re newlyweds, you should know.
John and Naomi, 64 and 54, respectively, run Pine Tree Watchdog, a publication of the Maine Center for Public Interest Reporting, a nonprofit, grant- and donor-supported investigative outlet focused on state government. Together they report, edit, and distribute their articles to 25 media partners, mostly newspapers, free of charge.
They may be newcomers in marital terms, but they are old-school reporters, and though they married and launched their site in 2010, they worked together as journalists before. John is the former publisher of The Kennebec Journal and Morning Sentinel. He hired Naomi in 2006 to be the opinion-page editor for the two dailies. Long hours discussing the editorial pages led to a relationship.
Pine Tree Watchdog is the centerpiece of their lives now. The idea is to fill the gap in coverage of Maine’s state government: In 1989, there were about 20 year-round reporters in the statehouse in Augusta; now there are seven, excluding Pine Tree Watchdog. Passion for the job is the fuel; neither John nor Naomi has taken a salary yet, although Naomi is supposed to at some point soon. On a good day they make trips to the statehouse and return home to file FOIA requests, talk to tipsters, and review documents (they can spend months on a complex story). On a bad day they might file taxes, write checks for freelancers, or figure out why the printer won’t work. “This is what our lives are about,” Naomi says. “We’ve kind of distilled it at this point.”
John and Naomi are just one of a number of couples who have updated the traditional family-run news business by taking it online. Couples have left their newsroom jobs behind, pooled their skills, and struck out on their own. With their eggs in one unpredictable basket, such couples tend to bring passion and commitment to the work, as in any family business. Still, the nature of a news site means round-the-clock work, and the online news business comes with no guarantee of success, or even survival, and no instruction manual. Being married to the company serves as both a strength and a weakness. It can help keep the overhead low and the intensity level high, but it also makes establishing boundaries between work and home life a challenge.
The weight of financial stress on the mom-and-pop news operation can depend on the stage of life the site’s principals are in. Julie Ardery, 59, and Bill Bishop, 58, run The Daily Yonder, an online publication focused exclusively on rural issues, out of their home in Austin, Texas. Funded by the Center for Rural Strategies, they split one salary between them and work part-time, editing stories from a stable of freelancers. In the ’80s, the pair ran The Bastrop County Times together and were in a “dog-eat-dog” competition for advertising with a publication up the road. They used to sleep with the police scanner beside the bed, and scrambled for enough income to pay their employees and the printing bill every week. But they sold the paper for a good price, which Julie says has afforded them a measure of security. They find their online news life to be much more manageable than their print life was. “The stresses of running [the Bastrop paper] were ten times what we deal with today,” says Julie. “We’re still very much working people, but we’re not putting children through college either.”
For younger couples, especially those with children or considering them, the barely-getting-paid thing can be a struggle. Christine Stuart, 34, and Doug Hardy, 42, own CTNewsJunkie, a site focused on state politics in Connecticut, supported by a combination of ads, donations, and sponsorships. They worked together at north-central Connecticut’s Journal Inquirer but, frustrated with the job and wary about its future, they both took a chance on online publishing. “I didn’t know if the paper would still be there when I was ready to retire,” Doug says. Christine quit first, in March 2006, and bought CTNewsJunkie from its creator, Dan Levine, who was moving and offered her the business as a respite from her frustrations at the newspaper. Eager to try something new, she dove in, supplementing her income with a part-time court-reporting job, while Doug continued working at the Inquirer, mostly for the benefits. “It took me several years to come to the conclusion that I’m killing myself for health benefits,” he says. He quit in March 2011, and is CTNewsJunkie’s business manager; Christine is the editor. They both juggle other gigs to stay on top of their finances, and say their for-profit operation wouldn’t make it if they had to pay for office space, or if they had children. They describe their $10,000-deductible health insurance—the most affordable plan they could find—as “birth control.” “We’re looking at a $10,000 ransom note if we have a kid,” Doug says.
Frank Carini, 44, and Joanna Detz, 37, also say they wouldn’t try to run a news website with a child. They publish ecoRI News, a donor and ad-supported nonprofit site that focuses on environmental issues in Rhode Island. The pair met at Community Newspaper Co. in Boston in the ’90s, where Frank was an editor and Joanna a reporter. She left in the late ’90s to get into graphic design, but Frank stayed, and his frustration with what happened to journalism over the years is palpable. “I’ve heard so many times, ‘Do more with less and fill the paper,’” he says. “I was tired of the mainstream media, the cuts, that whole sad story.”…