In December 2011 a tiny but wondrous Chicago program of the Illinois Humanities Council (IHC) launched an online auction to raise needed cash. The Public Square, which promotes dialogue about political, social, and cultural issues, was celebrating its tenth anniversary, and my wife, Bernardine Dohrn, and I offered our own prize to a winning bidder: a lavish dinner for six.
We’ve done the dinner thing two dozen times over the years—for a local baseball camp, a law students’ public interest group, immigrant-rights organizing, and a lot of other worthy work—and we’ve typically raised a few hundred dollars. There were many more attractive items on the auction list: Alex Kotlowitz was available to edit twenty pages of a non-fiction manuscript, Gordon Quinn to discuss documentary film projects over dinner, and Kevin Coval to write and spit an original poem.
We paid little attention as the online auction launched and then inched onward—a hundred dollars, two hundred, and then three—even when a right-wing blogger picked it up and began flogging the Illinois Humanities Council for “supporting terrorism” by giving taxpayer money to my wife and me, two founding members of the Weather Underground. He was a little off on the concept because we were actually donating money and services to them, not the other way around, but this was a typical turn for the fact-free, faith-based blogosphere, so we paid it no mind.
There was a little “Buy Instantly” button on our dinner item that someone could select for $2,500, which seemed absurdly high. But in early December TV celebrity and conservative bad boy Tucker Carlson clicked his mouse, and we were his.
I loved it immediately. Surely he had some frat boy prank up his sleeve—a kind of smug and superior practical joke or an ad hominem put-down—but so what? We’d just raised more for the Public Square in one bid than anyone thought would be raised from the entire auction. We won!
Well, not so fast—this did mean we had to prepare dinner for Carlson plus five, and that could become messy. But, maybe it wouldn’t, and anyway, we argued, it’s just a couple of distasteful hours at most, and, then bingo! Cash the check.
Right wing blogs erupted, with some writers tickled by Carlson’s sense of humor and others earnestly saluting his courage and daring in service to “the cause” for his willingness to sit in close quarters with us—radical leftists and enemies of the state. But others took a grimmer view: “Don’t do it, Tucker,” they pled. “This will legitimize and humanize two of America’s greatest traitors.”
Carlson got a congratulatory letter from the IHC that offered ten potential dates for dinner and noted that “all auction items were donated to the IHC [which] makes no warranties or representations with respect to any item or service sold” and that “views and opinions expressed by individuals attending the dinner do not reflect those of the Illinois Humanities Council, the National Endowment for the Humanities, or the Illinois General Assembly.” I imagined the exhausted scrivener bent over his table copying that carefully crafted, litigation-proof language—does it go far enough?
Carlson chose February 5, Super Bowl Sunday.
We were besieged by friends clamoring to come to dinner. “I’ll serve drinks,” wrote one prominent Chicago lawyer, “or, if you like, I’ll wear a little tuxedo and park the cars. Please let me come!”
All our friends saw the event as theater, but not everyone was delighted with the show. A few called Carlson and company “vipers” and argued that we should never talk to people like them. We disagreed; talk can be good. Others began distancing themselves from us, wringing their hands the moment they saw themselves mentioned on the right-wing blogs and instantly, almost instinctively, assuming a defensive crouch.
Dinner with Tucker Carlson seemed cheery and worthwhile compared to counseling a bunch of cringing liberals.
Things quickly got weirder. Two IHC board members resigned, complaining that the organization was now affiliated with people who “advocate violence”—presumably Bernardine and me, not Carlson or his friends. The paid stenographers at the Chicago Tribune duly reported the two resignations by quoting the outraged quitters and leaving it at that.
(Regarding the art and science of fact-checking: had the Tribune in fact checked the facts, the fact-checker would have checked the fact that the quitters used the phrase “advocates violence.” Check. Had he or she dug a little deeper, the fact-checker might have discovered that, yes, we’d been described that way before, even in the pages of the Tribune. Check. And so it goes in the hermetically sealed, narcissistic echo chamber—a characterization becomes a fact with enough repetition. Oh, and for the record, we don’t advocate violence—we’re not with NATO or G8. Check.)
Some winced and stooped; no one was moved publicly to defend the idea that dialogue, controversy, and conversation are essential to the culture of democracy and to the vitality of the humanities, and no one condemned this most knee-jerk instance of demonization and far-fetched guilt-by-association.
Where is the backbone or the principle? No wonder the cadre of right-wing keyboard flamethrowers feels so disproportionately powerful. Liberals seem forever willing to police themselves into an orderly line right next to the slaughterhouse…
March 31, 2012
In a dimly lighted conference room in the Palo Alto, Calif., offices of Smule, a maker of music apps, Ge Wang was sitting in a meeting with his colleagues, humming, singing and making odd whooshing noises into the microphone of an iPad, checking the screen, and then pounding fugues of code into an attached laptop. Poking at his devices, he reminded me of a child obliviously amusing himself while the grown-ups natter on around him. Nobody else in the meeting seemed to notice Wang’s behavior as they listened to a debriefing about recent updates to Smule’s Mini Magic Piano app.
When the guy at the head of the table mentioned that the graphics on the welcome page now subtly pulse, Wang looked up. “Yeahhhh,” he said. “Classic Smule,” he added in a mutter to nobody in particular. “Everything needs to pulse.” Then he blew into his iPad mic and banged some more code.
Wang, who is 34 and a founder of the company, often leaves an impression of childlike distractedness. But in fact he’s distressingly productive. He was coding in someone else’s meeting in July because he had just two hours to prepare for a presentation on a new Smule product, code-named “Project Oke.” His company has been remarkably successful, but the app-o-sphere is more competitive than it used to be, and there was a lot riding on his coming up with another hit — ideally by year’s end.
Wang likes to say that he has two full-time jobs, and they seem wholly distinct. At Stanford University, where he is an assistant professor, he teaches a full course load through the Center for Computer Research in Music and Acoustics (usually referred to as CCRMA, pronounced “karma”), presiding over a highly experimental “orchestra” that performs with cleverly customized laptops, cellphones and other electronics. It’s very cutting edge and, in terms of audience, very rarefied. At Smule, a profit-driven, private company that recently raised its second round of venture-capital financing, he devises applications bought by millions.
Founded in 2008, Smule released several apps in rapid succession, but its breakthrough was the Ocarina. Exploiting the iPhone’s microphone as well as its touch-screen interface, Wang converted the device into an easy-to-play flute-like instrument. In what has become a Smule signature, the app also included a representation of the globe, with little dots that light up to show where in the world someone is playing the app at that moment. With a tap, you can listen. It’s also possible to arrange a duet with an Ocarina user thousands of miles away, whom you’ve never met. The Ocarina was downloaded half a million times, at 99 cents a pop, in its first couple of months, making it the top-selling app for three straight weeks; a new artist selling that many downloads of a single today would probably end up on the cover of Rolling Stone.
The common aim of Smule’s products is to prod nonmusicians into making music and to interact with others doing the same. There are singing apps like I Am T-Pain and Glee Karaoke, and digital versions of instruments like Magic Piano and Magic Fiddle. What connects these easy-to-use diversions to Wang’s more abstruse gear-tinkering is the exploration of expressive sound via technology: everyone can make music, he believes, and everyone should.
It’s hard to overestimate how much Smule’s strategy revolves around Wang himself. Before the first Project Oke demo, I asked another Smule employee what the app would consist of, how it would work. He shrugged. “Right now,” he said cheerfully, “it’s all in Ge’s brain.”
What marched out of Wang’s brain at that first Project Oke demo in July was a cute robot, singing and dancing. The app, now known as Sing, Robot, Sing!, is likely to be in Apple’s App Store early next year, depending on how quickly the final version moves through the approval process.
There it will join what has become a bewildering array of products in the “music” category. This includes services like Spotify and Pandora that are analogous to radio, and games like Tap Tap Revenge, which involve tapping dots on your phone’s screen in sync with songs. Artists routinely release phone and tablet applications that include remix-it-yourself options. Reality Jockey, based in London, has created “reactive music” apps that respond to sounds in the listener’s environment as well as user actions. There are sophisticated instrumentlike apps that require technical skill or musical knowledge to master, and apps that recreate that ultimate amateur form, karaoke.
You could think about these apps on a continuum from the enduring (making something that aspires to art) to the ephemeral (a time-killing game). Smule sits somewhere in the middle. (“Smule” is a shortened version of Sonic Mule, a reference to a character in Isaac Asimov’s “Foundation Trilogy” who influences others without their knowledge, disrupts existing power structures and builds an empire.) Smule’s apps have instrumentlike functions, meaning they can be used to create new, expressive sounds, but they also feel like games. Wang is essentially trying to trick users into making music without quite realizing it. “He’s always had this notion that everybody is musical but they’re just too embarrassed to do anything about it,” says Perry Cook, a computer-music pioneer who was Wang’s adviser at Princeton and today consults for Smule. “Of course, the karaoke solution to that is to get everybody drunk,” he adds…
There are many reasons for believing the brain is the seat of consciousness. Damage to the brain disrupts our mental processes; specific parts of the brain seem connected to specific mental capacities; and the nervous system, to which we owe movement, perception, sensation and bodily awareness, is a tangled mass of pathways, all of which end in the brain. This much was obvious to Hippocrates. Even Descartes, who believed in a radical divide between soul and body, acknowledged the special role of the brain in tying them together.
The advent of brain imaging techniques has given rise to the belief that we can look at people’s thoughts and feelings, and see how ‘information’ is ‘processed’ in the head. The brain is seen as a computer, ‘hardwired’ by evolution to deal with the long vanished problems of our hunter-gatherer ancestors, and operating in ways that are more transparent to the person with the scanner than to the person being scanned. Our own way of understanding ourselves must therefore be replaced by neuroscience, which rejects the whole enterprise of a specifically ‘humane’ understanding of the human condition.
In 1986 Patricia Churchland published Neurophilosophy, arguing that the questions that had been discussed to no effect by philosophers over many centuries would be solved once they were rephrased as questions of neuroscience. This was the first major outbreak of a new academic disease, which one might call ‘neuroenvy’. If philosophy could be replaced by neuroscience, why not the rest of the humanities, which had been wallowing in a methodless swamp for far too long? Old disciplines that relied on critical judgment and cultural immersion could be given a scientific gloss when rebranded as ‘neuroethics’, ‘neuroaesthetics’, ‘neuromusicology’, ‘neurotheology’, or ‘neuroarthistory’ (subject of a book by John Onians). Michael Gazzaniga’s influential 2005 study, The Ethical Brain, has given rise to ‘Law and Neuroscience’ as an academic discipline, combining legal reasoning and brain imaging, largely to the detriment of our old ideas of responsibility. One by one, real but non-scientific disciplines are being rebranded as infant sciences, even though the only science involved has as yet little or nothing to say about them.
It seems to me that aesthetics, criticism, musicology and law are real disciplines, but not sciences. They are not concerned with explaining some aspect of the human condition but with understanding it, according to its own internal procedures. Rebrand them as branches of neuroscience and you don’t necessarily increase knowledge: in fact you might lose it. Brain imaging won’t help you to analyse Bach’s Art of Fugue or to interpret King Lear any more than it will unravel the concept of legal responsibility or deliver a proof of Goldbach’s conjecture; it won’t help you to understand the concept of God or to evaluate the proofs for His existence, nor will it show you why justice is a virtue and cowardice a vice. And it cannot fail to encourage the superstition which says that I am not a whole human being with mental and physical powers, but merely a brain in a box.
The new sciences in fact have a tendency to divide neatly into two parts. On the one hand there is an analysis of some feature of our mental or social life and an attempt to show its importance and the principles of its organisation. On the other hand, there is a set of brain scans. Every now and then there is a cry of ‘Eureka!’ — for example when Joshua Greene showed that dilemmas involving personal confrontation arouse different brain areas from those aroused by detached moral calculations. But since Greene gave no coherent description of the question, to which the datum was supposed to suggest an answer, the cry dwindled into silence. The example typifies the results of neuroenvy, which consist of a vast collection of answers, with no memory of the questions. And the answers are encased in neurononsense of the following kind:
‘The brains of social animals are wired to feel pleasure in the exercise of social dispositions such as grooming and co-operation, and to feel pain when shunned, scolded, or excluded. Neurochemicals such as vasopressin and oxytocin mediate pair-bonding, parent-offspring bonding, and probably also bonding to kith and kin…’ (Patricia Churchland).
As though we didn’t know already that people feel pleasure in grooming and co-operating, and as though it adds anything to say that their brains are ‘wired’ to this effect, or that ‘neurochemicals’ might possibly be involved in producing it. This is pseudoscience of the first order, and owes what scant plausibility it possesses to the fact that it simply repeats the matter that it fails to explain. It perfectly illustrates the prevailing academic disorder, which is the loss of questions.
Traditional attempts to understand consciousness were bedevilled by the ‘homunculus fallacy’, according to which consciousness is the work of the soul, the mind, the self, the inner entity that thinks and sees and feels and which is the real me inside. We cast no light on the consciousness of a human being simply by redescribing it as the consciousness of some inner homunculus. On the contrary, by placing that homunculus in some private, inaccessible and possibly immaterial realm, we merely compound the mystery.
As Max Bennett and Peter Hacker have argued (Philosophical Foundations of Neuroscience, 2003), this homunculus fallacy keeps coming back in another form. The homunculus is no longer a soul, but a brain, which ‘processes information’, ‘maps the world’, ‘constructs a picture’ of reality, and so on — all expressions that we understand, only because they describe conscious processes with which we are familiar. To describe the resulting ‘science’ as an explanation of consciousness, when it merely reads back into the explanation the feature that needs to be explained, is not just unjustified — it is profoundly misleading, in creating the impression that consciousness is a feature of the brain, and not of the person.
Perhaps no instance of neurononsense has been more influential than Benjamin Libet’s ingenious experiments which allegedly ‘prove’ that actions which we experience as voluntary are in fact ‘initiated’ by brain events occurring a short while before we have the ‘feeling’ of deciding on them. The brain ‘decides’ to do x, and the conscious mind records this decision some time later. Libet’s experiments have produced reams of neurobabble. But the conclusion depends on forgetting what the question might have been. It looks significant only if we assume that an event in a brain is identical with a decision of a person, that an action is voluntary if and only if preceded by a mental episode of the right kind, that intentions and volitions are ‘felt’ episodes of a subject which can be precisely dated. All such assumptions are incoherent, for reasons that philosophers have made abundantly clear…