December 31, 2011
What is racial colorblindness?
Racial issues are often uncomfortable to discuss and rife with stress and controversy. Many ideas have been advanced to address this sore spot in the American psyche. Currently, the most pervasive approach is known as colorblindness. Colorblindness is the racial ideology that posits that the best way to end discrimination is to treat individuals as equally as possible, without regard to race, culture, or ethnicity.
At face value, colorblindness seems like a good thing — taking seriously MLK’s call to judge people by the content of their character rather than the color of their skin. It focuses on commonalities between people, such as their shared humanity.
However, colorblindness alone is not sufficient to heal racial wounds on a national or personal level. It is only a half-measure that in the end operates as a form of racism.
December 31, 2011
The effects of technology on religious belief, and of religious belief on technology, are great but insufficiently explored. Often religious communities have been the inventors, the popularizers, or the preservers of technologies. One important example, which Lewis Mumford called to our attention long ago in his Technics and Civilization (1934), is the intimate relationship between medieval monastic life and the invention of reliable clocks. It was the need to be faithful in keeping the horæ canonicæ, the canonical hours of prayer, that stimulated the creation of accurate timepieces. But of course, this invention spread to the rest of society, and over the centuries has come to shape our experience of time in ways that affect our religious lives as much as anything else.
It is scarcely possible to overstress the importance of this development; and yet perhaps even more important are the connections between religious life and technologies of knowledge, especially those pertaining to reading and writing. This point could be illustrated in any number of ways, but with particular force in tracing the long entanglement of Christianity and the distinctive form of the book called the codex. In this history one can discern many ways in which forms of religious life shape, and in turn are shaped by, their key technologies. And as technologies change, those forms of life change too, whether their participants wish it or not. These changes can have massive social consequences, some of which we will wish to consider at the end of this brief history. Christians are, as the Koran says, “People of the Book”; in which case we might want to ask what will become of Christianity if “the book” is radically transformed or abandoned altogether.
Scroll and Sequence
In the eleventh chapter of the Gospel of Luke, Jesus is engaged in his public ministry: preaching to the crowds, casting out demons. At this moment, oddly enough, a Pharisee asks him to come to dinner, and Jesus immediately accepts; but then, we are told, “the Pharisee was astonished to see that he did not first wash before dinner.” This astonishment prompts Jesus to begin a series of “woes” — “Woe to you Pharisees! … Woe to you lawyers also!” — which in turn may prompt us to remember a comment the novelist Frederick Buechner once made: “No one ever invited a prophet home for dinner more than once.”
But to continue:
Woe to you! For you build the tombs of the prophets whom your fathers killed. So you are witnesses and you consent to the deeds of your fathers, for they killed them, and you build their tombs. Therefore also the Wisdom of God said, “I will send them prophets and apostles, some of whom they will kill and persecute,” so that the blood of all the prophets, shed from the foundation of the world, may be charged against this generation, from the blood of Abel to the blood of Zechariah, who perished between the altar and the sanctuary. Yes, I tell you, it will be required of this generation.
Please note especially this phrase: “from the blood of Abel to the blood of Zechariah.” It is an interesting phrase in any number of ways, not least in that it designates Abel, the first murder victim, as a prophet. But the question I want to ask is: Why Zechariah?
Jesus is referring to the second book of Chronicles, which tells this story from the reign of the infidel King Joash of Judah:
Then the Spirit of God clothed Zechariah the son of Jehoiada the priest, and he stood above the people, and said to them, “Thus says God, ‘Why do you break the commandments of the Lord, so that you cannot prosper? Because you have forsaken the Lord, he has forsaken you.’” But they conspired against him, and by command of the king they stoned him with stones in the court of the house of the Lord. Thus Joash the king did not remember the kindness that Jehoiada, Zechariah’s father, had shown him, but killed his son. And when he was dying, he said, “May the Lord see and avenge!”
In referring to this story, Jesus is clearly indicating that Zechariah — not, to be clear, the one who wrote the book of Zechariah — is the last of the Bible’s prophet-martyrs, just as Abel was the first. Yet this is plainly not so: half a dozen later prophets were martyred, at least according to unanimous tradition. But there is no mistake here, neither by Jesus nor by Luke. By invoking an arc that stretches from Abel to Zechariah, Jesus is indeed imagining a strict sequence, but not that of the history of Israel: rather, he has in mind the sequence of the Bible as he knew it.
The Hebrew Bible in the time of Jesus, as now, was divided into three parts, in this order: first Torah, the Law; then Nevi’im, the Prophets; then the rather miscellaneous category called Ketuvim, the Writings. The book we call 2 Chronicles is the last of the Ketuvim and therefore the last of the whole Bible. So when Jesus refers to the martyrdom of prophets “from Abel to Zechariah” he does not mean “from the Creation to the most recent moment of Israelite history,” but rather “from the first pages of the Word of God to the last.”
This is rather extraordinary, and for a number of reasons. For one thing, if we look at the history of the Hebrew Bible after the time of Jesus — which is the only reliable history we have — the order of the Ketuvim is not settled. While the rabbis of the Babylonian Talmud and the Masoretic text agree in placing Chronicles at the end, other very old texts — the Aleppo and Leningrad codices, for instance — place it at the beginning.
But that’s not the oddity to which I wish to call attention. Rather, I would like to think about the technologies of the book in the time of Jesus, and in preceding centuries of Judaic culture. Consider, for instance, the variety of writing technologies discernible just in the Old Testament: the “brick” on which Ezekiel is commanded to inscribe an image of Jerusalem (4:1), the “tablet” used by Isaiah (30:8) and Habakkuk (2:2), the stone on which the Decalogue is inscribed (Ex. 24:12, Joshua 8:32). The styli used by Isaiah (8:1) and Jeremiah (17:1) may have been used to write on metal. Clay tablets were kept in jars (Jeremiah 32:14) or boxes (Exodus 25:16, 1 Kings 8:9). But the Scriptures themselves, it is clear, were typically written on papyrus scrolls and kept in cabinets. As C. H. Roberts has noted in the Cambridge History of the Bible, for the scribal culture in the centuries preceding Jesus,
The strictest rules governed the handling, the reading and the copying of the Law. Multiplication of copies by dictation was not allowed; each scroll had to be copied directly from another scroll; official copies, until A.D. 70 derived ultimately from a master copy in the Temple, were kept at first in a cupboard in each synagogue, later in a room adjoining it. The cupboard faced towards Jerusalem, and the rolls within it were the most holy objects in the synagogue.
I emphasize these technologies for one simple reason: none of them promotes the idea of sequence in texts. While they do not forbid the temporally linear way that Jesus thought about the text of Scripture, they certainly do not encourage it. Thus when the Talmudic sages debate the order in which the Biblical books should be recorded, they consider various principles of organization. A key passage from the relevant tractate, Bava Batra, says:
Our Rabbis taught: The order of the Prophets is, Joshua, Judges, Samuel, Kings, Jeremiah, Ezekiel, Isaiah, and the Twelve Minor Prophets. Let us examine this…. Isaiah was prior to Jeremiah and Ezekiel. Then why should not Isaiah be placed first? — Because the Book of Kings ends with a record of destruction and Jeremiah speaks throughout of destruction and Ezekiel commences with destruction and ends with consolation and Isaiah is full of consolation; therefore we put destruction next to destruction and consolation next to consolation.
“Destruction next to destruction and consolation next to consolation” — historical order is explicitly rejected in favor of what we might call a theologically thematic order. Or rather, history in the relatively short term gives way to the great arc of history as a whole.
As one looks at a cabinet of scrolls, little about that technology suggests sequence. It is true that the scrolls could be organized in “reading order” — in the case of Hebrew, right to left and top to bottom — but they would not have to be so ordered. And it would be very easy in any case for scrolls to be removed from one pigeonhole and then replaced in another. Surely that was a commonplace occurrence. And this might help to explain why the rabbis were debating this matter at such a (relatively) late date in the history of the Hebrew scriptures. The canon itself had been effectively set for centuries — though there are still debates in the Talmud about whether the Song of Songs and Esther belong — yet, as we just saw, even the basic principles of organization, beyond the threefold division of Torah, Nevi’im, and Ketuvim, are still up for grabs.
I want to suggest here that the primary reason for this debate occurring when it does, and in the way it does, is technological. That is, the question of the order or sequence of Biblical books is forced upon the rabbis by the arrival of the technology that would ultimately displace the scroll cabinet: the codex. And Jesus’ invocation of the Biblical sequence, from “Abel to Zechariah,” can be seen as both an anticipation of the rise of the codex and a commendation of that technology — or at least, a commendation of the patterns of thought the codex supports. There is an intimate connection between the Christian message, the Christian scriptures, and the codex. This may mark one central way in which Christians are “People of the Book.”…
Marginal revolutionaries: The crisis and the blogosphere have opened mainstream economics up to new attack
December 31, 2011
POINT UDALL on St Croix, one of the US Virgin Islands, is a far-flung, wind-whipped spot. You cannot travel farther east without leaving the United States. Visitors can pose next to a stone sundial commemorating America’s first dawn of the third millennium. A couple named “Sigi + Ricky” have added a memento of their own, an arrow-struck heart scrawled on the perimeter wall in memory of “us”.
Warren Mosler, an innovative carmaker, a successful bond-investor and an idiosyncratic economist, moved to St Croix in 2003 to take advantage of a hospitable tax code and clement weather. From his perch on America’s periphery, Mr Mosler champions a doctrine on the edge of economics: neo-chartalism, sometimes called “Modern Monetary Theory”. The neo-chartalists believe that because paper currency is a creature of the state, governments enjoy more financial freedom than they recognise. The fiscal authorities are free to spend whatever is required to revive their economies and restore employment. They can spend without first collecting taxes; they can borrow without fear of default. Budget-makers need not cower before the bond-market vigilantes. In fact, they need not bother with bond markets at all.
The neo-chartalists are not the only people telling governments mired in the aftermath of the global financial crisis that they could make things better if they would shed old inhibitions. “Market monetarists” favour more audacity in the monetary realm. Tight money caused America’s Great Recession, they argue, and easy money can end it. They do not think the federal government can or should rescue the economy, because they believe the Federal Reserve can.
The “Austrian” school of economics, which traces its roots to 19th-century Vienna, is more sternly pre-Freudian: more inhibition, not less, is its prescription. Its adherents believe that part of the economy’s suffering is necessary, an inevitable consequence of past excesses. They do not think the Federal Reserve can rescue the economy. They seek instead to rescue the economy from the Fed.
You tell me it’s the institution
These three schools of macroeconomic thought differ in their pedigree, in their beliefs, in their persuasiveness and in their prospects. Yet they also have a lot in common. They have thrived on the back of massive disillusion with mainstream economics, which held that the economy would grow steadily if central banks kept inflation low and stable, and that there were no great gains in the offing from fiscal expansion, nor any great cause for concern over financial instability. And they have benefited hugely from blogging.
Economics, perhaps more than any other discipline, has taken to blogs with gusto. Mainstream figures such as Paul Krugman and Greg Mankiw have commanded large online audiences for years, audiences which include many of their peers. But the crisis has made the academic establishment fractious and vulnerable. Highly credentialed economists now publicly mock each other’s ignorance and foolishness. That has created an opening for the less decorated members of the guild, and the truly peripheral. In the blogosphere anywhere can be, as the title of Mr Mosler’s blog has it, “The Center of the Universe”.
In a world beset by doubt, there are great opportunities for those happy to pursue their beliefs to their logical conclusions and thrillingly thoroughgoing in the way they do so. Is fiscal stimulus not working? Then do more of it, say the neo-chartalists. Are monopolies and price controls a problem? Then get rid of the central bank’s monopoly in setting the price of credit and the supply of government money, say the Austrians. Damn the torpedoes and never mind the naysayers—acolytes in the comments section will sort them out.
What’s more, put into the context of a pathetic response to the current crisis, the ideas offered by these very different schools all take on a similar form: that policymakers are overly worried about something that should concern them less. The Austrians see the bogeyman as deflation, the fear of which inflates bubbles. The market monetarists, diametrically opposed, see exaggerated fear of inflation. And the economy is getting too little help from fiscal stimulus, according to neo-chartalists, because of the government’s superstitious fear of insolvency.
The clearest example of the power of blogging as a way of getting fringe ideas noticed is “The Money Illusion”, a blog by Scott Sumner of Bentley University, in Waltham, Massachusetts. In the wake of the financial crisis Mr Sumner, a proponent of market monetarism, felt he had something to say, but no great hope of being heard.
We’d all love to see the plan
And so on February 2nd 2009 he started to blog. He was not, he admitted, a “natural blogger”, which is to say his posts were long, tightly-argued and self-deprecating (“consider me an eccentric economist at a small school taking potshots from the sidelines”). But he attracted thoughtful comments and replied in kind. On February 25th, he earned a link from Tyler Cowen, a professor at George Mason University whose “Marginal Revolution” blog is widely respected. And one month after he started, Mr Krugman devoted a short post to rebuffing him.
To be noticed by Mr Krugman is a big thing for a blogger; all the current heterodoxies court such attention, with neo-chartalists churning up his comment threads and Austrians challenging him to set-piece debates. The more Mr Krugman wrestles with them, the more attention they garner—a correlation that has made him wary. “I’ll link to any work I find illuminating, whoever it’s from,” he writes. “I’ll link to work I think is deeply wrong only if it comes from someone who already has a following.” Otherwise, “why give him a platform?”
Mr Sumner’s blog not only revealed his market monetarism to the world at large (“I cannot go anywhere in the world of economics…without hearing his name,” says Mr Cowen). It also drew together like-minded economists, many of them at small schools some distance from the centre of the economic universe, who did not realise there were other people thinking the same way they did. They had no institutional home, no critical mass. The blogs provided one. Lars Christensen, an economist at a Danish bank who came up with the name “market monetarism”, says it is the first economic school of thought to be born in the blogosphere, with post, counter-post and comment threads replacing the intramural exchanges of more established venues.
This invisible college of bloggers focuses first on the level of spending on American products: America’s domestic output, valued at the prices people pay for it. This is what economists call “nominal” GDP (NGDP), as opposed to “real” GDP, which strips out the effects of inflation. They think the central bank should promise to keep NGDP on a steady upward path, rising at, say, 5% a year. Such growth might come about because more stuff is bought (“real” growth) or because prices are higher (inflation). Mr Sumner’s disinhibition is to encourage the Fed not to care which of the two is doing more of the work…
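A gloss on the arithmetic here (mine, not the article’s): nominal growth decomposes, to a close approximation, into real growth plus inflation,

$$ g_{\mathrm{NGDP}} \;\approx\; g_{\mathrm{real}} + \pi, $$

so a 5% NGDP path is satisfied equally by 3% real growth with 2% inflation or by 1% real growth with 4% inflation. The indifference Mr Sumner urges on the Fed is indifference between points on that trade-off.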
December 30, 2011
In early October, U.S. officials accused Iranian operatives of planning to assassinate Saudi Arabia’s ambassador to the United States on American soil. Iran denied the charges, but the episode has already managed to increase tensions between Washington and Tehran. Although the Obama administration has not publicly threatened to retaliate with military force, the allegations have underscored the real and growing risk that the two sides could go to war sometime soon — particularly over Iran’s advancing nuclear program.
For several years now, starting long before this episode, American pundits and policymakers have been debating whether the United States should attack Iran and attempt to eliminate its nuclear facilities. Proponents of a strike have argued that the only thing worse than military action against Iran would be an Iran armed with nuclear weapons. Critics, meanwhile, have warned that such a raid would likely fail and, even if it succeeded, would spark a full-fledged war and a global economic crisis. They have urged the United States to rely on nonmilitary options, such as diplomacy, sanctions, and covert operations, to prevent Iran from acquiring a bomb. Fearing the costs of a bombing campaign, most critics maintain that if these other tactics fail to impede Tehran’s progress, the United States should simply learn to live with a nuclear Iran.
But skeptics of military action fail to appreciate the true danger that a nuclear-armed Iran would pose to U.S. interests in the Middle East and beyond. And their grim forecasts assume that the cure would be worse than the disease — that is, that the consequences of a U.S. assault on Iran would be as bad as or worse than those of Iran achieving its nuclear ambitions. But that is a faulty assumption. The truth is that a military strike intended to destroy Iran’s nuclear program, if managed carefully, could spare the region and the world a very real threat and dramatically improve the long-term national security of the United States.
DANGERS OF DETERRENCE
Years of international pressure have failed to halt Iran’s attempt to build a nuclear program. The Stuxnet computer worm, which attacked control systems in Iranian nuclear facilities, temporarily disrupted Tehran’s enrichment effort, but a report by the International Atomic Energy Agency this past May revealed that the targeted plants have fully recovered from the assault. And the latest IAEA findings on Iran, released in November, provided the most compelling evidence yet that the Islamic Republic has weathered sanctions and sabotage, allegedly testing nuclear triggering devices and redesigning its missiles to carry nuclear payloads. The Institute for Science and International Security, a nonprofit research institution, estimates that Iran could now produce its first nuclear weapon within six months of deciding to do so. Tehran’s plans to move sensitive nuclear operations into more secure facilities over the course of the coming year could reduce the window for effective military action even further. If Iran expels IAEA inspectors, begins enriching its stockpiles of uranium to weapons-grade levels of 90 percent, or installs advanced centrifuges at its uranium-enrichment facility in Qom, the United States must strike immediately or forfeit its last opportunity to prevent Iran from joining the nuclear club.
Some states in the region are doubting U.S. resolve to stop the program and are shifting their allegiances to Tehran. Others have begun to discuss launching their own nuclear initiatives to counter a possible Iranian bomb. For those nations and the United States itself, the threat will only continue to grow as Tehran moves closer to its goal. A nuclear-armed Iran would immediately limit U.S. freedom of action in the Middle East. With atomic power behind it, Iran could threaten any U.S. political or military initiative in the Middle East with nuclear war, forcing Washington to think twice before acting in the region. Iran’s regional rivals, such as Saudi Arabia, would likely decide to acquire their own nuclear arsenals, sparking an arms race. To constrain its geopolitical rivals, Iran could choose to spur proliferation by transferring nuclear technology to its allies — other countries and terrorist groups alike. Having the bomb would give Iran greater cover for conventional aggression and coercive diplomacy, and the battles between its terrorist proxies and Israel, for example, could escalate. And Iran and Israel lack nearly all the safeguards that helped the United States and the Soviet Union avoid a nuclear exchange during the Cold War — secure second-strike capabilities, clear lines of communication, long flight times for ballistic missiles from one country to the other, and experience managing nuclear arsenals. To be sure, a nuclear-armed Iran would not intentionally launch a suicidal nuclear war. But the volatile nuclear balance between Iran and Israel could easily spiral out of control as a crisis unfolds, resulting in a nuclear exchange between the two countries that could draw the United States in, as well.
These security threats would require Washington to contain Tehran. Yet deterrence would come at a heavy price. To keep the Iranian threat at bay, the United States would need to deploy naval and ground units and potentially nuclear weapons across the Middle East, keeping a large force in the area for decades to come. Alongside those troops, the United States would have to permanently deploy significant intelligence assets to monitor any attempts by Iran to transfer its nuclear technology. And it would also need to devote perhaps billions of dollars to improving its allies’ capability to defend themselves. This might include helping Israel construct submarine-launched ballistic missiles and hardened ballistic missile silos to ensure that it can maintain a secure second-strike capability. Most of all, to make containment credible, the United States would need to extend its nuclear umbrella to its partners in the region, pledging to defend them with military force should Iran launch an attack.
In other words, to contain a nuclear Iran, the United States would need to commit substantial political and military capital to the Middle East in the midst of an economic crisis and at a time when it is attempting to shift its forces out of the region. Deterrence would come with enormous economic and geopolitical costs and would have to remain in place as long as Iran remained hostile to U.S. interests, which could mean decades or longer. Given the instability of the region, this effort might still fail, resulting in a war far more costly and destructive than the one that critics of a preemptive strike on Iran now hope to avoid…
Cal State’s Chutzpah: A hypocritical university goes silent while a math professor spouts anti-Israeli politics
December 30, 2011
Spend any time on a university campus, and the official culture will become obvious in short order. Bigotry and prejudice against blacks, gays, or women simply isn’t tolerated. Even a hint of racism or sexism is met with quick and decisive punishment. But anti-Israel rants on California’s public-college campuses seem to be tolerated, politely ignored, or even tacitly condoned by the powers that be.
Consider the case of David Klein, a math professor at California State University, Northridge (CSUN). Klein maintains a page on the university’s web server having nothing to do with mathematical physics, teacher education, or standardized testing, his main areas of research. Rather, the page is devoted to the evils of the state of Israel. Students and other members of the university can learn that “Israel is the most racist state in the world at this time” and that the Jewish state engages in “ethnic cleansing.” Visitors can discover, furthermore, that the answer to the question “Aren’t Palestinians equally responsible for the violence?” is an emphatic “No.” Klein provides links to an assortment of Israel haters and, of course, calls for a boycott of Israeli products and U.S. companies that do business with Israel.
It isn’t hard to imagine what would happen to a professor who used the university’s website to post content opposed, say, to illegal immigration or legal abortion, especially if the subject was outside his academic field. Administrators would demand that the pages disappear, and they’d cite the university’s policies, chapter and verse. We know university administrators would loudly condemn a professor who maintained a website off campus that had a “deleterious effect on the university’s reputation.” That’s what happened in 2010, when CSUN erupted in outrage over economics professor Kenneth Ng’s personal site, Bigbabykenny.com—which, his critics claimed, promoted illegal sex tourism in Thailand. Both the Gender and Women’s Studies Department and the Asian-American Studies Department publicly denounced Ng, and several students and faculty demanded that he take the site down or lose his job. But while university officials blasted the site, they stopped short of forcing Ng to take it down. Ng removed the site anyway, after weeks of public pressure. “I think he realized he’s putting the university in an awkward position,” CSUN provost Harold Hellenbrand told the campus newspaper, adding, “We expect that [faculty] act at a higher level than their profession requires.”
Yet no one within the CSUN community has condemned Klein, and his webpage remains active—though it clearly violates university policies, which state that “use of computers, networks, and computing facilities for activities other than academic purposes or University business is not permitted.” The university also prohibits associating its name with boycotts and other politically motivated activity. CSUN further retains the right to remove “any defamatory, offensive, infringing, or illegal materials” from its website at any time…
December 30, 2011
David Hume was born three hundred years ago, in 1711. The world has changed radically since his time, and yet many of his ideas and admonitions remain deeply relevant, though rather neglected, in the contemporary world. These Humean insights include the central role of information and knowledge for adequate ethical scrutiny, and the importance of reasoning without disowning the pertinence of powerful sentiments. They also include such practical concerns as our responsibilities to those who are located far away from us elsewhere on the globe, or in the future.
Hume’s influence on the nature and reach of modern thinking has been monumental. From epistemology to practical reason, from aesthetics to religion, from political economy to philosophy, from social and cultural studies to history and historiography, the intellectual world was transformed by the enlightening power of his mind. In his own time, Hume’s ideas encountered considerable resistance from more orthodox thinkers. One result of this was his being rejected for philosophy chairs first at Edinburgh University and then at the University of Glasgow. Yet the influence of Hume’s ideas has grown steadily and powerfully over time. Indeed, as Nicholas Phillipson remarks in his insightful biography David Hume: The Philosopher as Historian: “David Hume’s reputation has never been higher.”
And yet some of Hume’s central but more iconoclastic ideas have not been brought adequately into contemporary discussion. This neglect continues despite the veneration of Hume as the quintessential “grand philosopher” of the Enlightenment. Many of Hume’s widely cited statements, which are often seen as “David Hume in summary,” fail to capture the largeness of the “understanding”—to use one of his favorite words—that Hume presented to us. The job is not made any easier by Hume’s tendency to make occasional remarks that suggest that he is “forgetting, or mis-stating, his [own] normative beliefs,” as Derek Parfit has recently pointed out in his far-reaching philosophical work On What Matters. The issue is of importance, since some of the points that Hume seems to overlook in his occasional remarks had received decisive argumentative support in his own writings.
I BEGIN WITH a perspicacious remark that Hume made in 1751, in an essay called “Of Justice,” to be published later in An Enquiry Concerning the Principles of Morals. In the early days of the increasing globalization in which Hume lived, with new trade routes and expanding economic relations across the world, Hume talked about the growing need to think afresh about the nature of justice, as we come to know more about people living elsewhere, with whom we have come to develop new relations:
Again suppose, that several distinct societies maintain a kind of intercourse for mutual convenience and advantage, the boundaries of justice still grow larger, in proportion to the largeness of men’s views, and the force of their mutual connexions. History, experience, reason sufficiently instruct us in this natural progress of human sentiments, and in the gradual enlargement of our regards to justice, in proportion as we become acquainted with the extensive utility of that virtue.

The remark is of interest in itself, and also helps us to understand the general idea of justice, and its particular application to global justice, that can be seen to be part of the Humean line of analysis. But it can also be used to illustrate Hume’s general arguments for the need to interrelate ethics and epistemology, and moral reasoning and human sentiments.
The underlying approach to justice here contrasts with the influential view of Hobbes, according to which there has to be a sovereign state for us to entertain any coherent idea of justice. Hobbes was moved by the idea that institutional demands of justice can be met only within the limits of a functioning sovereign state, which is needed to establish and support the required institutions. While Hume was deeply concerned about the importance of institutions, on which he made many penetrating observations, he was reluctant to allow the idea of justice to be narrowed by the boundaries of sovereignty, as if there were no issues of global justice that could take us beyond our national borders.
The overarching concern in the idea of justice is the need to have just relations with others—and even to have appropriate sentiments about others; and what motivates the search is the diagnosis of injustice in ongoing arrangements. In some cases, this might demand the need to change an existing boundary of sovereignty—a concern that motivated Hume’s staunchly anti-colonial position. (He once remarked, “Oh! How I long to see America and the East Indies revolted totally & finally.”) Or it might relate to the Humean recognition that as we expand trade and other relations with foreign countries, our sentiments as well as our reasoning have to take note of the recognition that “the boundaries of justice still grow larger,” without the necessity to place all the people involved in our conception of justice within the confines of one sovereign state.
As it happens, contemporary theories of justice have largely followed the Hobbesian route rather than the Humean one. They have tended to limit their considerations of justice within the boundaries of a particular state. In an important essay in 2005 called “The Problem of Global Justice,” Thomas Nagel explained that “if Hobbes is right, the idea of global justice without a world government is a chimera.” The most influential modern theory of justice, namely John Rawls’s theory of “justice as fairness,” presented in his epoch-making book A Theory of Justice, concentrates on the identification of appropriate “principles of justice” that fix the “basic institutional structure” of a society, in the form of a cluster of ideal institutions for a sovereign state. This confines the principles of justice to the members of a particular sovereign state. It is worth noting that in a later work, The Law of Peoples, Rawls invokes a kind of “supplement” to this one-country pursuit of the demands of justice—but in dealing with people elsewhere, Rawls’s focus is not on justice, but on the basic demands of civilized and humane behavior across the borders.
Nagel, too, confines his analysis of global propriety not to the demands of justice, but to a “minimal humanitarian morality,” since he, too, takes the view that it is “very difficult to resist Hobbes’s claim about the relation between justice and sovereignty.” Hume’s exploration of how “the boundaries of justice” must “grow larger” in a more globalized world contrasts quite sharply with the Hobbesian way of thinking, and thus differs from the approach chosen by most of the contemporary theorists of justice. His approach has many implications for the way we should explore the idea of justice…
December 29, 2011
The best way to eliminate grade inflation is to take professors out of the grading process: Replace them with professional evaluators who never meet the students, and who don’t worry that students will punish harsh grades with poor reviews. That’s the argument made by leaders of Western Governors University, which has hired 300 adjunct professors who do nothing but grade student work.
“They think like assessors, not professors,” says Diane Johnson, who is in charge of the university’s cadre of graders. “The evaluators have no contact with the students at all. They don’t know them. They don’t know what color they are, what they look like, or where they live. Because of that, there is no temptation to skew results in any way other than to judge the students’ work.”
Western Governors is not the only institution reassessing grading. A few others, including the University of Central Florida, now outsource the scoring of some essay tests to computers. Their software can grade essays thanks to improvements in artificial-intelligence techniques. Software has no emotional biases, either, and one Florida instructor says machines have proved more fair and balanced in grading than humans have.
These efforts raise the question: What if professors aren’t that good at grading? What if the model of giving instructors full control over grades is fundamentally flawed? As more observers call for evidence of college value in an era of ever-rising tuition costs, game-changing models like these are getting serious consideration.
Professors do score poorly when it comes to fair grading, according to a study published in July in the journal Teachers College Record. After crunching the numbers on decades’ worth of grade reports from about 135 colleges, the researchers found that average grades have risen for 30 years, and that A is now the most common grade given at most colleges. The authors, Stuart Rojstaczer and Christopher Healy, argue that a “consumer-based approach” to higher education has created subtle incentives for professors to give higher marks than deserved. “The standard practice of allowing professors free rein in grading has resulted in grades that bear little relation to actual performance,” the two professors concluded.
Naturally, the standard grading model has plenty of defenders, including some who argue that claims of grade inflation are exaggerated—students could, after all, really be earning those higher grades. The current system forges a nurturing relationship between instructor and student and gives individualized attention that no robot or stranger could give, this argument goes.
But the efforts at Western Governors and Central Florida could change that relationship, and point to ways to pop any grade-inflation bubble.
An Army of Graders
To understand Western Governors’ approach, it’s worth a reminder that the entire institution is an experiment that turns the typical university structure on its head. Western Governors is entirely online, for one thing. Technically it doesn’t offer courses; instead it provides mentors who help students prepare for a series of high-stakes homework assignments. Those assignments are designed by a team of professional test-makers to prove competence in various subject areas.
The idea is that as long as students can leap all of those hurdles, they deserve degrees, whether or not they’ve ever entered a classroom, watched a lecture video, or participated in any other traditional teaching experience. The model is called “competency-based education.”
Designers of Western Governors do not intend to compete with Harvard or any other traditional institution. The online university throws a lifeline to nontraditional students who can’t make it to those campuses.
Ms. Johnson explains that Western Governors essentially splits the role of the traditional professor into two jobs. Instructional duties fall to a group the university calls “course mentors,” who help students master material. The graders, or evaluators, step in once the homework is filed, with the mind-set of, “OK, the teaching’s done, now our job is to find out how much you know,” says Ms. Johnson. They log on to a Web site called TaskStream and pluck the first assignment they see. The institution promises that every assignment will be graded within two days of submission.
Emily L. Child is one of the evaluators. She’s a stay-at-home mother of three who lives near Salt Lake City. Her kitchen table is her faculty office. She grades 10 to 15 assignments per day, six days a week, working early in the morning, before her kids are up, or in the afternoon, while they nap. She estimates that she has graded 14,400 assignments in the six years she has worked for the university.
Western Governors requires all evaluators to hold at least a master’s degree in the subject they’re grading. Ms. Child, a former teacher, grades assignments only in the education major. A typical assignment (the university calls it a “task”), she says, involves a student’s submitting a sample lesson plan or classroom strategies. “I enjoy that it allows me to stay current as a teacher,” she says.
Evaluators are required to write extensive comments on each task, explaining why the student passed or failed to prove competence in the requisite skill. No letter grades are given—students either pass or fail each task. Officials say a pass in a Western Governors course amounts to a B at a traditional university.
All evaluators initially receive a month of training, conducted online, about how to follow each task’s grading guidelines, which lay out characteristics of a passing score.
The identities of the evaluators are kept hidden from students, and even from the mentors. The goal is to protect the graders from students nagging them about grades, or from mentors who might lobby to pass a borderline student to better reflect on their teaching.
The graders must regularly participate in “calibration exercises,” in which they grade a simulated assignment to make sure they are all scoring consistently. As the phrase suggests, the process is designed to run like a well-oiled machine.
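The article does not say how these calibration exercises are scored, but the underlying idea is simple inter-rater consistency. Below is a minimal, purely illustrative sketch of one way an office like Ms. Johnson’s might quantify it, assuming pass/fail judgments against a fixed rubric; every name and number in it is hypothetical.

```python
# Illustrative sketch only: the article does not describe Western Governors'
# actual scoring method. This shows one simple consistency measure -- mean
# pairwise agreement across graders on the same simulated assignment.

from itertools import combinations

# Each grader's verdicts on the same simulated task, judged against
# five rubric criteria (True = criterion met). Hypothetical data.
calibration_scores = {
    "grader_a": [True, True, False, True, True],
    "grader_b": [True, True, False, True, False],
    "grader_c": [True, True, False, True, True],
}

def pairwise_agreement(scores: dict) -> float:
    """Mean fraction of rubric criteria on which each pair of graders agrees."""
    pairs = list(combinations(scores.values(), 2))
    agreements = [
        sum(x == y for x, y in zip(a, b)) / len(a)
        for a, b in pairs
    ]
    return sum(agreements) / len(agreements)

if __name__ == "__main__":
    rate = pairwise_agreement(calibration_scores)
    print(f"Mean pairwise agreement: {rate:.0%}")  # 87% for the data above
    # A grading office might flag graders for retraining whenever
    # agreement falls below some chosen threshold, say 90%.
```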
Some evaluators object to the system at first, says Ms. Johnson, especially professors who have come from traditional higher education. Some insist that they don’t need to justify each grade they give, arguing that they know a passing assignment when they see it. “That’s hogwash,” she says. “If you know it when you see it, then tell us what it is you see.”
Other evaluators want to push talented students to do more than the university’s requirements for a task, or to allow a struggling student to pass if he or she is just under the bar. “Some people just can’t acclimate to a competency-based environment,” says Ms. Johnson. “I tell them, If they don’t buy this, they need to not be here.”
Even Ms. Johnson had to be convinced when she started out at Western Governors, after having taught school and helped to develop instructional standards for the Utah State Office of Education. “I was an academic snob,” she says, noting that she took a position at the university because she needed a job. “As I was going through their training, I began to think, Oh, my gosh I think they have something here.”…
December 29, 2011
IN OCTOBER 1991, astrophysicists observed something incredible in the skies above Dugway Proving Ground, a US Army testing facility in a remote corner of Utah. It was a cosmic ray with an enormous amount of energy – equivalent to the kinetic energy of a baseball travelling at 100 kilometres per hour, but compressed into a subatomic particle. It came to be known as the Oh-My-God particle, and though similar events have been recorded at least 15 times since, mainstream physicists remain baffled by them.
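A back-of-the-envelope check of that comparison (my arithmetic, not the article’s, assuming a regulation 145-gram baseball): at 100 km/h, or about 27.8 m/s,

$$ E = \tfrac{1}{2}mv^{2} = \tfrac{1}{2}(0.145\,\mathrm{kg})(27.8\,\mathrm{m/s})^{2} \approx 56\ \mathrm{J} \approx 3.5\times10^{20}\ \mathrm{eV}, $$

which matches the order of magnitude of the 1991 event, measured at roughly $3\times10^{20}$ eV (about 50 joules), all carried by a single subatomic particle.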
To Jim Carter, a trailer-park owner in Enumclaw, Washington, ultra-high-energy cosmic rays pose no problem. They offer proof of a radical theory of the universe he has been developing for 50 years.
In Carter’s theory, these rays are photons left over from the earliest stage of cosmic evolution. He calls them “apocalyptic photons” and believes that one of them was responsible for the Tunguska event in 1908, in which a mysterious something from outer space flattened 2100 square kilometres of Siberian forest.
Carter’s ideas are not taken seriously by the physics mainstream. He does not have a PhD and has never had any of his work published in a scientific journal. He has just a single semester of university education, which was enough to convince him that what was being taught in physics departments was an offence to common sense.
In response, Carter went off and developed his own ideas. Five decades on he has his very own theory of everything, an idiosyncratic alternative to quantum mechanics and general relativity, based on the idea that all matter is composed of doughnut-shaped particles called circlons. Since the 1970s he has articulated his ideas in a series of self-published books, including his magnum opus, The Other Theory of Physics.
For the past 18 years I have been collecting the works of what I have come to call “outsider physicists”. I now have more than 100 such theories on my shelves. Most of them are single papers, but a number are fully fledged books, often filled with equations and technical diagrams (though I do have one that is couched as a series of poems and another that is written as a fairy tale). Carter’s is by far the most elaborate work I have encountered.
The mainstream science world has a way of dealing with people like this – dismiss them as cranks and dump their letters in the bin. While I do not believe any outsider I have encountered has done any work that challenges mainstream physics, I have come to believe that they should not be so summarily ignored.
Consider the sheer numbers. Outsider physicists have their own organisation, the Natural Philosophy Alliance, whose database lists more than 2100 theorists, 5800 papers and over 1300 books worldwide. They have annual conferences, with this year’s proceedings running to 735 pages. In the time I have been observing the organisation, the NPA has grown from a tiny seed whose founder photocopied his newsletter onto pastel-coloured paper to a thriving international association with video-streamed events…
December 29, 2011
Philosophers have a long but scattered history of analyzing food. Plato famously details an appropriate diet in Book II of the Republic. The Roman Stoics, Epicurus and Seneca, as well as Enlightenment philosophers such as Locke, Rousseau, Voltaire, Marx, and Nietzsche, all discuss various aspects of food production and consumption. In the twentieth century, philosophers considered such issues as vegetarianism, agricultural ethics, food rights, biotechnology, and gustatory aesthetics. In the twenty-first century, philosophers continue to address these issues and new ones concerning the globalization of food, the role of technology, and the rights and responsibilities of consumers and producers. Typically, these philosophers call their work “food ethics” or “agricultural ethics.” But I think they sell themselves short. Philosophers do more than treat food as a branch of ethical theory. They also examine how it relates to the fundamental areas of philosophical inquiry: metaphysics, epistemology, aesthetics, political theory, and, of course, ethics. The phrase “philosophy of food” is more accurate. We might eventually come to think of the philosophy of food as a perfectly ordinary “philosophy of” if more philosophers address food issues and more colleges offer courses on the subject—or at least that is my hope.
But why is this subject – a footnote to Plato just like the rest of philosophy – not yet fully entrenched as a standard philosophical subject? Why do philosophers only occasionally address questions concerning food? The subject is obviously important and the scholarship on food has real pedigree. Some have argued that food is eschewed because of the perception that it is too physical and transient to deserve serious consideration (Telfer, 1996). Others have argued that food production and preparation have conventionally been regarded as women’s work and, therefore, viewed as an unworthy topic for a male-dominated profession (Heldke, 1992). Still others argue that the senses and activities associated with food (taste, eating, and drinking) have traditionally been seen as “lower senses” and are too primitive and instinctual to be analyzed philosophically (Korsmeyer, 2002). These are all plausible explanations.
But perhaps the real reason why relatively few philosophers analyze food is because it’s too difficult. Food is vexing. It is not even clear what it is. It belongs simultaneously to the worlds of economics, ecology, and culture. It involves vegetables, chemists, and wholesalers; livestock, refrigerators, and cooks; fertilizer, fish, and grocers. The subject quickly becomes tied up in countless empirical and practical matters that frustrate attempts to think about its essential properties. It is very difficult to disentangle food from its web of production, distribution, and consumption. Or when it is considered in its various use and meaning contexts, it is too often stripped of its unique food qualities and instead seen as, for example, any contextualized object, social good, or part of nature. It is much easier to treat food as a mere case study of applied ethics than to analyze it as something that poses unique philosophical challenges.
But things are starting to change. The level of public discourse about diet, health, and agriculture in the US is remarkably more sophisticated than it was only ten years ago. Food books are bestsellers, cooking shows are ubiquitous, and the public is more informed about food safety and food politics. The mainstream media no longer tends to blame malnutrition and food insecurity on overpopulation but on poverty and poor governance. And most people, I suspect, regardless of their take on animal ethics, would be sickened to learn that a staggering 56 billion land animals are slaughtered each year for food. Philosophers are not immune from these facts and trends. We are increasingly joining other academics, journalists, and citizens who take food very seriously. More philosophical work has been done on food and agriculture in the last five years than in the previous thirty. Hopefully, we are not just following a trend but helping to steer it in a more intelligent and responsible direction.
The role of philosophy is to cut through the morass of contingent facts and conceptual muddle to tackle the most basic questions about food: What is it exactly? How do we know it is safe? What should we eat? How should food be distributed? What is good food? These are simple yet difficult questions because they involve philosophical questions about metaphysics, epistemology, ethics, politics, and aesthetics. Other disciplinary approaches may touch on these questions concerning food but only philosophy addresses them explicitly. Once we have a clear understanding of philosophy’s unique role, we’ll all be in a better position to engage in dialogue aimed at improving our knowledge, practices, and laws. We should also gain a renewed appreciation for the scope and relevance of the discipline of philosophy itself…
December 28, 2011
Somehow, earlier this year, a philosopher managed to goad the world into vanquishing an evil villain. Perhaps more surprising was the philosopher in question: the man French society loves to mock, Bernard-Henri Lévy.
Celebrity doesn’t always travel well. The conditions it depends upon can be too local, too conditional. Try explaining Kim Kardashian to the Germans; try asking the Germans to explain David Hasselhoff to us. Still, the case of the famously self-regarding, righteous, impeccably coiffed French philosopher and media personality Bernard-Henri Lévy is singularly strange. The events of the past year—in which Lévy, operating freelance, seemed to prompt a broke and crumbling Europe into a humanitarian war in Libya—so obviously belong to a different era that Lévy has left in his wake a torrent of historical analogies: Perhaps he is Lawrence of Arabia, as a friendly French review recently suggested. Or perhaps he is Don Quixote.
One year ago, influence like this appeared far beyond Lévy’s reach. He has long been France’s most famous living philosopher, and was once an important one, but his media and social profile eclipsed his intellectual reputation. He was still suffering from the highly embarrassing Botul episode of 2010, in which Lévy had happened upon a philosophical spoof and, assuming it to be serious, cited its arguments as part of a critique of Immanuel Kant. (He had missed the crucial clue, which was that the fake philosopher, Jean-Baptiste Botul, was elaborating a philosophy called Botulism.) His journalism was often called glib, and his big 2006 book on America had been panned on the front page of the New York Times’ Sunday book review. When I called scholars of European ideas at Harvard and Columbia to talk about Lévy, they dismissed him as overhyped and irrelevant, respectively. At the beginning of 2011, Lévy was most frequently in the French press for his New York mistress, the heiress Daphne Guinness, who kept up a public theater of pining for him on Twitter.
But, as Lévy told me recently, “sometimes you are inhabited by intuitions that are not clear to you.” On February 23, the philosopher was in Cairo watching television images of Muammar Qaddafi’s retribution against the rebel towns around Benghazi, which the dictator and his sons had threatened to drown in “rivers of blood.” Lévy is most fully himself in stark humanitarian crises, when defending what he calls “the memory of the worst.” He is also the heir to a vast timber fortune, wealth that allows him a license to act on his instincts, and so he promptly found the name of rebel leader Mustafa Abdel-Jalil, arranged for a cameraman and for a private plane to fly him near the front, and within a few hours was in a hired car, driving off to war.
Lévy was a veteran of mass killing; he had seen it in a half-dozen conflicts, maybe, and driving through the desert towns east of Benghazi, he detected its early signs: blood-smeared walls, passersby wrapping themselves in hoods to keep their lungs free of contaminants. He foresaw a “crawling tragedy. Thirty, 40 dead a day. Maybe worse.” In Benghazi, Lévy spent the hour before their meeting frantically Googling Abdel-Jalil and leaping up to greet anyone walking past who might be the Libyan. When Abdel-Jalil did arrive (“short with a modest smile and the look of a stunned falcon”), Lévy had prepared his speech. “The world is watching,” he began. It was pompous, he realized, but “you have to say something.” He compared Benghazi to the Warsaw Ghetto, to Sarajevo. “Benghazi is the capital not only of Libya but of free men and women all over the world,” Lévy told the rebel leader.
“In the back of his mind, I’m sure, was the idea that I might be a fly-by-night,” Lévy wrote in his diary, “or delusional.” Indeed. Lévy told Abdel-Jalil that he could fly a rebel delegation to Paris on his plane and get them an audience with French president Nicolas Sarkozy. The rebels were badly outgunned, and Abdel-Jalil did not at this moment have a ton of other suitors. He agreed.
The philosopher had barely spoken with Sarkozy in three years and had rather loudly opposed the president’s election. Lévy got so stressed thinking about the call that he developed a migraine, but he phoned the presidential palace anyway and was promptly put through. The call dropped three times; it wasn’t a great connection. But the president agreed to meet with the Libyans, and the next Thursday they were all in his office in Paris, ringed by Sarkozy’s advisers.
Everyone was awkward. The Libyans asked Sarkozy to assassinate Qaddafi. This was impossible. Lévy sensed that the rebels misunderstood their own case: “So maladroit, so not skilled, did not know the cause.” Lévy had sat down privately with Sarkozy the previous day and, grappling for a line of argument, wound up with rhetoric; if there was a massacre in Benghazi, he said, “the blood of the massacred will stain the French flag.” Sarkozy seemed to buy it. At the meeting with the rebels, the French president pledged a bombardment if he could secure the cooperation of the allies. In Lévy’s account, as the meeting emptied, Sarkozy said to him, “Feel free to, uh, say what you saw and heard.” Outside, Lévy told reporters that France would recognize the rebels as the Libyan government; he mentioned that “targeted operations” would come soon. Le Monde, bewildered, noted that the philosopher seemed to have taken the job of governmental spokesman. The man officially in charge of French foreign affairs—Alain Juppé, the foreign minister—was in Brussels at the time; he would later reportedly threaten to resign over the end run.
Things came together rapidly, for a war. Four days later, Lévy was flying a rebel general onto the freezing tarmac at Le Bourget, the Teterboro of Paris, for a meeting he had brokered with Hillary Clinton, and soon the Americans were committed; a few days later, Sarkozy called Lévy to tell him that the U.N. Security Council was in agreement. “I am proud of my country,” Lévy told his president. On March 19, the intervention began. Sarkozy had just done what Lévy had spent three decades urging politicians to do—had used the West’s military power to help avert an impending massacre. Lévy was quick to point out, to anyone who asked, that this would not alter his opposition to Sarkozy’s bid for reelection.
This is the story that Lévy has since unfurled, bannerlike, against a backdrop of official no-comments. Through reports in Le Monde, the countervailing perspective of the French bureaucracy has since emerged: They were planning a Libyan intervention all along, and Lévy’s actions were a sideshow. But the circumstantial evidence inclines Lévy’s way. For the war’s six-month duration, Lévy was Sarkozy’s exhorter and confessor in Paris, at crucial moments taking three calls a day from the French president, and his tour guide in Tripoli. “They say they had plans,” Lévy told me. “Okay, why not? It is a defense ministry. They have plans for literally everything: invading Vanuatu, repelling Mauritius, and so on.” He shrugged. “So?”
Wars are no longer supposed to begin like this. They are exercises in national interest and self-defense, not personal morality and valor. They are the product of military plans, not proddings from celebrity philosophers. And yet Libya—so far the most aggressive humanitarian intervention of the 21st century—depended on neither a broad public movement nor an urgent security threat. There was instead a chain of private conversations: Hillary Clinton moving Barack Obama, Nicolas Sarkozy moving Dmitri Medvedev, and at the chain’s inception this romantic propagandist, Bernard-Henri Lévy. “I think this war was probably launched by two statesmen,” Lévy told me. “Hillary Clinton and Sarkozy. More modestly, me.”…
This Sudanese General Founded The Janjaweed. So Why Is The Arab League Sending Him To ‘Help’ The Syrians?
December 28, 2011
For the first time in Syria’s nine-month-old uprising, there are witnesses to President Bashar al-Assad’s crackdown, which according to the United Nations has claimed more than 5,000 lives. Arab League observers arrived in the country on Dec. 26 and on Dec. 27 traveled to the city of Homs — the epicenter of the revolt, where the daily death toll regularly runs into the dozens, according to activist groups. Thousands of people took to the streets to protest against Assad upon the observers’ arrival, while activists said Syrian tanks withdrew from the streets only hours before the Arab League team entered the city.
“I am going to Homs,” insisted Sudanese Gen. Mohammad Ahmed Mustafa al-Dabi, the head of the Arab League observer mission, telling reporters that so far the Assad regime had been “very cooperative.”
But Dabi may be the unlikeliest leader of a humanitarian mission the world has ever seen. He is a staunch loyalist of Sudan’s President Omar al-Bashir, who is wanted by the International Criminal Court for genocide and crimes against humanity for his government’s policies in Darfur. And Dabi’s own record in the restive Sudanese region, where he stands accused of presiding over the creation of the feared Arab militias known as the “janjaweed,” is enough to make any human rights activist blanch.
Dabi’s involvement in Darfur began in 1999, four years before the region would explode in the violence that Secretary of State Colin Powell labeled “genocide.” Darfur was descending into war between the Arab and Masalit communities — the same fault line that would widen into a bloodier interethnic war a few years later. As the situation spiraled out of control, Bashir sent Dabi to Darfur to restore order.
According to Julie Flint and Alex De Waal’s Darfur: A New History of a Long War, Dabi arrived in Geneina, the capital of West Darfur, on Feb. 9, 1999, with two helicopter gunships and 120 soldiers. He would stay until the end of June. During this time, he would make an enemy of the Masalit governor of West Darfur. Flint and De Waal write:
Governor Ibrahim Yahya describes the period as ‘the beginning of the organization of the Janjawiid’, with [Arab] militia leaders like Hamid Dawai and Shineibat receiving money from the government for the first time. ‘The army would search and disarm villages, and two days later the Janjawiid would go in. They would attack and loot from 6 a.m. to 2 p.m., only ten minutes away from the army. By this process all of Dar Masalit was burned.’
Yahya’s account was supported five years later by a commander of the Sudan Liberation Army, a rebel movement in the region. “[T]hings changed in 1999,” he told Flint and De Waal. “The PDF [Popular Defense Forces, a government militia] ended and the Janjawiid came; the Janjawiid occupied all PDF places.”
Dabi provided a different perspective on his time in Darfur, but it’s not clear that he disagrees on the particulars of how he quelled the violence. He told Flint and De Waal that he provided resources to resolve the tribes’ grievances, and employed a firm hand to force the leaders to reconcile — “threatening them with live ammunition when they dragged their feet,” in the authors’ words. “I was very proud of the time I spent in Geneina,” Dabi said…
December 28, 2011
In the past two years, protesters against authoritarian regimes have begun to make heavy use of social-networking and media services, including Twitter, Facebook, YouTube, and cell phones, to organize, plan events, propagandize, and spread information outside the channels censored by their national governments. Those governments, grappling with this new threat to their hold on power, have responded by trying to unplug cyberspace.
Some examples: In April 2009, angry young Moldovans stormed government and Communist Party offices to protest what they suspected was a rigged election; authorities discontinued Internet service in the capital. In Iran, the regime cracked down on protesters objecting to fraudulent election outcomes in June 2009 by denying domestic access to servers and links, and by slowing down Internet service generally — although protesters and their supporters found ways around those restrictions. In Tunisia, when protests against President Zine el-Abidine Ben Ali escalated in December 2010, his government sought to block Twitter in the country and hacked the Facebook accounts of some Tunisian users in order to acquire their passwords. In Egypt, amid mass protests in Cairo and several other cities in January 2011, Hosni Mubarak’s government attempted to disconnect the Internet. But there, too, protesters found limited workarounds until the doomed regime eventually restored some services.
Authoritarians may have reason to fear cyberspace. It is widely believed that the proliferation of Internet access and other communications technologies empowers individuals and promotes democracy and the spread of liberty, usually at the expense of centralized authority. As Walter Wriston optimistically put it in his 1992 book The Twilight of Sovereignty: “As information technology brings the news of how others live and work, the pressures on any repressive government for freedom and human rights will soon grow intolerable because the world spotlight will be turned on abuses and citizens will demand their freedoms.”
Two decades later, the hope that cyberspace will promote international peace and cooperation shines brighter than ever. To this end, the Obama administration has undertaken a project to promote its vision of cyberspace around the world. It was launched with the 2009 announcement in Morocco of the “Civil Society 2.0 Initiative,” a collection of efforts to help grassroots organizations use cyberspace to advance their goals. As the president explained at a 2009 forum in Shanghai, responding to a question about Internet censorship, “The more open we are, the more we can communicate. And it also helps to draw the world together.”
Secretary of State Hillary Clinton echoed this sentiment in a 2010 speech at the Newseum in Washington, D.C., arguing that the Internet can help bridge differences between religious groups and create “one global community, and a common body of knowledge that benefits and unites us all.” In addition, she noted, there are the practical economic benefits of connectivity: cyberspace has become a critical ingredient for economic growth — “an on-ramp to modernity” — often by enabling producers to specialize and open new markets, and by generally improving productivity. Secretary Clinton further declared her intent to place Internet freedom on the agenda of the United Nations Human Rights Council; launch a program to use cyberspace to “empower citizens and leverage our traditional diplomacy” in cooperation with industry, academia, and nongovernmental organizations; and strengthen the Global Internet Freedom Task Force formed during the Bush administration.
Since then, the Obama administration has promoted cooperation with the private firms that own and operate the Internet’s infrastructure in hopes of establishing standards to promote freedom in cyberspace; it has protested diplomatically when foreign states impinge on their citizens’ free use of the Internet; and it has resisted foreign attempts to transfer Internet governance from technical organizations to political organizations, most notably to the United Nations. Meanwhile, the State Department’s Bureau of Democracy, Human Rights, and Labor issued $5 million in grants to private organizations developing technologies to enable unrestricted access to the Internet and secure communications over mobile devices. The department hopes to issue $30 million more.
Secretary Clinton’s Newseum speech, and a follow-up address she delivered in early 2011 at George Washington University, are important not only because of the initiatives they launched, but also because they articulate the administration’s perception of cyberspace’s role in international relations. Central to this view is
the freedom to connect — the idea that governments should not prevent people from connecting to the Internet, to websites, or to each other. The freedom to connect is like the freedom of assembly, only in cyberspace. It allows individuals to get online, come together, and hopefully cooperate.
Indeed, Clinton equated the “freedom to connect” with the freedom of expression and association as codified in the First Amendment to the U.S. Constitution and in Article 19 of the Universal Declaration of Human Rights.
While well-intentioned, the administration’s efforts to advance the cause of “Internet freedom” as a human right should raise some concerns. First, despite the admirable desire to apply the nation’s enduring principles to the rapidly evolving realm of high technology, framing “Internet freedom” as a human right risks weakening the very concept of human rights. Further, by lending its prestige and credibility to the international cause of Internet freedom, the U.S. government may actually make it more likely that tyrannical regimes will crack down on the Internet…