On December 23, 2011, the dons of Harvard University finally got to see Adam Wheeler sentenced to a year in prison. Wheeler, a twenty-five-year-old whom they admitted in 2007 on the strength of an academic record he’d fabricated out of thin air, had been caught again—and this was not something a young gentleman does to America’s most highly self-regarded institution of advanced credentialing.
A few months earlier, Wheeler had submitted a résumé to U.S. Green Data Inc., on which he said he had attended Harvard. Technically, this was true; he’d been one year short of graduating when someone at the school belatedly noticed he had falsified the credentials that won him admission, and that he had plagiarized the papers that won him scholarships and prestigious awards. But the ten-year probationary punishment that the Middlesex County Superior Court had meted out upon the discovery of his fabulism forbade him from ever claiming he had attended the school, and the new offending résumé landed on the desk of a Harvard alum, who forwarded it to a dean, who turned it over to the district attorney’s office. And Adam Wheeler, who attended Harvard and who had been forced by the Court to lie about his having attended Harvard, was packed off to jail for lying about his having attended Harvard. The school of George W. Bush and Henry Kissinger (the war criminal who was feted on campus this spring as a conquering hero) took all appropriate measures to ensure that its name would never be sullied by association with an immoral, egomaniacal charlatan, at least one who never held high office. And all the useful knowledge that Wheeler picked up in more than two years of classes was no longer something he could draw on to contribute to society.
College credential fraud may seem too nitpicking an offense to justify throwing a nonviolent offender into the overcrowded prison system for a tour of the seasons. But Wheeler embarrassed Harvard; his puncture of arbitrary power was so trifling that, paradoxically, it couldn’t be ignored. Harvard officials had little choice but to make an example of him through an aggressive, custom-tailored prosecution whose real aim was to restore the correct order of things. Adam Wheeler, after all, is merely a mediocre public school graduate from Delaware. But Harvard—well, everyone knows that Harvard shines across the fair land as a beacon of meritocratic upward mobility universally accessible to a nationwide corps of upper-middle-class teenagers of arbitrary intellectual ability.
Take a look at the victim impact statement Harvard presented to the Court in 2010, and notice how the country’s mightiest and richest institution of enlightened learning asks for the maximum punishment to be inflicted upon a lying schoolboy. The victim wanted to send a message to the entire world that fraud on campus will not be tolerated, no ifs, ands, or buts about it:
Wheeler’s acts of deception and fraud not only harmed Harvard University directly, but also undermined the public perception of integrity in higher education nationally and around the world. We require honesty as well as excellence from our students, which is why, when we discovered Mr. Wheeler’s fraudulent conduct, we brought it to the attention of the district attorney’s office. In terms of sentencing, we believe restitution is appropriate, so that the financial aid and other funds that Mr. Wheeler stole from Harvard can be put to use to support deserving Harvard students. We also feel strongly that Mr. Wheeler should be prohibited from profiting from his fraudulent schemes for as long a period as the court has the power to impose. Were he permitted to profit from the notoriety he already has gained as a result of his flagrant dishonesty, all of higher education would continue to be negatively impacted.

It’s not as though Harvard lacks for alums whom the institution should be ashamed to be associated with, or who have befouled “the public perception of integrity in higher education.” Wheeler’s final prosecution came just three years after a cabal of alumni known as the financial services sector destroyed the economy by playing computer games with the planet’s accumulated wealth. There was former Harvard President, Treasury Secretary, and deregulator extraordinaire Larry Summers; there was Summers’s predecessor at Treasury and mentor in the intricate art of fucking up the economies of weaker nations for no good reason, Robert Rubin (AB ’60 and member of the Corporation, Harvard’s governing body); there was the CEO of America’s most ruthless megabank (“the smart ones,” in financial expert circles), Lloyd Blankfein (AB ’75, JD ’78); and then there were approximately 100 percent of the other key figures who engineered this wholly preventable near-reversion to the state of nature—all Crimson men with at least one tour of duty.
The university offers no protest as these apocalypse machinists drop John Harvard’s name in their pursuit of sinecures atop whatever remaining elite institutions and systems they have yet to destroy; instead, it covers them with laurels and showers them with money.
Take the case of Andrei Shleifer, a prominent Harvard economics professor and former head of the disgraced Russia Project at the Harvard Institute for International Development. In the nineties, Shleifer won a contract from the U.S. government to administer “shock therapy” to the Russian economy, to theorize and implement its transformation from failed socialism to a market economy dominated by private capital and guided by legal norms. The key to establishing a favorable investment climate was teaching the Russians respect for the rule of law—because every academic economist knows that underlying market economies lies a rational consensus, an agreement to play by the rules duly articulated and enforced. And out of this orthodoxy came not only a failed mission and a reaction inside Russian power circles that set a baleful course, but a tale of personal and institutional corruption as awesome as you are likely to find anywhere in scandal-plagued higher education, with sordid details of fabricated expense accounts, no-show jobs, and leisured junkets that make Adam Wheeler seem like a piker. The FBI and U.S. Attorney’s Office investigations of Shleifer’s activities turned up large quantities of credible evidence of money laundering, embezzlement, tax evasion, and fraud, evidence that directly implicated his wife, a hedge fund manager. In 2004, a Boston judge in the federal district court ruled Harvard liable for breach of contract in the Shleifer debauch and found the celebrated economist liable for conspiracy to defraud the U.S. government. Adam Wheeler’s tangle of lies cost Harvard $45,000 and change, a pittance next to the $26.5 million it paid in the Shleifer settlement, the largest in the university’s history…
The United States has been shaped by three far-reaching political revolutions: Thomas Jefferson’s “revolution of 1800,” the Civil War, and the New Deal. Each of these upheavals concluded with lasting institutional and cultural adjustments that set the stage for new phases of political and economic development. Are we on the verge of a new upheaval, a “fourth revolution” that will reshape U.S. politics for decades to come? There are signs to suggest that we are. In fact, we may already be in the early stages of this twenty-first-century revolution.
The Great Recession that began in 2008 caused many to suggest that the United States is entering a period of “decline” during which it will lose its status as the world’s most powerful and prosperous nation-state. The metaphor of “decline” presumes that the American people will sit by passively as their standard of living and international status erode year by year. That is unlikely to occur: Americans will do everything in their power to reverse any such process of national decline. Thus, what the United States is now facing is not a gradual decline but a political upheaval that will reshape its politics, policies, and institutions for a generation or two to come. There is no guarantee that the nation will emerge from this crisis with its superpower status intact, just as there were no guarantees that it would emerge from the Civil War or the Great Depression in a position to extend its wealth and power. The most that we can say is that, in the decade ahead, Americans will struggle to forge a governing coalition that can guide the nation toward a path of renewed growth and dynamism.
The financial crisis and the long recession, with the strains they have placed upon national income and public budgets, are only the proximate causes of the political crisis now unfolding in the United States. The deeper causes lie in the exhaustion of the post-war system of political economy that took shape in the 1930s and 1940s. One pillar of that system emerged out of the New Deal with its emphasis upon national regulation of the economy, social insurance, expanding personal consumption, and public debt; the second emerged out of World War II with the U.S. dollar as the world’s reserve currency and the U.S. military as the protector of the international trading system. The post-war system created the basis for unprecedented prosperity in the United States and the Western world. That system is now unwinding for several reasons, not least because the American economy can no longer underwrite the debt and public promises that have piled up over the decades. The urgent need to cancel or renegotiate these debts and public promises on short notice will ignite the upheaval referred to here as “the fourth revolution.” There will follow an extended period of conflict in the United States between the two political parties as they compete for support either to maintain the post-war system or to identify a successor to it.
It is not possible to outline in advance the precise lineaments of the fourth revolution. After all, few Americans living in 1798, 1858, or 1928 could have foreseen what was going to happen to their country in the years immediately ahead. The best that we can do is to look for some general patterns in these earlier events that might serve as guides for what is likely to happen in the United States in the next decade or two.
Notwithstanding its reputation for stability and continuity, the U.S. political system seems to resolve its deepest problems in relatively brief periods of intense and potentially destabilizing conflict. These events are what some historians have called our “surrogates for revolution” because, rather than overthrowing the constitutional order, they adjust it to developing circumstances.
There are a few clear reasons why the American system adjusts in this discontinuous fashion. The constitutional system, with its dispersed powers and competing institutional interests, resists preemptive and over-arching solutions to accumulating problems. At the same time, America’s dynamic economy and highly mobile society are constantly creating new challenges to which the political system cannot easily respond. At times, these challenges have built up to a point where the differences between parties and interests have been so fundamental as to defy efforts to resolve them through the ordinary channels of politics.
There are a few superficial similarities in the structure of these earlier events that might provide clues as to what we might look for in any new upheaval. These events—Jefferson’s revolution, the sectional conflict, and the crisis of the 1930s and 1940s—extended over several election cycles before producing a stable resolution; the political settlements that emerged from these conflicts lasted roughly a lifetime—sixty or seventy years—until they began to unravel under the pressure of new developments; and each event ended with the ouster of the political party that had dominated the system during the previous era.
At a deeper level, each of these realignments discredited an established set of governing elites and brought into power new groups of political and cultural leaders. After reorganizing national politics around new principles, these new elites took control of the national government, staffing its departments and agencies with their political supporters. As they strengthened their control over the system, they also gradually extended their influence into important subsidiary organizations, such as newspapers, college and university faculties, book publishers, and civic associations. College and university faculties and our major newspapers today are overwhelmingly Democratic; from the 1870s into the 1930s, they were generally Republican. This is one of the factors that cements any realignment in place and gives it the stability to persist over many decades.
One can also identify in all three cases an abrupt change of policy, a broken agreement, or some perceived violation of faith that poisoned relations between the parties, drove them further apart, and closed off possibilities for compromise. The Federalists’ passage of the Alien and Sedition Acts (1798), which opponents saw as an attempt to criminalize criticism of the Adams administration, provoked all-out warfare with Jefferson’s fledgling party and convinced Jefferson and James Madison that their ultimate goal should be the destruction of the Federalist Party. The Democratic Party’s repeal of the Missouri Compromise in 1854 brought the Republican Party into existence and sharpened the sectional conflict by several degrees. In 1932, FDR claimed (falsely in this case) that the bankers and industrialists had caused the Depression by irresponsible speculation in stocks. Because of this violation of trust, he declared that their activities would have to be supervised more closely by federal authorities.
More fundamentally, each of these realignments was carried out and then maintained by one dominant political party. Following the election of 1800, Jefferson’s (and later Jackson’s) Democratic party defined the parameters of political competition until the outbreak of the sectional crisis in the 1850s. The Republican Party led the nation through the Civil War and maintained its dominant status throughout the post-bellum era of industrial development. In the midst of the Great Depression, FDR’s Democratic Party organized the modern system around the politics of public spending and national regulation. The Democrats completed this revolution after World War II when the United States began to assume responsibilities in the international arena commensurate with those it had already assumed in the domestic economic arena…
August 1, 2012
SEEMINGLY, IT was a historic moment. The prime minister of Israel and leader of the Likud Party publicly embraced the two-state solution. A short while into his second term in office, ten days after the newly inaugurated president of the United States promised in Cairo to “personally pursue this outcome,” Netanyahu made an about-face, breaking with the traditional course he and his political camp had long pursued.
Thus, more than ninety years after the Balfour Declaration of November 1917, it appeared the successors of the founders of Zionism were moving toward a historic compromise to resolve the conflict embedded in that intentionally vague statement. It is the conflict between “the establishment in Palestine of a national home for the Jewish people” and the stipulation that “nothing shall be done which may prejudice the civil and religious rights of existing non-Jewish communities in Palestine.”
Now it appeared that this dispute, which for decades had split Israeli society into rival political camps, could be resolved. Forty-two years after the occupation of the West Bank and the Gaza Strip, formerly held by Jordan and Egypt, a right-wing prime minister declared his willingness to return these territories to the people living in them, as well as his consent for the establishment of a new, independent state of Palestine.
But almost immediately, other voices emerged questioning whether this solution—dividing the land into two independent, coexisting states—was still feasible; whether the “window of opportunity” that might have been available in the past had already closed for good; whether the Israeli settlement enterprise in the West Bank had reached a point of no return, creating a new situation that did not allow for any partition; and whether the division of political powers within Israeli society had changed, making the dramatic move impossible. As Robert Serry, UN special coordinator for the Middle East peace process, put it:
If the parties do not grasp the current opportunity, they should realize the implication is not merely slowing progress toward a two-state solution. Instead, we could be moving down the path toward a one-state reality, which would also move us further away from regional peace.
This article focuses on the Israeli side of this equation in part because the Palestinian leadership, as far back as 1988, made a strategic decision favoring the two-state solution, presented in the Algiers declaration of the Palestinian National Council. The Arab League, for its part, voted in favor of a peace initiative that would recognize the state of Israel and set the terms for a comprehensive Middle East settlement. Meanwhile, various bodies of the international community reasserted partition of the land as their formal policy. But Israel, which signed the Oslo accords nearly two decades ago, has been moving in a different direction. And Netanyahu’s stirring words of June 2009 now ring hollow.
Israel never overtly spurned a two-state solution involving land partition and a Palestinian state. But it never acknowledged that West Bank developments had rendered such a solution impossible. Facing a default reality in which a one-state solution seemed the only option, Israel chose a third way—the continuation of the status quo. This unspoken strategic decision has dictated its policies and tactics for the past decade, simultaneously safeguarding political negotiations as a framework for the future and tightening Israel’s control over the West Bank. In essence, a “peace process” that allegedly is meant to bring the occupation to an end and achieve a two-state solution has become a mechanism to perpetuate the conflict and preserve the status quo.
This reality and its implications are best understood through a brief survey of the history that brought the Israelis and Palestinians to this impasse. The story is one of courage, sincere efforts, internal conflicts on both sides, persistent maneuvering and elements of folly.
IN AUGUST 1993, the foreign ministers of Israel and the Palestine Liberation Organization (PLO), Shimon Peres and Mahmoud Abbas, signed a declaration of principles. In September of that year, Israeli prime minister Yitzhak Rabin and PLO chairman Yasir Arafat exchanged the “letters of recognition,” which led to an impressive signing ceremony on the White House lawn. Words about historical compromise, reconciliation and peace filled the air. The world perceived a true, deep change sweeping the Middle East, with both sides resolved to divide the land into two states.
Nevertheless, the negotiating partners’ starting points remained far apart. The Palestinians considered engaging in a process based on the acceptance of the 1967 borders to be a major compromise in itself. They believed their willingness to settle for territory representing 22 percent of mandatory Palestine was already an immense compromise, one leaving little room for further concession. Israel, in contrast, considered these borders the starting point for talks and never intended to withdraw fully from the occupied territory.
Prime Minister Rabin accentuated this position in seeking Knesset support for the interim agreement, or Oslo II:
We would like this to be an entity which is less than a state, and which will independently run the lives of Palestinians under its authority. The borders of the State of Israel, during the permanent solution, will be beyond the lines which existed before the Six Day War. We will not return to the 4 June 1967 lines.
Rabin further referred to different areas of the West Bank that Israel would insist on keeping, including regions that no Palestinian negotiator could give up.
Because of these differences, the Oslo accords were originally labeled an interim agreement “for a transitional period not exceeding five years,” meant to lay the foundations for “a permanent settlement based on Security Council Resolutions 242 (1967) and 338 (1973).” Yet, even though the final objective intentionally remained vague, the agreement itself listed detailed timetables for the implementation of interim phases, including, most remarkably, an Israeli withdrawal from the cities of Gaza and Jericho within three months. Already in this sensitive initial phase, cracks appeared. “No dates are sacred,” said Rabin in December 1993, as the deadline for withdrawal was being postponed…