May 27, 2011
On August 1, 2007, as rush-hour traffic carried workers home from their jobs in downtown Minneapolis, a decades-old bridge spanning the Mississippi River collapsed without warning into the waters one hundred feet below. The bridge crumbled at a moment when over a hundred vehicles — including a school bus carrying sixty-one children — were driving or parked on it. As the local Star-Tribune reported the next day, “The span was packed with rush-hour traffic, and dozens of vehicles fell with the bridge leaving scores of dazed commuters scrambling for their lives.” “I heard it creaking and making all sorts of noises it shouldn’t make,” said one man who was driving on a road under the bridge, en route to a Minnesota Twins baseball game, seconds before the collapse. “And then the bridge just started to fall apart.”
The collapse killed thirteen people and injured 145 more. The school bus, amazingly, landed on all four of its tires, missing the water and coming to rest on a parkway, according to the Star-Tribune. All of its passengers survived, escaping with injuries.
Before the dust had settled, let alone the cause of the collapse been ascertained, the failure of the I-35W Bridge reheated a long-simmering debate over the state of American infrastructure. It was not the first major infrastructure failure in recent memory, only the latest, coming on the heels of the New Orleans levee breaks of 2005 and the Northeast power grid blackout of August 2003, and so the arguments were already familiar. Within days of the collapse, the Christian Science Monitor asserted that the bridge failure “spotlights America’s deferred maintenance” of its dams, levees, highways and bridges; the New York Times editorialized that as “the nation’s physical foundations seem to be crumbling beneath us,” “the larger problem of crumbling roads, bridges and levees and crashing electrical grids can almost always be traced to a lack of investment.” The Times called for increased federal spending, the establishment of a national infrastructure bank, and the creation of a national commission on infrastructure priorities.
Three years later, the nation continues to grapple with fundamental questions of how to plan, build, and maintain major infrastructure. Barry B. LePatner, a New York attorney noted for his decades of experience in construction and real estate law, takes a leading role in that debate with his new book, Too Big to Fall. Using the I-35W Bridge collapse as his primary case study, LePatner paints a dire picture of the state of national infrastructure, and he proposes solutions whose boldness matches the magnitude of the apparent problem.
LePatner’s analysis begins with a fundamental question: What, precisely, caused the I-35W Bridge to collapse? As he observes, “bridges do not collapse for ‘no apparent reason.’” The National Transportation Safety Board investigated that question for a year before issuing its final conclusion: a lateral shift in one of the diagonal members supporting the bridge, and the subsequent failure of the gusset plates tying together the diagonal members and other support beams, was the “initiating event” of the collapse.
But could that event have been prevented by better maintenance and inspections over the course of the bridge’s life? No, according to the NTSB, because the initiating event was the inevitable consequence of the bridge’s design. “Because the bridge’s main truss gusset plates had been fabricated and installed as the designers specified, the inadequate capacity of the … gusset plates had to have been the result of an error on the part of the bridge design firm.” Worse still, “even though the bridge design firm knew how to correctly calculate the effects of stress in gusset plates, it failed to perform all necessary calculations for the main truss gusset plates of the I-35W Bridge, resulting in some of the gusset plates having inadequate capacity…”
May 27, 2011
“What I am reflecting about now is not that I don’t think I know an answer to your question,” says a pensive Henry Kissinger, sitting in his spacious Park Avenue corner office adorned with signed photos of former presidents and foreign leaders. “It’s that I don’t know whether I choose to talk about it at this moment and in this forum. . . . And I don’t mind dropping the interview and I don’t mind you saying that I refused to go any further and pay the price for it.”
What sort of hard-hitting question should elicit such evasiveness from the former secretary of state? When it comes to Mr. Kissinger there is never a shortage of controversial topics, from the 1970 incursion into Cambodia to the 1973 coup that overthrew Salvador Allende in Chile to the turf wars he waged with his colleagues in the Nixon and Ford administrations. But my question, which comes a few minutes into our interview, is of a milder variety: “What are the historic sources of Chinese vulnerability, and what are the current ones?”
The topic of the discussion was no accident: Mr. Kissinger’s 16th book, “On China,” was about to hit bookstores when we sat down to talk. He had consented to our interview—the first, he says, that he granted in connection with the book—on the condition that two-thirds of my questions be about China. I had agreed, on the condition that the questions be future-leaning and go beyond the book itself. (My review of the book appeared on these pages May 12, a day after our meeting.)
Mr. Kissinger, who will turn 88 later this month and remains sprightly and intellectually as sharp as ever, seems to be in a bright mood when I enter his office. But it darkens with my first question, which concerns the treatment of Chinese dissident Liu Xiaobo, who, like Mr. Kissinger, is a winner of the Nobel Peace Prize.
“I have not read his writings,” he answers. “My impression is the Chinese are extremely sensitive to the implications of the Jasmine Revolution, and that they find themselves in a position where if demonstrations develop, they know, or think they know, that the American government might be supportive so that they are probably trying to prevent any temptation from that. That’s how I interpret their general crackdown.”
I press on. Does he denounce Mr. Liu’s treatment? “My policy on this,” he replies, “is to talk to them [Chinese leaders], but my personal view is not to denounce it publicly.”
I ask a more general question: What’s the right—and wrong—way of raising human rights issues with the Chinese? Mr. Kissinger addresses the subject repeatedly in his book, noting that while the U.S. cannot be silent on the matter, “experience has shown that to seek to impose them by confrontation is likely to be self-defeating—especially in a country with such a historical vision of itself as China.” To me, he says that the Obama administration is “doing essentially the right thing: They are stating their general view and then they’re preserving another category for their private discussions.” He adds that “American statesmen can be more explicit on human rights issues than they should be on pressures or sanctions.”
The Good Bad Son: Saif Qaddafi had an affinity for America. And for a brief, tantalizing moment, the feeling was mutual.
May 27, 2011
The last time Benjamin Barber saw Saif Qaddafi, in early December, they spent a cheerless evening together in London. Barber, a political scientist and board member of Saif’s Qaddafi International Charity and Development Foundation, was in town for a board meeting that was supposed to have taken place in Tripoli but, a week before, had been moved to England. Over an Italian dinner in Mayfair, he asked Saif why.
“I don’t feel comfortable in Tripoli,” the 38-year-old son of Colonel Muammar Qaddafi said. “I have too many enemies there right now.” Saif was in a desperate mood. For years he had pushed his way into his father’s chaotic political orbit, urging him to support reform in Libya. Muammar had obliged in the past—but recently he hadn’t. Allies of Saif’s had been arrested and businesses of his shut down. He had decamped and wasn’t sure he wanted to return. “He felt he was not welcome,” Barber says. “He’d been struggling for a long time.”
Barber urged him to push on. Libya was a vastly different country than it had been only a decade earlier, he assured him, thanks largely to Saif. It was a speech Saif had heard many times. He’d long worked with Barber and other academics, executives, consultants, and lobbyists to plot Libya’s future. They’d encouraged Saif, too, and had become partners in a campaign to revive his country and his family name, while he in turn worked with them to make Libya a supposed model of peaceful liberalization in the Arab world. He was what the region badly needed, his foreign boosters said. Saif sometimes agreed.
Two months after that dinner, with Libya in revolt, Muammar asked his favorite son to return home, which he did. Then, seemingly overnight, Saif became a new man: not the deliverer his supporters had hoped but someone indistinguishable from his father.
The second eldest of Muammar’s seven sons, Saif al-Islam Muammar al-Qaddafi was born in 1972, three years after his father took power in a military coup at 27. Little is known about Saif’s upbringing except for the brutal events surrounding it. When Saif was a child, Muammar went from a standard-issue strongman to the self-described Brother Leader of the Great Socialist People’s Libyan Arab Jamahiriyah. It was a system he described as a perfect democracy but which others called a murderous autocracy (with televised executions). In 1986, following a terrorist attack in a German disco frequented by U.S. servicemen, Ronald Reagan ordered air strikes that killed, in the Qaddafis’ version, Saif’s younger sister. The U.N. imposed crippling sanctions on the country after Libyan agents were charged with aiding in the Pan Am 103 bombing, and by the end of the nineties, Libya was in shambles. According to a former State Department official, Muammar “knew he could not win this war” and began secret talks with the Clinton administration on compensating families of the Pan Am victims.
As Muammar entered his sixties, he began considering which of his sons might succeed him. His choices were underwhelming. Saif’s elder half-brother, Mohammed, who ran the state telecom company, was uninterested in the mantle. Younger brother Al-Saadi was known for a failed professional-soccer career and little else, while Hannibal, the next in line, lacked certain statesmanlike qualities, as became clear when he was arrested in a Swiss hotel for beating two domestic servants. Khamis and Saif al-Arab were too young. Muatessem, an army officer five years Saif’s junior, had moved to Egypt after falling out of favor.
Saif, by contrast, had qualities his father admired: his own. Like Muammar, who quoted Rousseau and Madison to visiting diplomats, Saif was charming and well read. Like Muammar, he was confident. “He was absolutely sure what he believed was right,” says Jack Richards, an American businessman and early adviser of Saif’s. Most important, Saif developed ideas about Libya at an early age, including the realization that his father’s rule hadn’t been flawless. “It was implicit in everything Muammar did that Saif was not only the favorite son but the son who understood him the best, because he understood not the tyrant but the democrat,” Barber says. At the same time, there was a “profound Freudian tension.”
Saif showed, of all things, an affinity for America, the country Muammar had made his name maligning. After studying architecture and engineering in Tripoli, he earned degrees in Vienna and at the London School of Economics, and he became enamored of American political history and culture. (His favorite movie, reportedly, is Saw.) Says LSE professor David Held, who informally advised Saif, “He used to say that Arabs should have nothing to fear from American democracy promotion.”
His anti-authoritarian inclinations were so strong that Saif bristled—or made a show of bristling—at the mention of inheriting power. “The phrase heir apparent was abhorrent to Saif. But I think he secretly always harbored the hope he’d lead the country,” Richards says. Muammar, who held no official title and who spoke idealistically of a future Libya without a Qaddafi in power, seemed to admire his son’s stance, even if he wanted Saif to succeed him…
May 26, 2011
“First of all, let me say something that I shouldn’t,” Sen. John McCain began. “I’m not sure they should put Mubarak on trial.”
In a wide-ranging interview with Foreign Policy today, McCain made the case that prosecuting the former Egyptian president for killing unarmed protesters, as the new Egyptian government has promised to do, would encourage the Arab world’s other embattled dictators to cling to power rather than risk the consequences of stepping down. He also weighed in on how the United States should support democratic transitions throughout the Arab world, and blasted cuts to funding for Title VI and other international educational programs as a “short-sighted” move that could weaken American diplomatic capabilities and, over time, create a “hollow diplomatic corps.”
On Syria, McCain urged moral support for protesters, but offered a surprisingly strong warning against leading them to believe that any foreign military intervention might be forthcoming. He called for the United States and Europe to work quickly in support of the democratic transition and economic rebuilding of Egypt — but warned that we shouldn’t call it a “Marshall Plan.” And the former presidential candidate expressed cautious optimism on Libya, calling on the administration to recognize the National Transitional Council.
McCain criticized President Barack Obama for moving too slowly at key moments, saying that the administration has been “a step behind” events in Egypt, Libya, and Syria. But quibbles over timing aside, his thoughts on the region were surprisingly close to those of the Obama administration — a remarkable convergence given the toxic political arguments that usually characterize Washington these days, not to mention the heated rhetoric of the 2008 presidential campaign. Extending this bipartisan comity even further, McCain is co-sponsoring a bill with Foreign Relations Committee chairman Sen. John Kerry in support of U.S. intervention in Libya.
McCain gave an impassioned defense of the importance of supporting democracy in the region — even when anti-Israeli or anti-American voices appear as a result. “There’s every likelihood that, in the open political campaigns that take place in Egypt and other countries, the anti-Israel issue will be raised by some candidates,” he said. “I know these politicians, I know some of the people who are going to be running, and they hate Israel.”
But that did not deter him. Asked whether he still believed that Arab democracy was an American interest, he responded forcefully: “[I]f we don’t believe that democracy is in our interest, we are somehow very badly skewed in our priorities and our inherent belief in the rights of everybody.” Acknowledging that this could be a tough sell, especially when it came to finding funds to support these transitions, McCain said with emphasis that “we’ve got to convince people that it’s in our interest to see [the Middle East] make this transition.”
McCain sees job creation as key to a successful democratic transition (I didn’t ask if he felt the same way about the Obama administration’s efforts to do just that for the American economy). He’s gravely concerned about the dismal economic situation in Egypt and Tunisia. “We were at the pyramids [in Cairo] three weeks ago, not a soul there,” he said. “We stayed in a hotel in Tunis, Joe [Lieberman] and I were the only people in the whole hotel. I mean, they have really been decimated. [Tourism] is 10 percent of their GDP.”
He went on: “What we need to do to these young people is say: We’re going to give you an opportunity to get a job. That’s the key to this.” With a raised eyebrow, he also offered up a commentary on a country which did not appear in Obama’s recent Middle East speech: Saudi Arabia. “Look at what the Saudis have done: They’re just buying people off. They’re distributing money…”
It was a brief moment of happiness on a voyage that would end in death for many on board. They held a child up in the air, cheered and hugged each other. They were so delirious that they almost caused the overcrowded boat to capsize.
A helicopter was circling above their heads, say three of the nine survivors of the dramatic voyage, as they sit in Shousha refugee camp on the Tunisian-Libyan border. They say that they were able to make out the word “army” on the fuselage. Two months after their failed attempt to flee Libya, they can still write the word on a piece of paper and make a detailed drawing of a helicopter.
They say that they are certain. “Why should we lie?” asks Elias Kadi, a gaunt 23-year-old Ethiopian who is fluent in Arabic and speaks English relatively well. “What good would it do us now? What happened happened and can no longer be made up for.”
Hope of Being Saved
According to the survivors’ account, the helicopter descended to a low altitude, circled about 10 to 15 meters (33 to 50 feet) above the boat and, using ropes, lowered water bottles with Italian labels and several packages of cookies to the refugees, who fished the items out of the water with their hands.
The refugees had left Libya from a point near Tripoli on a nameless fishing boat two days earlier, headed for Europe. Their destination was the Italian island of Lampedusa, which is about 290 kilometers (180 miles) off the coast of Libya. But then they became lost at sea.
Now, rescue seemed imminent. Their captain, a very tall Ghanaian in his early 30s who had never told them his name, cut the engine. When they looked up at the soldiers in the helicopter, they saw they were carrying weapons and that one of them was taking pictures. Then he waved his hand as if to say: Help is on its way, so stay where you are. At least that was how the refugees interpreted the gesture. Then the helicopter turned and flew away, say the survivors. The refugees watched until the helicopter was only a dot on the horizon.
Some time later, the captain did something that an experienced skipper would never do. Elias, one of the survivors, is standing on shaky legs in the refugee camp as he recalls what happened. He holds onto the ropes of his tent, as if they were part of a boat’s rigging. Then he says: “The captain threw his navigation equipment and his satellite phone overboard.” Apparently he wanted to prevent the rescuers from arresting him and charging him with human trafficking, and to prevent border police from locking him up because he was illegally transporting Africans to Europe.
Elias was surprised, but he didn’t stop the captain. He thought to himself: Technology won’t help us anymore. What we need are people on ships to tow us into a harbor. They’ll be here in one or two hours, Elias thought, coming from Malta or Lampedusa, the island that they saw as the promised land.
But the rescuers would never arrive. The ensuing drama was only one episode in a much bigger drama. The survivors’ story is an account of immense suffering before the gates of Fortress Europe. It is a logbook of death.
Since ordinary people in the Arab world began rebelling against the powerful and fighting for their freedom, about 34,000 refugees have made it to Europe. According to United Nations figures, 14,000 refugees have already traveled from Libya to Italy or Malta. But no one wants them.
Within the space of just a few weeks, the exodus has changed Europe more than anyone would have believed was possible. Because the Italians simply gave the refugees temporary visas for the rest of Europe, the French temporarily closed their border. The Danes want to reintroduce border controls. EU leaders are now arguing over how to limit the freedom to travel, calling into question one of the fundamental tenets of a united Europe.
It isn’t that European governments are so worried about the Arab refugees. What they do fear, however, is that if the refugees make it, hundreds of thousands of people from all over Africa could follow them. As it happens, the people on the nameless boat were from Ethiopia, Nigeria, Eritrea, Ghana and Sudan.
They risked the crossing even though they knew that the trip across the Mediterranean in boats that are often unsafe can be deadly. The UN estimates that some 1,200 refugees have been lost at sea since the end of March alone.
‘I Had No Choice’
The three survivors that SPIEGEL spoke to — Elias, Mohammed Ibrahim, 23, and Kabbadi Dadi, 19 — had waited for their chance to leave Africa for years. Elias, the son of a cowherd, arrived in Tripoli four years ago, and the two others came in 2008. Elias says that they are members of the Oromo, a persecuted ethnic group in southern Ethiopia.
One of his eight brothers was killed in fighting against government militias, while another is in prison. Elias left home without a word of farewell. He worked as a car washer in the Sudanese capital Khartoum until he had saved enough money for the trip across the Sahara in overcrowded trucks.
Their chances were better than ever, they thought, in late March, when the West was bombing Tripoli. There were no more patrols on the beaches, as there had been in previous years, when Italy was paying Libyan leader Moammar Gadhafi a lot of money to keep poor African migrants away from Europe’s shores. Elias knew what the dangers were but, as he says: “I had no choice. It was either prison and torture in Ethiopia, or freedom in Europe.”
As is so often the case among refugees fleeing from North Africa, someone knew a Sudanese who knew a Libyan, and he called them one night to tell them to come to the beach. It was March 25, at 3 a.m. The moon was clouded over, but they could see the lights of nearby Tripoli and hear the bombing.
Their Libyan contact wanted $800 (€570) from each of them, but no names or other personal information were exchanged. The open boat, a blue plastic fishing boat that was only 10 meters long and 3 meters wide, was bobbing up and down near the shore. There were 50 refugees. They rolled up their trousers and waded through the shallow water out to the boat. It was the first time Elias had ever been in a boat.
The Libyan had blue gasoline canisters brought on board, a water bottle for each passenger, cookies and dates. A few minutes before the boat set out to sea, 22 more Africans climbed on board, bringing the total to 72 people — in a space of 20 square meters (215 square feet). “Anyone who thinks it’s too full should get out now,” said the Libyan, “but no one gets his money back.” The boat, loaded with 50 men and 22 women and children, Christians and Muslims, the oldest 45 and the youngest only a year old, set out to sea. With visibility good and the sea calm, the vessel moved quickly through the water. It’s a relatively short trip to Lampedusa, Europe’s outpost, only 300 kilometers from Tripoli.
The passengers formed a sort of human chain, with each person sitting between the angled legs of the person in front of him. They were wearing several layers of warm clothing and the women had wrapped their heads in scarves. They knew that it would get cold at night. “The mood was good at first,” says Elias. “We took pictures of each other with our mobile phones.”
They should have seen land by the second day, March 26. The Ghanaian had said that they would reach Lampedusa in 18 hours, but now about 30 hours had already passed. They became anxious. When the helicopter approached the boat at about 10 a.m., the passengers waved, held up their empty canisters and shouted: “Help, help!” The helicopter lowered provisions to the boat and then banked and flew away.
They spent the next three hours waiting for a rescue ship, but when it failed to materialize they became increasingly desperate. They asked the captain for his satellite phone.
A man named Petrus, a Christian who prayed constantly, dialed a number in Italy and spoke with a priest he knew at the Vatican. “What should we do?” he shouted into the phone…
May 26, 2011
In his grand and gloomy book Civilization and Its Discontents, Sigmund Freud identified the tenacious sense of guilt as “the most important problem in the development of civilization.” In fact, he continued, it seems that “the price we pay for our advance in civilization is a loss of happiness through the heightening of the sense of guilt.” Such guilt made for an elusive quarry, however. It was hard to identify and hard to understand, and even harder to counteract, since it so frequently dwelled at an unconscious level and could easily be mistaken for something else.
Of course, Freud was notoriously hostile to religion, but, in this one respect, he thought it deserved some grudging credit: The world’s religions “have never overlooked the part played in civilization by a sense of guilt,” which is why they seek “to redeem mankind from this sense of guilt, which they call sin.” The same cannot be said of the modern secular dispensation, which often finds itself entirely baffled and defenseless against guilt’s formidable power. The sense of guilt often manifests itself to us moderns,
Freud argued, not as anything actually resembling guilt but “as a sort of malaise [Unbehagen], a dissatisfaction,” for which modern people seek other explanations, whether external or internal. Guilt itself turns out to be exceptionally crafty, a born trickster and chameleon, capable of disguising itself, hiding out, altering its size and appearance, moving its location. And yet it remains notoriously difficult to dislodge, managing to tighten its hold even as it is undergoing protean and unpredictable transformation.
Whatever one finally thinks of Freud—and I count myself among the respectful unbelievers in his fanciful systems—this seems to me a very rich and insightful analysis, and a useful starting place for considering a subject largely neglected by historians: the steadily intensifying (though rarely visible) role played by guilt in determining the deep structure of our lives in the twentieth and twenty-first centuries. Such an analysis cannot, for obvious reasons, be reduced to quantifiable data; and it admittedly runs the risk of veering onto the circular path of the non-falsifiable, a Freudian spécialité de la maison. Yet it has a ring of truth to it, both as a diagnosis and as a symptom of the condition it diagnoses. It suggests that what W. H. Auden claimed for Freud over seventy years ago remains equally true today: Even if he was “wrong and at times absurd,” he stands for “a whole climate of opinion under whom we conduct our different lives.”
One way of expressing that difference is to say that we live in a therapeutic age; and nothing illustrates that fact more clearly than the striking ways in which the sources of guilt’s power and the nature of its would-be antidotes have changed for us. Freud sought to relieve in his patients the worst mental burdens and pathologies imposed by their oppressive and hyperactive consciences, which he renamed their superegos, while deliberately refraining from rendering any judgment as to whether the guilty feelings ordained by those superegos had any moral justification. In other words, he sought to release the patient from guilt’s crushing hold by disarming and setting aside guilt’s moral significance and redesignating it as just another psychological phenomenon, whose proper functioning could be ascertained by its effects on one’s more general well-being. After all, since the superego was for him nothing more than the introjection of parental and quasi-parental authority, experienced as a form of irrational compulsion, it was not exactly a product of sweet Kantian reasonableness, let alone the deposit of God’s law written on the heart.
Health was the only remaining criterion for success or failure in therapy, and health was a matter of managing a tolerable equilibrium among the competing elements in the psyche—less a state of peaceable harmony, or the optimal flourishing of an organism realizing its telos, than the achievement of an uneasy truce or stalemate between intrinsic antagonists, a condition sufficiently pacified to allow for mature and rational behavior, and perhaps even the occasional faint and fleeting glimpse of something like happiness.
This is not to say that all Freud’s followers understood him thus. We Americans are always very selective in the ways we appropriate our intellectual imports, and the full gloominess of Freud taken neat was unlikely ever to be more than a minority taste here. His arguments for the easing of Victorian sexual mores, on the other hand, were an early vote-getter, particularly among the most advanced libidos of Greenwich Village. And the nonjudgmental therapeutic worldview whose seeds he planted has come into full flower in the mainstream sensibility of modern America, which in turn has profoundly affected the standing and meaning of the most venerable of all our moral transactions, and not merely matters of guilt.
Take for example the various ways in which forgiveness is now understood. Forgiveness is one of the chief antidotes to the forensic stigma of guilt, and as such has long been one of the golden words of our culture, with particularly deep roots in the Christian tradition, in which the capacity for forgiveness is seen as a central attribute of the Deity itself. It glistens with a hundred admirable qualities, and its purity and moral prestige seem beyond challenge. To forgive others is taken to be a sign of a full and munificent and sacrificial heart, and moreover a heart that wisely recognizes the fleeting nature of life and the universal weakness of all human beings, very much including oneself. For Christians the willingness to forgive has an even deeper source: the simple acknowledgment that we should be willing to extend to others, in a spirit of gratitude, the same forgiveness that God has graciously extended to us…
May 24, 2011
For at least the past decade, there has been a boom in work on the economics of happiness. But recalling Tolstoy’s famous opening lines in Anna Karenina, I’ve always wondered why we don’t study the economics of unhappiness instead. After all, we have so much more data.
The American tradition is to enshrine economic activity as a central element of “the pursuit of happiness.” In reality, however, economic activity is largely concerned with the relief of unhappiness. At the subsistence level of economic activity that has prevailed through most of human history, people must work to eat and to be clothed and housed, not so that they can enjoy the happiness that these goods can bring but so that they can avoid the pain of hunger, cold, and exposure to the elements.
In developed economies, most of us can assuage these fundamental sources of unhappiness. But whether because of drives inherent in our nature or because of the constant efforts of advertisers and others, we seem destined to remain unhappy with our economic lot.
Despite the burgeoning literature on happiness, and the contributions of prominent economists such as Richard Easterlin, Richard Layard, and Andrew Oswald, the general response of the mainstream English-language literature in economics has been to shrug and leave questions of this kind to psychologists and marketers. However, there is some interesting discussion going on in Europe, and a couple of recently translated works might help to stir the debate.
First up is Tomas Sedlacek’s Economics of Good and Evil: The Quest for Economic Meaning From Gilgamesh to Wall Street (Oxford University Press, 2011), a surprise best seller in the original Czech and with a glowing foreword by Václav Havel. More than half of the book is devoted to the economic views of the ancients, starting with the Sumerians, but Sedlacek’s closest engagement is with Adam Smith.
A primary concern is what Joseph Schumpeter called “Das Adam Smith problem”: how to reconcile the Adam Smith of The Wealth of Nations — the advocate of the benefits of self-interest celebrated by Adam Smith clubs, Adam Smith tie pins, and the like — with Adam Smith the advocate of sympathy as the foundation of social order in The Theory of Moral Sentiments. This is a problem that has been tackled from many angles but never before, I suspect, based on an interpretation of the epic of Gilgamesh.
The core issue is not so much evil in general but the desire for more of everything, traditionally stigmatized as “greed” or “avarice” in Christian thought, but viewed more positively as “aspiration” in modern times. Sedlacek inclines to the Christian view and even more to that of the Stoics, that “we have to be satisfied with what we have, and that happiness can be found precisely in that.”
But, as he observes, that is hard advice to live by, and even more so in the modern world. Views about life and its possibilities, about good and evil, are fundamentally altered in a society characterized by economic growth as compared with the essentially static economic possibilities of the ancient and medieval worlds. Arguably, it is precisely the experience of economic growth that distinguishes the economists of the Enlightenment era (most notably the Scottish Enlightenment, which gave us Smith) from their pre-modern forebears.
From early times—say, 2,000 years before Christ—down to the beginning of the 18th century, there was no very great change in the standard of life of the average person living in the civilized parts of the world. Ups and downs, certainly. Visitations of plague, famine, and war. Golden intervals. But no big progressive shift. Some periods perhaps 50 percent better than others—at the utmost 100 percent better—in the 4,000 years that ended roughly in AD 1700.
The realization that life had changed fundamentally was reflected in the 17th- and 18th-century disputes between advocates of the ancients’ values and those of the moderns. Supporters of the ancients, represented most effectively by Jonathan Swift in his “Battle of the Books,” scored some rhetorical points but couldn’t obscure the evidence of intellectual and scientific progress. By the middle of the 18th century, the Industrial Revolution was under way, and the era of economic growth had begun…
May 24, 2011
What is a person? And why does it matter how we answer that question?
Every social science explanation has operating in the background some idea or other of what human persons are, what motivates them, what we can expect of them. Sometimes that is explicit, often it is implicit. And the different concepts of persons assumed by social scientists have important consequences in governing the questions asked, sensitizing concepts employed, evidence gathered, and explanations formulated. We cannot put the question of personhood in a “black box” and really get anywhere. Personhood always matters. By my account, a person is “a conscious, reflexive, embodied, self-transcending center of subjective experience, durable identity, moral commitment, and social communication who — as the efficient cause of his or her own responsible actions and interactions — exercises complex capacities for agency and inter-subjectivity in order to develop and sustain his or her own incommunicable self in loving relationships with other personal selves and with the non-personal world.”
Persons are thus centers with purpose. If that is true, then it has consequences for the doing of sociology, and in other ways for the doing of science broadly. Different views of human personhood will provide us with different scientific interests, different professional moral and ethical sensibilities, different theoretical paradigms of explanation, and, ultimately, different visions of what constitutes a good human existence, which science ought to serve. In this sense, science is never autonomous or separable from basic questions of human personal being, existence, and interest. Therefore, if we get our view of personhood wrong, we run the risk of using science to achieve problematic, even destructively bad things. Good science must finally be built upon a good understanding of human personhood.
You argue that the standard sociological view of the human person isn’t sufficient, that sociologists generally do not capture the fullness of human experience with their methods. Indeed, you describe them as living with a kind of “schizophrenia” — believing strongly in human rights and dignity, but at the same time denying any kind of grounding for those moral commitments. What are they missing?
Many, if not most, sociological theories operate with an emaciated view of the person running in the background, models that are grossly oversimplified. Persons are conceptualized as rational reward-maximizers or compliant norm-followers or essentially meaning-seekers or genetic-reproduction machines or whatever else. Often such views are one-dimensional and simplistic. They fail to even begin to portray the complexity and richness of human personal life. Meanwhile, sociologists go about living their own personal lives with often a very different view of humanity in mind. The science does not live up to the reality. I think this is often driven not by the needs of real science but by a kind of insecure scientism. The former is ultimately interested in knowing what is real and how it works, however complex that might turn out to be. The latter, especially in the social sciences, is often mostly concerned to imitate the science of an entirely different sphere of reality, such as physics, which never turns out well…
It starts in childhood: As every kindergartner learns, getting along with others is a practical virtue. From our earliest years, we start to absorb lessons of diplomacy and tact, all meant to help us navigate our surroundings without friction. Down the road, as grown-ups, we seek harmony at home and in the office. Couples who project tranquility are envied, and an unflappable attitude is often a job requirement. Fighting, meanwhile, is perceived as corrosive and stressful.
But what if we’re thinking about fighting wrong? What if, as counterintuitive as it seems, certain kinds of fighting are good for us? In a new paper drawn from the Early Years of Marriage study at the University of Michigan, which tracked newly married couples over 16 years, researchers examined whether conflict behaviors beyond obvious destructive patterns (shouting, name-calling) would predict divorce. Surprisingly, couples that included even one spouse who withdrew from fights, using popular strategies like leaving the room to cool down, had higher rates of divorce. When both partners found ways to hash out conflicts directly, their marriages were far more likely to last.
As the study of conflict gains traction, researchers are examining which conflict dynamics might enhance our daily lives and how the right kinds of conflict may have merit on their own. At home, it appears, a resilient fighter can help a partner overcome a difficult childhood; at work, research is showing that more tolerance for anger can make for a more productive team.
This isn’t to say that all fighting is good. Hostility run amok, with name-calling and screaming, is counterproductive. Violence is worse yet. But this emerging research offers the heartening suggestion that conflict with other people — which is, after all, an inevitable part of social life — doesn’t have to mean a breakdown in relations. When done right, in fact, it can often mean just the opposite.
For years, researchers have been tracking human relationships to see what behaviors keep people together, and what drives them apart. One of the longest-running studies has been conducted by the University of Michigan. From 1986 to 2002, researchers followed 373 couples, tracing patterns of marital conflict and what happened to the relationships.
At the end of those 16 years, nearly half of the couples — 46 percent — had divorced. But as a new paper published in October in the Journal of Marriage and Family shows, successful couples were likely to have something particular going for them: Both spouses fought constructively.
Led by Kira Birditt, an assistant research professor in the Life Course Development Program at Michigan’s Institute for Social Research, the authors of the paper found surprising correlations between conflict styles and divorce rates. As it turned out, it was not just couples with what researchers called destructive styles of personal conflict who were more likely to divorce. Birditt and her team found that if wives and husbands both withdrew from conflict, that also correlated with a greater likelihood of divorce. And if one of the spouses tried to engage in constructive conflict, but the other withdrew, then they were still more likely to divorce than couples who both fought well…