Death by Sweet Syrup: Sentiment surfaces fast and runs hot in public life, dumbing it down and crippling intimacy in private life
April 17, 2012
When I was a child, I knew national flags by the color and design alone; today I could know diseases the same way. This occurs to me on my morning commute as I note the abundance of magnetic awareness ribbons adhering to cars. A ribbon inventory on the Internet turns up 84 solid colors, color combinations, and color patterns, although there are certainly more. The most popular colors must multitask to raise awareness of several afflictions and disasters at once. Blue is a particularly hard-working color, the new black of misfortunes; 43 things jockey to be the thing that the blue ribbon makes us aware of.
Awareness-raising and fundraising 5K races augment the work of the ribbons. Maryland, where I live, had 28 5K races in one recent two-month period. I think it might be possible to chart a transcontinental route cobbled together entirely by annual 5K charity and awareness runs. Some memorialize a deceased loved one or raise funds for an affliction in the family (“Miles for Megan,” for example, or “Bita’s Run for Wellness”); others raise awareness of problems ranging from world health to Haiti to brain injury. A friend of mine who works in fundraising and development once observed, and lamented, that some medical problems were more popular than others and easier to solicit money for. Conditions with sentimental clout elicit more research donations, and cute endangered animals such as the giant panda, the World Wildlife Fund’s mascot, lure more donations than noncuddly ones.
On some days you’ll see makeshift shrines for victims of car accidents or violence by the side of the road, placed next to a mangled guardrail or wrapped around a lamppost. As more people hear of the tragedy, teddy bears, flowers, and notes accumulate. Princess Diana’s was the biggest such shrine, a mountain of hundreds of thousands of plastic-sheathed bouquets outside her residence. Queen Elizabeth resisted the presumptuous momentum of all the grief but finally relented and went to inspect the flower shrine and its handwritten messages, a concession to sentiment depicted in the movie The Queen. Maybe I was the only one in the theater who thought the Queen was right; I rooted for her propriety over Tony Blair’s dubious advice that she drag the monarchy into the modern age by publicly displaying a sentiment she probably didn’t feel. The mourners didn’t even know Diana, the queen reasoned by an obsolete logic of restrained stoicism, and the palace flag didn’t fly at half-mast even for more illustrious figures. But she caved in the end. We almost always do.
Sentiment surfaces fast and runs hot in public life, and it compels our attention. On good days I dimly register this makeshift iconography of people’s sorrows, losses, and challenges. Some of them have been my own, too, but I don’t have ribbons. On my dark days I believe that the sentimental public culture—pink ribbons and 5K runs and temporary shrines and teddy bears and emails exclamation-pointed into a frenzy—is malicious to civil society and impedes in one elegant motion our capacities for deliberation in public life and intimacy in private life. On the days I’m feeling melodramatic I suspect that we are in the grip of death by treacle.
The age of the ribbon unofficially began in 1979, when Penne Laingen, the wife of a hostage in Iran, tied a ribbon around a tree in her yard to memorialize her missing husband. America was “seething with rage” over the hostage crisis, The Washington Post reported. Psychologists proposed ways to handle this “emotional distress.” Laingen, quoted in the article, had taken inspiration from the popular 1973 song “Tie a Yellow Ribbon Round the Ole Oak Tree,” about a boyfriend, soon to return from prison, who wondered whether his girlfriend still loved him and proposed that she tie a ribbon to signal her enduring love. Laingen tied her ribbon in the spirit of a collusive vow, intending to keep it there until her husband could take it down himself, which he eventually did. In the Post article she suggested that other Americans could tie ribbons, too, and millions complied, and so her personal code became a sentimental-political icon. Today the flagship yellow ribbon raises awareness of at least six afflictions and events, including endometriosis, deployed soldiers, bladder cancer, suicide, bone cancer, and the victims of Australia’s 2009 Victorian bushfires.
Around the time of yellow ribbons Americans also got the exclamation-point typewriter key and victim impact statements—two other suggestive, modest cameos in the drift toward a more sentimental public culture.
The exclamation point is singular among all punctuation because it has no true grammatical function in English except to amplify a feeling—excitement, enthusiasm, or shock—presumably not adequately conveyed by the words selected. It wasn’t even a standard feature on typewriters until the 1970s. Before then, you had to be judicious about that exclamation point because assembling it required that you type a period, backspace, and type an apostrophe above it. Today the exclamation point is used with unprecedented, hyperventilating frequency in correspondence, deployed to soften underlying hostilities or to gin up excitement where no true reason for it suggests itself. As a default punctuation setting, occupying the place in email and texting where the staid, neutral period once stood, the exclamation point is the grammatical mascot of an age that values the public projection of sunny emotions and feeling.
One of the first victim impact statements made outside a civil courtroom was that of the mother of actress Sharon Tate, who was murdered in 1969 by followers of Charles Manson. At the time of Manson’s 1978 parole hearing in California, no state specifically allowed victim statements in criminal cases—those brought by government and “We the People.” Today, however, they are a routine part of the sentencing and parole process in every state. According to advocates, they allow victims to personalize the crime and elevate their status by describing the effect the crime has had on them or their families. Some laud the courtroom ritual as an aid in the emotional recovery of the victim, with the criminal proceeding envisioned as part of a larger therapeutic process. A few legal scholars suggest that the well-intentioned personalization of a crime can blur the line between public justice and private retribution. Conversely, does a criminal deserve a more lenient sentence if his victim was someone of so little charm or social worth that he had no one to testify movingly for him? Of course, rape charges used to be mitigated on just such grounds: that the victim had so little virtue, or such loose sexual morals, that the crime against her didn’t mean as much…
Is Facebook Making Us Lonely?
April 17, 2012
Social media—from Facebook to Twitter—have made us more densely networked than ever. Yet for all this connectivity, new research suggests that we have never been lonelier (or more narcissistic)—and that this loneliness is making us mentally and physically ill. A report on what the epidemic of loneliness is doing to our souls and our society.
YVETTE VICKERS, A FORMER Playboy playmate and B-movie star, best known for her role in Attack of the 50 Foot Woman, would have been 83 last August, but nobody knows exactly how old she was when she died. According to the Los Angeles coroner’s report, she lay dead for the better part of a year before a neighbor and fellow actress, a woman named Susan Savage, noticed cobwebs and yellowing letters in her mailbox, reached through a broken window to unlock the door, and pushed her way through the piles of junk mail and mounds of clothing that barricaded the house. Upstairs, she found Vickers’s body, mummified, near a heater that was still running. Her computer was on too, its glow permeating the empty space.
The Los Angeles Times posted a story headlined “Mummified Body of Former Playboy Playmate Yvette Vickers Found in Her Benedict Canyon Home,” which quickly went viral. Within two weeks, by Technorati’s count, Vickers’s lonesome death was already the subject of 16,057 Facebook posts and 881 tweets. She had long been a horror-movie icon, a symbol of Hollywood’s capacity to exploit our most basic fears in the silliest ways; now she was an icon of a new and different kind of horror: our growing fear of loneliness. Certainly she received much more attention in death than she did in the final years of her life. With no children, no religious group, and no immediate social circle of any kind, she had begun, as an elderly woman, to look elsewhere for companionship. Savage later told Los Angeles magazine that she had searched Vickers’s phone bills for clues about the life that led to such an end. In the months before her grotesque death, Vickers had made calls not to friends or family but to distant fans who had found her through fan conventions and Internet sites.
Vickers’s web of connections had grown broader but shallower, as has happened for many of us. We are living in an isolation that would have been unimaginable to our ancestors, and yet we have never been more accessible. Over the past three decades, technology has delivered to us a world in which we need not be out of contact for a fraction of a moment. In 2010, at a cost of $300 million, 800 miles of fiber-optic cable was laid between the Chicago Mercantile Exchange and the New York Stock Exchange to shave three milliseconds off trading times. Yet within this world of instant and absolute communication, unbounded by limits of time or space, we suffer from unprecedented alienation. We have never been more detached from one another, or lonelier. In a world consumed by ever more novel modes of socializing, we have less and less actual society. We live in an accelerating contradiction: the more connected we become, the lonelier we are. We were promised a global village; instead we inhabit the drab cul-de-sacs and endless freeways of a vast suburb of information.
At the forefront of all this unexpectedly lonely interactivity is Facebook, with 845 million users and $3.7 billion in revenue last year. The company hopes to raise $5 billion in an initial public offering later this spring, which will make it by far the largest Internet IPO in history. Some recent estimates put the company’s potential value at $100 billion, which would make it larger than the global coffee industry—one addiction preparing to surpass the other. Facebook’s scale and reach are hard to comprehend: last summer, Facebook became, by some counts, the first Web site to receive 1 trillion page views in a month. In the last three months of 2011, users generated an average of 2.7 billion “likes” and comments every day. On whatever scale you care to judge Facebook—as a company, as a culture, as a country—it is vast beyond imagination.
Despite its immense popularity, or more likely because of it, Facebook has, from the beginning, been under something of a cloud of suspicion. The depiction of Mark Zuckerberg, in The Social Network, as a bastard with symptoms of Asperger’s syndrome, was nonsense. But it felt true. It felt true to Facebook, if not to Zuckerberg. The film’s most indelible scene, the one that may well have earned it an Oscar, was the final, silent shot of an anomic Zuckerberg sending out a friend request to his ex-girlfriend, then waiting and clicking and waiting and clicking—a moment of superconnected loneliness preserved in amber. We have all been in that scene: transfixed by the glare of a screen, hungering for response.
When you sign up for Google+ and set up your Friends circle, the program specifies that you should include only “your real friends, the ones you feel comfortable sharing private details with.” That one little phrase, Your real friends—so quaint, so charmingly mothering—perfectly encapsulates the anxieties that social media have produced: the fears that Facebook is interfering with our real friendships, distancing us from each other, making us lonelier; and that social networking might be spreading the very isolation it seemed designed to conquer.
FACEBOOK ARRIVED IN THE MIDDLE of a dramatic increase in the quantity and intensity of human loneliness, a rise that initially made the site’s promise of greater connection seem deeply attractive. Americans are more solitary than ever before. In 1950, less than 10 percent of American households contained only one person. By 2010, nearly 27 percent of households had just one person. Solitary living does not guarantee a life of unhappiness, of course. In his recent book about the trend toward living alone, Eric Klinenberg, a sociologist at NYU, writes: “Reams of published research show that it’s the quality, not the quantity of social interaction, that best predicts loneliness.” True. But before we begin the fantasies of happily eccentric singledom, of divorcées dropping by their knitting circles after work for glasses of Drew Barrymore pinot grigio, or recent college graduates with perfectly articulated, Steampunk-themed, 300-square-foot apartments organizing croquet matches with their book clubs, we should recognize that it is not just isolation that is rising sharply. It’s loneliness, too. And loneliness makes us miserable.
We know intuitively that loneliness and being alone are not the same thing. Solitude can be lovely. Crowded parties can be agony. We also know, thanks to a growing body of research on the topic, that loneliness is not a matter of external conditions; it is a psychological state. A 2005 analysis of data from a longitudinal study of Dutch twins showed that the tendency toward loneliness has roughly the same genetic component as other psychological problems such as neuroticism or anxiety.
Still, loneliness is slippery, a difficult state to define or diagnose. The best tool yet developed for measuring the condition is the UCLA Loneliness Scale, a series of 20 questions that all begin with this formulation: “How often do you feel …?” As in: “How often do you feel that you are ‘in tune’ with the people around you?” And: “How often do you feel that you lack companionship?” Measuring the condition in these terms, various studies have shown loneliness rising drastically over a very short period of recent history. A 2010 AARP survey found that 35 percent of adults older than 45 were chronically lonely, as opposed to 20 percent of a similar group only a decade earlier. According to a major study by a leading scholar of the subject, roughly 20 percent of Americans—about 60 million people—are unhappy with their lives because of loneliness. Across the Western world, physicians and nurses have begun to speak openly of an epidemic of loneliness.
The new studies on loneliness are beginning to yield some surprising preliminary findings about its mechanisms. Almost every factor that one might assume affects loneliness does so only some of the time, and only under certain circumstances. People who are married are less lonely than single people, one journal article suggests, but only if their spouses are confidants. If one’s spouse is not a confidant, marriage may not decrease loneliness. A belief in God might help, or it might not, as a 1990 German study comparing levels of religious feeling and levels of loneliness discovered. Active believers who saw God as abstract and helpful rather than as a wrathful, immediate presence were less lonely. “The mere belief in God,” the researchers concluded, “was relatively independent of loneliness.”
But it is clear that social interaction matters. Loneliness and being alone are not the same thing, but both are on the rise. We meet fewer people. We gather less. And when we gather, our bonds are less meaningful and less easy. The decrease in confidants—that is, in quality social connections—has been dramatic over the past 25 years. In one survey, the mean size of networks of personal confidants decreased from 2.94 people in 1985 to 2.08 in 2004. Similarly, in 1985, only 10 percent of Americans said they had no one with whom to discuss important matters, and 15 percent said they had only one such good friend. By 2004, 25 percent had nobody to talk to, and 20 percent had only one confidant…
Arabian Fights: Why it’s a little early for dramatic and sweeping statements about the Arab uprisings.
April 17, 2012
How does one evaluate or even describe the nature and effects of a tornado when it’s still swirling? This is the conundrum facing anyone writing about the tumultuous changes taking place in the Arab world. These qualities of extreme flux and fluidity—what Frantz Fanon termed a “zone of occult instability”—are what have given rise to the dizzying plethora of terms coined to try to describe the unrest: “Arab Spring,” “Arab uprisings,” “Arab revolution(s),” “Arab awakening,” and Iran’s particularly misguided phrase, “Islamic awakening,” are just a few. Since concerted popular protests began in Tunisia on December 18, 2010, anti-government unrest has spread to many Arab countries. Several dictators have fallen, and others appear to be on their way out. But the outcomes in the different Arab states undergoing these radical changes, and for strategic relations in the region as a whole, remain undetermined and, to some extent, unreadable.
The reshaping of the political and strategic landscape of one of the most important regions on earth properly commands the attention of the entire world. There are profound implications for U.S. foreign policy given that virtually everything most Americans, including policy-makers, thought they knew about Arab societies and political culture turns out to be incorrect or no longer applies. The uprisings clearly require a thorough reconceptualization of American and other Western attitudes toward Arab peoples, culture, and societies, and the casting aside of moldy orientalist stereotypes and anachronistic assumptions.
Because everything is changing so quickly and in so many places at the same time, following the trajectory of developments is daunting enough, let alone trying to analyze and understand exactly what they mean or where they’re going. The most obvious and persistent questions are almost impossible to answer. Are we seeing the emergence of liberal Arab democracies, Islamist systems, or entirely new hybrid post-Islamist political orders? Will the new Arab world be more pluralistic, or will it embolden sectarianism? Will the changes bring greater stability or more conflict? Will they be the basis for economic revival or the chaos underwriting economic collapse? Developments are shifting so dramatically that it is difficult even to formulate the right questions, let alone to investigate possible answers.
Under such circumstances, journalists who limit themselves to narratives describing and contextualizing events have it a little easier than analysts and academics, who are supposed to produce “big picture” evaluations. Two new books, Liberation Square by Ashraf Khalil and The Arab Uprising by Marc Lynch, are excellent illustrations of the strengths and limitations of each approach. Khalil focuses primarily on telling the story of the days between the outbreak of the protest movement in Egypt on January 25, 2011, and the ouster of President Hosni Mubarak 18 days later. Lynch, on the other hand, tries to provide a broad-based analysis of the unprecedented events in the region, and to posit a comprehensive methodological framework for understanding them.
Having set himself a much more limited, manageable, and straightforward task, Khalil, a reporter who has covered the Middle East for several major Western publications including The Los Angeles Times, succeeds admirably. But his book doesn’t offer any guide to what happened after Mubarak fell in Egypt, or what is likely to happen in that country or anywhere else in the future. Lynch’s project is infinitely more complex and, ultimately, unrealizable, at least at this stage. He certainly deserves a lot of credit for trying, and I’m not sure anyone else could have done any better than Lynch, a professor of political science at George Washington University. As a consequence of being both impossibly broad and clearly premature, Lynch’s book suffers from serious flaws. In many passages it feels rushed, at times even becoming a hodgepodge of incongruous arguments, and Lynch is fixated on the influence of the Qatar-based Al Jazeera television network. But unlike Liberation Square, The Arab Uprising does offer a broad framework for understanding not only what has happened, but what may well happen in the Arab world, and some sober suggestions about what this implies for the United States.
For a detailed, day-by-day account of exactly what happened in Tahrir Square, one need look no further than Liberation Square. This is exemplary reportage: fair, serious, dynamic, and engaging. It is at its most vivid in Chapter Nine, “The Fall of the Police State,” in which Khalil describes in detail the process by which protesters finally overwhelmed the Egyptian security units and forced the military into making a final decision whether or not to intervene to crush the rebellion. He is clear, and correct, that this was the decisive turning point: “Egypt’s nonviolent revolution wouldn’t have happened without some people who were willing to be extremely violent at times. Over a four-day period, a hardcore cadre of protesters confronted and physically shattered the Egyptian police state.” Khalil brings to life a “full-blown rock war” on the crucial day of confrontation, January 28, pitting stone-throwing protesters against tear gas and baton charges from security forces. He explains how “the protesters worked in organized shifts; those returning from the front lines of the conflict were treated for tear-gas exposure and buckshot wounds by makeshift triage units,” while others “dragged a blanket loaded with hundreds of rocks and concrete chunks toward the front to be thrown at the police.”
But Khalil’s three-word final paragraph, after describing the removal of Mubarak by the military, is profoundly misleading: “It was over.” As subsequent events in Egypt have conclusively shown, if by “it” one means the tumultuous changes transforming the Egyptian political scene and system, then “it” had only just begun. The overthrow of Mubarak was, in fact, not a revolution at all, but a regime decapitation by elements of the existing power structure seeking to preserve as much of their supremacy, privileges, and wealth as possible in the face of a popular rebellion. As this essay goes to press, Egypt is still firmly in the grip of the Mubarak-era military. A year after Mubarak’s downfall it would still be possible, and probably accurate, to argue that the fundamental transformation of that country, if that is indeed what is taking place, remains in its infancy.
The greatest strength of Liberation Square is Khalil’s masterful contextualization of the genesis of the Egyptian uprising. He grounds it in the plight of what University of Illinois sociology professor Asef Bayat has perfectly described as the “middle-class poor” in the Arab world, mainly educated and primarily young people who simply cannot find jobs commensurate with their education and expectations. Khalil’s most revealing passages vividly describe the “palpable sense of despair and helplessness…taking hold” in much of Egyptian middle-class poor society in the last decade of Mubarak’s rule. Through an insightful reading of Cultural Film, a superficially lightweight comedy released in 2000, Khalil describes how, because “[t]here are no jobs out there—at least none that pay enough” for young professionals and couples to get their own apartments, their lives are placed on hold for years if not decades. Both careers and romantic relationships fall apart under such strains. Khalil suggestively wonders “just how much pure sexual frustration fed into Egypt’s revolutionary rage.” While the film ends on a contrived happy note, he aptly points out that its main characters in fact “would have no true options other than to start a revolution, join a fundamentalist cell, or kill themselves.”…