April 30, 2012
Soon after Egyptian President Hosni Mubarak was ousted from power last year, protesters stormed the Egyptian national security headquarters, where police records are housed. Some Egyptians found files the authorities had compiled about them. Others uncovered files focusing on friends and colleagues. There were wiretap transcripts and reams of printouts of intercepted e-mails and mobile messages, communications once thought to be private.
As it turns out, American-made technology had helped Mubarak and his security state collect, compile, and parse vast amounts of data about everyday citizens. The Egyptian government was using “deep packet inspection” technology purchased from Narus, a Sunnyvale, California-based firm owned by Boeing. The company’s most successful product is NarusInsight, which, according to Narus’ website, helps “network and security operators obtain real-time situational awareness of the traffic traversing their networks.” In short, the same technology that assists network administrators in pursuing attackers and intruders can also help governments monitor their citizens’ online activities. Narus’ core clients are the U.S. Department of Homeland Security and the National Security Agency, but a good portion of the company’s business comes from abroad. In 2005, Narus signed a multimillion-dollar licensing deal for the use of its technology with Egypt, Palestine, Saudi Arabia, and Libya.
The Narus case is particularly ironic to note this week: on Monday, the White House announced new sanctions against Iran and Syria aimed at technology that Tehran and Damascus are using to target their own citizens. President Barack Obama said of the Internet and mobile technologies that they “should be in place to empower citizens, not suppress them.”
In the Internet age, it is technically trivial for corporations and governments to gain access to people’s private communications and track their movements. The Obama administration recognizes that online freedom requires not only an open and uncensored Internet, but also one on which government and corporate surveillance powers are appropriately constrained, so that citizens are protected against abuse, and abusers are held accountable. Without strong global standards of public transparency and accountability in how surveillance technologies are deployed, the empowering potential of the Internet diminishes quickly.
Yet, even as the White House clamps down in Iran and Syria, other parts of the U.S. government are driving the development of policies, regulatory norms, and business practices that make a mockery of Washington’s well-meaning efforts to expand Internet freedom abroad. Put another way, although the State Department funnels millions of dollars to nonprofits fighting censorship and surveillance beyond U.S. borders, repressive digital surveillance around the world continues to expand in scope and sophistication.
Over the past four years, as part of Secretary of State Hillary Clinton’s “global Internet freedom” agenda, the State Department has spent more than $70 million promoting Internet access around the world. The money has funded projects that produce circumvention software — for example, Tor, Psiphon, Ultrasurf, and Freegate — that has helped millions of people in China, Iran, and other countries access censored websites. Other initiatives have provided Internet security training for activists and bloggers. State Department-funded groups now publish technical training manuals in more than a dozen languages.
Underscoring the point, last December Clinton gave a speech at a Dutch-sponsored Internet freedom conference in The Hague, calling for a “global coalition to preserve an open Internet.” Soon after her speech, the 34 member-states of the OECD adopted principles that stressed keeping the Internet open and interconnected and called on member states to “ensure transparency, fair process, and accountability.” But then the document pivoted — it emphasized the need to “encourage co-operation to promote Internet security” and “give appropriate priority to enforcement efforts.” That language provided a loophole for governments to do what they deem necessary as long as the goal is labeled “security” and “enforcement.”
Pathbreaking as Clinton’s global Internet agenda may be, it is dwarfed by a multibillion-dollar global censorship and surveillance technology industry. The bulk of that work emanates from research and development labs owned by companies based in North America and Western Europe whose main clientele — as in the case of Narus — are law enforcement and national security agencies of their own governments. According to the Washington Post, at a surveillance technology trade show held last year near Washington, D.C., known informally as the “wiretappers’ ball,” 35 federal agencies, alongside representatives from state and local law enforcement, joined representatives of 43 countries to inspect the wares of companies that manufacture the world’s most state-of-the-art surveillance tools and devices. Such trade shows are held regularly around the world as part of a global market that sells an estimated $5 billion worth of cutting-edge surveillance equipment every year.
Despite the Obama administration’s proclaimed commitment to global Internet freedom, the executive branch is not transparent about the types and capabilities of surveillance technologies it is sourcing and purchasing — or about which other governments are purchasing the same technology. Trade shows such as the wiretappers’ ball are highly secretive and ban journalists from attending. None of the U.S. agencies that attended the wiretappers’ ball — including the FBI, the Secret Service, and every branch of the military — were willing to comment when a reporter queried them about their attendance.
Revelations over the past several years, however, show that these technologies are deployed in illegal and unconstitutional contexts. The American Civil Liberties Union recently uncovered evidence that police departments around the United States have used cell phone tracking technology in non-emergency situations — without court orders or warrants. In 2004, a whistleblower revealed that the National Security Agency built a secret room inside an AT&T facility in San Francisco, into which all phone and e-mail traffic passing through the facility was copied. The software used to inspect the data and transmit anything of interest back to the NSA came from Narus. According to national security expert James Bamford, secret NSA rooms using Narus technology are still operating at AT&T facilities around the country…
Race and Violence, the European Way: For every Sanford, Fla., there’s an Amsterdam, Utoya, Zwickau & Toulouse.
April 30, 2012
The shooting of seventeen-year-old Trayvon Martin in Sanford, Florida, on February 26, a tragic loss of life, rapidly spiraled into a national spectacle involving a toxic mixture of race and violence. NBC added fuel to the flames by broadcasting an edited version of remarks by the alleged shooter, George Zimmerman, distorting his words in a way that seemed to corroborate the narrative of racial profiling.
The White House also seized the opportunity to leap into the fray, adding a national political dimension to local crime enforcement, presumably to rally the Democratic base in the upcoming presidential election. The responses to the killing, whether in the media, politics, or the general public, have played out against the backdrop of our persistent anxieties about race, a legacy of the American past of slavery.
As American as the Martin shooting was, it is worthwhile to look across the Atlantic and to consider the growing frequency and virulence of parallel events there. Race and violence—and their politicization—are by no means exclusively U.S. phenomena. On the contrary, contemporary European societies display similar troubling tendencies, marked by the fragmentation of ethnically-mixed populations, the spread of extremist ideologies, a growing willingness among radicals to engage in violence, and the propensity of politicians to instrumentalize racial and ethnic anxieties for electoral purposes.
America and Europe have more in common in these areas than is commonly recognized. Moreover, the exacerbation of ethnic conflict in Europe is not limited to the Eastern fringe, the states formerly part of the Communist world. Nor is the new European violence primarily localized in the zones of economic meltdown, such as Greece and Spain. On the contrary, race and violence are more notorious in the central welfare states of the European Union—Scandinavia, France, and Germany.
On July 22, 2011, Norwegian Anders Behring Breivik took the lives of seventy-seven victims, first detonating a car bomb in Oslo’s government district and then going on a shooting spree on nearby Utoya Island. His internet postings displayed an ideological extremism full of hostility toward Muslim immigrants and driven by a vision of mortal combat between Islam and the West.
In the wake of the slaughter, some public figures tried to use Breivik’s crimes to score political points by blaming the violence on anyone who had raised questions about Europe’s failed integration policies. The tragedy of Utoya sadly became a convenient tool for the left to attack conservative political opponents. Breivik’s trial is now underway. If he is found guilty, he faces a maximum sentence of twenty-one years; there is no death penalty in Norway. In the meantime, he has begun to use the trial as an opportunity to broadcast his extremist message. It is likely that the court will eventually declare him insane in order to defuse the explosive politics of the case.
Europe’s simmering ethnic conflicts are most pronounced in the central welfare states.
Breivik’s crimes and extraordinary brutality represent, in an extreme version, the simmering ethnic and racial conflicts in Europe—perhaps especially in Northern Europe. Once renowned for tolerance and liberalism, the North has witnessed acts of gross intolerance, especially around issues of immigration, on the part of both immigrants and locals.
The assassination of Dutch filmmaker Theo van Gogh on November 2, 2004 by Mohammed Bouyeri took place by day in the streets of Amsterdam. Born in Holland of Moroccan parents, Bouyeri left a note, pinned to van Gogh’s corpse with a knife, attacking him for a film critical of Islam’s treatment of women. The note included threats to Jews, the West, and individual politicians, including the then-Interior Minister of France, Nicolas Sarkozy, who became the President of France in 2007.
Less than a year after the van Gogh killing, in September 2005, the Danish newspaper Jyllands-Posten published a set of Mohammed caricatures, which set off violent protests worldwide. An extensive debate ensued over freedom of the press and its relation to religion, in some ways a reprise of the controversy around Salman Rushdie’s novel The Satanic Verses (1988). Yet in these more recent conflicts—the van Gogh assassination, the Mohammed cartoons, and the Breivik killings—something very ugly and ominous is coming to the fore: the breakdown of contemporary societies along racial lines, undermining democratic norms.
Two Danish writers provide a compelling explanation of this worrisome development in their book that has recently appeared in English translation, The Democratic Contradictions of Multiculturalism. Authors Jens-Martin Eriksen and Frederik Stjernfelt take a hard look at the consequences of defining people in terms of cultural communities and then giving those cultural identities priority over the expectations of equality before the law. Multiculturalism in Europe emphasizes group rights over individual rights, thereby undermining the tradition of political liberalism in the name of protecting collective cultural traditions.
European policies facilitate the segregation of immigrant populations.
Eriksen and Stjernfelt trace the history of the idea of “culturalism.” It began with particular schools of anthropology, eventually influenced policies by the United Nations, and led to divisive consequences for contemporary Europe. They suggest that current policies facilitate the segregation of immigrant populations. As a result, immigrant ghettos can incubate radicalism among disaffected youth, like Bouyeri, while the structural separation of majority and minority populations feeds the potential for populist resentment or even violent extremism, such as Breivik’s. Consider two recent cases in Germany and France…
April 30, 2012
Headlines in America’s newspaper of record imply that if you’re not feeling lonely, you may be the lonely exception: “Sad, Lonely World Discovered in Cyberspace”; “Alone in the Vast Wasteland”; and “The Lonely American Just Got a Bit Lonelier.” Add books such as Bowling Alone, The Lonely American, and Alone Together, and you might think that there is an epidemic of loneliness.
An endemic epidemic, perhaps, because we have received such diagnoses for generations. The 1950s—the era of large families, crowded churches, and schmoozing suburbanites—brought us hand-wringing books such as Man Alone: Alienation in Modern Society and the best-selling The Lonely Crowd, which landed author David Riesman on the cover of Time magazine. About a half-century before that, policymakers were worrying about the loneliness of America’s farmers, and observers were attributing a rising suicide rate to the loneliness of immigrants or to modernity in general. And so on, ever back in time. Noted historian Page Smith described colonial Americans’ “cosmic loneliness” and the upset stomachs and alcoholism that resulted. Americans have either been getting lonelier since time immemorial or worrying about it since then.
The latter is more likely. Social scientists have more precisely tracked Americans’ isolation and reports of loneliness over the last several decades. The real news, they have discovered, is that there is no such epidemic; there isn’t even a meaningful trend.
If we turned to historians to measure Americans’ degree of isolation over the centuries, they would probably find periods of growing and lessening social connection. The rough evidence indicates a general decline in isolation. When you think back to, say, a century ago, don’t call up some nostalgic Our Town image (although alienation is a theme in that play). Picture more accurately the millions of immigrants and jobless, farm-less Americans trekking from one part of the country to another, out of touch with family and likely to be trekking again the next year.
Skeptical readers may vaguely recall an oft-repeated “factoid” that Americans have fewer close friends than ever. In 2006 sociologists at Duke reported, based on comparing two General Social Surveys, that the percentage of Americans who had no one to confide in tripled between 1985 and 2004, from about 8 to about 25 percent. Headlines ensued: “Social Isolation: Americans Have Fewer Close Confidantes” (NPR), “Social Isolation Growing in the U.S.” (Washington Post). In 2009 the report’s authors conceded, under pressure from a critic, that the valid estimate for 2004 isolation could be as low as 10 percent. (Disclosure: I was the critic.) Even that figure is likely to have been a technical error and is certainly an anomaly. One coauthor recently admitted that the finding was unreliable.
Several surveys conducted from 1970 through 2010 have asked Americans the same questions about their social bonds. The results, which I compiled in Still Connected (2011), show that some aspects of social involvement have changed since the 1970s. In particular, Americans these days sit down to fewer family dinners and host guests in their homes less often; eating and sociability continue, but outside the home. Americans communicate more frequently with their relatives and friends. Critically, Americans are not discernibly more isolated—few were isolated at any point in those decades—and Americans remain just as confident of the support family and friends provide.
Many commentators are sure that new technologies have made us lonelier, but people use new media to enhance their existing relationships…
A Small World After All? The Internet has changed many things, but not the insular habits of mind that keep the world from becoming truly connected
April 29, 2012
When the Cold War ended, the work of America’s intelligence analysts suddenly became vastly more difficult. In the past, they had known who the nation’s main adversaries were and what bits of information they needed to acquire about them: the number of SS-9 missiles Moscow could deploy, for example, or the number of warheads each missile could carry. The U.S. intelligence community had been in search of secrets—facts that exist but are hidden by one government from another. After the Soviet Union’s collapse, as Bruce Berkowitz and Allan Goodman observe in Best Truth: Intelligence in the Information Age (2002), it found a new role thrust upon it: the untangling of mysteries.
Computer security expert Susan Landau identifies the 1979 Islamic Revolution in Iran as one of the first indicators that the intelligence community needed to shift its focus from secrets to mysteries. On its surface, Iran was a strong, stable ally of the United States, an “island of stability” in the region, according to President Jimmy Carter. The rapid ouster of the shah and a referendum that turned a monarchy into a theocracy led by a formerly exiled religious scholar left governments around the world shocked and baffled.
The Islamic Revolution was a surprise because it had taken root in mosques and homes, not palaces or barracks. The calls to resist the shah weren’t broadcast on state media but transmitted via handmade leaflets and audiocassettes of speeches by Ayatollah Khomeini. In their book analyzing the events of 1979, Small Media, Big Revolution (1994), Annabelle Sreberny and Ali Mohammadi, who both participated in the Iranian revolution, emphasize the role of two types of technology: tools that let people obtain access to information from outside Iran, and tools that let people spread and share that information on a local scale. Connections to the outside world (direct-dial long-distance phone lines, cassettes of sermons sent through the mail, broadcasts on the BBC World Service) and tools that amplified those connections (home cassette recorders, photocopying machines) helped build a movement more potent than governments and armies had anticipated.
As we enter an age of increased global connection, we are also entering an age of increasing participation. The billions of people worldwide who access the Internet via computers and mobile phones have access to information far beyond their borders, and the opportunity to contribute their own insights and opinions. It should be no surprise that this increase in connection has brought with it a concomitant rise in mystery.
The mysteries brought to the fore in a connected age extend well beyond the realm of political power. Bad subprime loans in the United States lead to the failure of an investment bank; this, in turn, depresses interbank lending, pushing Iceland’s heavily leveraged economy into collapse and consequently leaving British consumers infuriated at the disappearance of their deposits from Icelandic banks that had offered high interest rates on savings accounts. An American businessman on a flight to Singapore takes ill, and epidemiologists find themselves tracing the SARS epidemic in cities from Toronto to Manila, eventually discovering a disease that originated with civet cats and was passed to humans because civets are sold as food in southern China. Not all mysteries are tragedies—the path of a musical style from Miami clubs through dance parties in the favelas of Rio to the hit singles of British–Sri Lankan singer M.I.A. is at least as unexpected and convoluted.
Uncovering secrets might require counting missile silos in satellite images or debriefing double agents. To understand our connected world, we need different skills. Landau suggests that “solving mysteries requires deep, often unconventional thinking, and a full picture of the world around the mystery.”
The unexpected outbreak of the Arab Spring, a mystery that’s still unfolding, suggests that we may not be getting this full picture, or the deep, unconventional thinking we need. Had you asked an expert on the Middle East what changes were likely to take place in 2011, almost none would have predicted the Arab Spring, and none would have chosen Tunisia as the flashpoint for the movement. Zine el Abidine Ben Ali had ruled the North African nation virtually unchallenged since 1987, and had co-opted, jailed, or exiled anyone likely to challenge his authority. When vegetable seller Mohamed Bouazizi set himself on fire, there was no reason to expect his family’s protests against government corruption to spread beyond the village of Sidi Bouzid. After all, the combination of military cordons, violence against protesters, a sycophantic domestic press, and a ban on international news media had, in the past, ensured that dissent remained local.
Not this time. Video of protests in Sidi Bouzid, shot on mobile phones and uploaded to Facebook, reached Tunisian dissidents in Europe. They indexed and translated the footage and packaged it for distribution on sympathetic networks such as al-Jazeera. Widely watched in Tunisia, al-Jazeera alerted citizens in Tunis and Sfax to protests taking place in another corner of their country, which in effect served as an invitation to participate. As Ben Ali’s regime trembled and fell, images of the protests spread throughout the region, inspiring similar outpourings in more than a dozen countries and the overthrow of two additional regimes.
While the impact of Tunisia’s revolution is now appreciated, the protests that led to Ben Ali’s ouster were invisible in much of the world. The New York Times first mentioned Mohamed Bouazizi and Sidi Bouzid in print on January 15, 2011, the day after Ben Ali fled. The U.S. intelligence apparatus was no more prescient. Senator Dianne Feinstein (D.-Calif.), who chairs the Senate Intelligence Committee, wondered to reporters, “Was someone looking at what was going on the Internet?”
A central paradox of this connected age is that while it’s easier than ever to share information and perspectives from different parts of the world, we may be encountering a narrower picture of the world than we did in less connected days. During the Vietnam War, television reporting from the frontlines involved transporting exposed film from Southeast Asia by air, then developing and editing it in the United States before broadcasting it days later. Now, an unfolding crisis such as the Japanese tsunami or Haitian earthquake can be reported in real time via satellite. Despite these lowered barriers, today’s American television news features less than half as many international stories as were broadcast in the 1970s.
The pace of print media reporting has accelerated sharply, with newspapers moving to a “digital first” strategy, publishing fresh information online as news breaks. While papers publish many more stories than they did 40 years ago (online and offline), Britain’s four major dailies publish on average 45 percent fewer international stories than they did in 1979.
Why worry about what’s covered in newspapers and television when it’s possible to read firsthand accounts from Syria or Sierra Leone? Research suggests that we rarely read such accounts. My studies of online news consumption show that 95 percent of the news consumed by American Internet users is published in the United States. By this metric, the United States is less parochial than many other nations, which consume even less news published in other countries. This locality effect crosses into social media as well. A recent study of Twitter, a tool used by 400 million people around the world, showed that we’re far more likely to follow people who are physically close to us than to follow someone outside our home country’s borders, or even a few states or provinces away. Thirty-nine percent of the relationships on Twitter involve someone following the tweets of a person in the same metropolitan area. In the Twitter hotbed of São Paulo, Brazil, more than 78 percent of the relationships are local. So much for the death of distance…
After weeks of skirmishes in the Nafusa Mountains southwest of Tripoli, Sifaw Twawa and his brigade of freedom fighters are at a standstill. It’s a mid-April night in 2011, and Twawa’s men are frightened. Lightly armed and hidden only by trees, they are a stone’s throw from one of four Grad 122-millimeter multiple-rocket launchers laying down a barrage on Yefren, their besieged hometown. These weapons can fire up to 40 unguided rockets in 20 seconds. Each round carries a high-explosive fragmentation warhead weighing 40 pounds. They urgently need to know how to deal with this, or they will have to pull back. Twawa’s cell phone rings.
Two friends are on the line, via a Skype conference call. Nureddin Ashammakhi is in Finland, where he heads a research team developing biomaterials technology, and Khalid Hatashe, a medical doctor, is in the United Kingdom. The Qaddafi regime trained Hatashe on Grads during his compulsory military service. He explains that Twawa’s katiba—brigade—is well short of the Grad’s minimum range: at this distance, any rockets fired would shoot past them. Hatashe adds that the launcher can be triggered from several hundred feet away using an electric cable, so the enemy may not be in or near the launch vehicle. Twawa’s men successfully attack the Grad—all because two civilians briefed their leader, over Skype, in a battlefield a continent away.
Indeed, civilians have “rushed the field,” says David Kilcullen, author of The Accidental Guerrilla, a renowned expert on counterinsurgency and a former special advisor to General David Petraeus during the Iraq War. Their communications can now directly affect a military operation’s dynamics. “Information networks,” he says, “will define the future of conflicts.” That future started unfurling when Libyan networks—and a long list of global activists—began an information war against Qaddafi. Thousands of civilians took part, but one of the most important was a man who, to paraphrase Woodrow Wilson, used not only all the brains he had but all the brains he could borrow.
The war against Qaddafi was fought with global brains, NATO brawn, and Libyan blood. But it took brains and blood to get the brawn. On February 18, three days into the protests that would swell into the successful revolt against the regime, Libya went offline. Internet and cell-phone access was cut or unreliable for the duration, and people used whatever limited connections they could. In Benghazi, Mohammed “Mo” Nabbous realized he had the knowledge and the equipment, from an ISP business he had owned, to lash together a satellite Internet uplink. With supporters shielding his body from potential snipers, Nabbous set up dishes, and nine live webcams, for his online TV channel Libya Alhurra (“Libya the Free”), running 24/7 on Livestream.
Nabbous had pitched a brightly lit virtual tent in a darkening Libya. As Benghazi descended into fighting that killed hundreds and left thousands injured, he gave interviews to international media outlets such as CNN and the BBC. He also connected with supporters and activists from dozens of countries, among whom a cadre of information warriors soon emerged.
Stephanie Lamy was one. A self-described strategic communications consultant and single mother living in Paris, she was using the Egyptian and Libyan revolutions to explain her work to her nine-year-old daughter. They searched Google and found Libya Alhurra TV; Lamy was hooked. “When I saw the cries for help on Livestream, I knew my skills were just perfect for this situation, and it was my duty to help,” she says. She abandoned her business and started working up to 24 hours a day. It was a situation where “each action counted.”
In its first six weeks, the channel served 25 million “viewer minutes” to more than 452,000 unique viewers. Nabbous had only enough bandwidth to broadcast, so volunteers stepped forward to capture and upload video. Livestream took an active role, too: it archived backups several times a day, dedicated a security team to guard against hackers, and waived its fees. Others ran Facebook groups or monitored Twitter, pasting tweets and links into the chat box. They shared first-aid information in Arabic and transcribed or roughly translated interviews in close to real time. “All of us were on a fast learning curve,” says Lamy. “Tanks were moving in, people were getting shelled, people were getting massacred.”
On March 19, Qaddafi launched an assault on Benghazi. With shells exploding, Nabbous said, “No one is going to believe what they are going to see right now!” before heading out to report live. He was still broadcasting when a sniper shot him. Hours after Nabbous’s death, French fighter jets strafed the heavy armor attacking Benghazi. His widow, Samra Naas, pregnant with their first child, broadcast in his place: “What he started has got to go on, no matter what happens.” Along with friends and family, three women she had never met spent much of the night comforting her, as best they could, over Skype.
THE HIT LIST
Among them was Charlie Farah, a Lebanese-American radio producer. She arranged technical support for Libya Alhurra TV, as well as two-way satellite subscriptions for freedom fighters. That required their trust. “When someone you’ve never met says they’ll pay for your satellite, they get your GPS coördinates,” she points out. “In the wrong hands, a missile could follow.”
Most freedom fighters were civilians with no first-aid or weapons training. Farah started teaching what she could about basic triage, planning escape routes, and how to fire and move. She showed people how to share files using YouSendIt, because guards at regime checkpoints were now searching for information being smuggled on portable media. (Rebels in Sabratha had hidden thumb drives in their hair; weapons were slung under their sheep.) For the fighters, discovery could mean imprisonment, torture, or execution.
Although those in Libya were most at risk, Qaddafi had a grim reputation for lashing out overseas. In addition to his involvement in the Berlin nightclub bombing of 1986 and the explosion of Pan Am Flight 103 over Lockerbie, Scotland, in 1988, he supported a bizarre collection of terrorist groups and hunted individual dissidents. In the 1980s, he had dozens assassinated around the world…
Turning Diabetes Treatment Upside Down: Flipping conventional insulin treatment upside down, with startling results
April 29, 2012
There is something eerily familiar about Athens, Ohio, even if you grew up in New York City. It’s the accessible beauty of Appalachia, which surrounds the town — the gentle hills, the long, flat fields, the meandering brooks and neat, smallish farms.
It’s something more nefarious as well: the profound rural poverty vivid in the mini-malls and convenience stores on the outskirts of town. It’s the curse of plenty — the deal with the devil that this area made long ago with large mining corporations and fast-food chains. And it’s the number of overweight and obese people of all ages in stores and on the streets. People in Athens are pleasant and helpful, but they seem exhausted and desperate, not only from generations of poverty and hard physical labor on farms and in mines, but also from the hard work of moving with extra weight. The self seems buried, like a trapped animal, in the body.
The data is mind-numbing: two-thirds of all Americans are overweight or obese, putting them at increased risk for diabetes. Many of us already have the disease and don’t know it. It’s an epidemic: according to the Centers for Disease Control and Prevention, in 2010, one in 10 Americans had type 2 diabetes. And roughly one in every 10 health-care dollars is spent on diabetes every year.
Today, if you’re diagnosed with type 2 diabetes — the most common form — your treatment would likely go like this: Your doctor would tell you to change your lifestyle, exercise more, lose weight, eat more fruits and vegetables. You’d be given a device to check your blood-glucose levels, and you’d be told to come back in a couple of months. After an average of two years of checking your glucose levels, you’d be put on metformin, the most common medicine for type 2 diabetics. Then the disheartening process of “stacking meds” would start. In addition to metformin, which lowers blood sugar by reducing the amount of sugar produced by the liver and helps the body better use its own insulin, you’d be put on sulfonylureas, drugs that stimulate the pancreas to release additional insulin. As this combination became ineffective, you would be put on any of nine other classes of drugs, an average of one additional med every two years. At the end of 10 years, you’d finally be shown how to inject insulin several times a day to lower your blood sugar. If you’re like most patients, this is when you’d feel like a failure. And you’d have spent 10 years thinking you were the victim of a disease.
When Jay Shubrook goes to the grocery store, people stop him in the aisles to ask questions, or often just to say thank you. Although his two teenage daughters have come to dread these trips (a run to the store is never quick), he doesn’t mind: the encounters mean his patients trust him. And trust is critical for doctors working with patients who have a chronic disease, because success depends on the patients’ ability to manage their own health.
That ability is particularly important to Shubrook, because the multipronged approach he’s taking in his research on diabetes is revolutionary. He refuses to treat his patients as passive victims. He asks them to fight — to take certain risks, and face deep fears. And he is turning the practice of stacking meds upside down.
In 1991, fresh out of the University of California, Santa Cruz, with an undergraduate degree in psychobiology, Jay Shubrook got on his bike and pedaled to Appalachia and Ohio University’s medical school. Shubrook can seem Zelig-like. With his blond hair, boyish face, and freckles, he’d fit right in on the beaches of Southern California. His succinct delivery wouldn’t be out of place in a fast-paced city. But he chose small-town Athens because Ohio University had an osteopathic medical school where the students were trained to look at the whole person rather than the specific problem or disease.
When he was a fourth-year medical student, Shubrook did an endocrinology rotation with Frank Schwartz, a doctor in private practice in nearby West Virginia. It was a pivotal meeting for Shubrook: Schwartz had spent almost two decades helping people with diabetes. “I loved my time with him,” Shubrook says. So much so that, in 1998, as a resident, Shubrook requested another rotation with Schwartz.
At the time, many people saw diabetes as a hopeless death sentence. “When I was in school, and even after, the more chronic the disease was, the less sexy — precisely because there is no magic pill,” Shubrook says. “My predecessors were fascinated with medicines. No one wanted to work on diabetes.”
Type 1 diabetes, an autoimmune disease that accounts for 5 to 10 percent of all cases, is diagnosed most often in children and teenagers. In this form of the disease, the immune system mistakenly attacks the beta cells in the pancreas that produce the insulin needed to maintain normal blood sugar, effectively shutting down insulin production. But 90 to 95 percent of all diabetes diagnoses are type 2 — the kind that can be brought on by obesity. People with type 2 can produce insulin, but their cells don’t recognize it, so blood sugar isn’t metabolized properly and rises and falls to dangerous levels.
Dr. Frank Schwartz in his lab at Ohio University in Athens. (John Sattler/Ohio University)
In reality, Shubrook explains, there are more than just two types of diabetes; there are many types — and many misconceptions about them. A lot of people think only adults get type 2 diabetes and only kids get type 1. And a lot of people think you can treat all kinds of diabetes the same way. “I have had some families think they can cure their child who has type 1 diabetes with diet and exercise,” Shubrook says, “and this is just not the case.”
Type 2 diabetes rates have increased for all ages in this country, and among people in their 30s, the rate has risen by 70 percent in just the past 10 years. That has meant increases in the ghastly problems that can accompany diabetes: dental disease, kidney disease, nervous-system disorders, blindness, limb amputation, heart disease, and strokes. Because of the nation’s high obesity rates, type 2 diabetes — once seen only in adults — has become a common diagnosis in teenagers. In Ohio, more than 10 percent of the population has diabetes, and in Appalachian Ohio, that rate can be twice as high as in other regions of the state.
In 2003, Schwartz was asked by Ohio University to start a diabetes center — which gave him an opportunity to return to diabetes research. Looking to develop a diabetes practice that focused on health and prevention, Shubrook and Schwartz opened the Appalachian Rural Health Institute Diabetes/Endocrine Center.
Since then, Shubrook, Schwartz, and the teams they hired have been transforming diabetes care in Appalachian Ohio. With nearly $1.3 million in funding from the National Institutes of Health, they partnered with Mary de Groot, of the Diabetes Translational Research Center at Indiana University, to start Program ACTIVE, which combines talk therapy and exercise for patients with type 2. The team also opened Athens’s Diabetes Free Clinic for people without insurance. And Shubrook launched programs for health-care workers and educators, and for obese children and their parents…
How to Save Marriage From Hitting the Rocks: Couples don’t take their wedding vows seriously when they can get out of matrimony more easily than their mortgage
April 28, 2012
Marriage has had a good press lately. More people are marrying and more people are staying married. This is welcome news. I have recently met a number of community groups that promote marriage in schools, colleges and generally in society, an encouraging and hopeful experience for me. But of course, while I welcome this greater interest in marriage, both in promoting it and defending it, it is impossible to do either well unless we understand what marriage and the family are.
The Cambridge Group for the History of Population and Social Structure carried out a long-term, cross-cultural study of the family in many different countries some years ago. It discovered that the father, the mother and the children are at the heart of family structure, even in cultures that have traditions of extended families. This is the proper use of the term “nuclear”. A nucleus is the centre, the heart, the core of a cell or an atom. It is not what it is taken to mean these days: isolated from everything else.
This research was backed up later by Martin Richards’s work at the Cambridge Centre for Family Research, which showed how important it is for children to have both parents for their upbringing, and the consequences for them of divorce. Even in situations of conflict (provided they are not too extreme) children prefer their parents to stay together. These centres were building on the discipline known as the sociology of the family, which began in the 1940s and considered the family in the sense I have just described as “natural” or “normative”. This family, expressed as it is in different contexts, is ubiquitous, found in many different cultures and over the course of history.
The Christian church does not claim to have invented the idea of the family. What it has done, at its best, is to identify those things in cultures and places that were there already and which needed to be affirmed and strengthened, sometimes corrected and, on occasion, refuted. So the church built on the Hebraic tradition of the family, as taught in the Old Testament and practised in Judaism, but it also acknowledged some features of the Roman idea of marriage and the family, and built on that. The Greeks were more difficult to learn from because on the one hand you have the statism of Plato, where governors and guardians give up their children to public nurseries — there is nothing new under the sun — so that they can take part in public life. Aristotle on the other hand was overly biological in his understanding of marriage and the family.
The Christian church based its approach to marriage and the family, as it found it to be in different places, on the one-flesh union of man and woman. They were not only given a common mission (known as the cultural mandate in the world) and created and ordered towards one another for the birth and nurture of children, but also for their own fulfilment and security. It is very important to understand this.
Throughout the course of Christian history there have been many expressions of this view of marriage and the family, but I want to examine just one. His is a name difficult to avoid on the topic: St Augustine, the great Bishop of Hippo in North Africa. Augustine saw marriage first of all as the coming together of man and woman for the sake of children, but also for the sake of the security of the partners. This is what you might call the contractual view of marriage. It needed to be understood in a lifelong sense because, apart from anything else, the human child takes a long time to grow up.
But Augustine didn’t stop there. He went on to speak of the commitment that is necessary — contract is not enough — so that we do not use one another simply as a means to our own selfish ends, but commit ourselves to the other as a person. Augustine also spoke of the sacramental bond, which means that you are now not talking about two but one — the unity that is created by the complementarity of man and woman. It is a unity that arises out of similarity and difference. There has to be another in the marriage so that we can come together in this particular way for the common good, for children, and for one’s own fulfilment.
Augustine has remained salient in nearly all the thinking about marriage, certainly in the Christian church but indeed well beyond that — for example, in the Enlightenment.
Let’s take three typical thinkers of the Enlightenment. John Locke emphasised the importance of the contractual side of marriage and particularly the contract that is undertaken for the birth and nurture of children. But the implication is that the contract might not last beyond the growing up of the children. That’s the weakness in Locke’s position, whereas we would have to say, relying on Augustine, that the contract is not just for the sake of the children but also for the security of the partners. What happens when you have brought up the children and then you are abandoned? This is not an unfamiliar story these days.
Immanuel Kant, on the other hand, developed St Augustine’s emphasis on commitment into what he called the unbreakable promise — when you undertake a vow it is then your duty to keep that vow. For him there is no duty higher than the keeping of a promise.
Contract and commitment in this sense of duty, of unbreakable promise, are important elements of marriage, but it was Hegel, another great Enlightenment figure, who talked about the mystical union. Here we are coming very close to the Augustinian idea of the sacramental bond. For Hegel the differences that exist between the two persons are overcome so that there is a real unity of thought, direction and destiny in the marriage. Although in the Christian tradition, marriage between the baptised is thought of as sacramental because it is a sign of unity between Christ and his church, Hegel extends this to marriage in the natural sense. This is reflected to some extent in the Anglican emphasis on marriage as a creation ordinance…
India’s political and business elites have long harbored a desire for their country to become a great power. They cheered when Prime Minister Manmohan Singh finalized a nuclear deal with the United States in 2008. Indian elites saw the deal, which gave India access to nuclear technology despite its refusal to give up its nuclear weapons or sign the Nuclear Nonproliferation Treaty, as a recognition of its growing influence and power. And Indian elites were also encouraged when U.S. President Barack Obama announced, during a 2010 visit to India, that the United States would support India’s quest to gain permanent membership on the United Nations Security Council, which would put the country on an equal footing with its longtime rival, China. In recent years, such sentiments have also spread to large segments of the Indian middle class, which, owing to the country’s remarkable economic growth in the past two decades, now numbers around 300 million. Nearly nine out of ten Indians say their country already is or will eventually be one of the most powerful nations in the world, an October 2010 Pew Global Attitudes survey revealed.
Symbols of India’s newfound wealth and power abound. Last year, 55 Indians graced Forbes’ list of the world’s billionaires, up from 23 in 2006. In 2008, the Indian automobile company Tata Motors acquired Jaguar and Land Rover; last year, Harvard Business School broke ground on Tata Hall, a new academic center made possible by a gift of $50 million from the company’s chair, Ratan Tata. And in 2009, a company run by the Indian billionaire Anil Ambani, a telecommunications and Bollywood baron, acquired a 50 percent stake in Steven Spielberg’s production company, DreamWorks. Gaudy, gargantuan shopping malls proliferate in India’s cities, and BMWs compete with auto-rickshaws on crowded Indian roads. Tom Cruise, eyeing the enormous Indian movie market, cast Anil Kapoor, a veteran Bollywood star, in the most recent Mission: Impossible sequel and spent a few weeks in the country to promote the film. “Now they are coming to us,” one Indian tabloid gloated.
But even as Indian elites confidently predict their country’s inevitable rise, it is not difficult to detect a distinct unease about the future, a fear that the promise of India’s international ascendance might prove hollow. This anxiety stems from the tense duality that defines contemporary India, an influential democracy with a booming economy that is also home to more poor people than any other country in the world.
Of course, staggering poverty and crippling inequality at home do not necessarily prevent countries from trying to project their power abroad. When India won its independence, in 1947, it was even poorer than it is today. Yet Jawaharlal Nehru, the country’s founding prime minister, sought to raise India’s international profile, providing significant political support to independence movements in British colonies in Africa and Asia and helping found the Non-Aligned Movement. Throughout the Cold War, Indian leaders sought to use their country’s victory over British colonialism to inspire other subject peoples in their own struggles for self-determination — and, in the process, to gain more global influence than otherwise might have been possible for an impoverished country. In this way, India’s Cold War-era foreign policies, although primarily concerned with national interests, contained an element of idealism, and the country’s growing international profile during those early decades of independence served as a powerful symbol of freedom and autonomy in the Third World.
Over time, however, India has exchanged idealism for realism, as the country’s leaders have gradually abandoned an anticolonial distrust of hegemony and embraced great-power ambitions of their own. Thus, although India has made admirable progress in many areas, it is unclear whether an ever-growing Indian role in global affairs symbolizes anything more than the country’s expanding definition of its self-interest. It is therefore hard to avoid feeling a sense of ambivalence when considering the prospect of India’s ascent, especially when one scrutinizes the poverty, corruption, and inequality that suffuse Indian life today — as do two recent, revealing books: Behind the Beautiful Forevers, by Katherine Boo, and The Beautiful and the Damned, by Siddhartha Deb.
NOT SO BEAUTIFUL
The economic reforms India enacted in the early 1990s and the economic growth they spurred have pushed more than 100 million Indians above the poverty line and created a vibrant middle class. But 455 million Indian citizens — more than a third of the country’s population — still live on less than $1.25 a day, the subsistence poverty line set by the World Bank. Images of India’s poor are almost a cliché. But the ubiquity of these depictions obscures the fact that very few of them provide rich, multilayered accounts of how the country’s impoverished millions actually live.
Boo’s new book is a welcome exception. An extraordinary work of reportage, Behind the Beautiful Forevers is the single most illuminating portrait of India’s poor, their ambitions, and the monumental labors they perform and sacrifices they make to escape destitution. Boo, a staff writer at The New Yorker, has written movingly about poverty and the unequal distribution of opportunity in the United States. But beginning in 2007, she spent three years in Annawadi, a Mumbai slum abutting the city’s international airport — “a stretch where new India and old India collided and made new India late,” as she puts it.
In 1991, a group of about a dozen Tamil migrant workers were hired to repair a runway at the airport. After completing the job, they decided to settle nearby, hoping to make a living recycling the seemingly endless piles of scrap metal and garbage generated by the airport and the construction of luxury hotels adjoining it. “In an area with little unclaimed space, a sodden snake-filled bit of brushland across the street from the international terminal seemed like the least-bad place to live,” Boo writes. The migrants cleared the brush, filled the swamp with dry earth, and built shacks on the new solid ground. The squalid encampment eventually grew to house 3,000 people. Today, the overwhelming majority of Annawadi’s residents are engaged in the informal, unorganized economy, working off the books without any legal protections or guarantees of a minimum wage — as do 85 percent of all Indian workers. They labor in conditions that are unhygienic and dangerous. But the meager wages they earn allow them to live above the official poverty level.
Debates about poverty in India often overlook just how hard India’s poor work to improve their conditions. By focusing on individual residents of the slum, Boo draws a moving portrait of that struggle. Abdul, a teenager who lives in Annawadi, is an expert at sorting trash and scrap metal and then selling it to recyclers. His days begin early, arranging screws, nails, and bottle caps into neat piles. By sunset, he has usually sorted about a dozen sacks of garbage, which he hauls to a buyer in a beat-up three-wheeled cart. In the years Boo spent observing Abdul, his wages helped his family add a roof to their shack and pay $450 to have his father treated for lung disease in a private hospital. Still, Boo notes, Abdul’s mother longs for a more hygienic way of life for her four children: “She wanted a shelf on which to cook without rat intrusions — a stone shelf, not some cast-off piece of plywood. She wanted a small window to vent the cooking smoke that caused the little ones to cough like their father.”
These are modest wishes. But they reflect a ubiquitous desire for upward mobility in India, present at every socioeconomic level. Indeed, among India’s middle class, the desire for more comfort and luxury can be just as strong as, if not stronger than, the desire of a slum dweller for a clean shelf and a window vent…
April 28, 2012
In September 2010, after Japan arrested a Chinese fishing boat captain in disputed waters in the East China Sea, Beijing allegedly retaliated by holding back shipments to Tokyo of rare earths, a group of 17 elements used in high-tech products. Arcane names such as cerium, dysprosium, and lanthanum — elements that populate the bottom of the periodic table and whose unique properties make them ideal materials in the batteries that power iPhones and electric vehicles — suddenly commanded global attention. It mattered little whether Beijing actually carried through with the threat (reports are murky); the damage was already done: The world had awoken to the fact that overreliance on China for rare-earths supplies could put the international high-tech supply chain at risk.
Today, China produces more than 90 percent of the global supply of rare earths but sits on just about one-third of the world’s reserves of the elements — with the rest scattered from the United States (13 percent) to Australia (5 percent). That was not always the case. A few decades ago, the United States led production, primarily through a large mine in California owned by the mining firm Molycorp. But as California’s environmental regulations tightened in the 1990s, costs rose and profits declined, prompting the American industry eventually to shutter.
In the meantime, China started assuming the role of global supplier, spurred on by the Chinese patriarch Deng Xiaoping’s supposed proclamation that “there is oil in the Middle East, but there are rare earths in China.” In the last few decades, Chinese production of rare earths skyrocketed, more than offsetting declining production elsewhere. And consumers grew accustomed to what seemed to be a low-cost and reliable supplier in China.
Yet behind the façade of stability was an industry marked by mismanagement. First, a perceived abundance of the resources led to a general disregard for efficient and scalable production. In the early days of the Chinese rare-earths rush, preservation of resources was an afterthought, as private entrepreneurs, sensing a lucrative market, dove in. Many of these small-scale miners operated off the books and with little concern for environmental degradation. They were so numerous that the Chinese government could not keep track of them.
Even so, their efforts added up. Between 1990 and 2000, Chinese production of rare earths skyrocketed from just 16,000 tons to 73,000 tons. And in the decade since, China has essentially come to monopolize rare-earths mining. At its peak in 2009, China accounted for 129,000 of the 132,000 tons produced worldwide — in other words, 97 percent of total global output. Meanwhile, it exported roughly 40-50 percent of what it produced.
Yet as demand for these raw materials rose, Beijing became increasingly unhappy that it was “selling gold to foreigners at the price of Chinese radishes,” as one Chinese expression had it. Nationalistic voices in Chinese op-ed pages argued that China should create an OPEC-like rare-earths cartel or strategic reserve. Those calls were colored by an unsubstantiated belief among many Chinese that Japan was keeping just such a strategic reserve of its own, in which it had squirreled away 20 years’ worth of rare earths that it had imported from China.
Moreover, the same jingoistic voices complained that developed countries were “outsourcing” to China the dirty work of digging the elements out of the ground while capturing the value added from designing sophisticated products that used the rare earths themselves. These arguments struck a chord, especially toward the end of the decade when China was crafting its twelfth Five-Year Plan, in which technology and innovation took center stage. China was no longer resigned to being the world’s workshop; it wanted to become the next Germany, Japan, or United States. The way to do this, Beijing rightly believed, would be to capture more value from the goods the country exports, as every successful industrialized nation has done before it. Consider the iPhone. Each unit of the device carries a manufacturing cost of $6.50, the value China captures for assembling the device, a mere 3.6 percent of the total cost of production. The profit margins for Apple are near 64 percent, by some estimates. Rare earths, as Beijing saw it, would be a key ingredient in moving China up the value chain…