The Whole World Is Watching: In an increasingly monitored world, how can consumers and citizens reclaim ownership of their private lives?
November 30, 2011
Early in 2010, The Guardian reported plans by the British Police and Home Office for a remarkable new venture in domestic surveillance. Unmanned aerial drones, now used for tracking insurgents in Pakistan and Afghanistan, are to be adapted (unarmed, one hopes) to monitor Britain’s civil population. An initial aim of the project is crowd control during the 2012 London Olympics. Thereafter, these high-tech surveillance engines are to become a permanent feature of state security and law enforcement—much to the distress of civil libertarians and privacy advocates, who immediately objected to the plans.
But no one can say this is especially new. With an estimated 1.7 million video cameras deployed on the ground, George Orwell’s homeland can probably already claim world leadership in state-sponsored monitoring of its population. And the intensification of all forms of institutional tracking of individuals isn’t restricted to Britain—it is occurring the world over. All told, the United States has probably contributed more to these trends than any other country as both the creator and exporter of different means of government and corporate surveillance. The sheer variety of forms implicated in this monitoring is striking. They include real-time recording of consumers’ buying habits and finances; tracking of travelers’ movements by air, train, and road; monitoring of private citizens’ telecommunications; and the mass harvesting of tidbits of personal data from social sites like Facebook.
The seemingly relentless pace of innovation in surveillance cannot be ascribed to any one interest, policy, organizational purpose, or political mood. Instead, it suffuses all manner of relations between institutions and individuals, from the allocation of welfare-state benefits to the pursuit of suspected terrorists.
The result has been change in the very texture of everyday life. Being “alone” is not what it used to be. Our whereabouts, our financial transactions, our uses of the World Wide Web, and countless other data routinely register in the automated consciousness of corporate and state bureaucracies. More importantly, the results of such monitoring in turn shape the treatment we receive from these organizations—sometimes in ways that we know, and often in ways we hardly imagine.
Many observers dismiss these developments with a shrug: The fate of personal privacy in the face of institutional data-gathering may be hopeless, they hold, but such a development is not really serious. The collection of personal data supports all sorts of valued corporate conveniences and public policies, from easy credit to protection from terrorist threats. The fact that my most intimate medical information is held by a distant bureaucracy is hardly a loss, the argument goes, so long as the people who handle it don’t really know me in any personal way. And why should I care if government agencies track my communications, movements, or expenditures if I have nothing to hide? We ought to be grateful for these developments, and not challenge them with anachronistic values like privacy.
Such nonchalance shortchanges both the complexity of the changes we are enmeshed in and their repercussions in our everyday lives. In every realm of life, the flow (and restriction) of personal information confers advantage and disadvantage between parties, opening some possibilities and closing others. In face-to-face relationships, as elsewhere, we do not readily disclose information about areas of our lives in which we feel weak, troubled, or ashamed. Nor do we reveal information that could confer strategic advantage on the other party, such as the maximum we are willing to pay in a purchase we are negotiating. For all sorts of reasons, we cherish the ability to control sensitive information about ourselves.
Thus, even those who profess themselves unconcerned about privacy are apt to object when unauthorized use of their information works against them. They will not appreciate finding themselves the losers, for example, in “target pricing”—the practice in which online retailers raise or lower prices offered to different customers for identical items on the basis of their past buying habits. They will be displeased if they discover that their bosses have accessed their medical files from the company health-care plan and used their medical data as the basis for decisions on pay or promotions. They will feel aggrieved on finding themselves subjected to marketing appeals for embarrassing products or services—incontinence supplies, treatments for sexual dysfunction—on the strength of their past website visits or consumer choices. They will wax indignant if they discover that the prices they are quoted for insurance coverage are raised because the insurer has discovered that they have low credit scores, which supposedly correlate with a greater likelihood of filing insurance claims. And they will be outraged if they find themselves victims of “universal default”—creditors’ policy of raising a customer’s rates in one credit account based on reports that the amount of credit used in the customer’s other accounts has risen.
In these cases and countless others, people resent receiving unfavorable treatment on the basis of information about themselves that they consider “nobody else’s business.” The trouble is, notions of what information constitutes anyone’s own “business” are in headlong transformation. We live in a world in which possibilities for accessing personal data are mutating in ways that institutions, unsurprisingly, exploit to their own advantage. What disclosures and uses of personal data are held “reasonable” under such circumstances is constantly up for grabs. That is why the need for serious public conversations about privacy is so urgent.
Classic visions of liberal society stress judicious limitations of institutional power, both governmental and corporate, coupled with preservation of individual autonomy and freedom of choice. We accept that institutions like the IRS have investigative powers sufficient to collect most taxes owed, most of the time. But we recoil—I hope—at an idea like unlimited IRS monitoring of all taxpayers’ e-mails and phone conversations aimed at registering key words associated with tax evasion. Such (hypothetical, but quite feasible) measures could be very effective in spotting underreporting of taxable income. But even greatly increased compliance with tax obligations is not worth such sweeping losses to privacy…
How Brooklyn Got Its Groove Back: New York’s biggest borough has reinvented itself as a postindustrial hot spot
November 30, 2011
In 1982, I moved with my husband and our two young children into a partly renovated brownstone in Park Slope, Brooklyn. Last year, New York magazine pronounced the area “the most livable neighborhood in New York City,” but in those days, real-estate agents euphemistically described it as “in transition,” meaning that the chances you’d get mugged during a given year were pretty good. Educated middle-class couples like us, who had been moving into the area between Seventh Avenue and Prospect Park for more than a decade, lived alongside the Irish, Italian, and Puerto Rican immigrants who had given Brooklyn its working-class identity and its former nickname, “Borough of Churches.” For us, Saint Francis Xavier’s was just the sponsor of our children’s Little League teams, but it remained a religious and community center for those who also frequented smoke-filled bars on Seventh like Snooky’s and Moody’s.
Among the old-timers was our neighbor Peggy Lehane. Her late husband had been a postal worker, and, like a lot of Park Slopers back in the day, she helped the family’s modest finances by taking in boarders in their four-story house. Legend had it that at one time she had a clientele of respectable bachelors and shabby “heiresses” to whom she served tea on silver trays. By the time we arrived, her boarders were elderly men and women on government assistance. We could hear the 3 AM moaning of these sad creatures and smell the contents of their bedpans, which they sometimes tossed into the patch of grass in back. Like the neighborhood, the house was transitional. More than once, and much to the wonderment of my children, ambulances arrived to remove a white-sheet-covered body.
As the decade proceeded and crime worsened, Park Slope trembled between shabby respectability and drug-fueled violence. Rumors circulated of a crack house near Prospect Park, which had become a dangerous shadow of the original Olmsted-and-Vaux masterpiece. The streets around us endured a nightly explosion of shattered glass from car windows. Kids on their way home from elementary school were knocked down; parents walking from the subway station were held up at gunpoint. When my children went to camp, suburban kids, hearing that they were from Brooklyn, would ask: “Have you ever been shot?”
Our neighbor’s house reflected the Slope’s perilous condition. When we first moved in, Mrs. Lehane, always wearing a faded but neatly pressed dress, thick stockings, and lipstick, used to sweep the sidewalk with the intensity of a corporate lawyer on a gym treadmill. Now, her lipstick was smudged, her dresses were torn, and her stockings sagged. Instead of elderly renters, she took in “former” alcoholics and drug addicts living on disability payments. Mrs. Lehane’s children had moved to Long Island, but her foul-tempered granddaughter moved in, supposedly to oversee the house. The granddaughter’s violent fights with her boyfriend would sometimes wake us in the middle of the night. One day, Mrs. Lehane disappeared—to a nursing home, I heard. Many of our friends and acquaintances—fed up with vagrants on their stoops and graffiti, or terrified for the safety and education of their kids—left as well. I don’t know what combination of denial and passiveness made us stay. It seemed inevitable that something terrible would happen.
And so it did. One October night in 1995, after putting our Brooklyn-born youngest child to bed, my husband smelled smoke. Sure enough, a thin film of gray was swaying through our second-floor hallway, and we quickly spotted sickening black waves of the stuff pouring out of the moldings atop our bedroom windows. We ran outside to find the street jammed with fire trucks, ambulances, and awestruck neighbors. For the next two hours, we sat on a neighbor’s stoop and watched the Lehane house—its 1890s mahogany-trimmed parlor; its oak parquet floors; its memories of bourgeois Victorian respectability, of hard-knock immigrants, of addiction and decay—consumed in a conflagration apparently caused by a tenant who’d fallen asleep with a lit cigarette in his hand. We were lucky: our house suffered only some smoke damage. Others were not: several firemen were hurt, and a boarder died from smoke inhalation. For the next three years, the charred and empty house brooded over the block, a symbol of an uncertain urban future.
If you’ve been in Park Slope recently, you can probably guess how things turned out for the Lehane house. But you may not know why. How did the Brooklyn of the Lehanes and crack houses turn into what it is today—home to celebrities like Maggie Gyllenhaal and Adrian Grenier, to Michelin-starred chefs, and to more writers per square foot than any place outside Yaddo? How did the borough become a destination for tour buses showing off some of the most desirable real estate in the city, even the country? How did the mean streets once paced by Irish and Italian dockworkers, and later scarred by muggings and shootings, become just about the coolest place on earth? The answer involves economic, class, and cultural changes that have transformed urban life all over America during the last few decades. It’s a story that contains plenty of gumption, innovation, and aspiration, but also a disturbing coda. Brooklyn now boasts a splendid population of postindustrial and creative-class winners—but in the far reaches of the borough, where nary a hipster can be found, it is also home to the economy’s many losers.
To understand the emergence of the new Brooklyn, it’s best to start by recalling its original heyday. From the mid-nineteenth century to 1898, when it became part of New York City, Brooklyn was one of the nation’s preeminent industrial cities, and its dominance continued until about 1960. Facing New York’s deepwater harbor and the well-traveled East River, Brooklyn’s waterfront was lined with factories. Workers in those factories lived in the borough’s numerous tenements, row houses, and subdivided townhouses. Some worked the assembly line in the Ansonia Clock Factory in Park Slope. (It later became the neighborhood’s first condo-loft space.) Others worked in the Brooklyn Navy Yard, in an area now known as Vinegar Hill. Still others worked on the docks in Red Hook, the inspiration for the Marlon Brando movie On the Waterfront; in the Arbuckle coffee-roasting factory under the Manhattan Bridge; in the paint factories and metal shops in Gowanus; in the breweries in the once-German enclaves of Williamsburg, Greenpoint, and Bushwick; and in the pharmaceutical factory founded in East Williamsburg by Charles Pfizer. They worked in the Domino sugar refinery, at one time the largest in the world, whose big red DOMINO sign (still illuminating the East River at night) was all that some Manhattanites knew firsthand of Brooklyn…
November 30, 2011
When I was very small I lived on a defunct chicken farm. There was a house with a yard, and these together took up half an acre. To the north there was a long, thin chicken coop, empty of chickens, and behind it lay the back pasture, which occupied one acre. Perpendicular to this, to the west, there was the side pasture. Steers dwelled in the back pasture, ate hay, shat, sculpted odd forms on the salt lick (until we had them shot and butchered). As far as I know, these were actually existing steers. But the side pasture was inhabited, I imagined for a long time, by a fox. When I went there by day, I felt I was entering upon its territory; and when I lay in bed at night, I was certain it was out there, in its burrow, dwelling. It lived there like a human in a home, and was as real as any neighbor—except that I had myself brought it into existence, likely by projecting it out of a picture in a book.
The fox did not need to exist in order to function in my imagined community, one which must be judged no more or less real than that of, say, Indonesians, or of humanity. It was enough that there be foxes at all, or creatures that fit that description, in order for me to conjure community with the imaginary fox in the side pasture. And it was no mere puerile phantasm that caused me to imagine this community, either. It was rather my thinking upon my own humanity, a condition which until very recently remained, over the course of an entire human life, embedded within a larger community of beings.
These days, we are expected to grow out of that sort of thinking well before puberty. Our adult humanity consists in cutting off ties of community with animals, ceasing, as Lévi-Strauss put it, to think with them. When on occasion adults begin again to think about animals, if not with them, it is to assess whether animals deserve the status of rights-bearers. Animal rights, should there be such things, are now thought to flow from neurophysiological features and behavioral aptitudes: recognizing oneself in the mirror, running through mazes, stacking blocks to reach a banana.
But what is forgotten here is that the animals are being tested for re-admission to a community from which they were previously expelled, and not because they were judged to lack the minimum requirements for the granting of rights. They were expelled because they are hairy brutes, and we learned to be ashamed of thinking of them as our kin. This shame only increased when Darwin confirmed our kinship, thus telling us something Paleolithic hunters already knew full well. Morality doubled up its effort to preserve a distinction that seemed to be slipping away. Since the 19th century, science has colluded with morality, always allowing some trivial marker of human uniqueness or other to function as a token for entry into the privileged moral universe of human beings. “They don’t have syntax, so we can eat them,” is how Richard Sorabji brilliantly reduces this collusion to absurdity.
Before and after Darwin, the specter of the animal in man has been compensated by a hierarchical scheme that separates our angelic nature from our merely circumstantial, and hopefully temporary, beastly one. And we find more or less the same separation in medieval Christian theology, Romantic nature poetry, or current cognitive science: All of it aims to distinguish the merely animal in us from the properly human. Thus Thoreau, widely lauded as a friend of the animals, cannot refrain from invoking animality as something to be overcome: “Men think that it is essential,” he writes, “that the Nation have commerce, and export ice, and talk through a telegraph, and ride 30 miles an hour, without a doubt, whether they do or not; but whether we should live like baboons or like men, is a little uncertain.” What the author of Walden misses is that men might be living like baboons not because they are failing at something or other, but because they are, in fact, primates. Thoreau can’t help invoking the obscene and filthy beasts that have, since classical antiquity, formed a convenient contrast to everything we aspire to be.
The best evidence suggests that this hatred of animals—there’s no other word for it, really—is a feature of only certain kinds of society, though societies of this kind have dominated for so long that the hatred now appears universal. Until the decisive human victory over other predatory megafauna several thousand years ago, and the subsequent domestication of certain large animals, the agricultural revolution, the consequent stratification of society into a class involved with food production and another, smaller class that traded in texts and values: Until these complex developments were well under way, human beings lived in a single community with animals, a community that included animals as actors and as persons.
In that world, animals and human beings made up a single socio-natural reality. They killed one another, yes, but this killing had nothing in common with the industrial slaughter of domestic animals we practice today: Then, unlike now, animals were killed not because they were excluded from the community, but because they were key members of it. Animals gave themselves for the sake of the continual regeneration of the social and natural order, and in return were revered and treated as kin.
As human beings abandoned community for domination, thinking with animals became a matter of symbolism. Bears showed up on coats of arms, for example, not because the warriors who fought behind these shields were fighting as bears, as magically transformed ursine warriors. They were fighting behind the bear shield simply because that’s what their clan chose, as today one might choose Tasmanian Devil mudflaps for one’s truck. It was an ornament, a mere symbol…
November 29, 2011
The air at 20,000 feet above Schweinfurt, Germany, was icy cold, but the bombardier crouching in the nose of the B-17 hardly noticed. Sweat poured down his forehead as flak rocked the aircraft, periodically spattering his compartment’s Plexiglas bubble with fragments. He focused intently on preparing for the final bombing run. He bent over the Norden bombsight, making adjustments with one gloved hand, his other hand grasping a dog-eared booklet filled with numbers—the precise settings he needed to punch into the bombsight to ensure that the B-17’s load released at the exact moment necessary to hit its target.
Ten thousand miles away, a Marine sergeant stood next to a 75-millimeter M1A1 pack howitzer on the beachhead of a tiny Pacific island, surveying a battalion of American soldiers preparing to charge a Japanese hillside position. But before the Marines could advance, the heavily dug-in enemy artillery atop the hill had to be silenced. Ignoring the bullets peppering the sand around him, the sergeant read out a series of numbers from a small chart he held, sending his gun crew scrambling to zero in on the Japanese positions.
Both the bombardier and the artillery sergeant depended on the accuracy of the figures they fed into their weapon systems. If the two men had known where those numbers had originated, they probably would have been astonished. The data were the work of a group of remarkable women with a flair for mathematics who were employed by the Army: the Philadelphia Computing Section (PCS) at the University of Pennsylvania. Known as “computers” in an age when that term referred not to machines but to human beings, some of the women went on to help create the first electronic computer, ENIAC. Like the legendary Rosie the Riveters, who toiled in factories and war plants, they were also vital to the war effort, but these computing Rosies worked in secrecy and anonymity, their contributions still largely unknown and unrecognized today.
Math helped propel the military’s technological gains of the 20th century. In previous centuries, warfare was literally a hit-or-miss affair. Especially at distances beyond a few feet, weapons were mostly inaccurate, clumsy, and inefficient. Whether aiming a musket across a battlefield or a cannon across the water, a soldier or sailor essentially pointed in the general direction of the enemy, fired, and hoped for a hit. War was also up close and personal: soldiers fired only at what they could see directly. Technological warfare opened the possibility of striking at what one couldn’t see—landing an artillery shell on a hidden target over distant hills or dropping a bomb from a high-flying aircraft required firing tables with the correct trajectories, drop points, elevation angles, and muzzle velocities. These calculations also needed to take into account the constantly changing variables of temperature, air density, wind drift, and target position.
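To make concrete what a single firing-table entry had to encode, here is a minimal sketch in Python (my own illustration, not the Army's actual procedure) of the idealized, drag-free textbook relation between range, muzzle velocity, and elevation angle. The wartime tables replaced this shortcut with laboriously computed trajectories corrected for drag, air density, wind drift, temperature, and target position.

```python
import math

def vacuum_elevation_deg(target_range_m, muzzle_velocity_mps, g=9.81):
    """Low-arc elevation angle (degrees) to reach a target at the given range,
    using the drag-free textbook relation: range = v**2 * sin(2*theta) / g.
    Real firing tables replaced this idealization with numerically computed
    trajectories corrected for drag, air density, wind, and temperature."""
    s = g * target_range_m / muzzle_velocity_mps ** 2
    if s > 1.0:
        raise ValueError("target is beyond maximum range at this muzzle velocity")
    return math.degrees(0.5 * math.asin(s))

# A few illustrative rows, assuming a muzzle velocity of roughly 380 m/s
# (a made-up round number in the neighborhood of a light howitzer's).
for rng_m in (2000, 4000, 6000):
    angle = vacuum_elevation_deg(rng_m, 380.0)
    print(f"range {rng_m:5d} m -> elevation {angle:5.2f} degrees")
```

Even this toy version suggests why the real tables, in which every entry had to be corrected for the constantly changing variables listed above, demanded a dedicated corps of human computers.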
Even before World War II, the Army was busily perfecting an arsenal of new artillery and infantry weapons at its Ballistic Research Laboratory at the Aberdeen Proving Ground in Maryland. To develop the crucial data to operate those weapons, engineers crunched numbers with a Bush differential analyzer, a 30-foot-long mechanical calculating machine developed by MIT’s Vannevar Bush in the early 1930s. A cadre of human computers also worked at calculating figures and compiling tables, using pencil, paper, and adding machines. But when America joined the conflict in 1941, the demands of total war forced the Army to enlist more help, and the lab commandeered the resources (and the additional differential analyzer) of the University of Pennsylvania’s Moore School of Electrical Engineering. That meant that a new corps of “computers” had to be assembled quickly to help process the data. With most able-bodied young men already serving in uniform, the Army sought out the most immediately available mathematical talent: young women with a gift for numbers.
Principals from many high schools received an SOS from the military, remembered Doris Blumberg, who, along with her twin sister, Shirley, was about to graduate from the Philadelphia High School for Girls in May 1942. “The principal called us in and said that they were recruiting women to do mathematical work at the University of Pennsylvania for the war effort, and that we were qualified.” Both girls believed it was their duty to do their part…
November 29, 2011
With the accelerating euro crisis in Europe, the geopolitical revolution in Asia and increasing doubts about the Chinese economy, the increasingly misnamed Arab Spring sometimes has to struggle for airtime these days. But the struggle in Egypt has entered a new phase, one which will test the strength of the various groups struggling to control the country in the wake of President Mubarak’s fall from power.
Those of us old enough to have attended college back when even liberal arts and humanities professors routinely taught subjects that actually matter can dredge up our studies of the French Revolution and the subsequent 200 years of European and global reflection on the meaning and politics of that revolution to help us get to grips with what is happening in Egypt.
No study of history can tell you what will happen (despite technocratic “political scientists” wielding regression analyses and expounding the “laws” of political life), but the study of what happened in the past generally yields valuable insights and often helps you sort out the real issues and identify key turning points.
That is particularly true in Egypt today, where the struggle between the protesters in Tahrir Square and the armed forces echoes political patterns that turned up over and over in the rich history of French revolutions and revolts from 1789 right up through 1968. The Tahrir rebels, like French revolutionary wannabes in the past, must accomplish the same two tasks that confronted their predecessors: the revolutionaries in Paris had to unite with the poor and the workers in the capital, and the capital had to win the allegiance of the rest of the country. The question of who ruled France often turned on the question of whether Paris or the nation as a whole was in charge.

In general, Paris was the most “modern” part of France. The economy was more highly developed; the great universities were there, with the best-connected, most creative, and most ambitious students; the leading intellectuals sat in its cafes and wrote for its journals; it was the cultural and financial center of the country as well. Imagine New York, Los Angeles, Boston, Chicago, and Washington, DC, all rolled up into one city: that is something of what Paris has meant to France in modern times.
In the first French Revolution, the radical Jacobins and their allies in the poor Paris suburbs drove from power the conservative Girondins and their allies scattered across the country. Later, Napoleon I, King Louis Philippe, and Napoleon III were able to use the conservative instincts of the provincial cities and the rural masses to keep the ‘progressives’ and the revolutionaries in check; in a similar way, the Third Republic triumphed over the Paris Commune of 1871 as the more conservative countryside threw its weight behind the less radical alternative.
Cairo, of course, is something like the Paris of Egypt today. It is not the only city in Egypt, but it is the center of Egyptian intellectual, religious, cultural, political and economic life. It is more “advanced” than most of the rest of the country: more international, more affected for good or bad by the forces of international capitalism, and it is the center of the country’s politics, media and business.
The drama now playing out in Cairo is in some respects very French. The demonstrators in Tahrir Square think of themselves as the advance guard of the Egyptian population, the true representatives of an emerging national consensus. From their perspective they are the ‘voice’ and the ‘conscience’ of the nation, even if much of the nation doesn’t understand that yet. The demonstrators want to use their position at the center of Egyptian life to make fundamental changes. The military command believes that the peasants and the provincial elites are more afraid of anarchy and disorder than they are committed to radical change; by appealing to the “silent majority” and proposing a national referendum the generals are hoping to sideline the demonstrators and base their continuing power on the conservatism of the countryside…
November 29, 2011
Economics is at the start of a revolution that is traceable to an unexpected source: medical schools and their research facilities. Neuroscience – the science of how the brain, that physical organ inside one’s head, really works – is beginning to change the way we think about how people make decisions. These findings will inevitably change the way we think about how economies function. In short, we are at the dawn of “neuroeconomics.”
Efforts to link neuroscience to economics have occurred mostly in just the last few years, and the growth of neuroeconomics is still in its early stages. But its nascence follows a pattern: revolutions in science tend to come from completely unexpected places. A field of science can turn barren if no fundamentally new approaches to research are on the horizon. Scholars can become so trapped in their methods – in the language and assumptions of the accepted approach to their discipline – that their research becomes repetitive or trivial.
Then something exciting comes along from someone who was never involved with these methods – some new idea that attracts young scholars and a few iconoclastic old scholars, who are willing to learn a different science and its different research methods. At a certain moment in this process, a scientific revolution is born.
The neuroeconomic revolution has passed some key milestones quite recently, notably the publication last year of neuroscientist Paul Glimcher’s book Foundations of Neuroeconomic Analysis – a pointed variation on the title of Paul Samuelson’s 1947 classic work, Foundations of Economic Analysis, which helped to launch an earlier revolution in economic theory. And Glimcher himself now holds an appointment at New York University’s economics department (he also works at NYU’s Center for Neural Science).
To most economists, however, Glimcher might as well have come from outer space. After all, his doctorate is from the University of Pennsylvania School of Medicine’s neuroscience department. Moreover, neuroeconomists like him conduct research that is well beyond their conventional colleagues’ intellectual comfort zone, for they seek to advance some of the core concepts of economics by linking them to specific brain structures.
Much of modern economic and financial theory is based on the assumption that people are rational, and thus that they systematically maximize their own happiness, or as economists call it, their “utility.” When Samuelson took on the subject in his 1947 book, he did not look into the brain, but relied instead on “revealed preference.” People’s objectives are revealed only by observing their economic activities. Under Samuelson’s guidance, generations of economists have based their research not on any physical structure underlying thought and behavior, but only on the assumption of rationality.
As a result, Glimcher is skeptical of prevailing economic theory, and is seeking a physical basis for it in the brain. He wants to transform “soft” utility theory into “hard” utility theory by discovering the brain mechanisms that underlie it.
In particular, Glimcher wants to identify brain structures that process key elements of utility theory when people face uncertainty: “(1) subjective value, (2) probability, (3) the product of subjective value and probability (expected subjective value), and (4) a neuro-computational mechanism that selects the element from the choice set that has the highest ‘expected subjective value’…”
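Read as a bare decision rule, the four elements Glimcher lists reduce to a small computation. The sketch below is only an illustration of that rule, with invented options and numbers rather than anything drawn from his book: expected subjective value is the product of subjective value and probability, and choice is the selection of whichever option maximizes it.

```python
# Hypothetical choice set; the option names, subjective values, and
# probabilities are invented purely for illustration.
options = {
    "risky_gamble":  {"subjective_value": 10.0, "probability": 0.30},
    "modest_gamble": {"subjective_value": 4.0,  "probability": 0.90},
    "sure_thing":    {"subjective_value": 3.0,  "probability": 1.00},
}

def expected_subjective_value(option):
    # Element (3) above: subjective value times probability.
    return option["subjective_value"] * option["probability"]

# Element (4) above: a mechanism that selects, from the choice set,
# the option with the highest expected subjective value.
chosen = max(options, key=lambda name: expected_subjective_value(options[name]))

for name, option in options.items():
    print(f"{name:13s} expected subjective value = {expected_subjective_value(option):.2f}")
print("chosen:", chosen)
```

The neuroeconomic question is whether, and where, the brain actually computes and compares quantities of this kind.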
While Glimcher and his colleagues have uncovered tantalizing evidence, they have yet to find most of the fundamental brain structures. Maybe that is because such structures simply do not exist, and the whole utility-maximization theory is wrong, or at least in need of fundamental revision. If so, that finding alone would shake economics to its foundations.
Another direction that excites neuroscientists is how the brain deals with ambiguous situations, when probabilities are not known, and when other highly relevant information is not available. It has already been discovered that the brain regions used to deal with problems when probabilities are clear are different from those used when probabilities are unknown. This research might help us to understand how people handle uncertainty and risk in, say, financial markets at a time of crisis…
November 29, 2011
November 28, 2011
The Prussian reforms of 1808 to 1812 granted all citizens freedom of trade and put an end to serfdom and to what until then had been utterly unchecked arbitrariness towards the Jews. The Jews were still only allowed to become public servants in exceptional cases, and certainly never officers in the military, but unlike the Christian majority, they made the most of the new opportunities. They emancipated themselves, and at high speed. Germany, with its halfhearted reformism, sluggish economic development (until 1870), and strong legal security, provided fertile ground. To top it all, Germany had some of the best Gymnasiums and universities in Europe, as well as some of the worst primary education.
Unlike the majority of their Christian and still largely illiterate peers, Jewish boys as a rule had always been taught to read and write Hebrew. Their parents did not put silver spoons in their cradles, but rather all manner of educational nourishment. Jewish parents knew exactly how much cultural skills such as reading, writing, and arithmetic would improve their children’s chances, whereas Christian parents and clerics were still claiming, right up into the 20th century, that “reading is bad for the eyes!”
This constellation led to huge differences in levels of education and rates of social advancement. In 1869, 14.8 percent of pupils in Berlin’s Gymnasium schools came from Jewish families, although only four percent of the population was Mosaic by confession. In 1886, 46.5 percent of Jewish pupils in Prussia continued their education beyond primary school, and by 1901, this number had risen to 56.3 percent. During the same period of time the Christian interest in higher education crept up from 6.3 to 7.3 percent. Eight times more Jewish schoolchildren completed middle school and high school than their Christian counterparts. Likewise, in Berlin in 1901, in terms of population percentages, 11.5 times more Jewish girls attended girls’ high schools than Christian girls.
Of course, the successes in Gymnasium education then translated to the universities. In Prussia, Jewish students made up just short of ten percent of university students in 1886/1887, while Jews constituted just short of one percent of the population. As a rule Jews went to university significantly earlier and completed their studies faster than their Christian peers, and as the Prussian statisticians confirmed: “On average Jewish students seem to possess more ability and to develop more diligence than the Christians.”
In the school year of 1913/14 the Viennese commercial college teacher Dr. Ottokar Nemecek looked into the educational successes of Christian and Jewish commercial college students. He did not try to establish what percentage of the two groups attended such institutions of higher education in the first place (the differences were evident), but rather to measure average performance levels. To this end he analysed the school reports of 1,539 schoolboys and girls and carried out a variety of additional tests to determine articulacy, memory, and speeds of association and writing.
The tests weighed overwhelmingly in favour of the Jewish pupils, except when it came to marks for comportment and diligence. Nemecek ascribed this to “the greater liveliness of the Jews, who as chatterboxes and disturbers of the peace, as every teacher will confirm, stand head and shoulders above the Christian pupils.” Despite their lack of discipline and diligence, the Jewish children clearly emerged on top (26:16 percent) in the category of “very good” and “good” marks for overall performance, whereas they hardly featured at all (4:23 percent) in the “average” performance group. In German, French, English, and History they achieved consistently better results. The same picture emerged from marks in Mathematics, Chemistry, and Physics as well as in Business and Law studies. The reasons Nemecek listed were “the greater maturity of the Jewish pupils in the areas of abstract thinking,” mental agility, writing speed, width of vocabulary, and emotional alertness. Only in drawing, calligraphy, and gymnastics did the Christian children perform better.
Whatever reasons experts gave for the educational advantage of the Jews, the non-Jews felt the difference and reacted violently. In 1880 the liberal member of parliament Ludwig Bamberger talked about the “unusual learning drives” of the Jews, of “the visible haste” with which they were catching up on everything they had been denied for so long, and concluded: “Undoubtedly the recrudescence of ill-feeling is closely linked to these things.”
The Social Democrat August Bebel described in a similar way the differing levels of educational zeal among Jews and Christians; in 1912 the difference between stubborn perseverance and quick-witted elasticity became the focal point of Werner Sombart’s sociological analysis, because he held this to be the principal contributor to the intellectual divide between Jews and Christians, and thus to the modern form of social, envy-driven anti-Semitism. Sombart found that the influence of the Jews was greater “the heavier, the more viscous and the less business-oriented” the conduct of the surrounding population, and he concluded that on average the Jews “are so much brighter and more bustling than us.” With these words Sombart justified their widespread exclusion from university teaching positions. In the interest of science, he wrote, it was unfortunate that of two applicants the all-round “more stupid party” would almost always be chosen over the Jewish one. Nevertheless, he believed such protective measures were necessary because otherwise “all lectureships and professorships would be occupied by Jews, whether baptised or unbaptised, it remains the same.” In Sombart’s insistence that with “baptised or unbaptised” Jews “it remains the same,” it becomes obvious where racial anti-Semitism begins. He based his conclusions on the simple experience that the intellectual superiority of the Jews was in no way eradicated by conversion to Christianity.
The “Jewish ingenuity’s sanguine, bold humour that borders on the frivolous” and its “wonderfully agile, sarcastic, skeptical spirit that is impossible to discipline” incensed the placidly obedient Christian popular majority, as the Social Democrat Karl Kautsky commented, concluding: “The mental qualities of the Jews are the bone of contention.” The British historian John Foster Fraser scoffed in 1915 that German academics were falling over themselves to keep the Jews out because the competition “between the sons of the North with their blonde hair and sluggish intellect and the sons of the Orient with their black eyes and alert minds” was so unequal.
In other words, the extent to which the latecomers were catching up reflected their own shortcomings in education and dexterity. These shortcomings were becoming embarrassing and could easily be concealed behind racial theory. A good example stems from the Leipzig student Curt Mueller, who in 1890 wrote a pamphlet on “Judaism among German students.” There were two things he didn’t like about his fellow students: that they would do anything “to the point of self-sacrifice” for their fellow believers, and that in terms of percentages there were “not nearly as many failed Jewish students as there were Germanic.” And why? Mueller, of course, had the answer. The Jews are “more hard-working and assiduous,” you have to give them that; they “swot like mad at home”; “like all money-loving tribes,” the Jews eat modestly: “Over a glass of beer the Jewish law student speaks about his studies far more than is necessary! He doesn’t stop chattering and that impresses people. He understands rapidly but with no depth. Why should he? Like this he gets through his exams in the prescribed time, and Germany is blessed with another Jewish referendarius.” Later on they earn fast money as doctors, lawyers, and chemists! This is the sort of language that informs every second sentence in Mueller’s pamphlet, until he finally chimes in: “Stand up to the Jewish students with superiority and pride!” German racial pride fed exclusively on feelings of inferiority…
November 28, 2011
Since the Iranian Revolution of 1979, the United States has vacillated between engagement and confrontation with the Islamic Republic, with sanctions filling the gap. As Iran has moved closer to achieving its nuclear ambitions in recent years, tensions are rising once again. The latest round of U.S. sanctions, signed into law in 2010, has hurt the Iranian government by restricting finance for oil refineries and discouraging foreign companies from conducting business with it. Yet sanctions have not delayed Iran’s nuclear drive, foiled its support for terrorism abroad, or kept it from meddling in its neighbors’ affairs.
In the wake of revelations about an Iranian plot to kill Saudi Arabia’s ambassador to the United States, Adel al-Jubeir, some in Congress are making the case for another round of sanctions, ostensibly to ramp up the pressure even more. But such a strategy leaves much to be desired. Over the past year, for example, Iran has enacted economic reforms and reduced the cost of subsidies, riding out and adapting to sanctions.
Washington will only neutralize Iran by exploiting the regime’s main vulnerability: its false claim to legitimacy. The ayatollahs’ hold on power is inherently unstable because they have no popular mandate. Since staging a rigged election in 2009 to keep Iranian President Mahmoud Ahmadinejad in power, they have relied on repression and brutality to silence opposition, jailing journalists, torturing detainees, and executing critics (both real and imagined). By highlighting these crimes on the world stage and actively supporting Iran’s dissidents, the United States can place a new, more effective kind of pressure on Tehran and support the movement for democratic change from within. Focusing on human rights violations will allow the United States to expose the hypocrisy of the regime and remind Iran of its domestic troubles as it tries to expand its power and influence.
The current state of affairs in Iran began with the Green Movement uprising in 2009. As hundreds of thousands flowed into the streets to protest the sham victory of Ahmadinejad in the nation’s presidential election, security forces cracked down, worsening the country’s already severe level of oppression. The Iranian authorities admit to having arrested more than 4,500 protesters during the crackdown. Opposition groups report that there are at least 1,000 political prisoners still in jail, reflecting Iran’s long-practiced tactics of attempting to break dissidents with prolonged imprisonment and isolation and by harassing their families.
Iran has the highest per-capita execution rate in the world, with 252 confirmed executions in 2010 and reports of 300 more (out of a population of over 70 million). In absolute numbers, that is second only to China. And there has been no reprieve in 2011. Official Iranian media and human rights groups report 450 executions this year, many conducted in secret, unannounced to the lawyers and relatives of the accused. There have been 33 public executions so far this year; three men have been hanged for homosexuality, a capital crime in Iran. In September alone, the state executed more than 100 of its citizens.
Those who fight back often end up arrested, too. One such case is Nasrin Sotoudeh, an Iranian lawyer who has represented juveniles on death row for over a decade and, more recently, defended several prominent human rights activists. She was arrested in September 2010 for “acting against national security” and “propaganda against the regime” and was sentenced to 11 years in prison. When Sotoudeh’s husband, Reza Khandan, and their two small children visited her in jail recently, they were detained for five hours because Khandan would not hand over his notebook to the authorities before the visit.
Some may argue that exposing Iran’s human rights record is a poor means of undermining its regime. But it is actually sound statecraft. At little cost, the United States can mobilize international condemnation of Iran’s oppression more effectively than it can unite countries against Iran’s nuclear program, which is a far more contentious issue.
Consider the left-wing parties in Europe, such as Germany’s Greens and Britain’s Labour, whose mantra before the 2009 elections was, in effect, “no war against Iran.” After the Iranian regime beat its own people in the streets, major European parties joined in condemning the regime, becoming advocates for jailed human rights activists, students, and labor leaders. This public effort spurred diplomatic action. Although European countries have been slow to enforce economic measures against Iran, they have sanctioned more Iranian officials for human rights violations than has the United States, implementing travel bans and freezing their European assets. Domestic pressure also changed policy in Brazil, where the government went from congratulating Ahmadinejad on his reelection in 2009 to offering asylum to Sakineh Mohammadi Ashtiani, an Iranian woman sentenced to be stoned for alleged adultery, in July 2010.
The Iranian government takes such campaigns seriously. Last year, for example, Iran announced that it would apply for a seat on the United Nations Human Rights Council. An international outcry ensued, with human rights organizations and governments loudly opposing its candidacy. Once Tehran realized that it could not secure enough votes to win a place on the council, it withdrew its bid rather than suffer the embarrassment of defeat…
A Virginia company leading a national movement to replace classrooms with computers — in which children as young as 5 can learn at home at taxpayer expense — is facing a backlash from critics who are questioning its funding, quality and oversight.
K12 Inc. of Herndon has become the country’s largest provider of full-time public virtual schools, upending the traditional American notion that learning occurs in a schoolhouse where students share the experience. In K12’s virtual schools, learning is largely solitary, with lessons delivered online to a child who progresses at her own pace.
Conceived as a way to teach a small segment of the home-schooled and others who need flexible schooling, virtual education has evolved into an alternative to traditional public schools for an increasingly wide range of students — high achievers, strugglers, dropouts, teenage parents and victims of bullying among them.
“For many kids, the local school doesn’t work,” said Ronald J. Packard, chief executive and founder of K12. “And now, technology allows us to give that child a choice. It’s about educational liberty.”
Packard and other education entrepreneurs say they are harnessing technology to deliver quality education to any child, regardless of Zip code.
It’s an appealing proposition, and one that has attracted support in state legislatures, including Virginia’s. But in one of the most hard-fought quarters of public policy, a rising chorus of critics argues that full-time virtual learning doesn’t effectively educate children.
“Kindergarten kids learning in front of a monitor — that’s just wrong,” said Maryelen Calderwood, an elected school committee member in Greenfield, Mass., who unsuccessfully tried to stop K12 from contracting with her community to create New England’s first virtual public school last year. “It’s absolutely astounding how people can accept this so easily.”
People on both sides agree that the structure providing public education is not designed to handle virtual schools. How, for example, do you pay for a school that floats in cyberspace when education funding formulas are rooted in the geography of property taxes? How do you oversee the quality of a virtual education?
“There’s a total mismatch,” said Chester E. Finn Jr., president of the Thomas B. Fordham Institute, a conservative think tank, who served on K12’s board of directors until 2007. “We’ve got a 19th-century edifice trying to house a 21st-century system.”
Despite questions, full-time virtual schools are proliferating.
In the past two years, more than a dozen states have passed laws and removed obstacles to encourage virtual schools. And providers of virtual education have been making their case in statehouses around the country.
K12 has hired lobbyists from Boise to Boston and backed political candidates who support school choice in general and virtual education in particular. From 2004 to 2010, K12 gave about $500,000 in direct contributions to state politicians across the country, with three-quarters going to Republicans, according to the National Institute on Money in State Politics.
“We understand the politics of education pretty well,” Packard told investors recently.
K12’s push into New England illustrates its skill. In 2009, the company began exploring the potential for opening a virtual school in Massachusetts in partnership with the rural Greenfield school district.
But Massachusetts education officials halted the plan, saying Greenfield had no legal authority to create a statewide school. So Greenfield and K12 turned to legislators, with the company spending about $200,000 on Beacon Hill lobbyists…