June 23, 2012
Why Women Still Can’t Have It All: It’s time to stop fooling ourselves. The women who have managed to be both mothers and top professionals are superhuman, rich, or self-employed.
EIGHTEEN MONTHS INTO my job as the first woman director of policy planning at the State Department, a foreign-policy dream job that traces its origins back to George Kennan, I found myself in New York, at the United Nations’ annual assemblage of every foreign minister and head of state in the world. On a Wednesday evening, President and Mrs. Obama hosted a glamorous reception at the American Museum of Natural History. I sipped champagne, greeted foreign dignitaries, and mingled. But I could not stop thinking about my 14-year-old son, who had started eighth grade three weeks earlier and was already resuming what had become his pattern of skipping homework, disrupting classes, failing math, and tuning out any adult who tried to reach him. Over the summer, we had barely spoken to each other—or, more accurately, he had barely spoken to me. And the previous spring I had received several urgent phone calls—invariably on the day of an important meeting—that required me to take the first train from Washington, D.C., where I worked, back to Princeton, New Jersey, where he lived. My husband, who has always done everything possible to support my career, took care of him and his 12-year-old brother during the week; outside of those midweek emergencies, I came home only on weekends.
As the evening wore on, I ran into a colleague who held a senior position in the White House. She has two sons exactly my sons’ ages, but she had chosen to move them from California to D.C. when she got her job, which meant her husband commuted back to California regularly. I told her how difficult I was finding it to be away from my son when he clearly needed me. Then I said, “When this is over, I’m going to write an op-ed titled ‘Women Can’t Have It All.’”
She was horrified. “You can’t write that,” she said. “You, of all people.” What she meant was that such a statement, coming from a high-profile career woman—a role model—would be a terrible signal to younger generations of women. By the end of the evening, she had talked me out of it, but for the remainder of my stint in Washington, I was increasingly aware that the feminist beliefs on which I had built my entire career were shifting under my feet. I had always assumed that if I could get a foreign-policy job in the State Department or the White House while my party was in power, I would stay the course as long as I had the opportunity to do work I loved. But in January 2011, when my two-year public-service leave from Princeton University was up, I hurried home as fast as I could.
A rude epiphany hit me soon after I got there. When people asked why I had left government, I explained that I’d come home not only because of Princeton’s rules (after two years of leave, you lose your tenure), but also because of my desire to be with my family and my conclusion that juggling high-level government work with the needs of two teenage boys was not possible. I have not exactly left the ranks of full-time career women: I teach a full course load; write regular print and online columns on foreign policy; give 40 to 50 speeches a year; appear regularly on TV and radio; and am working on a new academic book. But I routinely got reactions from other women my age or older that ranged from disappointed (“It’s such a pity that you had to leave Washington”) to condescending (“I wouldn’t generalize from your experience. I’ve never had to compromise, and my kids turned out great”).
The first set of reactions, with the underlying assumption that my choice was somehow sad or unfortunate, was irksome enough. But it was the second set of reactions—those implying that my parenting and/or my commitment to my profession were somehow substandard—that triggered a blind fury. Suddenly, finally, the penny dropped. All my life, I’d been on the other side of this exchange. I’d been the woman smiling the faintly superior smile while another woman told me she had decided to take some time out or pursue a less competitive career track so that she could spend more time with her family. I’d been the woman congratulating herself on her unswerving commitment to the feminist cause, chatting smugly with her dwindling number of college or law-school friends who had reached and maintained their place on the highest rungs of their profession. I’d been the one telling young women at my lectures that you can have it all and do it all, regardless of what field you are in. Which means I’d been part, albeit unwittingly, of making millions of women feel that they are to blame if they cannot manage to rise up the ladder as fast as men and also have a family and an active home life (and be thin and beautiful to boot).
Last spring, I flew to Oxford to give a public lecture. At the request of a young Rhodes Scholar I know, I’d agreed to talk to the Rhodes community about “work-family balance.” I ended up speaking to a group of about 40 men and women in their mid-20s. What poured out of me was a set of very frank reflections on how unexpectedly hard it was to do the kind of job I wanted to do as a high government official and be the kind of parent I wanted to be, at a demanding time for my children (even though my husband, an academic, was willing to take on the lion’s share of parenting for the two years I was in Washington). I concluded by saying that my time in office had convinced me that further government service would be very unlikely while my sons were still at home. The audience was rapt, and asked many thoughtful questions. One of the first was from a young woman who began by thanking me for “not giving just one more fatuous ‘You can have it all’ talk.” Just about all of the women in that room planned to combine careers and family in some way. But almost all assumed and accepted that they would have to make compromises that the men in their lives were far less likely to have to make.
The striking gap between the responses I heard from those young women (and others like them) and the responses I heard from my peers and associates prompted me to write this article. Women of my generation have clung to the feminist credo we were raised with, even as our ranks have been steadily thinned by unresolvable tensions between family and career, because we are determined not to drop the flag for the next generation. But when many members of the younger generation have stopped listening, on the grounds that glibly repeating “you can have it all” is simply airbrushing reality, it is time to talk.
I still strongly believe that women can “have it all” (and that men can too). I believe that we can “have it all at the same time.” But not today, not with the way America’s economy and society are currently structured. My experiences over the past three years have forced me to confront a number of uncomfortable facts that need to be widely acknowledged—and quickly changed…
In 2003, the city of New Haven, Connecticut, sought to fill 15 vacancies for supervisory positions in its fire department by promoting from within. As required by law, the city administered to applicants a written and oral civil-service exam created with the help of personnel experts and fire-department officials. In all, 118 firefighters took the exam; when the test scores came back, it turned out that white applicants had passed at roughly twice the rate of black applicants. If the fire department had followed the city’s civil-service placement rules, no black applicants, and at most two Hispanic applicants, would have been promoted to fill the 15 vacancies.
To avoid this outcome, the city eventually threw out the exam results. Officials were concerned, in part, that the promotions mandated by the test results would prompt a lawsuit by minority applicants. But some of the applicants who had passed the exam protested the city’s decision, claiming they were being denied a fair chance at a promotion for which they had proved themselves qualified. Seventeen successful white test-takers and one successful Hispanic test-taker sued to have the results reinstated; in 2009, their lawsuit reached the United States Supreme Court as Ricci v. DeStefano.
At the heart of the Ricci case was the doctrine of disparate-impact discrimination, which the Supreme Court first articulated in its 1971 decision in Griggs v. Duke Power Company. At issue in Griggs was the requirement that employees hired into service jobs at the power company’s facilities had to possess a high-school diploma and achieve a minimum score on an IQ test. The plaintiffs argued that these rules disqualified too many black job applicants, thereby violating Title VII of the Civil Rights Act of 1964, which prohibits employment discrimination based on race, color, religion, sex, or national origin.
The Supreme Court agreed, ruling that job criteria with an adverse or exclusionary effect on minorities — even if those criteria were “neutral on their face, and even neutral in terms of intent” — could violate the Title VII ban on race discrimination in hiring. The Court further stipulated that employers could escape liability for “disparate impact” only if they demonstrated that their adverse selection practices had “a manifest relationship to the employment in question” or that they were justified by “business necessity.” In examining the criteria for positions at the Duke Power Company, the Court found insufficient evidence to satisfy the job-relatedness defense, and so ruled against the utility.
According to the Griggs Court, the purpose of the newly established disparate-impact rule was to “achieve equality of employment opportunities” by removing “built-in headwinds” and “barriers that had operated in the past” to impede minorities’ workplace advancement. In Griggs and several subsequent cases, the Court has repeatedly stressed that the doctrine’s goal is fully consistent with a competitive meritocracy — one in which businesses remain free to seek out, hire, and promote the best and most productive workers regardless of race and to adopt personnel practices that best achieve that result. The purpose of the rule, according to the Court, is not to enact affirmative-action or group quotas for employment, but simply to eliminate arbitrary disadvantages suffered by minority job-seekers.
Despite this assertion, the development of the Griggs doctrine has proved anything but friendly to meritocratic objectives. Although the Supreme Court has never held that all workplaces must be racially balanced, lower courts and the Equal Employment Opportunity Commission (EEOC), which is charged with administering Title VII, have firmly embraced the presumption that the racial profiles of particular workplaces should reflect the racial composition of the broader population.
This presumption makes no sense, however, unless people from all racial groups are equally qualified for positions at all levels of the economy; only then will every racial group be represented in each occupation exactly in proportion to its share of the broader population. If members of one racial group are more qualified for particular positions than others, they will be hired in disproportionately greater numbers; persons from a less qualified group will be under-represented in those jobs.
The unfortunate reality is that there today exist pronounced differences in worker qualifications by race. That pattern is rooted in historical and social circumstances that may well call for policy reforms and other remedies. But the Court’s disparate-impact doctrine does nothing to change those circumstances or to bring about such reforms; indeed, it stands only to further disadvantage minority groups by setting their members up to underperform and by draining attention and resources away from the true causes of minority under-representation. Moreover, by burdening employers with an arcane tangle of perverse requirements — and by making it virtually impossible for companies to match the most qualified candidates to available jobs — the disparate-impact rule clearly does more harm than good.
These insights have so far had little influence on the law of disparate impact. In its decision in the Ricci case, a 5-4 majority of the Court read the facts narrowly to conclude that New Haven’s civil-service exam was sufficiently related to the jobs in question to survive scrutiny and ultimately sided with the firefighters who had sued to have their scores reinstated. The opinions in that case assumed the continuing vitality of the disparate-impact framework, suggesting that the Court is disinclined to question its decision in Griggs.
But a review of the premises and implications of the disparate-impact doctrine shows that, where the Court has chosen not to act, Congress should step in. The legislative branch should revise Title VII to abolish liability based on adverse impact, at least as applied to race in employment. Doing so would revive the core anti-discrimination principle of the law — a principle that has been undermined by the misguided conflation of equal opportunity and equal results arising from Griggs and its aftermath.
Understanding the perversities of the disparate-impact rule requires a review of the ways in which employers make personnel decisions and of how these practices shape the composition of the work force.
In evaluating candidates for hiring or promotion, companies rely on a panoply of selection criteria, both formal and informal. These include years of education, type of educational experience, and specialized training (collectively known in the field of industrial and organizational psychology, or IOP, as “biodata”), with entry to higher-level jobs often restricted to persons who have obtained high-school, college, or graduate degrees. Although the use of standardized tests of pure intelligence or cognitive ability has declined in the wake of Griggs, many employers still rely on specialized assessments of job knowledge, competence, and skill (including civil-service and professional qualifying exams), as well as on standard personality tests. Many employers also conduct structured or unstructured interviews and solicit letters of recommendation. Recently, prompted by the racially adverse impact of measures of verbal and abstract analysis — areas in which some minority groups underperform — experts have also developed alternative instruments that employ audio or video techniques, or that make use of so-called “assessment center” protocols based on job simulations, real-time problem solving, or actual work samples.
By collecting data on screening methods and correlating the scores on these measures with actual on-the-job ratings, IOP experts have documented the factors that best predict work performance over a wide range of occupations. A strong consensus has emerged, based on hundreds of studies performed over decades, that general cognitive ability — known alternatively as IQ or g — is the best predictor of work performance for all types of positions, from least to most skilled. (For a more extensive discussion of this evidence and other points raised here, see my 2011 article “Disparate Impact Realism,” 53 William and Mary Law Review 621.) Such measures are also “unbiased,” in that the correlation of cognitive ability with job outcomes is independent of a candidate’s race, background, or identity.
The process of demonstrating a link between hiring criteria and subsequent work outcomes is known in IOP as “validation.” And the measured validity of g is in the range of approximately 0.5 to 0.6 (on a scale that runs from -1 for a total negative correlation to 1 for a complete positive correlation), which represents a relatively powerful social-scientific prediction. Moreover, the usefulness of job-selection criteria is observed to vary with their emphasis on IQ-dependent skills. Criteria that rely more on intelligence are the most effective predictors of occupational success, while screening methods that de-emphasize intelligence in favor of other personal attributes or factors are less accurate in selecting the best workers…
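For readers unfamiliar with the mechanics, a validity coefficient of this kind is simply the correlation between candidates’ scores on a selection measure and their later job-performance ratings. The short Python sketch below illustrates the calculation on invented numbers; it is not drawn from the article or from any real validation study, and every figure in it is hypothetical.

```python
# Minimal sketch (illustration only, not from the article): computing a
# "validity coefficient" as the Pearson correlation between selection-test
# scores and later job-performance ratings. All data are made up.

from statistics import correlation  # available in Python 3.10+

# Hypothetical selection-test scores for ten hires
test_scores = [78, 85, 62, 91, 70, 88, 55, 95, 67, 80]

# Hypothetical supervisor performance ratings for the same ten hires a year later
job_ratings = [3.4, 4.1, 2.8, 4.5, 3.1, 4.0, 2.5, 4.7, 3.0, 3.8]

# A value on the -1 to 1 scale described above; higher means the test
# better predicts subsequent performance.
validity = correlation(test_scores, job_ratings)
print(f"Validity coefficient: {validity:.2f}")
```

In practice such coefficients are estimated over much larger samples and corrected for range restriction and rating unreliability, which is how the 0.5 to 0.6 figures for general cognitive ability are typically reported.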
June 23, 2012
Hunter syndrome is a terrible disease that cripples, and often kills, children. The illness robs its victims of the ability to produce a crucial enzyme used by the body to break down certain sugar molecules that are found in vital organs. In Hunter-syndrome patients, these molecules accumulate in places like the heart, brain, and joints with debilitating and extremely painful consequences. The disease is genetically inherited and very rare: At any given time, there are only about 2,000 cases worldwide. Before a treatment came along, parents had to stand idly by and watch as the disease destroyed their children.
By the 1990s, however, there was cause for optimism. Drugs were developed that could function as replacements for the enzymes missing because of Hunter syndrome and a series of related rare disorders called lysosomal diseases. By 2004, the Food and Drug Administration had approved enzyme replacements for four conditions very similar to Hunter syndrome; patients using these drugs were seeing promising results. The basis for understanding how to treat these genetic disorders was firmly established: If scientists could replace the missing enzymes in the blood, then the advance of these diseases could be slowed. In some cases, the damaging effects could even be partially reversed. The drugs were helping patients live longer, less painful lives.
When an experimental enzyme-replacement drug for Hunter syndrome came along a decade ago, parents of children with the disorder were understandably desperate to get their kids the new medicine, called Elaprase. Many families traveled hundreds of miles so that their children could take part in the drug’s key clinical trial. They may have expected that the trial would be an example of effective regulation hastening the timely arrival of a safe, new treatment. Instead, what these families experienced exemplified a broken and dysfunctional approach to drug trials, driven by an FDA culture poorly suited to serving the needs of the sickest patients.
In an effort to satisfy an increasingly unreasonable hunger for statistical certainty on the part of the FDA, the trial imposed extraordinary hardships on the children and families involved. In order to approve the drug for use, the FDA required the trial to involve 96 patients with Hunter syndrome — some 20% of all Americans afflicted with the disease. Moreover, for the first time in such a study of enzyme-replacement therapy, the FDA also insisted that patients be randomly assigned to receive either the experimental drug or an inert placebo. The course of Hunter syndrome is well documented and follows a very regular pattern in most afflicted children; the results for patients who got the experimental therapy could easily have been compared against readily available historical databases that track the normal course of the disease. It is hard to see why a placebo was necessary in such circumstances, especially when the requirement for a placebo group meant that some of the kids involved wasted a full year of the most able portion of their short lives effectively going untreated.
These and several other requirements meant that the Elaprase trial took longer, and was far more complex and difficult, than trials of similar drugs in the past, thus delaying the drug’s approval. Previous trials with drugs targeting one of these rare enzyme disorders had lasted six months or less. The Elaprase trial, by contrast, was designed to last at least a full year. To make matters worse, only a small number of doctors treated Hunter syndrome, so there were not many sites around the country that participated in the clinical trial. This meant that many parents had to travel long distances so that their children could get the required weekly doses. And all that time, the parents, the doctors, and the children did not know if they were getting the new drug or the useless placebo: By the time the trial was finished, if Elaprase worked (as was widely expected), many of the children who had been put on the placebo would be crippled.
The story of the Elaprase trial is important not because it stands out as an exception, but rather because it is increasingly characteristic of the FDA’s drug-review culture. That culture is the product of a poorly understood, but now well-established, attitude within the agency: an excessive desire for certainty. This desire is primarily driven not by fear of unforeseen dangerous side effects caused by drugs under review, but rather by a deepening mistrust of the doctors who eventually prescribe such medicines and the companies that market them. And that mistrust, in turn, is impeding the availability of safe, effective drugs that could today be helping real patients.
Fortunately, however, this harmful culture can be readily improved — by implementing a few straightforward reforms of the FDA’s responsibilities and structure.
A CULTURE OF MISTRUST
Many observers quite plausibly trace the origin of the modern FDA review process to the 1960s and the public-health tragedy caused by the drug thalidomide. In that dreadful episode, thousands of women (mostly in Europe) had been prescribed thalidomide for morning sickness; the drug, it turned out, arrested the limb formation of babies during the early stages of pregnancy. This was before the era of ultrasounds, so it was not until babies were born months later — with ghastly birth defects — that the full magnitude of the drug’s toxicity was discovered. Though thalidomide had been approved for sale in Europe, it was held up in the United States precisely over safety concerns that, clearly, were well founded. The FDA reviewer who delayed the drug’s approval in the U.S., Frances Kelsey, became something of a national hero, and was given the President’s Award for Distinguished Federal Civilian Service by President John Kennedy.
The episode had a lasting effect on the FDA’s work. First, it led to the passage of the Drug Efficacy Amendment in 1962, a new law that created the modern clinical-trial requirements. An equally important development, however, was the way in which the thalidomide episode transformed the FDA’s review culture. It fostered an idealization of the lone reviewer championing an issue of safety against the prevailing orthodoxies, especially when it meant taking on corporate interests. Every FDA reviewer wanted to be the next Frances Kelsey. To this day, the thalidomide episode influences the agency’s staff; in 2010, the FDA created an annual Kelsey award for a staffer who tilts against standard conventions.
The thalidomide episode also had a more subtle influence on the way the FDA goes about its work: It focused the agency’s attention on a certain category of risks in which the problems are latent, meaning they do not become manifest until many months, and perhaps years, after exposure to a drug. Particularly prominent among such dangers are the risk of birth defects (teratogenicity) and of cancer (carcinogenicity). In the years since the thalidomide episode, the FDA has become extremely proficient at uncovering these kinds of delayed side effects.
This is more or less how the FDA understands its own review culture — as devoted to averting risks and protecting the public, and as being very good at doing so. This devotion comes with some downsides, to be sure: In so heavily prioritizing one of its obligations — the protection of consumers — the FDA has sometimes subordinated and neglected its other key obligation, which is to guide new medical innovations to market. Even now, many FDA employees see these two roles as fundamentally in conflict, despite the fact that the timely approval of effective, life-improving, and life-saving drugs is also a big part of the agency’s responsibility. But on the whole, the agency’s reviewers believe it is appropriate to prioritize safety over speed.
The trouble is that another set of priorities, motivated by another set of transformative events, has shaped the FDA review culture just as profoundly, but in ways that have not been adequately noticed or acknowledged. To truly understand today’s FDA review culture, we must look past the thalidomide tragedy to another (and much more recent) episode involving drug safety. During one relatively brief period in the 1990s, there were suddenly reasons to question the safety of four FDA-approved drugs: the diabetes medicine Rezulin, the antibiotic Trovan, the pain drug Duract, and the bowel drug Lotronex. The clinical problems caused by these four new drugs, the ways in which these problems were managed by doctors, and the political pressure applied to the FDA as a consequence combined to dramatically alter how the agency understood its mission…