The Conversation:

The ethics of eating red meat have been grilled recently by critics who question its consequences for environmental health and animal welfare. But if you want to minimise animal suffering and promote more sustainable agriculture, adopting a vegetarian diet might be the worst possible thing you could do.

Renowned ethicist Peter Singer says if there is a range of ways of feeding ourselves, we should choose the way that causes the least unnecessary harm to animals. Most animal rights advocates say this means we should eat plants rather than animals.

It takes somewhere between two and ten kilos of plants, depending on the type of plants involved, to produce one kilo of animal. Given the limited amount of productive land in the world, it would seem to some to make more sense to focus our culinary attentions on plants, because we would arguably get more energy per hectare for human consumption. Theoretically, this should also mean fewer sentient animals would be killed to feed the ravenous appetites of ever more humans.
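To see how that conversion argument plays out numerically, here is a minimal sketch; the 2:1 and 10:1 ratios come from the text above, while the per-hectare plant yield is an invented figure for illustration only:

```python
# Illustration of the feed-conversion argument above.
# NOTE: plant_yield_kg_per_ha is an assumed number, not from the article.
plant_yield_kg_per_ha = 3000  # hypothetical plant harvest from one hectare

for conversion_ratio in (2, 10):  # kg of plants per kg of animal (from the text)
    meat_kg = plant_yield_kg_per_ha / conversion_ratio
    print(f"{conversion_ratio}:1 conversion -> {meat_kg:.0f} kg of animal, "
          f"versus {plant_yield_kg_per_ha} kg of plants, from the same hectare")
```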

But before scratching rangelands-produced red meat off the “good to eat” list for ethical or environmental reasons, let’s test these presumptions.

Published figures suggest that, in Australia, producing wheat and other grains results in:

  • at least 25 times more sentient animals being killed per kilogram of useable protein
  • more environmental damage, and
  • a great deal more animal cruelty than does farming red meat.

How is this possible?

Agriculture to produce wheat, rice and pulses requires clear-felling native vegetation. That act alone results in the deaths of thousands of Australian animals and plants per hectare. Since Europeans arrived on this continent we have lost more than half of Australia’s unique native vegetation, mostly to increase production of monocultures of introduced species for human consumption.

Most of Australia’s arable land is already in use. If more Australians want their nutritional needs to be met by plants, our arable land will need to be even more intensively farmed. This will require a net increase in the use of fertilisers, herbicides, pesticides and other threats to biodiversity and environmental health. Or, if existing laws are changed, more native vegetation could be cleared for agriculture (an area the size of Victoria plus Tasmania would be needed to produce the additional amount of plant-based food required).

Two-thirds of cattle slaughtered in Australia feed solely on pasture. This is usually rangelands, which constitute about 70% of the continent.

Grazing occurs mostly on native ecosystems. These support and maintain far higher levels of native biodiversity than croplands. The rangelands can’t be used to produce crops, so producing meat there doesn’t limit the production of plant foods. Grazing is the only way humans can get substantial nutrients from 70% of the continent.

In some cases rangelands have been substantially altered to increase the percentage of stock-friendly plants. Grazing can also cause significant damage such as soil loss and erosion. But it doesn’t result in the native ecosystem “blitzkrieg” required to grow crops.

This environmental damage is causing some well-known environmentalists to question their own preconceptions. British environmental advocate George Monbiot, for example, publicly converted from vegan to omnivore after reading Simon Fairlie’s exposé on meat’s sustainability. And environmental activist Lierre Keith has documented the immense damage to global environments involved in producing plant foods for human consumption.

In Australia we can also meet part of our protein needs using sustainably wild-harvested kangaroo meat. Unlike introduced meat animals, kangaroos don’t damage native biodiversity. They are soft-footed, produce little methane and have relatively low water requirements. They also produce an exceptionally healthy low-fat meat.

In Australia, 70% of the beef produced for human consumption comes from animals raised on grazing lands with very little or no grain supplement. At any time, only 2% of Australia’s national herd of cattle are eating grains in feedlots; the other 98% are raised and fed on grass.

To produce protein from grazing beef, cattle must be killed. One death delivers (on average, across Australia’s grazing lands) a carcass of about 288 kilograms. This is approximately 68% boneless meat which, at 23% protein, equals about 45 kg of protein per animal killed. That works out to 2.2 animals killed for each 100 kg of useable animal protein produced.
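For readers who want to check that arithmetic, here is a minimal sketch; the figures come from the paragraph above, and the variable names are my own:

```python
# Check of the per-animal protein figures quoted above.
carcass_kg = 288          # average carcass weight across Australia's grazing lands
boneless_fraction = 0.68  # share of the carcass that is boneless meat
protein_fraction = 0.23   # protein content of the boneless meat

protein_per_animal = carcass_kg * boneless_fraction * protein_fraction
print(f"Protein per animal killed: {protein_per_animal:.0f} kg")             # ~45 kg
print(f"Animals killed per 100 kg protein: {100 / protein_per_animal:.1f}")  # ~2.2
```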

Producing protein from wheat means ploughing pasture land and planting it with seed. Anyone who has sat on a ploughing tractor knows the predatory birds that follow you all day are not there because they have nothing better to do. Ploughing and harvesting kill small mammals, snakes, lizards and other animals in vast numbers. In addition, millions of mice are poisoned in grain storage facilities every year.

However, the largest and best-researched loss of sentient life is the poisoning of mice during plagues.

Read it all.

Harper’s:

In the fifth century B.C., the philosopher Democritus proposed that all matter was made of tiny and indivisible atoms, which came in various sizes and textures—some hard and some soft, some smooth and some thorny. The atoms themselves were taken as givens. In the nineteenth century, scientists discovered that the chemical properties of atoms repeat periodically (and created the periodic table to reflect this fact), but the origins of such patterns remained mysterious. It wasn’t until the twentieth century that scientists learned that the properties of an atom are determined by the number and placement of its electrons, the subatomic particles that orbit its nucleus. And we now know that all atoms heavier than helium were created in the nuclear furnaces of stars.

The history of science can be viewed as the recasting of phenomena that were once thought to be accidents as phenomena that can be understood in terms of fundamental causes and principles. One can add to the list of the fully explained: the hue of the sky, the orbits of planets, the angle of the wake of a boat moving through a lake, the six-sided patterns of snowflakes, the weight of a flying bustard, the temperature of boiling water, the size of raindrops, the circular shape of the sun. All these phenomena and many more, once thought to have been fixed at the beginning of time or to be the result of random events thereafter, have been explained as necessary consequences of the fundamental laws of nature—laws discovered by human beings.

This long and appealing trend may be coming to an end. Dramatic developments in cosmological findings and thought have led some of the world’s premier physicists to propose that our universe is only one of an enormous number of universes with wildly varying properties, and that some of the most basic features of our particular universe are indeed mere accidents—a random throw of the cosmic dice. In which case, there is no hope of ever explaining our universe’s features in terms of fundamental causes and principles.

It is perhaps impossible to say how far apart the different universes may be, or whether they exist simultaneously in time. Some may have stars and galaxies like ours. Some may not. Some may be finite in size. Some may be infinite. Physicists call the totality of universes the “multiverse.” Alan Guth, a pioneer in cosmological thought, says that “the multiple-universe idea severely limits our hopes to understand the world from fundamental principles.” And the philosophical ethos of science is torn from its roots. As put to me recently by Nobel Prize–winning physicist Steven Weinberg, a man as careful in his words as in his mathematical calculations, “We now find ourselves at a historic fork in the road we travel to understand the laws of nature. If the multiverse idea is correct, the style of fundamental physics will be radically changed.”

The scientists most distressed by Weinberg’s “fork in the road” are theoretical physicists. Theoretical physics is the deepest and purest branch of science. It is the outpost of science closest to philosophy, and religion. Experimental scientists occupy themselves with observing and measuring the cosmos, finding out what stuff exists, no matter how strange that stuff may be. Theoretical physicists, on the other hand, are not satisfied with observing the universe. They want to know why. They want to explain all the properties of the universe in terms of a few fundamental principles and parameters. These fundamental principles, in turn, lead to the “laws of nature,” which govern the behavior of all matter and energy. An example of a fundamental principle in physics, first proposed by Galileo in 1632 and extended by Einstein in 1905, is the following: All observers traveling at constant velocity relative to one another should witness identical laws of nature. From this principle, Einstein derived his theory of special relativity. An example of a fundamental parameter is the mass of an electron, considered one of the two dozen or so “elementary” particles of nature. As far as physicists are concerned, the fewer the fundamental principles and parameters, the better. The underlying hope and belief of this enterprise has always been that these basic principles are so restrictive that only one, self-consistent universe is possible, like a crossword puzzle with only one solution. That one universe would be, of course, the universe we live in. Theoretical physicists are Platonists. Until the past few years, they agreed that the entire universe, the one universe, is generated from a few mathematical truths and principles of symmetry, perhaps throwing in a handful of parameters like the mass of the electron. It seemed that we were closing in on a vision of our universe in which everything could be calculated, predicted, and understood.

However, two theories in physics, eternal inflation and string theory, now suggest that the same fundamental principles from which the laws of nature derive may lead to many different self-consistent universes, with many different properties. It is as if you walked into a shoe store, had your feet measured, and found that a size 5 would fit you, a size 8 would also fit, and a size 12 would fit equally well. Such wishy-washy results make theoretical physicists extremely unhappy. Evidently, the fundamental laws of nature do not pin down a single and unique universe. According to the current thinking of many physicists, we are living in one of a vast number of universes. We are living in an accidental universe. We are living in a universe uncalculable by science.

“Back in the 1970s and 1980s,” says Alan Guth, “the feeling was that we were so smart, we almost had everything figured out.” What physicists had figured out were very accurate theories of three of the four fundamental forces of nature: the strong nuclear force that binds atomic nuclei together, the weak force that is responsible for some forms of radioactive decay, and the electromagnetic force between electrically charged particles. And there were prospects for merging the theory known as quantum physics with Einstein’s theory of the fourth force, gravity, and thus pulling all of them into the fold of what physicists called the Theory of Everything, or the Final Theory. These theories of the 1970s and 1980s required the specification of a couple dozen parameters corresponding to the masses of the elementary particles, and another half dozen or so parameters corresponding to the strengths of the fundamental forces. The next step would then have been to derive most of the elementary particle masses in terms of one or two fundamental masses and define the strengths of all the fundamental forces in terms of a single fundamental force.

There were good reasons to think that physicists were poised to take this next step. Indeed, since the time of Galileo, physics has been extremely successful in discovering principles and laws that have fewer and fewer free parameters and that are also in close agreement with the observed facts of the world. For example, the observed rotation of the ellipse of the orbit of Mercury, 0.012 degrees per century, was successfully calculated using the theory of general relativity, and the observed magnetic strength of an electron, 2.002319 magnetons, was derived using the theory of quantum electrodynamics. More than any other science, physics brims with highly accurate agreements between theory and experiment…

Read it all.

The Chronicle:

Drawing on survey responses, transcript data, and results from the Collegiate Learning Assessment (a standardized test taken by students in their first semester and at the end of their second year), Richard Arum and Josipa Roksa concluded that a significant percentage of undergraduates are failing to develop the broad-based skills and knowledge they should be expected to master. Here is an excerpt from Academically Adrift: Limited Learning on College Campuses (University of Chicago Press), their new book based on those findings.

 “With regard to the quality of research, we tend to evaluate faculty the way the Michelin guide evaluates restaurants,” Lee Shulman, former president of the Carnegie Foundation for the Advancement of Teaching, recently noted. “We ask, ‘How high is the quality of this cuisine relative to the genre of food? How excellent is it?’ With regard to teaching, the evaluation is done more in the style of the Board of Health. The question is, ‘Is it safe to eat here?’” Our research suggests that for many students currently enrolled in higher education, the answer is: not particularly. Growing numbers of students are sent to college at increasingly higher costs, but for a large proportion of them the gains in critical thinking, complex reasoning, and written communication are either exceedingly small or empirically nonexistent. At least 45 percent of students in our sample did not demonstrate any statistically significant improvement in Collegiate Learning Assessment [CLA] performance during the first two years of college. [Further study has indicated that 36 percent of students did not show any significant improvement over four years.] While these students may have developed subject-specific skills that were not tested for by the CLA, in terms of general analytical competencies assessed, large numbers of U.S. college students can be accurately described as academically adrift. They might graduate, but they are failing to develop the higher-order cognitive skills that it is widely assumed college students should master. These findings are sobering and should be a cause for concern.

While higher education is expected to accomplish many tasks—and contemporary colleges and universities have indeed contributed to society in ways as diverse as producing pharmaceutical patents and prime-time athletic games—existing organizational cultures and practices too often do not put a high priority on undergraduate learning. Faculty and administrators, working to meet multiple and at times competing demands, too rarely focus on either improving instruction or demonstrating gains in student learning.

More troubling still, the limited learning we have observed in terms of the absence of growth in CLA performance is largely consistent with the accounts of many students, who report that they spend increasing numbers of hours on nonacademic activities, including working, rather than on studying. They enroll in courses that do not require substantial reading or writing assignments; they interact with their professors outside of classrooms rarely, if ever; and they define and understand their college experiences as being focused more on social than on academic development.

Moreover, we find that learning in higher education is characterized by persistent and/or growing inequality. There are significant differences in critical thinking, complex reasoning, and writing skills when comparing groups of students from different family backgrounds and racial/ethnic groups. More important, not only do students enter college with unequal demonstrated abilities, but those inequalities tend to persist—or, in the case of African-American students relative to white students, increase—while they are enrolled in higher education.

Despite the low average levels of learning and persistent inequality, we have also observed notable variation in student experiences and outcomes, both across and within institutions. While the average level of performance indicates that students in general are embedded in higher-education institutions where only very modest academic demands are placed on them, exceptional students, who have demonstrated impressive growth over time on CLA performance, exist in all the settings we examined. In addition, students attending certain high-performing institutions had more-beneficial college experiences in terms of experiencing rigorous reading/writing requirements and spending more hours studying. Students attending these institutions demonstrated significantly higher gains in critical thinking, complex reasoning, and writing skills over time than did students enrolled elsewhere.

The Implications of Limited Learning

Notwithstanding the variation and the positive experiences in certain contexts, the prevalence of limited learning on today’s college campuses is troubling indeed. While the historian Helen Horowitz’s work reminds us that the phenomenon of limited learning in higher education has a long and venerable tradition in this country—in the late 18th and early 19th centuries, “college discipline conflicted with the genteel upbringing of the elite sons of Southern gentry and Northern merchants”—this outcome today occurs in a fundamentally different context. Contemporary college graduates generally do not leave school with the assumption that they will ultimately inherit the plantations or businesses of their fathers. Occupational destinations in modern economies are increasingly dependent on an individual’s academic achievements. The attainment of long-term occupational success in the economy requires not only academic credentials, but very likely also academic skills. As report after blue-ribbon report has reminded us, today’s jobs require “knowledge, learning, information, and skilled intelligence.” These are cognitive abilities that, unlike Herrnstein and Murray’s immutable IQ construct, can be learned and developed at school.

Something else has also changed. After World War II, the United States dramatically expanded its higher-education system and led the world for decades, often by a wide margin, in the percentage of young people it graduated from college. Over the past two decades, while the U.S. higher-education system has grown only marginally, the rest of the world has not been standing still. As Patrick Callan, president of the National Center for Public Policy and Higher Education, has observed: “In the 1990s, however, as the importance of a college-educated work force in a global economy became clear, other nations began making the kinds of dramatic gains that had characterized American higher education earlier. In contrast, by the early 1990s, the progress the United States had made in increasing college participation had come to a virtual halt. For most of the 1990s, the United States ranked last among 14 nations in raising college-participation rates, with almost no increase during the decade.”…

Read it all.

Latest On Tebow

December 23, 2011

This image has been posted with express written permission. This cartoon was originally published at Town Hall.

Corzine’s Admission

December 23, 2011

This image has been posted with express written permission. This cartoon was originally published at Town Hall.
