October 6, 2011
New pursuit of Schrödinger’s cat
Quantum mechanics is more than a hundred years old, but we still don’t understand it. In recent years, however, physicists have found a fresh enthusiasm for exploring the questions about quantum theory that were swept under the rug by its founders. Advances in experimental methods make it possible to test ideas about why objects on the scale of atoms follow different rules from those that govern objects on the everyday scale. In effect, this becomes an enquiry into the sense in which things exist at all.
In 1900 the German physicist Max Planck suggested that light—a form of electromagnetic wave—is emitted in tiny, indivisible packets of energy. These packets, later called photons, are the “quanta” of light. Five years later Albert Einstein took the hypothesis further, arguing that light itself consists of such quanta, and showed how this explains the way light kicks electrons out of metals—the photoelectric effect. It was for this, not the theory of relativity, that he won his Nobel prize.
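In modern notation (a standard textbook statement rather than anything quoted in the article), each photon carries an energy proportional to the light’s frequency f, and the fastest electron a photon can eject keeps whatever energy remains after paying the metal’s escape cost W:

$$E = hf, \qquad K_{\max} = hf - W$$

Here h is Planck’s constant. Light delivering less than one photon’s worth of energy per event cannot free an electron however long it shines, which is exactly what the wave picture of light failed to explain.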
The early pioneers of quantum theory quickly discovered that the seemingly innocuous idea that energy is grainy has bizarre implications. Objects can be in many places at once. Particles behave like waves and vice versa. The act of witnessing an event alters it. Perhaps the quantum world is constantly branching into multiple universes.
As long as you just accept these paradoxes, quantum theory works fine. Scientists routinely adopt the approach memorably described by the Cornell physicist David Mermin as “shut up and calculate.” They use quantum mechanics to calculate everything from the strength of metal alloys to the shapes of molecules. Routine application of the theory underpins the miniaturisation of electronics, medical MRI scanning and the development of solar cells, to name just a few burgeoning technologies.
Quantum mechanics is one of the most reliable theories in science: its prediction of how light interacts with matter is accurate to the eighth decimal place. But the question of how to interpret the theory—what it tells us about the physical universe—was never resolved by founders such as Niels Bohr, Werner Heisenberg and Erwin Schrödinger. Famously, Einstein himself was unhappy about how quantum theory leaves so much to chance: it pronounces only on the relative probabilities of how the world is arranged, not on how things fundamentally are.
Most physicists accept something like Bohr and Heisenberg’s Copenhagen interpretation. This holds that there is no essential reality beyond the quantum description, nothing more fundamental and definite than probabilities. Bohr coined the notion of “complementarity” to express the need to relinquish the expectation of a deeper reality beneath the equations. If you measure a quantum object, you might find it in a particular state. But it makes no sense to ask if it was in that state before you looked. All that can be said is that it had a particular probability of being so. It’s not that you don’t “know,” but rather that the question has no physical meaning. Similarly, Heisenberg’s uncertainty principle is not a statement about the limits of what we can know about a quantum particle’s position; it places bounds on the very concept of position.
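Stated formally, in standard notation not used in the article itself, the uncertainty principle says the spreads in a particle’s position x and momentum p always satisfy

$$\Delta x \,\Delta p \ge \frac{\hbar}{2}$$

where ħ is the reduced Planck constant. Squeezing Δx toward zero forces Δp to grow without bound, so a sharp value of both quantities at once is not merely unknowable but undefined.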
Einstein attacked this idea in a thought experiment in which two quantum particles were arranged to have interdependent states, whereby if one were aligned in one direction, then the other had to be aligned in the opposite direction. Suppose these particles move many light years apart, and then you measure the state of one of them. Quantum theory insists that this instantly determines the state of the other. Again, it’s not that you simply don’t know until you measure. It is that the state of the particles is literally undecided until then. But this implies that the effect of the measurement is transmitted instantly, and therefore faster than light, across cosmic distances to the other particle. Surely that’s absurd, Einstein argued.
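In the standard notation of later textbooks (not Einstein’s own), such a pair of spin-half particles is described by the so-called singlet state

$$|\psi\rangle = \frac{1}{\sqrt{2}}\bigl(\,|\uparrow\downarrow\rangle - |\downarrow\uparrow\rangle\,\bigr)$$

Neither particle has a definite alignment of its own; the state says only that the two alignments are opposite. Finding one particle “up” therefore guarantees the other will be found “down,” no matter how far away it is.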
But it isn’t. Experiments have now established beyond doubt that this instantaneous action at a distance, which Einstein derided as “spooky” and which physicists call entanglement, is real—that’s just how quantum mechanics is.
This is not an abstruse oddity. Entanglement is exploited in quantum cryptography, where a message is encoded in entangled quantum particles, making it impossible to intercept and read in transit without the tampering being detected. Entanglement is also used in quantum computing, where the ability of quantum particles to exist in many states at once allows huge numbers of calculations to be conducted simultaneously, greatly accelerating the solution of mathematical problems. Although these technologies are in early development, already there are signs of commercial interest. Earlier this year the Canadian company D-Wave Systems announced the first sale of a quantum computer to Lockheed Martin, while fibre-optic-based quantum cryptography was used (admittedly more for publicity than for extra security) to transmit ballot information in the 2007 Swiss elections.
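For readers who want to see those perfect anticorrelations emerge from the arithmetic, here is a minimal simulation sketch in Python using numpy. The singlet state and the Born rule (probabilities as squared amplitudes) are standard quantum mechanics; everything else, including the sample size, is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Singlet state of two spin-half particles, written over the joint basis
# |00>, |01>, |10>, |11> (0 = "up", 1 = "down"):
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

# Born rule: the probability of each joint outcome is the squared amplitude.
probs = np.abs(psi) ** 2

# Simulate 10,000 joint measurements on freshly prepared pairs.
outcomes = rng.choice(4, size=10_000, p=probs)
a, b = outcomes // 2, outcomes % 2   # results for particles A and B

# Each particle on its own looks perfectly random...
print("fraction of 'up' results for A:", (a == 0).mean())        # about 0.5
# ...yet the two results always disagree, however far apart the pair is.
print("fraction of runs where A and B differ:", (a != b).mean())  # exactly 1.0
```

Each particle’s result alone is a fair coin flip, which is why entanglement cannot be used to send signals faster than light; only when the two lists of results are brought together does the perfect anticorrelation show up.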
“Discussions of relations between information and physical reality are now of interest because such questions can have practical implications,” says Wojciech Zurek, a quantum theorist at the Los Alamos National Laboratory in New Mexico.
The quantum renaissance hinges on experimental innovations. Until the 1970s, experiments on quantum fundamentals relied mostly on indirect inference. But now it’s possible to make and probe individual quantum objects with great precision. Many technological advances have contributed to this: the advent of laser light composed of photons of identical, precise energy; the ability to make measurements with immense precision in time, space and mass; methods to cool and trap atoms with laser light (the subject of the 1997 Nobel prize in physics); and the manipulation of light with fibre optics (helped by advances in optical telecommunications).
But even if you accept the paradoxical aspects of the theory and just use the maths, the fundamental questions won’t go away. For example, if the act of measurement turns probabilities into certainties, how exactly does it do that? Physicists have long spoken of measurements “collapsing the wavefunction,” which expresses how the smeared-out, wave-like mathematical entity encoding all possible quantum states (the wavefunction) becomes focused into a particular place or state. But this was seen largely as metaphor. The collapse had to be imposed by fiat, since it didn’t feature in the mathematical theory…
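In textbook terms (again, standard notation rather than the article’s), the wavefunction ψ assigns each possible outcome a probability via the Born rule,

$$P(x) = |\psi(x)|^{2}$$

but the equations that govern how ψ evolves never single out one outcome. The jump from many possibilities to one actuality has to be added to the formalism by hand, and that gap is what the rival interpretations argue over.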
October 6, 2011
Some years ago, Dr. Robert A. Burton was the neurologist on call at a San Francisco hospital when a high-profile colleague from the oncology department asked him to perform a spinal tap on an elderly patient with advanced metastatic cancer. The patient had seemed a little fuzzy-headed that morning, and the oncologist wanted to check for meningitis or another infection that might be treatable with antibiotics.
Dr. Burton hesitated. Spinal taps are painful. The patient’s overall prognosis was beyond dire. Why go after an ancillary infection? But the oncologist, known for his uncompromising and aggressive approach to treatment, insisted.
“For him, there was no such thing as excessive,” Dr. Burton said in a telephone interview. “For him, there was always hope.”
On entering the patient’s room with spinal tap tray portentously agleam, Dr. Burton encountered the patient’s family members. They begged him not to proceed. The frail, bedridden patient begged him not to proceed. Dr. Burton conveyed their pleas to the oncologist, but the oncologist continued to lobby for a spinal tap, and the exhausted family finally gave in.
As Dr. Burton had feared, the procedure proved painful and difficult to administer. It revealed nothing of diagnostic importance. And it left the patient with a grinding spinal-tap headache that lasted for days, until the man fell into a coma and died of his malignancy.
Dr. Burton had admired his oncology colleague (now deceased), yet he also saw how the doctor’s zeal to heal could border on fanaticism, and how his determination to help his patients at all costs could perversely end up hurting them.
“If you’re supremely confident of your skills, and if you’re certain that what you’re doing is for the good of your patients,” he said, “it can be very difficult to know on your own when you’re veering into dangerous territory.”
The author of “On Being Certain” and the forthcoming “A Skeptic’s Guide to the Mind,” Dr. Burton is a contributor to a scholarly yet surprisingly sprightly volume called “Pathological Altruism,” to be published this fall by Oxford University Press. And he says his colleague’s behavior is a good example of that catchily contradictory term, just beginning to make the rounds through the psychological sciences.
As the new book makes clear, pathological altruism is not limited to showcase acts of self-sacrifice, like donating a kidney or a part of one’s liver to a total stranger. The book is the first comprehensive treatment of the idea that when ostensibly generous “how can I help you?” behavior is taken to extremes, misapplied or stridently rhapsodized, it can become unhelpful, unproductive and even destructive.
Selflessness gone awry may play a role in a broad variety of disorders, including anorexia and animal hoarding, and in the willingness of some women to put up with abusive partners and of some men to abide alcoholic ones.
Because a certain degree of selfless behavior is essential to the smooth performance of any human group, selflessness run amok can crop up in political contexts. It fosters the exhilarating sensation of righteous indignation, the belief in the purity of your team and your cause and the perfidiousness of all competing teams and causes.
David Brin, a physicist and science fiction writer, argues in one chapter that sanctimony can be as physically addictive as any recreational drug, and as destabilizing. “A relentless addiction to indignation may be one of the chief drivers of obstinate dogmatism,” he writes. “It may be the ultimate propellant behind the current ‘culture war.’ ” Not to mention an epidemic of blogorrhea, newspaper-induced hypertension and the use of a hot, steeped beverage as one’s political mascot.
Barbara Oakley, an associate professor of engineering at Oakland University in Michigan and an editor of the new volume, said in an interview that when she first began talking about its theme at medical or social science conferences, “people looked at me as though I’d just grown goat horns. They said, ‘But altruism by definition can never be pathological.’ ”
To Dr. Oakley, the resistance was telling. “It epitomized the idea ‘I know how to do the right thing, and when I decide to do the right thing it can never be called pathological,’ ” she said.
Indeed, the study of altruism, generosity and other affiliative behaviors has lately been quite fashionable in academia, partly as a counterweight to the harsher, selfish-gene renderings of Darwinism, and partly thanks to the financing bounty of organizations like the John Templeton Foundation. Many researchers point out that human beings are a spectacularly cooperative species, far surpassing other animals in the willingness to work closely and amicably with non-kin. Our altruistic impulse, they say, is no mere crown jewel of humanity; it is the bedrock on which we stand.
Yet given her professional background, Dr. Oakley couldn’t help doubting altruism’s exalted reputation. “I’m not looking at altruism as a sacred thing from on high,” she said. “I’m looking at it as an engineer.”
And by the first rule of engineering, she said, “there is no such thing as a free lunch; there are always trade-offs.” If you increase order in one place, you must decrease it somewhere else…
October 6, 2011
I was born in 1945. When someone my age forecasts something that will happen fifty or a hundred years from now, he needn’t worry about being teased by friends if it doesn’t pan out. Without trepidation, then, I offer the following prediction: One century hence, if a roster of professional economists is asked to identify the intellectual father of their discipline, a majority will name Charles Darwin.
If the same question were posed today, of course, more than 99 percent of my colleagues would name Adam Smith. My views about Darwin’s significance reflect no shortage of admiration for Smith. On the contrary, reading any random passage from the 18th-century Scottish moral philosopher’s masterwork, The Wealth of Nations, still causes me to marvel at the depth and breadth of his insights.
I base my prediction on a subtle but extremely important distinction between Darwin’s and Smith’s views of the competitive process. As I’ll explain, it’s a distinction that sheds a bright light on a still-raging debate about the nature and desirability of government regulation.
The Competitive Process
Today, Smith is best remembered for his invisible hand theory, which, according to some of his modern disciples, holds that impersonal market forces channel the behavior of greedy individuals to produce the greatest good for all. This characterization is an oversimplification, but it nonetheless captures an important dimension of Smith’s narrative. In any event, the invisible hand theory’s optimistic portrayal of unregulated market outcomes has become the bedrock of the worldview of libertarians and other anti-government activists. They believe that economic regulation is unnecessary—indeed, generally counterproductive—because unbridled market forces can take care of things quite nicely on their own.
In fairness to Smith, he was well aware—as many of his latter-day acolytes are not—that unregulated markets didn’t always produce the best outcomes. For the most part, the market failures he recognized involved underhanded practices by business leaders in a position to wield power. Thus he wrote:
To widen the market and to narrow the competition, is always in the interest of [those who live by profit]. . . . [Such interest] comes from an order of men, whose interest is never exactly the same with that of the public, who have generally an interest to deceive and even to oppress the public, and who accordingly have, upon many occasions, both deceived and oppressed it.1
When markets failed, in Smith’s view, it was because of an absence of effective competition. A firm might deceive its customers about the quality of its offerings, or it might cut prices to drive rivals out of business, only to raise them again once they were gone. Such abuses were common in Smith’s day, and Smith himself did not object in principle to government action to curtail them.
Such abuses are less frequent now, but their continuing presence has led social critics on the Left to focus on anti-competitive behavior as the key to understanding why markets fail. The late John Kenneth Galbraith, for example, stressed the contrast between the “traditional sequence” envisioned by Adam Smith’s modern disciples and a “revised sequence” that Galbraith saw as a more accurate portrayal of the modern marketplace.2 In the traditional sequence, consumers enter the market with well-formed preferences and firms struggle to meet their demands as well and cheaply as possible. But in Galbraith’s revised sequence, powerful corporations first decide which products would be most convenient and profitable for them to produce and then hire Madison Avenue hucksters to persuade consumers to want those products.
Many economists remain skeptical about Galbraith’s revised sequence, citing conspicuous examples of corporate failure, such as the Ford Edsel in Galbraith’s day.3 Ford introduced the Edsel with great fanfare in September 1957. It was named for Edsel B. Ford, son of company founder Henry Ford, and its outsized promotional budget included a widely viewed national television special, The Edsel Show. But customers lacked enthusiasm for the car, and its production ceased in 1960. More recently, Microsoft spent almost a billion dollars to develop and promote the Kin, a smartphone targeted at the youth market. The phone, which hit stores in April 2010, was unceremoniously pulled from shelves just 45 days later because of abysmal sales.
Notwithstanding such failures, there is little doubt that advertising can shift consumer tastes. But advertising wizardry is a double-edged sword. The driving force behind the invisible hand is greed, and if producers are currently selling inferior products at inflated prices, there’s cash on the table. If a rival producer can persuade consumers that a better, cheaper model is available, that producer can make lots of money. Modern marketing methods are surely up to that task. Competition is obviously still far from perfect, but today’s markets are much closer to the perfectly informed, frictionless ideal than were those of Adam Smith’s day.
In time, I predict the invisible hand will come to be seen as a special case of Darwin’s more general theory of competition, which was fundamentally different. Darwin trained his sights on competition not among merchants but among individual members of plant and animal species. But the two domains, he realized, share deep similarities. His observations revealed a systemic flaw in the dynamics of competition: The interests of individual animals were often profoundly in conflict with the broader interests of their own species or larger subgroups within it. The failures he identified resulted not from too little competition but from the very logic of the competitive process itself. Many of the most cherished beliefs held by libertarians, while perfectly plausible within Smith’s framework, don’t survive in Darwin’s.
Darwin’s central premise was that natural selection favored variants of traits and behaviors insofar as they enhanced the reproductive fitness of the individual animals that bore them. If a trait made the individual better able to survive and reproduce, it would proliferate. Otherwise, it would eventually vanish. In many cases, Darwin recognized, the same variant that served the individual’s interest would also serve the interests of larger groups within its species. But he also saw that many traits promoted individual interest to the detriment of larger groups.
As an example in the former category, consider the speed of the gazelle. Mature members of this species can sustain speeds of thirty miles per hour for extended periods and can reach sixty in short bursts. How did they become so fast? It might seem that being faster would be unambiguously better from an evolutionary point of view, but that can’t be true or else all species would be fast. Tapeworms are slow. In their particular environmental niche, being fast never mattered. Gazelles are fast because they evolved in an environment in which being faster than others was often decisive for survival. The gazelle’s predators, which include the cheetah, are also very fast, and there are few places to take shelter on the terrain where both groups evolved. Slower genetic variants among the modern gazelle’s ancestors were more likely to be caught and eaten.
Since the selection pressure that forged speed in gazelles was the threat of being caught by predators from other species, greater speed posed no conflict between the interests of individual gazelles and the interests of gazelles as a species.4 Up to some point, being faster conferred advantages for both individual and species. With respect to this particular trait, then, Darwin’s natural selection narrative closely parallels Smith’s invisible hand narrative about the proliferation of cost-saving innovations and attractive new product designs.
Many other traits, however, increase the reproductive fitness of an individual while simultaneously imposing significant costs on the species as a whole, or on large subgroups of the species. Such conflicts are especially likely for traits that confer advantage in an individual’s head-to-head competition with members of its own species…
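A toy model makes the logic vivid. The sketch below, in Python, uses invented payoff numbers rather than anything taken from Darwin or from this essay: carriers of an “armament” trait beat non-carriers in head-to-head contests over resources, but pay a fixed cost for the weaponry.

```python
# Toy replicator dynamics: a trait that wins head-to-head contests
# spreads even though it lowers the population's average fitness.
# All payoff numbers below are illustrative assumptions.

cost = 0.2   # fixed cost of carrying the "armament"
p = 0.05     # initial share of the population carrying the trait

for gen in range(41):
    # Expected payoff from a contest over a unit resource against a
    # randomly drawn rival: armed beats unarmed outright; equally
    # matched rivals split the resource.
    f_armed = 0.5 * p + 1.0 * (1.0 - p) - cost
    f_unarmed = 0.5 * (1.0 - p)
    mean_fitness = p * f_armed + (1.0 - p) * f_unarmed
    if gen % 10 == 0:
        print(f"generation {gen:2d}: armed share {p:.2f}, "
              f"average fitness {mean_fitness:.2f}")
    # Each type reproduces in proportion to its relative fitness.
    p = p * f_armed / mean_fitness
```

Run it and the armed share climbs toward 100 percent while average fitness falls from roughly 0.49 to 0.30. At every step each individual does better by being armed, given what its rivals are doing, yet the population ends up worse off than if no one were armed at all. That is precisely the conflict between individual and group interest that Darwin identified.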