The Crisis of Big Science

April 24, 2012

New York Review of Books:

Last year physicists commemorated the centennial of the discovery of the atomic nucleus. In experiments carried out in Ernest Rutherford’s laboratory at Manchester in 1911, a beam of electrically charged particles from the radioactive decay of radium was directed at a thin gold foil. It was generally believed at the time that the mass of an atom was spread out evenly, like a pudding. In that case, the heavy charged particles from radium should have passed through the gold foil, with very little deflection. To Rutherford’s surprise, some of these particles bounced nearly straight back from the foil, showing that they were being repelled by something small and heavy within gold atoms. Rutherford identified this as the nucleus of the atom, around which electrons revolve like planets around the sun.

This was great science, but not what one would call big science. Rutherford’s experimental team consisted of one postdoc and one undergraduate. Their work was supported by a grant of just £70 from the Royal Society of London. The most expensive thing used in the experiment was the sample of radium, but Rutherford did not have to pay for it—the radium was on loan from the Austrian Academy of Sciences.

Nuclear physics soon got bigger. The electrically charged particles from radium in Rutherford’s experiment did not have enough energy to penetrate the electrical repulsion of the gold nucleus and get into the nucleus itself. To break into nuclei and learn what they are, physicists in the 1930s invented cyclotrons and other machines that would accelerate charged particles to higher energies. The late Maurice Goldhaber, former director of Brookhaven Laboratory, once reminisced:

The first to disintegrate a nucleus was Rutherford, and there is a picture of him holding the apparatus in his lap. I then always remember the later picture when one of the famous cyclotrons was built at Berkeley, and all of the people were sitting in the lap of the cyclotron.

1.

After World War II, new accelerators were built, but now with a different purpose. In observations of cosmic rays, physicists had found a few varieties of elementary particles different from any that exist in ordinary atoms. To study this new kind of matter, it was necessary to create these particles artificially in large numbers. For this physicists had to accelerate beams of ordinary particles like protons—the nuclei of hydrogen atoms—to higher energy, so that when the protons hit atoms in a stationary target their energy could be transmuted into the masses of particles of new types. It was not a matter of setting records for the highest-energy accelerators, or even of collecting more and more exotic species of particles, like orchids. The point of building these accelerators was, by creating new kinds of matter, to learn the laws of nature that govern all forms of matter. Though many physicists preferred small-scale experiments in the style of Rutherford, the logic of discovery forced physics to become big.

In 1959 I joined the Radiation Laboratory at Berkeley as a postdoc. Berkeley then had the world’s most powerful accelerator, the Bevatron, which occupied the whole of a large building in the hills above the campus. The Bevatron had been built specifically to accelerate protons to energies high enough to create antiprotons, and to no one’s surprise antiprotons were created. What was surprising was that hundreds of types of new, highly unstable particles were also created. There were so many of these new types of particles that they could hardly all be elementary, and we began to doubt whether we even knew what was meant by a particle being elementary. It was all very confusing, and exciting.

After a decade of work at the Bevatron, it became clear that to make sense of what was being discovered, a new generation of higher-energy accelerators would be needed. These new accelerators would be too big to fit into a laboratory in the Berkeley hills. Many of them would also be too big as institutions to be run by any single university. But if this was a crisis for Berkeley, it wasn’t a crisis for physics. New accelerators were built, at Fermilab outside Chicago, at CERN near Geneva, and at other laboratories in the US and Europe. They were too large to fit into buildings, but had now become features of the landscape. The new accelerator at Fermilab was four miles in circumference, and was accompanied by a herd of bison, grazing on the restored Illinois prairie.

By the mid-1970s the work of experimentalists at these laboratories, and of theorists using the data that were gathered, had led us to a comprehensive and now well-verified theory of particles and forces, called the Standard Model. In this theory, there are several kinds of elementary particles. There are strongly interacting quarks, which make up the protons and neutrons inside atomic nuclei as well as most of the new particles discovered in the 1950s and 1960s. There are more weakly interacting particles called leptons, of which the prototype is the electron.

There are also “force carrier” particles that move between quarks and leptons to produce various forces. These include (1) photons, the particles of light responsible for electromagnetic forces; (2) closely related particles called W and Z bosons that are responsible for the weak nuclear forces that allow quarks or leptons of one species to change into a different species—for instance, allowing negatively charged “down quarks” to turn into positively charged “up quarks” when carbon-14 decays into nitrogen-14 (it is this gradual decay that enables carbon dating); and (3) massless gluons that produce the strong nuclear forces that hold quarks together inside protons and neutrons.

Successful as the Standard Model has been, it is clearly not the end of the story. For one thing, the masses of the quarks and leptons in this theory have so far had to be derived from experiment, rather than deduced from some fundamental principle. We have been looking at the list of these masses for decades now, feeling that we ought to understand them, but without making any sense of them. It has been as if we were trying to read an inscription in a forgotten language, like Linear A. Also, some important things are not included in the Standard Model, such as gravitation and the dark matter that astronomers tell us makes up five sixths of the matter of the universe.

So now we are waiting for results from a new accelerator at CERN that we hope will let us make the next step beyond the Standard Model. This is the Large Hadron Collider, or LHC. It is an underground ring seventeen miles in circumference crossing the border between Switzerland and France. In it two beams of protons are accelerated in opposite directions to energies that will eventually reach 7 TeV in each beam, that is, about 7,500 times the energy in the mass of a proton. The beams are made to collide at several stations around the ring, where detectors with the mass of World War II cruisers sort out the various particles created in these collisions.

Some of the new things to be discovered at the LHC have long been expected. The part of the Standard Model that unites the weak and electromagnetic forces, presented in 1967–1968, is based on an exact symmetry between these forces. The W and Z particles that carry the weak nuclear forces and the photons that carry electromagnetic forces all appear in the equations of the theory as massless particles. But while photons really are massless, the W and Z are actually quite heavy. Therefore, it was necessary to suppose that this symmetry between the electromagnetic and weak interactions is “broken”—that is, though an exact property of the equations of the theory, it is not apparent in observed particles and forces.

The original and still the simplest theory of how the electroweak symmetry is broken, the one proposed in 1967–1968, involves four new fields that pervade the universe. A bundle of the energy of one of these fields would show up in nature as a massive, unstable, electrically neutral particle that came to be called the Higgs boson. All the properties of the Higgs boson except its mass are predicted by the 1967–1968 electroweak theory, but so far the particle has not been observed. This is why the LHC is looking for the Higgs—if found, it would confirm the simplest version of the electroweak theory. In December 2011 two groups reported hints that the Higgs boson has been created at the LHC, with a mass 133 times the mass of the proton, and signs of a Higgs boson with this mass have since then turned up in an analysis of older data from Fermilab. We will know by the end of 2012 whether the Higgs boson has really been seen.
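The mass ratios quoted in these passages are easy to verify with a back-of-envelope calculation. The sketch below assumes the standard proton rest-mass energy of about 0.938 GeV and the commonly cited figures of 7 TeV per beam and a roughly 125 GeV Higgs candidate (neither number is stated explicitly in the text, which gives only the ratios):

```python
# Back-of-envelope check of the energy ratios quoted in the text.
# Assumed inputs (not stated directly in the essay):
PROTON_MASS_GEV = 0.938   # proton rest-mass energy, ~0.938 GeV
beam_energy_gev = 7_000   # 7 TeV per beam, the LHC's design energy
higgs_mass_gev = 125      # the mass hinted at in the December 2011 reports

# Beam energy in units of the proton's rest-mass energy:
beam_ratio = beam_energy_gev / PROTON_MASS_GEV
print(round(beam_ratio))   # ~7463, i.e. "about 7,500 times" the proton mass

# Higgs candidate mass in units of the proton mass:
higgs_ratio = higgs_mass_gev / PROTON_MASS_GEV
print(round(higgs_ratio))  # ~133, matching "133 times the mass of the proton"
```

Both ratios come out as the essay states, which is a useful sanity check on how the popular figures (7 TeV, 125 GeV) translate into proton-mass units.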

The discovery of the Higgs boson would be a gratifying verification of present theory, but it will not point the way to a more comprehensive future theory. We can hope, as was the case with the Bevatron, that the most exciting thing to be discovered at the LHC will be something quite unexpected. Whatever it is, it’s hard to see how it could take us all the way to a final theory, including gravitation. So in the next decade, physicists are probably going to ask their governments for support for whatever new and more powerful accelerator we then think will be needed…

Read it all.
