In the field of engineering, the inner workings of physical processes are usually, with effort, observable, so the theory, or simulation, can indeed be verified. This is not the case in highly abstruse particle physics, where theories may be developed whose predictive value depends on unobservable things happening, such as unobservable ‘intermediate particles’.


notes on


Constructing Quarks: A Sociological History of Particle Physics

by Andrew Pickering, University of Chicago Press, 1984


Andrew Pickering is a sociologist, philosopher and historian of science at the University of Exeter. He was a professor of sociology and a director of science and technology studies at the University of Illinois at Urbana-Champaign until 2007. He holds a doctorate in physics from the University of London, and a doctorate in Science Studies from the University of Edinburgh. His book Constructing Quarks: A Sociological History of Particle Physics is a classic in the field of the sociology of science.


from the book jacket:


“Widely regarded as a classic in its field, Constructing Quarks recounts the history of the post-war conceptual development of elementary-particle physics. Inviting a reappraisal of the status of scientific knowledge, Andrew Pickering suggests that scientists are not mere passive observers and reporters of nature. Rather they are social beings as well as active constructors of natural phenomena who engage in both experimental and theoretical practice.”

"A prodigious piece of scholarship that I can heartily recommend."—Michael Riordan, New Scientist


“An admirable history…. Because his account is so detailed and accurate, and because it makes clear why physicists did what they did, it is eminently suited to be required reading for all young physicists entering or contemplating entering the practice of elementary particle physics.” – Hugh N. Pendleton, Physics Today






Etched into the history of twentieth-century physics are cycles of atomism. First came atomic physics, then nuclear physics and finally elementary particle physics. Each was thought of as a journey deeper into the fine structure of matter.


Atomic physics was the study of the outer layer of the atom, the electron cloud. Nuclear physics concentrated …upon the nucleus, which itself was regarded as composite…protons and neutrons…Protons, neutrons, and electrons were the original “elementary particles”. …In the post-World War II period, many other particles were discovered which appeared to be just as elementary as the proton, neutron and electron, and a new specialty devoted to their study grew up within physics. The new specialty was known as elementary particle physics, after its subject matter, or as high energy physics, after its primary tool, the high energy particle accelerator.


In the 1960s and 70s, physicists became increasingly confident that they had plumbed a new stratum of matter: quarks. The aim of this book is to document and analyse this latest step into the material world.


p. ix


The view taken here is that the reality of quarks was the upshot of particle physicists’ practice, and not the reverse.


……The main object of my account is not to explain technical matters per se, but to explain how knowledge is developed and transformed in the course of scientific practice. My hope is to give the reader some feeling for what scientists do and how science develops, not to equip him or her as a particle physicist.

There remain… sections of the account which may prove difficult for outsiders to physics…. especially the development of “gauge theory”, which provided the theoretical framework within which quark physics was eventually set. In this context, gauge theory was modeled on the highly complex, mathematically sophisticated theory of quantum electrodynamics (QED).


p. x


Part I: Introduction, the Prehistory of HEP and its Material Constraints


Chapter 1:



The archetypal scientist’s account begins in the 1960s. At that time particle physicists recognized four forces: strong, electromagnetic, weak, and gravitational. Gravitational effects were considered to be negligible in the world of elementary particles.


p. 3


Associated with these forces was a classification of elementary particles. Particles which experience the strong force are called hadrons, and include the proton and neutron. Particles which are immune to the strong force are called leptons, and include the electron.


In 1964 it was proposed that hadrons were made up of more fundamental particles known as quarks. Although it left many questions unanswered, the quark model did account for certain experimentally observed regularities of the hadron mass spectrum and of hadronic decay processes. Quarks could also explain the phenomenon of scaling, which had recently been discovered in experiments on the interaction of hadrons and leptons. To the scientist, quarks represented the fundamental entities of a new layer of matter. Initially the existence of quarks was not regarded as firmly established, mostly because experimental searches had failed to detect any particles with the prescribed characteristic of fractional electric charge.


In the early 1970s, it was realized that the weak and electromagnetic interactions could be seen as the manifestation of a single electroweak force within the context of gauge theory. This unification carried with it the prediction of the weak neutral current, verified in 1973, and of charmed particles, verified in 1974. It was recognized that a particular gauge theory, known as quantum chromodynamics, or QCD, was a possible theory of the strong interaction of quarks. It explained scaling, and observed deviations from scaling. QCD became the accepted theory of the strong interactions. Quarks had not yet been observed, but both electroweak theory and QCD assumed the validity of the quark identity. It was noticed that since the unified electroweak theory and QCD were both gauge theories, they could be unified with one another. This unification brought with it more fascinating predictions which were not immediately verified. Thus, with quarks came the unification of three of the four forces: strong, EM, and weak.


It is important to keep in mind that it is impossible to understand the establishment of the quark theory without understanding the perceived virtues of gauge theory.


p. 4-5


In the scientist’s account, experiment is seen as the supreme arbiter of theory. Experimental results determine which theories are accepted and which are rejected. There are, however, two well-known objections to this view.


p. 5


First, even if it is accepted that the result of experiment is unequivocal fact, it is always possible to invent an unlimited set of theories, each able to explain a given set of facts. Many of these theories may seem implausible, but plausibility implies a role for scientific judgment. The plausibility cannot be seen to reside in the data. While in principle one might imagine a given theory to be in perfect agreement with the relevant facts, historically this seems never to be the case. There are always misfits between theoretical predictions and experimental data.


Second, the idea that experiment produces unequivocal fact is deeply problematic. Experimental results are fallible in two ways: scientists’ understanding of any experiment is dependent on theories of how the test apparatus performs, and if those theories change, then so will the data produced. More importantly, experimental results necessarily rest upon incomplete foundations.

For example, much effort goes into minimizing “background”: physical processes which are uninteresting in themselves, but which can mimic the phenomenon under investigation. A judgment is required that background effects cannot explain the reported signal.

[background can also mean processes that do not jibe with the theory of interest]


The scientist’s account is a reference to the judgments made in developing said theories.


p. 6


Theoretical entities like quarks, and conceptualizations of natural phenomena like the weak neutral current, are theoretical constructs. However, scientists typically claim these constructs are “real” and then use these constructs to legitimize their judgments.


For example, experiments which discovered the weak neutral current are now represented as closed systems just because the weak neutral current is seen as real. Conversely, other observation reports which were once taken to imply the non-existence of the neutral current are now represented as being erroneous.


Similarly, by interpreting quarks etc as real, the choice of quark models and gauge theories is made to seem unproblematic.


In the scientist’s account, scientists do not appear as active agents; they are represented as passive observers. The facts of natural reality are revealed through experiment. The experimenter’s duty is simply to report what he sees.


p. 7


The view of this book is that scientists make their own history; they are not the passive mouthpieces of nature. The historian deals with texts, which give him access not to natural reality, but to the actions of scientists: scientific practice.


My goal is to interpret the historical development of particle physics, including the pattern of scientific judgments in research.


The quark-gauge theory view of elementary particles was a product of the founding and growth of a whole constellation of experimental and theoretical research traditions. 


p. 8


Opportunism in context is the theme that runs through my historical account. I seek to explain the dynamics of practice in terms of the contexts within which researchers find themselves, and the resources they have available for the exploitation of those contexts.


p. 11


The most striking feature of the conceptual development of HEP is that it proceeded through a process of modelling or analogy. Two key analogies were crucial in the establishment of the quark/gauge theory picture: theorists had to see hadrons as quark composites, just as they had already learned to see nuclei as composites of neutrons and protons, and to see atoms as composites of nuclei and electrons. The gauge theories of quark and lepton interactions were modeled on QED. The analysis of composite systems is part of the training of all theoretical physicists. During this period, the methods of QED were part of the common theoretical culture of HEP.


p. 12


Part of the assessment of any experimental technique is an assessment of whether it “works”: of whether it contributes to the production of data which are significant within the framework of contemporary practice. And this implies the possibility of the “tuning” of experimental techniques: their pragmatic adjustment and development according to their success in displaying the phenomena of interest.


If one assumes that the contents of theories represent the “reality” of natural phenomena, such tuning is simply development of a necessary skill of the experimenter. Otherwise tuning is much more interesting. “Natural phenomena” are then seen to serve a dual purpose: as theoretical constructs they mediate the symbiosis of theoretical and experimental practice, and they sustain and legitimate the particular experimental practices inherent in their own production.


p. 14


Two constellations of symbiotic research traditions characterize the historic period of development of the quark-as-reality concept: the old and the new physics.


The “old physics” dominated HEP practice throughout the 1960s. Experimenters concentrated upon phenomena most commonly encountered in the lab, and theorists sought to explain the data produced. Among the theories developed were the early quark model, as well as theories related to the so-called “bootstrap” conjecture which explicitly disavowed the existence of quarks.

Gauge theory did not figure in any of the dominant theoretical traditions of the old physics. By the end of the 1970s the old physics had been replaced by the new physics, which was “theory dominated” and focused not on the most conspicuous phenomena, but upon very rare processes. The new physics was the physics of quarks and gauge theory. Once more, theoretical and experimental research traditions reinforced one another, since the rare phenomena on which experimenters focused their attention were just those for which gauge theorists could offer a constructive explanation. Experimental practice in HEP was almost entirely restructured to explore the rare phenomena at the heart of the new physics. The bootstrap approach largely disappeared from sight. The evolution from old to new physics involved much more than conceptual innovation. It was intrinsic to the transformation of the physicists’ way of interrogating the world through their experimental practice.


p. 15


Chapter 9 discusses a crucial episode in the history of the new physics: the discovery of the “new particles” and their theoretical explanation in terms of “charm”. The discovery of the first new particle was announced in November 1974, and by mid-1976 the charm explanation had become generally accepted. The existence of quarks was established, despite the continuing failure of experimenters to observe isolated quarks in the lab. The new interest in gauge theory provided a context in which all of the traditions of new physics experiment could flourish, eventually to the exclusion of all else.


Chapter 10: more new particles are discovered; detailed experiments on the weak neutral current culminated in the “standard model”, the simplest and prototypical electroweak theory.


Chapter 12 shows that by the late 1970s, the phenomenal world of the new physics had been built into both the present and future of experimental HEP: the world was defined to be one of quarks and leptons interacting as gauge theorists said they would.


Chapter 13 discusses the synthesis of electroweak theory with QCD in the so called Grand Unified Theories. We shall see gauge theory permeating the cosmos in a symbiosis between HEP theorists, cosmologists, and astrophysicists.




Chapter 2:

Men and Machines.

Gives some statistics on the size and composition of the HEP community, discusses the general features of HEP experiment, and outlines the post-1945 development of experimental facilities in HEP.


Chapter 3:

The Old Physics: HEP 1945-1964: Sketches the major traditions in HEP theory and experiment from 1945 to 1964, the year in which quarks were invented. It thus sets the scene for the intervention of quarks into the old physics.


Experimenters explored high cross section processes, and theorists constructed models of what they reported.


By the early 1960s, old physics hadron beam experiments had isolated two broad classes of phenomena. At low energies, cross sections were bumpy: they varied rapidly with beam energy and momentum transfer. At high energies, cross sections were “soft”: they varied smoothly with beam energy and decreased very rapidly with momentum transfer.


The bumpy and soft cross sections were assigned different theoretical meanings. The low energy bumps were interpreted in terms of production and decay of unstable hadrons. As more bumps were isolated, the list of hadrons grew. This was the “population explosion” of elementary particles.



p. 46


S-matrix theorizing grew up around the growing list of hadrons, but in the early 1960s one strand of S-matrix development, ‘Regge theory’, came to dominate the analysis of the high energy soft scattering regime.


In the 1960s and 1970s, the Regge oriented traditions of theory and experiment were a major component of old physics research, but led nowhere, in that they contributed little to the new physics of quarks and gauge theory. They enshrined a world view quite different from the new physics.


3.1: Theoretical attempts to account for the population explosion of particles.


In 1951, Robert Marshak noted that in 1932, with the discovery of the neutron, it appeared that the physical universe could be explained in terms of just three elementary particles: the proton and neutron in the nucleus, and the electron. These were seen as the basic building materials of the 92 kinds of atoms, the elements.


In 1951, Marshak counted 15 elementary particles. As years passed, the list grew longer.


p. 47


Many new hadrons were discovered in the weak interaction (radioactive decay of particles), with lifetimes of 10^-8 to 10^-10 seconds.

Low energy accelerators showed production of many new particles which decayed in much shorter times, around 10^-23 seconds, which is characteristic of strong force interactions.


p. 48


By 1964, 90 to 100 subatomic objects (mostly hadrons) had been discovered.


Dirac’s equation predicts antiparticles for all particles. The anti-electron, or positron, was discovered first. The anti-proton was detected in 1955.


This swelled the list even more.


p. 50



3.2 Conservation laws and quantum numbers: from spin to the eightfold way.


Section 3.2 outlines the way in which conservation laws were used to sharpen the distinction between the strong, weak, and EM interactions, and to classify the hadrons into families. This approach resulted in the Eightfold Way classification scheme, a precursor to the quark.


The energy-momentum and angular momentum conservation laws were well rooted in fundamental beliefs about space and time. According to quantum mechanics, which prevails here, orbital angular momentum is not a continuous, arbitrary quantity, but is restricted to certain values which are integral multiples of h-bar: it is quantized.


Physicists also concluded that it made sense to ascribe an intrinsic angular momentum or ‘spin’ to the elementary particles, in addition to their orbital angular momentum. This spin is also supposed to be quantized, in integral or half-integral multiples of h-bar. Each species of particle was supposed to carry a fixed spin which could not be changed. Half-integral spin particles such as the electron and proton were known as fermions; integral spin particles were called bosons. Since angular momentum, both orbital and spin, is a vector, and since a vector has magnitude and direction, the direction of the spin is also quantized: a spin-1/2 particle must have either +1/2, a parallel alignment of the spin with a chosen axis, or -1/2, an anti-parallel alignment.
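The quantization rules above can be stated compactly (standard quantum mechanics notation, not taken from the book):

```latex
% Orbital angular momentum is quantized in integral multiples of hbar:
L_z = m\hbar, \qquad m \in \{-l, -l+1, \dots, +l\}, \qquad l = 0, 1, 2, \dots
% Spin is fixed per particle species, integral or half-integral,
% with a quantized projection along any chosen axis:
S_z = m_s\hbar, \qquad m_s \in \{-s, -s+1, \dots, +s\}
% e.g. a spin-1/2 fermion such as the electron has exactly two projections:
m_s = +\tfrac{1}{2} \ \text{(parallel)} \quad \text{or} \quad m_s = -\tfrac{1}{2} \ \text{(anti-parallel)}
```

A particle of spin s thus has 2s + 1 allowed orientations along the chosen axis.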


p. 50-51


Three other conservation laws have been established: conservation of electric charge, conservation of baryon number, and conservation of lepton number. Conservation of charge is based on electrodynamics, while baryon and lepton numbers are analogical extensions of the charge concept to explain empirical regularities. In the late 1970s the conservation of baryon and lepton number came under theoretical challenge.


p 52-53


The above conservation laws were regarded by particle physicists as having unlimited and universal scope. A second class of laws were believed to apply in some interactions but not in others. Parity was regarded as absolute, since it was based on the mirror symmetry of physical processes. Then, in 1956, it was discovered that parity conservation was violated in certain processes: the weak interactions. It was maintained in the EM and strong interactions.


A similar comment applies to two other quantum numbers: ‘strangeness’ and ‘isospin’.




Strangeness can be thought of as a conserved charge, like electric charge, baryon number, and lepton number. Isospin on the other hand was a vector quantity like spin.


Isospin was an extension of the pre-war ‘charge independence’ hypothesis, and asserted that hadrons having similar masses, the same spin, parity and strangeness, but different electric charges, were identical as far as the strong interactions were concerned. Thus in their strong interactions, neutron and proton appeared to be indistinguishable. This was formalized by assigning each group of particles an isospin quantum number: the neutron and proton, now collectively called nucleons, were assigned isospin 1/2; the pion, in all of its charge states, isospin 1; etc.


Because isospin is supposed to be a vector, its ‘3-component’ I3 was also supposed to be quantized, and different values of I3 were taken to distinguish between the different members of each isospin family, or multiplet.
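The multiplet bookkeeping described above can be sketched as a small calculation (the particle assignments are standard ones, not specific to the book): a family of isospin I has 2I + 1 members, one per allowed value of I3.

```python
from fractions import Fraction

def multiplet_members(isospin):
    """Allowed I3 values for a given isospin I: -I, -I+1, ..., +I."""
    I = Fraction(isospin)
    count = int(2 * I) + 1      # a multiplet of isospin I has 2I + 1 members
    return [-I + k for k in range(count)]

# Nucleon doublet: isospin 1/2 -> two members (neutron at I3 = -1/2, proton at +1/2)
print(multiplet_members("1/2"))

# Pion triplet: isospin 1 -> three charge states (pi-, pi0, pi+)
print(multiplet_members(1))
```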


p. 53-54


Use of conservation laws resulted in the Eightfold Way classification scheme, of which the quark was the direct descendant.


SU(3): The Eightfold Way:

Isospin brought economy as well as order by grouping particles into multiplets. In the 1950s, many theorists attempted to build on this, looking for ways to achieve greater economy by grouping particles into larger families. These efforts culminated in 1961 with the ‘Eightfold Way’, or SU(3), classification of hadrons.


In theoretical physics, there is a direct connection between conserved quantities and ‘symmetries’ of the underlying interaction. Thus the strong interaction, which conserves isospin, is said to possess isospin symmetry. This corresponds to the fact that the choice of axis against which to measure the 3-component of isospin is arbitrary. The strong interaction is said to be invariant under transformations of I3.

Such changes of axes are called ‘symmetry operations’, and form the basis of a branch of math called ‘group theory’. Different symmetries correspond to different symmetry groups, and the group associated with the arbitrariness of the orientation of the isospin axis is denoted ‘SU(2)’.


By making the connection with group theory, physicists translated the search for a classification wider than isospin into a search for a more comprehensive group structure; a higher symmetry of the strong interaction with representations suitable to accommodate particles of different isospin and strangeness.


In 1961, Gell-Mann and the Israeli theorist Yuval Ne’eman proposed the Eightfold Way, which mathematicians denote as ‘SU(3)’. They were trying to set up a detailed quantum field theory of the strong interaction. However, the rapid development of the quark theory alternative to the field theory approach to the strong interaction caused field theory to fall out of fashion, so SU(3) was divorced from its roots in gauge theory and used as a classification system.


Agreement that the SU(3) multiplet classification was found in nature was only reached after several years’ debate within the HEP community.


One problem was that early experiments indicated the lambda and sigma baryons were of opposite parity, which made the SU(3) assignment of the lambda and sigma to the same family impossible, and favored alternative classification schemes. In 1963, CERN experimenters reported that lambda and sigma had the same parity, thus favoring the SU(3) assignment.


By 1964, the predictive and explanatory successes of SU(3) were so great that there was little argument in the HEP community that SU(3) was the appropriate classification scheme for hadrons.


3.3 Quantum Field Theory

p. 60- 73.

Discusses how HEP theorists attempted to extend the use of quantum field theory from EM to the weak and strong interactions. This attempt bore fruit in the early 1970s with the elaboration of gauge theory. The approach was pragmatically useful for the weak interaction, but failed miserably for the strong interaction.


The use of conservation laws, symmetry principles and group theory brought some order into the proliferation of particles. Besides a classification scheme for hadrons, the Eightfold Way (SU(3)) also broadly predicted some relationships. In pursuit of a more detailed dynamical scheme, HEP theorists inherited quantum field theory: the quantum-mechanical version of classical field theory, i.e., the field theory of EM developed by Maxwell.





The QED Lagrangian p. 61


L(x) = Ψ̄(x)DΨ(x) + m Ψ̄(x)Ψ(x) + (DA(x))^2 + e A(x)Ψ̄(x)Ψ(x)


L(x) is the Lagrangian density at space-time point x

Ψ(x) and Ψ̄(x) represent the electron and positron fields at point x

A(x) is the electromagnetic field

D is the differential operator, so that DΨ and DA represent field gradients in space-time

e and m represent the charge and mass of the electron


Each term of this equation can be represented by a diagram


The first term of the equation corresponds to a straight line with an arrow, and represents an electron or positron of zero mass traveling through space. The second term adds mass to the electron or positron. The third term represents a massless photon, the quantum of the EM field.


According to the uncertainty principle, the range over which forces act is limited by the mass of the particle responsible for them, so only zero mass particles can give rise to forces that travel macroscopic distances.
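The range-mass tradeoff can be made concrete with the standard back-of-envelope estimate range ≈ ħ/(mc); the numerical constants below are standard modern values, not taken from the book.

```python
# Estimate of a force's range from its mediator's mass, via the uncertainty
# principle: range ~ hbar / (m * c) = (hbar * c) / (m * c^2).
HBAR_C_MEV_FM = 197.327      # hbar * c in MeV * fm (standard value)

def force_range_fm(mediator_rest_energy_mev):
    """Approximate range, in femtometres, of a force carried by a mediator
    of the given rest energy m*c^2 in MeV. Infinite for a massless carrier."""
    if mediator_rest_energy_mev == 0:
        return float("inf")   # massless mediator (the photon): macroscopic range
    return HBAR_C_MEV_FM / mediator_rest_energy_mev

print(force_range_fm(139.57))  # pion-mediated nuclear force: roughly 1.4 fm
print(force_range_fm(0))       # photon: inf
```

This reproduces the familiar ~1.4 fm scale of the nuclear force and the unlimited range of electromagnetism.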


If only the first three terms are included in the Lagrangian, QED is an exactly soluble theory. The dynamics of any collection of electrons, positrons, and photons can be exactly described. Whatever particles are present simply propagate through space: a trivial result. The fourth term is ‘trilinear’, containing two electron fields and one photon field, and represents the fundamental interaction, or coupling, between electrons and photons.


In quantum field theory, all forces are mediated by particle exchange: In QED the force transmitting particle is the photon.


Exchanged particles in the Feynman diagrams are not observable through recording of their tracks (say, using a bubble chamber). This is because there is a difference between ‘real’ and ‘virtual’ particles. All ‘real’ particles, such as the incoming and outgoing particles in a Feynman diagram, obey the relationship E^2 = p^2 + m^2 (in units where c = 1), where E = energy, p = momentum, and m = mass. Exchanged particles need not obey this relationship, and are said to be ‘virtual’, or ‘off mass-shell’, particles.


In quantum physics, as a result of the Uncertainty Principle, virtual particles can exist, but only for an experimentally undetectable length of time. 
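A minimal numerical sketch of the real/virtual distinction, in units where c = 1 (the example values are illustrative, not from the book):

```python
import math

def on_mass_shell(E, p, m, tol=1e-9):
    """A 'real' particle satisfies E^2 = p^2 + m^2 (units with c = 1);
    an exchanged 'virtual' particle need not."""
    return abs(E**2 - p**2 - m**2) < tol

# A real electron (m = 0.511 MeV) with momentum p = 1.0 MeV has its energy
# fixed by the mass-shell relation:
m_e, p = 0.511, 1.0
E = math.sqrt(p**2 + m_e**2)
print(on_mass_shell(E, p, m_e))        # True

# A virtual photon (m = 0) exchanged in a collision can carry momentum with
# no matching energy transfer, putting it off the photon mass shell:
print(on_mass_shell(0.0, 1.0, 0.0))    # False
```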


Pre-World War II QED suffered from a theoretical ‘disease’: calculations involved infinite integrals.


The cure was renormalization, which transformed QED into the most powerful and accurate dynamical theory ever constructed.


It was found that if one puts up with the infinities, but at the end of the calculation sets the apparently infinite mass and charge of the electron equal to their measured values, one arrives at sensible results. This absorption of the infinities into the values of physical constants appeared intuitively suspect, but it immediately gave a quantitative explanation of the Lamb shift. In renormalized QED, calculations of EM processes could be pushed to arbitrarily high orders of approximation and always seemed to come out right.


The renormalization idea had been suggested before the war by Weisskopf and Kramers, and was carried through in the late 1940s by Sin-Itiro Tomonaga et al. in Japan, and by Julian Schwinger and Richard Feynman in the US. In 1949, the British theorist Freeman Dyson completed the proof of the renormalisability of QED to all orders of approximation and showed the equivalence of the different approaches of Tomonaga, Schwinger, and Feynman. Feynman’s approach was the most intuitive, using diagrams now called Feynman diagrams. He derived a set of rules for associating a mathematical expression with a diagram, now known as the Feynman rules, and the infinite integrals are now known as Feynman integrals.


Because of the success of renormalized QED, HEP theorists in the early 1950s tried to apply the same methodology to the strong and weak interactions, by quantizing appropriate fields. This did not bear fruit, however.


The quantum field theory of weak interaction, despite its theoretical shortcomings, offered a coherent organizing principle for weak interaction research.


The quantum field theory of the strong force met severe difficulties which were not overcome.


Interest in quantum field theory waned until 1971, when there was an explosion of interest in one particular class of quantum field theories: gauge theories.



3.4: The S-Matrix: p. 73-78

Reviews the struggle to salvage something from the wreckage of applying quantum field theory to the strong interaction. What emerged was the ‘S-matrix’ approach to strong interactions. The S-matrix was founded in quantum field theory but achieved independence, and was seen, in the ‘bootstrap’ formulation, as an explicitly anti-field-theory approach.


The entire array of probabilities for transitions between all possible initial and final states became known as the S-matrix, or scattering matrix. The S-matrix approach was taken up in response to the infinities problem in QED, but then dropped with the adoption of the renormalization solution.


All that can be observed in the strong interaction is that, in the course of an experiment, an initial collection of particles with specific quantum numbers and momenta is transformed into a different final state. One cannot observe nucleons exchanging pions. Thus Feynman-type diagrams, and the corresponding equations, are not really applicable, so physicists tried applying quantum field theory and diagrammatic techniques to revisit the S-matrix.


In the 1960s, work by Geoffrey Chew, Murray Gell-Mann et al. established that the S-matrix could be regarded as an ‘analytic’ function of the relevant variables, based on functions of complex variables. Thus S-matrix theory could be pursued as an autonomous research project independent of field theory. Many physicists abandoned the traditional field theory approach to interactions in favor of exploring this option.


Chew expounded the explicitly anti-field-theory ‘bootstrap’ philosophy: the S-matrix led to an infinite set of coupled differential equations by which all properties of all hadrons could be determined. Each particle does not have to be assigned its own quantum field. By truncating the infinite sets of equations, it was possible to calculate the properties of certain particles.


The Italian theorist Tullio Regge used complex energy and momentum variables to study non-relativistic potential scattering. Chew et al. translated Regge’s ideas into a relativistic S-matrix format. For several types of particles at high energies and small momentum transfers, experimental results supported the theory.


Regge theory also predicted cross sections that varied smoothly with the square of energy, and softly with respect to momentum transfer, for the high energy regime above the resonance region: characteristics observed for high energy hadron scattering. Enormous sets of data were generated at accelerator labs in the 1960s. But these developments were not related to the ‘new physics’ of the quark concept.


Part II: Constructing Quarks and the Founding of the New Physics.


Chapter 4:

The Quark Model


In the 1960s, the old physics split into two branches: at high energies, the focus was on soft scattering via Regge models, while at low energies, resonances were studied. In the early 1960s, group theory was the most popular framework for resonance analysis, but by the mid-1960s quark models took over this role.


4.1 The Genesis of Quarks

In 1964, Murray Gell-Mann proposed that hadrons were composite, built up from more fundamental entities which themselves manifested the SU(3) symmetry. Gell-Mann remarked that “a …more elegant scheme can be constructed if we allow non-integral values for the charges [of four basic components]”. This was the quark model.


George Zweig also proposed a quark model, and both were elaborated into a distinctive tradition of experimental practice.


Gell-Mann abstracted the name ‘quark’ from James Joyce’s Finnegans Wake.


There were some similarities in their models, including the use of fractional quantum numbers. Apart from the 1/3-integral baryon number, the oddest thing about quarks was their electric charge: +2/3 for the u (up) quark, and -1/3 for the d (down) and s (strange) quarks.


The group theory approach to hadronic symmetries invited the quark viewpoint.


Gell-Mann’s and Zweig’s quark formulations  were both open to the same empirical objection: although the properties of quarks were quite distinctive, no such objects had ever been detected.


Gell-Mann notes:


“… A search for stable quarks … at the highest energy accelerators would help to reassure us of the non-existence of real quarks.”


Gell-Mann’s statement neatly exemplifies the ambiguities felt about the status of quarks.


Gell-Mann was particularly prone to suspicions concerning their status.


Although experimenters continued to look for quarks throughout the 1960s, 1970s, and 1980s, an acknowledged quark was never found.


The lack of direct evidence did much to undermine the credibility of the quark model in its early years, but variants of the quark model showed growing success in explaining a wide range of hadronic phenomena. The two principal variants could be traced back to Gell-Mann’s model and to Zweig’s model, the latter known as the Constituent Quark Model.


4.2 The Constituent Quark Model (CQM)  

Zweig treated quarks as physical constituents of hadrons, and thus derived all of the predictions of SU(3). In 1963 he concluded: “In view of the extremely crude manner in which we have approached the problem, the results we have obtained seem somewhat miraculous.”


Zweig further wrote: “The reaction of the theoretical physics community to the ace [quark] model was generally not benign…”


In the old physics, there were two principal frameworks for theorizing about the strong interaction: quantum field theory and S-matrix theory. The CQM was unacceptable to protagonists of both of these frameworks.


To explain why free quarks were not experimentally observed, the obvious strategy was to suppose that they were too massive for the accelerators of the day (mid-1960s), which had insufficient energy to produce them. This implied quark masses of at least a few GeV. However, in combination with one another, quarks were supposed to make up much lighter hadrons, the 140 MeV pion being the most extreme example.


The idea that the mass of the bound state should be less than the total mass of the constituents was familiar from nuclear physics, where the difference was known as the ‘binding energy’ of the nucleus.


However, that meant that the binding energy in hadrons was of the same order as the masses of the quarks themselves, as opposed to the situation in nuclear physics, where binding energies were small fractions of the nuclear mass. This implied very strong binding, which field theorists did not know how to calculate. Field theorists were left cold by the quark theory.
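The contrast can be made numerical. A rough sketch (my own illustration: the deuteron figures are standard textbook values, and the few-GeV quark mass is the hypothetical value discussed above):

```python
# Binding energy as a fraction of total constituent mass (masses in MeV).

def binding_fraction(constituent_masses, bound_state_mass):
    """Binding energy = (sum of constituent masses) - (bound-state mass)."""
    total = sum(constituent_masses)
    return (total - bound_state_mass) / total

# Nuclear case: the deuteron (proton + neutron), binding about 2.3 MeV.
nuclear = binding_fraction([938.3, 939.6], 1875.6)

# Hypothetical quark case: a 140 MeV pion built from two ~5 GeV quarks.
quark = binding_fraction([5000.0, 5000.0], 140.0)

print(f"deuteron: {nuclear:.2%}")   # a small fraction of a percent
print(f"pion:     {quark:.2%}")     # nearly all of the constituent mass
```

Nuclear binding is a tenth of a percent of the mass; the hypothetical quark binding would be nearly one hundred percent, which is the regime field theorists could not calculate.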


On the other hand, at the heart of S-matrix theory, especially in the bootstrap version, there were no fundamental entities.


Although the CQM failed experimentally and was theoretically disreputable, it had great heuristic value: it became a resource for explaining data.


It brought resonance physics down to earth. The group theory properties of hadrons were reformulated in terms of quarks.


Hadron spectroscopy

The straightforward CQM conjecture was that hadrons constituted the spectrum of energy levels of composite quark systems.


In the late 1960s, CQM analysis of hadron mass spectra grew into a sophisticated phenomenological tradition.


It was concluded that the simplest quark model could reproduce qualitatively the gross features of the established hadron resonance spectra.


Besides classifying hadrons and explaining their mass spectra, CQM enthusiasts analyzed hadron production and decay mechanisms. Again theorists drew their inspiration from the standard treatment of composite systems in atomic and nuclear physics.  Again the procedure was to load the properties of hadrons onto their constituent quarks.



The agreement between theory and data for the EM decay of the positively charged delta resonance was one of the first successes of the application of the CQM to hadronic couplings.


Models were also constructed of the strong and weak hadronic couplings, which sometimes led to paradoxical results.


From the theorist’s perspective, CQM was an explanatory resource for the interpretation of data.

However, just as the theorist’s practice was structured by the products of experiment, so the experimenter’s practice was structured by the products of the theorist’s research. Through the  medium of CQM, theorists and experimenters maintained a mutually supportive symbiotic relationship.


With the advent of quarks, the symbiosis between theory and experiment became more intimate. This is because experimenters began to move on from the low mass classic hadrons to higher mass hadrons which were more difficult to identify. Zweig observed, even in 1963, that “particle classification was difficult because many [resonance] peaks …were spurious”, and that of 26 meson resonances listed in an authoritative compilation, 19 subsequently disappeared. Kokkedee noted: “because of the unstable experimental situation many detailed statements of the model are not guaranteed against the passage of time.”


CQM theoretical analysis offered experimenters specific targets to aim at. The CQM also made exploration of the low-energy resonance regime interesting: something worth the time, money, and effort. In turn, the experimental data on resonances thus made available encouraged more work on the CQM.




4.3 Quarks and Current Algebra

Above all else, Gell-Mann was a field theorist. Characteristic of his approach was the idea that one should take traditional perturbative field theory, modeled on QED, and extract whatever might be useful. He sought, that is, to use field theory heuristically to arrive at new formulations; one example is his exploration of the analytic properties of the S-matrix using perturbative quantum field theory as a guide. The current-algebra approach is another example.

Gell-Mann proposed that in discussing the weak and EM properties of hadrons, one should forget about traditional QFT and regard weak and EM currents as the primary variables. Hadronic currents were postulated to obey an SU(3)×SU(3) algebra; hence ‘current algebra’ is the term for the work which grew up around his proposal. Current algebra had immediate phenomenological applications, since currents were directly related to observable quantities via standard theoretical techniques.

Although Gell-Mann was clearly working within the field-theory idiom, the algebraic structure relating the currents could not be said to derive from field theory [so what?]. Gell-Mann then observed that the relationships between the weak currents required to explain experimental data were the same as those between currents constructed from free quark fields.


4.4 The Reality of Quarks

Although the CQM provided the best available tools for the study of low-energy strong-interaction hadronic resonances, and the current-algebra tradition, which frequently called on quark concepts, was at the cutting edge of research into the weak and EM interactions of hadrons, throughout the 1960s and 1970s papers, books, and reviews were filled with caveats concerning quark reality.


British quark modeler Frank Close noted in 1978: “there was much argument … [in the 1960s] as to whether these quarks were real physical entities or just an artifact that was a useful mnemonic aid when calculating with unitary symmetry groups.”


George Zweig: ‘the quark model gives us an excellent description of half the world’: in low-energy resonance physics, hadrons appeared to be quark composites, but in high-energy soft scattering, they were Regge poles. Hadronic structure differed according to who was analyzing it.


For a while, quark and Regge modelers co-existed peacefully, with no one group claiming priority.


The image of hadrons and quarks differed between the CQM and the current-algebra tradition.


In the CQM, theorists used ‘constituent quarks’ to represent the hadrons of definite spin observed in experiment. In current algebra, theorists built hadrons out of ‘current quarks’, but these were not the individual hadrons observed in experiment: they were combinations of different hadrons having different spins, so in some sense the two types of quark were different objects.


Gell-Mann’s view was that the two forms were somehow related. Lots of work was done to explore the possible relationship which led to new insights into the classification and properties of hadrons, but a full understanding was never achieved. The CQM and current-algebra traditions remained separate, with their own quark concept.


The theoretical wing of the old physics in the mid-1960s thus consisted of Regge theory, in which it was possible to discuss hadron dynamics without reference to quarks; the CQM; and the current-algebra tradition, the latter two each having its own quark concept.





Chapter 5:

Scaling, Hard Scattering and the Quark-Parton Model (p. 125-158)


5.1 Scaling at Stanford Linear Accelerator Center

In elastic electron-proton scattering, an incoming electron exchanges a photon with the target proton. The beam and target particles retain their identity and no new particles are produced. The process is primarily EM and assumed to be understood by QED. Electrons are assumed to be structureless points, while protons are assumed to occupy a finite volume and to have structure. Measurements of electron-proton scattering could therefore be considered to probe the proton’s structure, and in particular to explore the spatial distribution of the proton’s electric charge.


Electrons frequently scattered off one another at large angles, while electron-proton scattering was normally at small angles. That is, electrons seemed ‘hard’ while protons seemed ‘soft’ and served to diffract incoming electrons, at both low and high energies.


As in optical physics, the dimensions of a diffracting object could be inferred from the diffraction pattern it produced, and on this basis the proton had a diameter of about 10⁻¹³ cm.


To study inelastic electron-proton scattering, in 1967 a Stanford-MIT study group fired the electron beam at a proton target (liquid hydrogen) and counted how many ‘system’ particles emerged with given energies and at given angles to the beam axis (inclusive measurements).


For the small-angle scattering of low-energy electrons, experimenters found what they expected: the most important process was resonance production, with resonance peaks typically seen at three different values of the mass of the produced hadronic system.


However, for larger scattering angles and higher beam energies, they found that although the resonance peaks disappeared, the measured cross sections remained “large”, suggesting that the proton contained hard, point-like scattering centers which resembled electrons. The experimenters, however, did not know what these results meant.


In the 1960s, James Bjorken concluded, based on a variety of non-rigorous perspectives, that the inelastic cross sections for larger scattering angles and higher beam energies would have to be large. He also inferred that in the limit where q² (the momentum transfer) and ν (a measure of the energy lost by the electron in the collision) become large, but with a fixed ratio (the Bjorken limit), W₁ and νW₂ would not depend upon ν and q² independently; rather, they would be functions only of the ratio ν/q², where W₁ and W₂ are ‘structure functions’. Drell and Walecka had previously determined that the inelastic electron-proton scattering cross section could be expressed in terms of these structure functions. Curves of W₁ and νW₂ became known as scaling curves. To experimenters and many theorists these relationships were esoteric to the point of incomprehensibility. A few years later, the response of the HEP community was more enthusiastic.
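In the notation that soon became standard (my gloss; the notes above use the equivalent ratio ν/q², and the choice of scaling variable ω is an assumption of convention), Bjorken’s scaling prediction is usually written:

```latex
% Bjorken limit: q^{2}\to\infty,\ \nu\to\infty,\ with \omega fixed
W_{1}(\nu, q^{2}) \;\to\; F_{1}(\omega),
\qquad
\nu W_{2}(\nu, q^{2}) \;\to\; F_{2}(\omega),
\qquad
\omega \equiv \frac{2M\nu}{q^{2}},
```

where M is the proton mass; the structure functions collapse onto single curves in the scaling variable ω.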


5.2 The Parton Model

In 1968, Feynman formulated the ‘parton model’ explanation of scaling on a visit to SLAC.







[figure: feynman abc]

Feynman Diagram for:

non interacting electron;

a single point in space


emission and reabsorption of a photon by an electron;

(an EM reaction; a first order approximation to QED)


photon conversion into an electron-positron pair

(an EM reaction; a second order approximation to QED)


At higher orders of approximation to QED, ever more photons and electron-positron pairs appear.

When the higher terms are taken into account, the electron appears not as a single point but as an extended cloud of electrons, positrons, and photons, which together carry the quantum numbers, energy, and momentum of the physically observed electron.

There are an indefinite number of possible orders.


However, because of the weakness of the EM interaction, the higher-order QED corrections (successive perturbations) are progressively smaller.

This means that for most practical purposes, electrons may be regarded as point-like particles.


[figure: feynman proton]



In an analogous fashion for a proton, or any hadron:


For a proton, the strong as well as EM interactions need to be considered



Feynman Diagram for:

non interacting proton;

emission and reabsorption of a pion by a proton;

pion conversion into a nucleon-antinucleon pair


Up to this point the analogy between electrons and protons is valid, but the strong interaction is large, and successive terms in the perturbation expansion are larger.

From the field-theory perspective, since there is also an indefinite number of orders, the proton has a complex structure.


The field-theory perspective was welcome because the finite size explained why the elastic electron-proton scattering cross sections decreased diffractively with momentum transfer, but unwelcome because having to deal with a proton as a particle cloud rather than as a single point made the task too difficult.


Feynman took the view that the swarms of particles within hadrons, whether quarks and antiquarks or nucleons and antinucleons, were of indefinite quantum numbers, and called them ‘partons’. He reasoned that in high-energy collisions between protons, each proton sees the other as relativistically contracted into a flat disc, and the two have very little time to react to one another; the interactions among the partons within an individual disc would therefore be negligible during the collision.


Scaling emerged as an exact prediction of the parton model, which became central to the practice of increasing numbers of HEP scientists.


The popularity of the parton model could not lie in its success at explaining scaling per se; Bjorken and others had explanations that worked just as well. Neither can its popularity be ascribed to the validity of its predictions: straightforward extensions often led to conflicts with observation.

Bjorken’s calculations led first to the prediction of scaling, then to the observation that scaling corresponded to point-like scattering. Feynman started with point partons and predicted scaling. The parton model provided an analogy between the new phenomenon of scaling and the idea of composite systems, known and comfortable to physicists.


A well-known analogy existed between Rutherford’s interpretation of point-like scattering in terms of a nucleus within the atom and Feynman’s interpretation of scaling in terms of point-like partons within the proton. A Scientific American article written by physicists concluded that the Stanford experiments were fundamentally the same as Rutherford’s. But this was not an accurate statement, since the atom is believed to be made of a single central nucleus surrounded by an electron cloud, while the parton model implied that the proton is a single amorphous cloud of partons.


Feynman’s parton model effectively bracketed off the still-intransigent strong interactions: all their effects were contained in the parton momentum distributions. Once these distributions were known or guessed, calculations of electron-hadron scattering reduced to simple first-order QED calculations. The only difference was that the quantum numbers of the partons were unknown. One popular speculation held that the partons had the same quantum numbers as quarks, and this quickly came to be accepted.


The parton approach and the constituent quark model approach were both composite model approaches. However, to explain scaling, one had to treat partons as free particles, while the existence of strong interactions binding quarks into hadrons was at the heart of the CQM. Further, the CQM had its greatest success in low energy resonance physics, while the parton model was applied to the newly discovered high energy high momentum transfer scaling regime.


Partons quickly became identified with quarks for most particle physicists. This meant that if a struck quark remained a free particle, it should shoot out of the proton and appear as debris of the collision. However, free quarks had never been observed. It was therefore assumed that after its initial hard interaction, the quark subsequently undergoes a series of soft, low-momentum strong interactions with its fellow partons. Such assumptions were unavoidable and theoretically unjustifiable.


5.3 Partons, Quarks, and Electron Scattering

Although the parton model was Feynman’s creation, it was taken up at SLAC. Field theorists, following Feynman’s lead, began to practice their art in the realm of the strong interaction, leading eventually to the formulation of QCD.


The parton model was used phenomenologically to give structure and coherence to the experimental program at SLAC. The identification of quarks with partons was grounded in the phenomenological analysis of scaling. Besides their scaling property, the magnitudes and shapes of the structure functions could be measured, and attempts were made to back out attributes of the partons by fitting the structure-function data.


More information on parton quantum numbers was sought from an examination of the individual structure functions. According to QED, each species of parton was supposed to contribute to the total structure function of the proton in proportion to the square of its electric charge. Hence, by making assumptions about the parton composition of the proton, estimates of the proton structure functions could be made.
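The charge-squared weighting can be illustrated with the simplest possible assumption, that the proton and neutron contain only their three ‘valence’ quarks (a naive sketch of my own; the full parton-model estimates also involved quark-antiquark and gluon clouds):

```python
from fractions import Fraction

# Quark electric charges in units of e.
CHARGE = {"u": Fraction(2, 3), "d": Fraction(-1, 3), "s": Fraction(-1, 3)}

def charge_squared_sum(quarks):
    """Each parton species contributes in proportion to its charge squared."""
    return sum(CHARGE[q] ** 2 for q in quarks)

proton = charge_squared_sum(["u", "u", "d"])   # 4/9 + 4/9 + 1/9 = 1
neutron = charge_squared_sum(["u", "d", "d"])  # 4/9 + 1/9 + 1/9 = 2/3

print(proton, neutron, neutron / proton)  # 1 2/3 2/3
```

On this naive counting, the neutron structure function would be 2/3 of the proton’s; it was mismatches of exactly this kind between such estimates and the data that drove the elaborations described next.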


Such estimates proved to be in error, even when, to conform with parton requirements, a quark-antiquark cloud (an indefinite number of quark-antiquark pairs) was adopted.


Still, theorists elaborated the quark-parton model and introduced ‘glue’ into the make-up of the proton. The argument was that IF the nucleon were simply a composite of non-interacting quarks, THEN it would fall apart unless some ‘glue’ held it together; thus the ‘gluon’ was hypothesized and incorporated into the structure-function model. The gluons were assumed to be electrically neutral but to carry momentum. Like the quark-antiquark cloud, the gluon component was another free parameter which, a critic might argue, could be tweaked to help reconcile the expected properties of quarks with experimental findings. Many physicists were skeptical of such an approach. Something more was required to persuade the HEP community to accept the parton model, and the equation of partons with quarks. Neutrino scattering experiments provided that something more.


5.4 Neutrino Physics.

Theorists quickly extended the parton model to neutrino and antineutrino scattering from nucleons. The weak neutrino-parton interaction could be seen as very similar to the EM electron-parton interaction: neutrino-parton scattering is said to be mediated by the exchange of a W particle, just as the EM electron-parton interaction is mediated by a photon. Although HEP neutrino physics was bolstered by Brookhaven AGS and CERN studies in the early 1960s, no W particles (intermediate vector bosons, IVBs) were found in the ‘first round’ of experiments.


Nevertheless, by 1973 there was general agreement that the parton model, with partons identified as quarks, was a promising approach to the analysis of both electron and neutrino scattering. Theoretical arguments could still be brought against the quark-parton idea: why were quarks not produced as free particles in deep inelastic scattering, and what was the mysterious gluon component? But Feynman remarked that “there is a great deal of evidence for, and no experimental evidence against, the idea that hadrons consist of quarks … let us assume it is true.” Many physicists did just that.


5.5 Lepton-Pair Production, Electron-Positron Annihilation and Hadronic Hard Scattering

The quark-parton model was extended to certain other processes; notably Lepton-Pair Production, Electron-Positron Annihilation and Hadronic Hard Scattering.


Lepton-pair production could be visualized as a quark and antiquark, drawn from the colliding partons of two protons, annihilating to form a photon which then materializes as a lepton pair; the quark and antiquark themselves were never observed. Another Brookhaven AGS study, conducted in 1968, bolstered lepton physics but again failed to find any W particles.


Electron-positron annihilation was visualized as an electron and positron annihilating to form a photon which materializes as a [never observed] quark-antiquark pair, which would then somehow rearrange itself into hadrons. Collider data from Frascati, Italy, in the late 1960s were in line with general expectations of the quark-parton model, but the model did not explain the data in detail. This encouraged theorists to endow quarks with yet another set of properties, known as ‘color’.
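A standard parton-model quantity here (my own illustration; the specific R ratio is textbook material rather than something quoted in these notes) is the ratio R of hadron production to muon-pair production, which in the model is just the charge-squared sum over quark species, multiplied by 3 if each quark comes in three colors:

```python
from fractions import Fraction

CHARGE = {"u": Fraction(2, 3), "d": Fraction(-1, 3), "s": Fraction(-1, 3)}

def r_ratio(quark_species, n_colors=1):
    """R = sigma(e+e- -> hadrons) / sigma(e+e- -> mu+mu-) in the parton model."""
    return n_colors * sum(CHARGE[q] ** 2 for q in quark_species)

print(r_ratio("uds"))              # 2/3 without color
print(r_ratio("uds", n_colors=3))  # 2 with three colors
```

The factor-of-three enhancement from color was one way the extra quark property could be reconciled with annihilation data.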


With the advent of the proton-proton collider, the Intersecting Storage Rings (ISR) at CERN, several groups of experimenters again set out to hunt for IVBs (the W particles), but were unsuccessful. It was not clear how the parton model should be extended to hadronic hard scattering, yet an exciting new phenomenon was discovered: the excess production of hadrons at high transverse momentum (pT).


Chapter 6:

Gauge Theory, Electroweak Unification and the Weak Neutral Current

New physics theory incorporated two sets of resources: quark models of hadron structure, and a class of quantum field theories known as gauge theories.


Gauge theory was invented in 1954 by C. N. Yang and R. L. Mills. Modeled closely on QED, gauge theory initially enjoyed some popularity, and was one of the resources leading to the development of the Eightfold Way symmetry classification of hadrons. Soon, however, quantum field theory went into decline, and with it gauge theory.


Electroweak unification amounted to the representation of the EM and weak interactions in terms of a single gauge theory. There was a drawn-out struggle to show that gauge theory was, like QED, renormalizable. Once this was established, gauge theory became a major theoretical industry, and one important branch of that industry was the construction of unified gauge-theory models of the electroweak interaction. Such models predicted the existence of new phenomena in neutrino experiments, and one such phenomenon, the weak neutral current, was reported from the Gargamelle neutrino experiment at CERN in 1973 and confirmed at Fermilab the following year. Thus the gauge theory proposed by Yang and Mills in 1954 finally made contact with experiment 19 years later.


6.1 Yang-Mills Gauge Theory (p. 160-165)

Yang became interested in the strong interactions, and studied quantum field theory, specifically referring to articles published by Wolfgang Pauli which emphasized the importance of gauge invariance in QED.


There is a certain arbitrariness in classical EM as represented by Maxwell’s equations, which are formulated in terms of electric and magnetic fields, which can be expressed as derivatives of vector and scalar potentials. The arbitrariness arises because one can modify the potentials in space and time, without changing the associated fields. Thus, classical EM is said to exhibit gauge invariance. This invariance carries over into the quantized version of electromagnetism, QED.


The QED Lagrangian (p. 161):


L(x) = Ψ̄(x)DΨ(x) + mΨ̄(x)Ψ(x) + (DA(x))² + eA(x)Ψ̄(x)Ψ(x)


where:

L(x) is the Lagrangian density at space-time point x;

Ψ(x) and Ψ̄(x) represent the electron and positron fields at point x;

A(x) is the electromagnetic field;

D is a differential operator, so that DΨ and DA represent field gradients in space-time;

e and m represent the charge and mass of the electron.


L(x) is invariant under the following gauge transformations:


Ψ(x) → Ψ(x) e^{iΛ(x)}

A(x) → A(x) + DΛ(x)


where Λ(x) is an arbitrary function of space-time; i.e., L(x) remains unchanged when these transformed quantities are substituted into the Lagrangian.
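In the standard textbook notation (my gloss; the book keeps a schematic form with constants absorbed), the U(1) gauge transformation of QED and the covariant derivative that makes the invariance manifest read:

```latex
\psi(x) \to e^{i\alpha(x)}\,\psi(x),
\qquad
A_\mu(x) \to A_\mu(x) - \tfrac{1}{e}\,\partial_\mu \alpha(x),
\qquad
D_\mu = \partial_\mu + i e A_\mu(x),
```

so that D_μψ transforms with the same phase as ψ, and the combination ψ̄ iγ^μ D_μ ψ is gauge invariant.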


The existence of photons interacting with electrons is a formal requirement of a gauge invariant theory of EM, or QED.


Yang and Mills tried to model a theory of the strong interaction on QED. They then attempted to follow the standard route to physical predictions via Feynman rules and diagrams, but failed. The mathematical complexities were such that Yang later recalled ‘We completely bogged down…’, but they nevertheless published a paper on the topic. This was the starting point of the gauge-theory tradition in HEP.


Theorists then began developing different versions of  this gauge theory and tried to bring the predictions of gauge theory to the data.


A major obstacle was the ‘zero mass problem’. The Yang-Mills Lagrangian, modeled on the QED (EM interaction) Lagrangian, contains, like the QED version, no mass terms for the gauge particles. So, according to the uncertainty principle, any forces mediated by the exchange of the massless W particles of Yang-Mills gauge theory would be long range. However, the strong and weak interactions are short range. Gauge theory thus appeared inapplicable to these interactions.
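The uncertainty-principle connection between mediator mass and range can be made concrete with the standard Yukawa estimate R ≈ ħc/(mc²) (my own illustration, using the pion as the familiar benchmark; the specific numbers are not from the book):

```python
import math

HBAR_C = 197.327  # MeV * fm

def yukawa_range_fm(mass_mev):
    """Rough range of a force mediated by a particle of the given mass."""
    if mass_mev == 0:
        return math.inf          # massless mediator -> long-range force
    return HBAR_C / mass_mev

print(yukawa_range_fm(0))        # infinite range, as for the photon
print(yukawa_range_fm(140.0))    # ~1.4 fm: the pion-mediated nuclear force
```

A massless W would give an infinite-range force, which is why short-range weak and strong interactions seemed to rule gauge theory out.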


This situation met with various responses; for example, agreeing that gauge theory indeed had nothing to do with elementary particles; arguing that gauge theory was incompletely understood, and that with more study the W particles may be found to have mass. Another response was to just give the gauge particles mass by inserting ‘by hand’ an appropriate mass term in the Yang Mills Lagrangian. This disturbed the analogy with QED and destroyed the gauge invariance of the theory, but did make gauge theory a candidate for description of the short range weak and strong interactions.


Later it was proposed that in addition to the triplet of W vector particles there existed two other vector meson particles, which were identified experimentally shortly afterwards. But this predictive success was not unique to gauge theory: the disregarded S-matrix bootstrap theory could also produce vector mesons.


Gauge theory was a central concern of Gell-Mann’s in the construction of the Eightfold Way classification of hadrons. In gauge theory, however, the vector mesons were fundamental entities, the quanta of the gauge field; but as the Eightfold Way prospered and was transformed into the quark model, vector mesons came to be regarded as ordinary hadrons: quark-antiquark composites just like all the other mesons. This signaled the downfall of gauge theory as an approach to the strong interaction.


In the 1970s another set of candidate gauge particles emerged in the study of the strong interactions; the gluons.


6.2 Electroweak Unification and Spontaneous Symmetry Breaking

The Fermi and ‘V minus A’ theories of the weak interaction envisioned the weak interaction as taking place between two currents at a point in space. Both were subject to two theoretical objections: they were non-renormalizable, and they violated the physically reasonable ‘unitarity limit’ at ultra-high energies. Theorists conjectured that these objections might be overcome if the weak force were represented as mediated by particle exchange. To reproduce the space-time structure of the ‘V minus A’ theory, the exchanged particles, the carriers of the weak force, had to be intermediate vector bosons (IVBs); two such particles of opposite charge were sufficient to recover the ‘V minus A’ phenomenology. Another conjecture was that the vector gauge particles of Yang-Mills gauge theory, in a suitable formulation, could be identified with the IVBs of the weak interaction. Further, since gauge theory was closely related to QED, there was speculation that in some sense the weak and EM interactions were manifestations of a single underlying ‘electroweak’ force.


Sheldon Glashow, and Abdus Salam working with J. C. Ward, developed unified electroweak gauge theories in the 1960s. In both theories, the intermediate particles were given a suitable mass ‘by hand’, making the theories non-renormalizable. In 1967, the ‘Weinberg-Salam’ model appeared. Its distinctive feature was that the IVBs acquired masses by ‘sleight of hand’, with no explicit IVB mass terms appearing in the Lagrangian.


Spontaneous Symmetry Breaking and the Higgs Mechanism

In the perturbative approach to quantum field theory of the 1940s and 1950s, a direct correspondence was assumed between the terms in the Lagrangian and physically observable particles: in QED, for example, the propagation of real massive electrons and real massless photons (remembering, of course, that electrons and photons are not directly observable as such). In the 1960s, however, this assumption was challenged. The impetus for the challenge came from solid-state physics, where all sorts of quasi-particles were used to explain experimental observations, and these particles did not map onto the fundamental fields of a field-theory approach. There was a considerable conceptual gulf between solid-state physics and HEP, and how the ideas of one might be translated to apply to the other was not obvious.


One of the first to try was Yoichiro Nambu, who had worked in both superconductivity and HEP. In 1961 two papers were published under the title ‘A Dynamical Model of Elementary Particles Based Upon an Analogy with Superconductivity’. These papers introduced the new concept of ‘spontaneous symmetry breaking’ (SSB).


The thrust of SSB was that it is possible for a field theory Lagrangian to possess a symmetry which is not manifest in the physical system which the theory describes.


The concept of SSB may be explained by analogy with ferromagnetic material. Magnetism is produced by the mutual interaction of atomic spins; each spin behaving like a little magnet. The Lagrangian for a system of interacting spins shows no particular direction in space; it is rotationally invariant. Yet in the actual physical system, the spins of a ferromagnet line up to produce macroscopic magnetism. So physical ferromagnetism is an example of SSB, and superconductivity can be explained likewise.
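The ferromagnet analogy can be sketched numerically. In the mean-field (Curie-Weiss) picture, the magnetization m solves m = tanh(m·Tc/T): a symmetric equation (unchanged under m → −m) whose solution below the critical temperature is nonetheless nonzero. This is my own illustration of SSB, not a calculation from the book:

```python
import math

def mean_field_magnetization(t_reduced, m0=0.5, iterations=500):
    """Solve m = tanh(m / t) by fixed-point iteration; t = T / Tc.

    The equation is symmetric under m -> -m, but for t < 1 the
    iteration settles on a nonzero m: the symmetry is spontaneously
    broken by the state the system actually ends up in.
    """
    m = m0
    for _ in range(iterations):
        m = math.tanh(m / t_reduced)
    return m

print(mean_field_magnetization(0.5))  # well below Tc: m close to 1
print(mean_field_magnetization(1.5))  # above Tc: m iterates down to ~0
```

Starting from the seed −m0 instead would give the mirror-image solution, which is the point of the analogy: the symmetric Lagrangian admits asymmetric physical states.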


The SU(3) Eightfold Way symmetry of the (strong interaction) hadrons seemed to be only approximate, and it was conjectured that SSB had something to do with that.


Jeffrey Goldstone, however, concluded that SSB must be accompanied by the appearance of massless spin-zero particles, Goldstone bosons. This result became the object of increasingly formal proofs, and it implied that a theory of SSB of the SU(3) symmetry was out of the question: since the strong interaction is short range, massless particles were ruled out.

Theorists then proposed that the pion, which was much lighter than all other hadrons, could be regarded as a ‘pseudo-Goldstone boson’.


The original inspiration for Goldstone’s particle physics SSB was superconductivity, but there are no massless particles in superconductivity; even the photon acquires an effective non-zero mass. A controversy arose in the HEP community over whether phenomena seen in superconductivity persisted in relativistic situations. The upshot of the resolution was that there existed a class of relativistic field theories where the Goldstone theorem could be evaded: those theories having a local gauge symmetry: QED and Yang Mills theories.


Peter Higgs exhibited and analyzed the evasion of the Goldstone theorem, introducing a model and what became known as the ‘Higgs mechanism’.


The model consisted of the standard QED Lagrangian augmented by a pair of scalar (spin-zero) fields coupled to the photon and to one another in such a way as to preserve the gauge invariance of EM. Higgs found that if he gave the scalar fields a negative mass term in the Lagrangian, the physical result of the model would be a massive photon and one massive scalar particle, a ‘Higgs particle’. The physical interpretation of the Higgs mechanism was that massless photons can only be polarized in two directions, while massive vector particles have three possible axes of polarization: the photon can be seen as acquiring its mass by ‘eating up’ one of the scalar particles.
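The ‘negative mass term’ is usually written out in the following textbook form (my gloss, not the book’s notation): a complex scalar field φ with potential

```latex
V(\phi) \;=\; \mu^{2}\,\phi^{\dagger}\phi \;+\; \lambda\,(\phi^{\dagger}\phi)^{2},
\qquad \mu^{2} < 0,\ \ \lambda > 0 .
```

With μ² < 0 the minimum of V lies not at φ = 0 but on the circle |φ| = √(−μ²/2λ); expanding the fields about a point on that circle is what breaks the gauge symmetry spontaneously and generates the photon mass in Higgs’s model.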


In 1967, Steven Weinberg and Abdus Salam, working independently, adapted the unified electroweak gauge theory model proposed by Glashow, Salam and Ward, replacing the IVB mass terms previously inserted by hand with masses generated by the Higgs mechanism. The result was the unified electroweak Weinberg-Salam model.


Weinberg’s model was essentially that proposed six years earlier by Glashow, except that certain mass relationships between the IVBs were determined in Weinberg’s model in terms of a single free parameter, the ‘Weinberg angle’.
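The kind of mass relationship meant here is, in standard textbook notation (not given in the notes, where g and g' denote the SU(2) and U(1) couplings and v the Higgs vacuum expectation value):

```latex
% Weinberg angle and IVB masses in the Weinberg-Salam model
\tan\theta_W = \frac{g'}{g}, \qquad
M_W = \frac{g v}{2}, \qquad
M_Z = \frac{v}{2}\sqrt{g^2 + g'^2}
\quad\Longrightarrow\quad
M_W = M_Z \cos\theta_W
```

A single measured value of the angle thus fixes the ratio of the W and Z masses.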


As early as 1962,  Salam had discussed the possibility of mass generation in gauge theories by means of SSB.     


The Weinberg-Salam model was initially ignored: field theory was in decline, the model was restricted to the weak interactions of leptons, and except as an exercise in virtuosity it had no special appeal. When applied to hadrons, it led to predictions in conflict with experimental results.


When theorists found that Weinberg and Salam’s theory of weak interactions was  renormalizable, interest in unified gauge theories exploded.


6.3 The Renormalisation of Gauge Theory


In 1971, through the efforts of Dutch HEP theorist Martinus Veltman and his student Gerard ‘t Hooft, it was demonstrated that electroweak gauge theories were renormalizable. Proof of the renormalizability of QED had been difficult, and for gauge theory it was formidable. Rather than attempting to go into the proof, the author focuses on the activities of Veltman and ‘t Hooft.


In 1968, Veltman investigated the renormalizability of Yang-Mills theory. He decided to look at massive Yang-Mills theory, the version in which masses were inserted by hand, since the pure gauge-invariant theory with massless gauge particles was unrealistic. He encountered the first of the infinite integrals, which were commonly assumed to make the theory non-renormalizable. But he quickly convinced himself that many, if not all, of the infinite integrals were of opposite sign and cancelled: their contribution to the physical process was zero. He concluded that, contrary to the conventional wisdom, it was conceivable that massive gauge theory might be renormalizable.

Looking for a more elegant approach, he reformulated the Feynman rules of the theory in such a way that the cancellation between infinite integrals was manifest by inspection. But to do this he had to include in the set of diagrams the interactions of a ‘ghost’ particle, which appeared only in closed loops and not as a physical incoming or outgoing particle. He found that work done on massless gauge theories also included ‘ghosts’, and resulted in a set of Feynman diagrams similar to his. Further, analysis of the massless theory had been carried through to arbitrarily many loops, and the theory shown to be renormalizable.

Veltman then investigated a two-loop diagram in the massive theory, but established a puzzling result which forced him to conclude that, at the two-loop level, massive gauge theory was non-renormalizable. He then began to entertain the possibility that appropriate scalar fields could be introduced into the massive Yang-Mills Lagrangian in such a way as to cancel the two-loop divergences.


In 1971, ‘t Hooft published the first detailed argument that massless gauge theory was renormalizable. Veltman told ‘t Hooft that what was needed was a realistic theory involving massive vector particles, i.e. a massive Yang-Mills Lagrangian. ‘t Hooft’s resulting paper ushered in the ‘new physics’, though that was not apparent at the time. By adding multiplets of scalar particles to the massless Yang-Mills Lagrangian, ‘t Hooft in effect re-invented the Higgs mechanism.


‘t Hooft found that, like pure massless theory but unlike the massive-by-hand theories explored by Veltman, gauge theories in which the vector particles acquired mass by SSB were renormalizable.

However, Veltman’s tools for analyzing gauge theory were unfamiliar to many physicists, and the path-integral formalism ‘t Hooft inherited from him was widely regarded as dubious.


‘t Hooft’s work needed support. Enter the influential US theorist Benjamin Lee. It was a joint paper by Lee and Abraham Klein on SSB in QED which in 1964 precipitated the debate leading to the construction of the Higgs mechanism. Lee looked into the renormalizability of a theory of spontaneously broken QED and discovered that many divergences cancelled in one-loop diagrams. Veltman gave Lee preprints of ‘t Hooft’s solution to the massive Yang-Mills theory. Lee proved to his own satisfaction that ‘t Hooft’s papers were correct, and extended the proof to spontaneously broken gauge theories. Thus it became generally accepted that gauge theories in which the IVBs gained masses via the Higgs mechanism were renormalizable.


In a very short time period, late 1971 to early 1972, Yang-Mills gauge theory ceased to be regarded as a mathematical curiosity and came to be seen instead as a respectable, even profound, field theory.


The next question was: did it work? Veltman always had the weak interaction in mind, and the spontaneously broken theories investigated by ‘t Hooft and Lee had precisely the form of the Weinberg-Salam unified electroweak model. How well did the renormalized model agree with data?


6.4 Electroweak Models and the Discovery of the Weak Neutral Current


Weinberg had based his model on the simplest choice of gauge group and the minimal set of scalar particles, and had made a particular choice concerning the multiplet structure of the known leptons. Alternative models could easily be constructed by varying the initial choices, but constructing such models at random was not very interesting. The question arose as to why one particular model should be preferred over another: could support be found in the data for any of these new unified models?


The massive IVBs predicted as intermediaries of the weak force had never been detected, but this was not surprising, since the energy required to produce such massive particles was unattainable.


Standard V-A theory predicted that the weak interactions were mediated by two electrically charged IVBs, the W+ and W-, leading to a ‘charged current’ situation. However, the WS model also predicted a massive Z0 particle. The Z0, being electrically neutral, could mediate ‘neutral current’ processes: weak interactions in which no change of charge occurred between incoming and outgoing particles. No relevant data were available as long as the WS model was applied only to leptons.
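For illustration (standard examples in conventional notation, not drawn from the notes), compare a charged-current process, in which the lepton changes charge, with a neutral-current one, in which it does not:

```latex
% Charged current, mediated by W exchange: the neutrino becomes a muon
\nu_\mu + n \;\longrightarrow\; \mu^- + p
% Neutral current, mediated by Z^0 exchange: no charge changes hands
\nu_\mu + e^- \;\longrightarrow\; \nu_\mu + e^-
```

Only the second kind of process requires the Z0 of the Weinberg-Salam model.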


In the 1970s, the obvious extension to hadrons was through quarks. The idea that quarks were the vehicles of the hadronic weak and EM currents was central to the current-algebra and CQM traditions, and, furthermore, the success of the parton model suggested that quarks were point-like entities, just like the leptons of the Weinberg-Salam model.


However, when theorists incorporated hadrons into the WS model by assuming that the IVBs coupled to quarks in the same way as they coupled to leptons, a conflict with accepted data arose.


The GIM Mechanism and Alternative Models…


6.5 Neutral Currents and Neutron Background


In the archetypal ‘scientist’s account’, the experimental discovery of the weak neutral current would be an independent verification of unified electroweak gauge theory. The author, however, suggests this view cannot withstand historical scrutiny, based on two observations:

1) reports on neutral currents from CERN in the 1960s and 1970s were based on questionable interpretive procedures, and physicists had to choose whether to accept or reject these procedures and the reports which went with them; and 2) the communal decision to accept one set of interpretive procedures in the 1960s and another set in the 1970s can best be understood in terms of the symbiosis of theoretical and experimental practice.


The Symbiosis of Theory and Experiment


The history of the neutral current can be divided into two periods of stability: in the period from the 1960s to 1971, communal agreement was that the neutral current did not exist; in the period from 1974 onwards, communal agreement was that it did exist. In each period, experimental practice generated both justification and subject matter for theoretical practice, and vice versa.


Experimenters reappraised their interpretive procedure in the early 1970s, and succeeded in finding a new set of practices which made the neutral current manifest. The new procedures remained pragmatic, and were, in principle, as open to question as the earlier ones, but like the earlier ones, the new procedures  were then sustained within a symbiosis of theory and experiment.


Acceptance of the neutral current left theorists with a problem. The GIM mechanism created a significant difference between kaon-decay and neutrino-scattering experiments, making it possible that neutral currents should be observed in the latter but not the former. It did this, however, at the expense of introducing new and unobserved charmed particles.


Chapter 7


Quantum Chromodynamics: A Gauge Theory of the Strong interaction


7.1 From Scale Invariance to Asymptotic Freedom

Field theorists’ work culminated in the 1973 discovery that gauge theory was ‘asymptotically free’. The implication of this discovery appeared to be that gauge theories were the only field theories capable of underwriting the phenomenological success of the quark-parton model, and hence of giving an explanation of scaling.



7.2 Quantum Chromodynamics (QCD)

Although the construction of QCD was to have far reaching implications for HEP development, in 1973-1974 it remained of interest only to field theorists.


Once asymptotic freedom was discovered, given that gauge theory  reproduced the predictions of the quark-parton model, it was conjectured that quark fields were the fundamental fields of the strong interaction. In gauge theory, the quark fields would interact via the exchange of gauge vector fields, conjectured to be gluons.


For a gauge group, it was tempting to conjecture that the theory should be invariant under local transformations of the SU(3) Eightfold Way symmetry group, but the consequences of this choice intermixed the weak and strong interactions, spelling disaster. To get around this, theorists argued that quarks carried not one but two sets of quantum numbers, which they referred to as ‘flavors’ and ‘colors’.


In the mid 1960s, color appeared to have little phenomenological relevance, and was generally regarded as being a theorist’s trick.


However, in the late 1960s, two sources of empirical support for color appeared.


7.3 The Failings of QCD

A central obstacle to further development was the lack of any constructive approach to the problem of quark confinement. Although QCD predicted deviations from the exact scaling behavior predicted by the parton model, there was no prospect of immediate experimental investigation of these deviations, and thus little incentive for theorists to explore them in depth.


QCD theorists were unable to appropriate the Higgs trick to give masses to the gluons, as had been done for the unified electroweak models.


Because quarks and gluons appeared in the QCD Lagrangian, a naive reading of the physical particle states led directly to a world populated by real colored quarks and gluons. Furthermore, the gauge theories shown to be asymptotically free were pure gauge theories, in which the gauge vector particles, the gluons, were massless. Many years of experimental effort had been expended without success in searches for quarks (colored or not) and strongly interacting massless gluons.


By 1974, gauge theorists were arguing that imputing to QCD an unrealistic particle spectrum (i.e. assuming that quarks and massless gluons would exist as real particles) was not legitimate, because it rested on perturbative reasoning where such reasoning made no sense. However, gauge theorists had talked themselves into a corner: perturbative arguments could not be applied to the long-distance properties of QCD, but perturbative arguments were what field theorists were acquainted with. Lacking the mathematical tools, they used the verbal doctrine of confinement instead. Gauge theorists admitted that once the effective coupling constant approached unity they could no longer compute its evolution as one considered larger and larger distance scales. Instead they simply asserted that the coupling continued to grow, or at least remained large. They also stated their faith that, because of this, color was ‘confined’: all colored particles, quarks and gluons, would forever be confined within hadrons.


Although several variations on the QCD theme arose (the 1/N expansion, ‘monopoles’, ‘instantons’, and ‘lattice gauge’ theories), none provided an agreed-upon solution to the confinement problem.


In 1973, it was unclear to theorists what to do next for QCD, except to point out that it was the only field theory of strong interactions not obviously wrong.




Chapter 8


HEP in 1974; The State of Play


1974 was the year of the “November Revolution”. In November 1974, the discovery of the first of a series of highly unusual elementary particles was announced. Within five years, the “old physics” was eclipsed by the field-theory-oriented “new physics” traditions of current algebra, the parton model, unified electroweak theory, and QCD.


By 1974 it was possible to represent each of the fundamental forces of HEP (the strong, EM, and weak interactions) in terms of gauge theory.


Gauge theories are a class of quantum field theories wherein all forces are ascribed to particle exchange; there is no place for “action at a distance”. The exchanged particles are massless unless the theory is suitably modified.


The gauge theory of electromagnetism is QED.


The “new orthodoxy” which SLAC theorist James Bjorken referenced in his 1979 presentation was the belief that the Weinberg-Salam electroweak model and QCD could explain all the phenomena of elementary particle physics. The smallest feasible gauge-theory structure (SU(2) x U(1) for electroweak, SU(3) for strong, and SU(5) for grand unification) accounts very well for the observations. Bjorken continued: while searches for what is predicted by the orthodoxy will proceed, searches for phenomena outside the orthodoxy will suffer; marginally significant data supporting orthodoxy will tend to be presented to and accepted by the community, while data of comparable or even superior quality which disagree with orthodoxy will tend to be suppressed, and even if presented, not taken seriously.


The key phase in the communal establishment of the new physics centered on the debate over the newly discovered particles; the seeds of revolution were planted by those who advocated charm and gauge theory. Charm advocacy was sustained by two groups: the old guard of gauge theorists (Gell-Mann, Weinberg, and Glashow) on one hand, and younger theorists who were not trained in gauge theory on the other. The question is why this second group aligned itself at a crucial time with the gauge theory camp.


Pickering suggests that cultural dynamics, such as ‘opportunism in context’, led theorists to align themselves at a crucial time with the gauge theory camp, allowing such a major change in HEP to occur in so short a period. ‘Opportunism in context’ means that individuals deploy resources (expertise) in contexts defined by the practice of their peers, and in the process acquire new resources.


He provides biographies of three of these younger, non-gauge-theory-trained theorists: Mary Gaillard, Alvaro De Rujula and John Ellis. All three began their research within the current-algebra tradition. Unlike the dominant CQM and Regge traditions, current algebra never lost touch with its field-theory ancestry. He shows that career development and the parallel acquisition of new expertise were structured by the local context of day-to-day practice rather than by the overall public culture of HEP, such as the professional literature.


De Rujula learned gauge theory calculation and was familiar with sophisticated perturbative techniques and the parton model. By 1974, he was presenting a review of “Lepton Physics and Gauge Theories” at the London Conference. When the revolution came, the entire HEP theory group at Harvard was on the side of gauge theory and charm.


Ellis recalled that theoretical research at Cambridge in the late 1960s was Regge-oriented, and struck him as boring. He then arranged for postgraduate work on field and group theory, and current algebra. At CERN, Ellis worked on two fast-growing areas of HEP: a new development in the Regge tradition, and the use of new techniques in the theoretical analysis of scaling. In 1971, Ellis left CERN for SLAC because he wanted to learn about partons and scaling.

Ellis supported gauge theory and believed that QCD was ‘the only sensible theory’. In 1973-4 a major HEP issue was the data on electron-positron annihilation emerging from SLAC. These data were in conflict with the ideas of scale invariance, and Ellis recalled that many saw them as a ‘total nemesis’ for field theory. Because of his expertise, he was asked to give reviews on theoretical ideas concerning electron-positron annihilation. It then became ‘quite obvious to me that there was only one solution’, i.e. the existence of a new hadronic degree of freedom such as charm. The difference between Ellis’ reaction and that of other theorists to the electron-positron data illustrates the role of prior experience in determining responses in particular contexts.


Ellis recalled that many of his colleagues’ attitude was “who believed in that parton light-cone crap anyway?” He, however, argued that the idea of partons worked in so many different cases that they must be in some sense correct, so that if there was a violation of their predictions for electron-positron annihilation, it could not be that the ideas of light-cone expansion were breaking down. It had to be that there was some new parton coming in, some new degree of hadronic freedom. By autumn 1974, it was clear to Ellis that something like charm must exist.


Thus, when the first of the new particles was discovered (in electron-positron annihilation), Ellis had an explanation, charm, and the tools with which to construct arguments in its favor: asymptotically free gauge theory.


Gaillard attained the status of an expert in weak-interaction phenomenology by 1972. In 1973, she spent a year at Fermilab, where she found herself in a local context in which the phenomenological implications of electroweak theory were an important topic, and where she was in daily contact with Benjamin Lee, leader of the Fermilab theory group and a leading gauge theorist. She began the investigation of the detailed phenomenology of electroweak theories, learning gauge theory techniques with the help of Lee. She subsequently investigated, with Lee, the consequences of the existence of GIM charm. This resulted in a major article, ‘Search for Charm’, co-authored by Gaillard. She also reviewed prospective experimental searches for charm at an international conference in Philadelphia, and gauge theories and the weak interaction at a London conference. When the new particles were discovered in 1974, she at once sided with the charm camp.


In a 1978 interview with Pickering, Gaillard noted that many older physicists, who did not take asymptotic freedom seriously, were resistant to the charm concept. She did not understand why, but noted: “I think that, well, there are also people who are just not used to thinking that field theory is going to be relevant to strong interactions at all. They are not people who are so familiar with the details of the weak interactions and their implications for the strong interaction symmetry that you see all these things falling out at once. I guess it’s largely a matter of kind of what you did before, what your perception of the world is [laughter].”



Part III

Establishing the New Physics: The November Revolution and Beyond


Chapter 9: Charm: The Lever that Turned the World


The author’s aim is to set the revolution in the context of the traditions described in Part II, and to show how, in turn, the revolutionary developments defined the context for subsequent developments of HEP.


9.4 Charm

The peculiarity of the new particles was the combination of large masses with long lifetimes. Normally, the heavier a hadron, the shorter its lifetime. The J-psi, discovered in 1974, was the heaviest particle known, but also the longest lived.


In the ‘Search for Charm’ paper by Gaillard, Lee, and Rosner, one proposed mechanism for this result was the production of ‘hidden charm’ states in electron-positron annihilation: mesons composed of a charmed quark plus a charmed antiquark in which the net charm cancelled out. Gaillard, Lee, and Rosner observed that these hidden-charm states would be expected to be relatively long lived, via application of the empirically successful Zweig rule.


This rule offered a qualitative explanation of the longevity of the psi, but quantitatively it failed: even when the Zweig rule was applied, the measured lifetimes were 40 times longer than expected. This discrepancy was, as Gaillard, Lee, and Rosner put it, ‘either a serious problem or an important result, depending on one’s point of view’.


A number of theorists chose to see the Zweig-discrepant lifetime of the J-psi as an important result rather than a serious problem. These were the gauge theorists, and in justification of their position they rolled out the ‘charmonium model’, which explained the properties of the hidden-charm states as an analog of ‘positronium’, the name given to the atomic bound states of an electron-positron pair. Just as positronium atoms decay to photons, so the decays of hidden-charm hadrons could be supposed to proceed via charmed-quark/charmed-antiquark annihilation to two or three gluons, which then rematerialize as conventional quarks.


It is clear that after 1975, experimenters at SPEAR and elsewhere oriented their research strategy  around the investigation of the phenomena expected in the charmonium model. If the experimenters  had oriented their strategy around some other model, the pattern of experiments around the world would have been quite different, and the outcome might well have been changed.


The experimental techniques which were adopted allowed the discovery of five “intermediate hidden charm particles”. Some considered that this discovery constituted a direct verification of the charmonium model. However, two of the new particles had reported masses and widths in conflict with the predictions of the charmonium model. By 1979, further experimentation convinced most physicists that these two particles did not exist at all, at least not with the masses originally reported. Thus the empirical basis of the charmonium model, a key element in the November revolution, was retrospectively destroyed.


Chapter 10


The Standard Model of Electroweak Interactions


This chapter reviews the development of electroweak physics in the late 1970s. By 1979, consensus had been established on the validity of the standard Weinberg-Salam electroweak model. However, data on both neutrino scattering and neutral-current electron interactions were at times not in line with this theory.


An authoritative review of the experimental data on neutral currents was given by CERN experimenter F. Dydak at the European Physical Society conference held at CERN in the summer of 1979. He noted that several experiments were in disagreement with the standard model.


10.1 More New Quarks and Leptons


10.2 The Slaying of Mutants


A so-called “high-y” anomaly reported from Chicago was not confirmed by a later CERN study using more advanced equipment. No more “high-y” anomalies were reported.


The HPWF team observed six “trimuon” events, and Caltech two, at Fermilab near Chicago, while CERN noted two trimuon events. CERN declined to comment on the Fermilab results beyond saying, ‘the number of events are small, and the beams different, so we feel that the differences may not be significant’. No more trimuon events were reported.


The third major anomaly which challenged the standard model was atomic parity violating electron-hadron interactions. Atomic physics experiments at the Universities of Oxford and Washington (Seattle) found an order of magnitude smaller value for parity violation than predicted by the standard model, and thus the Weinberg Salam theory was called into question. These experiments were novel however, and included an element of uncertainty.


Steven Weinberg was quoted as saying that he was willing to embrace ‘an attractive class of theories which are not radical departures from the original model’.


There were two ways of looking for neutral-current effects in the weak interactions of electrons. One was the atomic physicist’s bench-top approach; the other was the particle physicist’s use of high-energy electron beams and HEP equipment. The HEP approach appeared to be technically impossible, but Charles Prescott of SLAC pursued it anyway. Using precise techniques, the SLAC team (experiment E122) found what they were looking for, which supported the Weinberg-Salam model. In mid-1978, Soviet atomic physicists reported results which agreed with the W-S model. Support for the Oxford and Washington results declined.


10.3 The Standard Model Established: Unification, Social and Conceptual

By summer 1979, all anomalies threatening the W-S model had been dispelled to the satisfaction of the HEP community at large. Weinberg, Salam, and Glashow shared the 1979 Nobel Prize for their work on establishing the unified electroweak interactions. It is easy to claim that the W-S model, with an appropriate complement of quarks and leptons, made predictions which were verified by the facts. However, in asserting the validity of the W-S electroweak model, particle physicists chose to accept certain experimental reports and reject others. The Soviet results were just as novel as the Washington-Oxford results, and therefore just as open to challenge. But particle physicists chose to accept the results of the SLAC experiment, chose to interpret them as supporting the W-S model, chose to reject the Washington-Oxford results, and chose to accept the Soviet results. Experimental results did not exert a decisive influence on theory. The standard electroweak model unified not only the weak and EM interactions; it also unified research practice in HEP.




Chapter 11 QCD in Practice


Phenomenological QCD- An Overview


There were two QCD-based phenomenological traditions, relating to hadron spectroscopy and to hard-scattering phenomena; they were extensions of the charmonium and parton models respectively. Within all three major areas of HEP experiment (electron-positron, lepton-hadron, and hadron-hadron physics) traditions existed for the exploration of spectroscopic and hard-scattering phenomena. Data from these traditions were seen as both justification and subject matter for phenomenological QCD analyses. In turn, these analyses generated a context for further growth of the relevant experimental programs. The upshot was that, by the end of the 1970s, experimental practice in HEP was almost entirely centered on phenomena of interest to QCD theorists. Data on other phenomena, for example hadronic soft scattering, for which QCD theorists could not supply a ready-made analysis, were no longer generated in significant quantities. Proponents of alternative theories, principally Regge theorists, were starved of data.

HEP experimenters had come to inhabit the phenomenal world of QCD, and in effect obliged non-QCD theorists to do likewise.


Never, during the 1970s, was QCD alone brought to bear on existing data. Thus the story does not even begin to resemble the confrontation of theory with experimental facts. Instead we will be dealing with what can best be described as the construction of the QCD world view. Within this world view, a particular set of natural phenomena were seen to be especially meaningful and informative, and these were phenomena conceptualized by the conjunction of perturbative QCD calculations and a variety of models and assumptions (different models and assumptions being appropriate to different phenomena within the set.)


In a given phenomenal context, mismatches arose between data and the predictions of particular QCD-based models. But the overall context was such that these mismatches could more readily be represented as subject matter for future research than as counterarguments to QCD itself.



QCD spectroscopy covered only a limited low-energy domain of experimental practice. However, in parallel, new traditions based on QCD hard scattering developed from the parton-model traditions. Through the 1970s these traditions grew to dominate high-energy research at the major HEP labs, completing the gauge theory takeover of experimental practice.


Initially, in the early 1970s, QCD underwrote only a limited subset of parton-model predictions. By the late 1970s, with a change in calculational approach, QCD was seen to be applicable to all processes described by the parton model. The new approach was called ‘intuitive perturbation theory’ (IPT). Its departure from the original ‘formal’ approach can be characterized by how theorists coped with gluons. Gluons having specified interactions with one another were the distinguishing feature of QCD. In the formal approach, gluons appeared in simple perturbative diagrams of only a few closed loops. IPT computed infinite sets of diagrams in which quarks emitted an arbitrary number of gluons, an approach made possible by importing and adapting techniques used in QED for representing the emission of an arbitrary number of photons.


This resulted in the acceptance of three types of gluon: soft, emitted with low energy; collinear, of high momentum traveling parallel to the direction of the emitting quark; and hard, of high momentum traveling transverse to the direction of the emitting quark.


The status of QCD in its many applications to hard scattering was reviewed in a plenary session by a Caltech theorist in Tokyo. On the face of it, the QCD fits of scaling violations to a whole range of data on muon, electron, and neutrino scattering were impressive, but they were not hailed as proof of the validity of QCD, as Frank Close noted.


While HEP experimenters were busy collecting data in support of QCD, theorists were cutting the ground from beneath their own feet. This is because the early predictions of scaling violations were truly asymptotic: they were expected to apply only at infinitely high energy.


Perhaps the most accurate statement of the theoretical position in 1978 is that it was confused. Calculations of higher-order corrections to scaling violations were complex, and experts disagreed on their results. In 1978-79 these differences were sorted out. Although QCD predictions continued to fit the data, said predictions were an amalgam of non-asymptotic contributions, perturbative calculation, and nonperturbative model contributions. Thus it was possible that the agreement between QCD prediction and scaling-violation data was accidental.


The arguments for and against QCD from the data became highly technical. Thus, even as HEP entered the 1980s, particle physicists were unable to convince themselves that the scaling violations seen in deep inelastic scattering demonstrated the validity of QCD; nonetheless, QCD was solidly entrenched as the central resource in HEP physics.


In the late 1970s, the situation in high-energy electron-positron annihilation experiments resembled that of deep inelastic lepton-hadron scattering. QCD alone could not be used to make detailed and well-defined predictions. However, in conjunction with various other models and assumptions, perturbative QCD could be used to motivate a world view centering on “three-jet” phenomena.



Chapter 12: Gauge Theory and Experiment: 1970-79

By 1980, HEP experimenters had effectively defined the elementary particle world to be one of quarks and leptons interacting according to the twin gauge theories electroweak and QCD.


Chapter 13: Grand Unification

We have seen how in the late 70s, electroweak and QCD dominated HEP. But the “new orthodoxy” also included an SU(5) Grand Unification theory.


Since the basic building blocks of nature had been identified, there was little further need for HEP theorists, so they turned to “Grand Unified Theories” (GUTs), which had little impact on HEP experiment.


GUTs embraced, within a single gauge theory, the weak, electromagnetic, and strong interactions. The weak and electromagnetic interactions had already been unified in electroweak theory. It was only a matter of time until a unification of QCD and electroweak in an extended group structure was attempted.


GUTs were modeled on electroweak theories. In electroweak theory, an exact gauge invariance was spontaneously broken by a set of Higgs scalars, suitably chosen to give large masses to the intermediate vector bosons (IVBs) while leaving the photon massless. GUTs were based on a larger gauge group, broken in such a way as to leave the photon and the 8 gluons massless, while giving appropriate masses to the IVBs, the W and the Z0. Because of the group structure, however, GUTs had to incorporate more than just these 12 gauge vector bosons: SU(5) required another 12 “X-bosons”. The Higgs sector had to be suitably chosen to give these bosons extremely high masses, to account for the observed decoupling of the strong and electroweak interactions at currently accessible energies.
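The boson counting here follows from the dimension of the gauge group: an SU(N) gauge theory has N² − 1 gauge bosons. A minimal sketch of the arithmetic (the decomposition into Standard Model generators is standard group theory, not drawn from the book):

```python
# Number of gauge bosons in an SU(N) gauge theory = number of
# group generators = N**2 - 1.
def su_generators(n):
    return n * n - 1

# Standard Model: 8 gluons from SU(3), W+, W-, Z0 from SU(2), plus the photon.
sm_bosons = su_generators(3) + su_generators(2) + 1

# The remaining SU(5) generators must correspond to new "X-bosons".
x_bosons = su_generators(5) - sm_bosons

print(sm_bosons, x_bosons)  # 12 12
```

This reproduces the 12 familiar gauge bosons and the 12 extra X-bosons the notes mention.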


As far as mainstream HEP was concerned, the single major prediction of GUTs involved the electroweak mixing angle, θw. In subsequent work, a two-standard-deviation discrepancy was found between theory and experiment. As usual, this discrepancy was considered an important result rather than a serious problem.


Aside from this, GUTs offered little to the experimenter, largely because the X-bosons were located at unattainably high energies, as required to provide the needed masses. There were two approaches to this problem. One was to go to the Big Bang for the high energies; this led to a social and conceptual unification of HEP and cosmology. The other was to look for evidence of X-boson exchange in proton decay.


Although HEP theorists had thoroughly explored the SU(5) gauge model, and although it is the lowest-level gauge theory compatible with grand unification, SU(5) was seen by some as a danger to the future of HEP. The danger, stressed from 1978 onwards by Bjorken, Glashow and others, was that it implied a vast ‘desert’, stretching in energy from 10^2 GeV to 10^15 GeV. At 100 GeV, new physics would emerge in the shape of the intermediate vector bosons of electroweak theory. Thereafter, according to SU(5), increasing energy would reveal no new phenomena until the unattainable X-boson region of 10^15 GeV. For these reasons, the HEP community was more than happy to look at more complicated gauge structures, and experimenters were happy to coordinate with the theorists exploring them.


Proton decay experiments were all conceived along the same lines. If the proton had a lifetime of 10^31 years, and if one monitored a total of 10^31 protons for a year, then one could expect to see a single decay. Various proton decay detection systems were implemented around the world. The first positive data came from the Kolar gold mine in India, where an Indian-Japanese collaboration had been monitoring 140 tons of iron for 131 days and had observed 200 events. Almost all of these could be ascribed to the passage through the apparatus of muons, but the experimenters concluded that three could not. These three events were tentatively regarded as candidate proton decays.


Chapter 14 Producing a World


“The world is not directly given to us, we have to catch it through the medium of traditions”

                                                                                                Paul Feyerabend 1978


Judgments were crucial in the development of the “new physics”.  The potential for legitimate dissent exists in a discussion of what may be considered key experimental discoveries.


A plurality of theories can be advanced for any set of data held to be factual. No theory has ever fitted the experimental data, or the “facts”, exactly. Particle physicists were continually obliged to choose which theories to elaborate and which to abandon, and the choices which were made produced the world of the ‘new physics’: its phenomena and its theoretical entities.


The existence or nonexistence of pertinent natural phenomena was a product of scientific judgments. Judging a theory to be ‘valid’ meant that misfits between prediction and data were viewed as grounds for further study and elaboration of the theory, rather than for its rejection. Within HEP, judgments displayed a social or group coherence. The world view produced by HEP was socially produced.


The author argues that the construction and elaboration of new-physics theories depended on the recycling of theoretical resources from established scientific areas. Two major analogies were at the heart of the conceptual development of the new physics: analogies from the well developed area of atomic and nuclear physics, whereby hadrons were represented as composites of more fundamental entities, quarks; and analogies from QED, applied to the weak and strong forces. How valid are the analogies?


Thomas Kuhn’s argument was that if scientific knowledge were a cultural product, then the scientific knowledge of different cultures would create different worlds (world views) for the different scientific communities. Each world view would recognize different natural phenomena and explain them with different theories. The theories of different cultures would therefore be immune to testing against one another; they would be ‘incommensurable’. Thus the world of the ‘new physics’ is incommensurable with the world of the physics preceding it.


To summarise, the quark-gauge theory picture of elementary particles should be seen as a culturally specific product. The theoretical entities of the new physics, and the natural phenomena which pointed to their existence, were the joint products of a historical process, a process which culminated in a communally congenial representation of reality….


There is no obligation to take account of what 20th century science has to say about a world view. Particle physicists of the late 1970s were themselves quite happy to abandon much of the world view constructed by the science of the previous decade. There is no reason for outsiders to show the present HEP world view any more respect.


World views are cultural products. One scientist put it this way: “The lofty simplicity of nature [presented to us by scientists] all too often rests upon the un-lofty simplicity of the one who thinks he sees it.”