Let's start by reviewing the example I considered before: the annihilation of electrons and positrons to a pair of muons. This is a relatively simple process both theoretically and experimentally, and is used in practice to calibrate real detectors. At low energies, the first term in the perturbation expansion involves an intermediate photon:
There are several diagrams that contribute to the next term in the series, but the one I considered involved an additional photon:
When we evaluate this diagram, we integrate over all possible energies and momenta of the particles in the loop. This integration necessarily includes points where the photon satisfies
$E_\gamma = p_\gamma c$
Unfortunately, these two quantities appear in the analytic expression as
$M \propto \frac{1}{E_\gamma^2 - p_\gamma^2 c^2}$
which leads to a divide-by-zero error.
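To make the trouble concrete, here is a minimal numerical sketch (in Python, with c = 1 and arbitrary made-up values; this is not a real loop integral) of how that factor behaves as the photon approaches the on-shell condition:

```python
# A minimal sketch: the propagator factor 1/(E^2 - p^2 c^2) blows up as
# the internal photon approaches the on-shell condition E = p c.
# Units with c = 1; all values are arbitrary illustrations.

p = 1.0  # fixed photon momentum (hypothetical value)

for E in [2.0, 1.5, 1.1, 1.01, 1.001]:
    factor = 1.0 / (E**2 - p**2)
    print(f"E = {E:6.3f}  ->  1/(E^2 - p^2) = {factor:12.3f}")

# As E -> p the factor diverges; the loop integration sweeps through this
# point, so the naive integral is infinite.
```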
Now, I've mentioned that there are several other diagrams that appear at this order, so you might hope that including all of them would make this problem cancel in some way. It does not; as long as we only consider Feynman diagrams for electron-positron to muon-antimuon, we get nowhere.
However, let us think about this process from an experimental point of view. What we are actually producing is a pair of muons and nothing else. If we had extra stuff in the final state, we'd have more and different diagrams to consider. But how accurately can we say that more stuff isn't there? Consider photons, since that's where our problems started. No matter how good our equipment is, there will always be a lower energy limit below which photons are invisible. And it is here that our problem can be resolved.
What we actually measure, in any real experiment, is the production of a muon-antimuon pair and any number of low energy photons. So when calculating the rate for this process, we must include the contributions of diagrams like the following:
The relevant thing about this diagram is that it also features a divide-by-zero infinity. Specifically, the problem arises when the energy of the external photon goes to zero, and is associated with the internal muon (the muon leg between the two photon vertices). When we combine this diagram, the similar one with the photon emitted from the antimuon, and our loop diagram above, all the infinities cancel.1 To prove that all possible such divergences cancel this way is no small feat, but it has been done; our theory continues to make sense.
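The regulator trick described in footnote 1 can be sketched schematically. The expressions below are toy stand-ins rather than the actual QED results; the one faithful feature is that the virtual (loop) and real (emission) contributions carry equal and opposite log(m_γ) terms:

```python
# A schematic sketch of the cancellation in footnote 1, using sympy.
# "virtual" and "real" are toy expressions: only the opposite-sign
# log(m_gamma) terms mimic the real calculation; the constants are made up.
import sympy as sp

m = sp.symbols('m_gamma', positive=True)       # regulator photon mass
E, E_min = sp.symbols('E E_min', positive=True)

virtual = sp.log(m / E) + sp.Rational(3, 4)    # toy loop contribution
real = -sp.log(m / E_min) + sp.Rational(1, 2)  # toy soft-emission contribution

total = sp.logcombine(virtual + real)          # the log(m_gamma) pieces cancel
print(total)  # log(E_min/E) + 5/4 -- m_gamma has dropped out entirely,
              # so sending the regulator mass to zero is now harmless
```

Notice that the finite remainder still depends on E_min, the detection threshold, which is exactly the point made next.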
However, including the extra diagram does have an effect on the result. In particular, the rate for the production of muon pairs ends up depending on the energy limit for detecting photons. The lower the photon energy we can observe, the lower the rate for the production of muon pairs and nothing else. Stated that way, it seems somewhat obvious, but it's the type of thing to bear in mind. It illustrates the need to define so-called infrared-safe observables, which are unaffected by the presence of low-energy, unobservable states. Here, the infrared-safe observable is the inclusive rate for producing two muons plus any number of low-energy photons; the exclusive rate for producing two muons and nothing else is simply ill-defined.
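A toy numerical illustration of this threshold dependence (the constants are invented; only the logarithmic trend is meaningful):

```python
# The soft-photon spectrum behaves like dP ~ k dE/E, so the probability of
# emitting a *detectable* photon between E_min and E_max grows like
# log(E_max/E_min). Events with such a photon no longer count as
# "two muons and nothing else", so the exclusive rate falls as the
# detection threshold E_min is lowered. k and E_max are toy values.
import math

k = 0.01       # toy prefactor standing in for the QED coupling factors
E_max = 10.0   # upper energy scale of the process (arbitrary units)

for E_min in [1.0, 0.1, 0.01, 0.001]:
    p_visible = k * math.log(E_max / E_min)  # prob. of a detectable photon
    exclusive_rate = 1.0 - p_visible         # "muons and nothing else"
    print(f"threshold E_min = {E_min:7.3f} -> exclusive rate ~ {exclusive_rate:.3f}")

# As E_min -> 0 the logarithm diverges: without a threshold the exclusive
# rate is not a sensible observable at all.
```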
In QED this is something of a minor curiosity; the numerical effect is small in practice. But in QCD it can be very important. The coupling strength in QCD is very large at low energies, so the probability of emitting low-energy gluons from quarks is very high. Of course, we don't measure quarks and gluons directly, but rather composite particles made out of them. At experiments like the LHC these appear as jets of particles, roughly following the direction of the original quark or gluon. Jet algorithms are rigorous ways to define jets, and infrared safety is an essential characteristic. Somewhat embarrassingly, one of the earliest candidates, the cone algorithm, is not infrared safe. We now have multiple algorithms that are, so this is not a problem; but jet substructure techniques also require care to ensure they do not make the same mistake.
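To see what infrared unsafety looks like in practice, here is a deliberately stripped-down, one-dimensional caricature of a seeded cone algorithm (the code and numbers are illustrative toys, not any real jet software):

```python
# Toy 1D illustration of the seeded-cone infrared-safety problem.
# Particles live on a line (think rapidity) and carry an energy; a "stable
# cone" of radius R contains every particle within R of its energy-weighted
# centroid. This is a caricature of a seeded cone algorithm, nothing more.

R = 1.0

def stable_cones(particles):
    """Iterate each particle as a seed to a stable cone; return the set of
    distinct stable cones, each as a tuple of member positions."""
    cones = set()
    for seed_y, _ in particles:
        centroid = seed_y
        for _ in range(100):  # iterate the centroid until it stabilises
            members = [(y, e) for y, e in particles if abs(y - centroid) <= R]
            new_centroid = sum(y * e for y, e in members) / sum(e for _, e in members)
            if abs(new_centroid - centroid) < 1e-9:
                break
            centroid = new_centroid
        cones.add(tuple(sorted(y for y, _ in members)))
    return cones

hard = [(0.0, 100.0), (1.8, 100.0)]  # two hard particles, 1.8 apart
soft = hard + [(0.9, 1e-6)]          # the same event plus one very soft particle

print(stable_cones(hard))  # two separate cones, one per hard particle
print(stable_cones(soft))  # an extra stable cone now contains BOTH hard particles
```

The arbitrarily soft extra particle seeds a new stable cone containing both hard particles, so the hard jet structure depends on an emission carrying essentially no energy; a perturbative calculation of such an observable inherits exactly the uncancelled infinities discussed above.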
1. To be technically careful, we first give the photon a small mass; this explicitly removes the infrared infinities. Then we combine all the relevant contributions. Finally, we send the photon mass to zero, and note that doing so returns a finite result.↩