## Wednesday, 27 June 2012

### How Constrained is Constrained SUSY?

It has become standard lore in the theoretical physics community that the LHC is already on the edge of ruling out Supersymmetry (SUSY).  The reason is quite simple: the standard argument for SUSY, the hierarchy problem, suggests that the supersymmetric partners (superpartners) of the Standard Model particles should have masses of less than about one thousand GeV (where the proton has a mass of about one GeV).  The LHC has not found those partners, and has published exclusion plots like this one:
[Figure: ATLAS LHC limits on Supersymmetry; stolen from Michael Kobel's talk at Planck 2012.]
The different coloured lines correspond to the limits from different types of signals that could have been seen.  The areas below the lines are ruled out.  The coloured regions were either ruled out from earlier direct searches or theoretically.  The grey dashed lines correspond to superpartner masses in GeV; horizontally for the gluon superpartner (the gluino), vertically for the quark superpartners (the squarks).  Note that the regions for masses less than one thousand GeV are almost entirely within the excluded region.

Now, there are a number of caveats, and a lot of work has been done in the last year to eighteen months exploring ways to get around these restrictions.  However, a recent paper by Balazs and his collaborators went back and examined the simplest situation more rigorously, and suggested that the LHC results have not actually had that much effect on the allowed parameter space.  How did they conclude this?  Join me below the fold!

To explain what's going on here, I'll need to describe the theory of SUSY in a bit more detail.  I want to construct the smallest supersymmetric theory possible.  I must include the Standard Model (since we've found that!), so my result will be the Minimal Supersymmetric Standard Model, or MSSM.  The only new particles I add are the superpartners of the Standard Model¹, and the only interactions I add are those needed to preserve SUSY.  Doing this adds only one unmeasured parameter, which is essentially the Higgs mass².  Unfortunately, the result is a theory that is ruled out by the failure to find, for example, the superpartner of the electron at the same mass as the electron!  That is, as I noted in an earlier post, we must break SUSY.

Breaking SUSY is a bit tricky.  I need to break it in a way that keeps the good features, such as the solution to the hierarchy problem, that motivated adding supersymmetry in the first place.  It turns out that the way to do this is to add terms that are relevant at low energies.  Then, at high energies I have a supersymmetric theory; and at low energies a broken theory.  Because temperature is just a form of energy, I can think of this as a phase transition: the Universe cooled from a hot "gassy" supersymmetric phase to a cold "liquid" state like the one we observe.

So, to complete the MSSM I add all possible terms that only break SUSY at low energies, without adding any new particles.  How many new parameters do I get?

One hundred and five!

That's far too many to be amenable to analysis.  What we have to do instead is find some simple model that "predicts" those 105 numbers.  As might be expected, a lot of different models have been looked at over the years; but one of the most popular is the Constrained MSSM, or cMSSM.  This model is based on the recognition that the 105 parameters can be grouped into:

• Masses for the superpartners of the Standard Model quarks and leptons (squarks and sleptons);
• Masses for the superpartners of the photon, gluon, W and Z (the gauginos);
• Couplings between the Higgs and the squarks and sleptons;
• A couple of extra Higgs parameters.
The cMSSM then assumes that a single number defines each of the first three types of parameters; the last point contributes one number and a sign³.
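For concreteness, those five inputs are conventionally a common scalar mass, a common gaugino mass, a common trilinear coupling, one extra Higgs parameter, and the leftover sign.  Here is a minimal sketch of that parameter space as a data structure; the field names (m0, m12, a0, tan_beta, sign_mu) are conventional labels used for illustration, and the example numbers are invented:

```python
from dataclasses import dataclass


@dataclass
class CMSSMPoint:
    """One point in the cMSSM parameter space.

    Conventional labels, used here for illustration:
    m0       -- common squark/slepton (scalar) mass at the high scale, in GeV
    m12      -- common gaugino mass at the high scale, in GeV
    a0       -- common Higgs-sfermion trilinear coupling, in GeV
    tan_beta -- extra Higgs-sector parameter (a pure number)
    sign_mu  -- the leftover sign (+1 or -1)
    """
    m0: float
    m12: float
    a0: float
    tan_beta: float
    sign_mu: int

    def __post_init__(self):
        # The last input really is just a sign, not a continuous parameter.
        if self.sign_mu not in (-1, +1):
            raise ValueError("sign_mu must be +1 or -1")


# An illustrative (made-up) point, with masses around the TeV scale:
point = CMSSMPoint(m0=800.0, m12=600.0, a0=0.0, tan_beta=10.0, sign_mu=+1)
print(point)
```

Five numbers in place of 105 is what makes scans of this model tractable, and also what makes it so easy to exclude.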

The simplicity of the cMSSM has led to it being the most common way for experimenters to present search limits on SUSY.  Such is the case for the example I used above; indeed, the horizontal (vertical) axis is the squark (gaugino) mass.  The simplicity also makes this model the most vulnerable to LHC searches.  Which leads us back to the comments I made in the first paragraph: the fact that superpartners haven't turned up at the LHC seems close to ruling out (this version of) supersymmetry.

--

However, the LHC is far from the only, or even the first, experiment to place limits on the cMSSM.  In addition to direct searches from earlier experiments, there are a number of indirect constraints based on precision measurements of the W and Z, and even some low energy constraints based on rare decays.  One of the most important constraints relates to the Higgs.  Remember that the advantage of symmetries is that they restrict the forms of our theories.  Supersymmetry restricts the form of the Higgs sector, leading to the following prediction:
The Higgs should be lighter than the Z boson.
The mass of the Z is about 90 times that of the proton.  Direct searches at LEP force the Higgs to be heavier than about 114 times the mass of the proton.  Oops.

Obviously, there is a catch.  The prediction above is subject to quantum corrections, and those corrections turn out to be quite strong.  In particular, the large top mass means a large coupling between the Higgs and the top (remember that the top mass comes from its coupling to the Higgs).  This leads to a large coupling between the Higgs and the superpartners of the top, the stops; and this leads to the large quantum corrections.

The fact that the Higgs-stop interactions give the needed boost to the Higgs mass means that the stop cannot be very heavy.  Intuitively, if the stop is heavy, it is harder to produce; this is true even if the production is of a "virtual" stop in a vacuum fluctuation.  So the combination of the unobserved Higgs and the assumption of supersymmetry predicts a relatively light stop, independent of anything else.

However, if the stop is light then it can also lead to quantum corrections in other areas.  In particular, it can affect the couplings and masses of the W and Z gauge bosons.  These parameters were measured with great precision at LEP, and have only been slightly improved on elsewhere.  Those observations agree very well with the Standard Model, so the supersymmetric corrections must be small; which in turn suggests the superpartners are heavy.  This tension between the two predictions is known as the Little Hierarchy Problem.

The Little Hierarchy is not insurmountable.  But it does force our theories to live in small regions of parameter space.  And this, finally, is where Balazs et al come in.  Their perspective was to use a Bayesian approach to ask how much less credible the cMSSM is as a result of various experimental results.  This rests on Bayes' Theorem, of course:
$p (T | D) = \frac{p(D | T) p(T)}{p(D)}$
Here, p(T | D) is the confidence we should have in the theory T as a result of the data D; p(D | T) is the probability, given the theory, of getting the data; p(T) is our prior confidence in the theory; and p(D) is an incalculable normalisation.
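As a toy illustration of how the update works (all numbers here are invented, not taken from the paper), here is a sketch in Python comparing two hypothetical theories given the same data:

```python
# Toy Bayes-theorem update between two hypothetical theories, T1 and T2.
# All probabilities below are invented purely for illustration.
p_D_given_T1 = 0.1   # likelihood of the data under theory 1
p_D_given_T2 = 0.5   # likelihood of the data under theory 2
p_T1 = p_T2 = 0.5    # equal prior credence in each theory

# p(D) can only be formed by summing over the theories we choose to
# consider, which is why only *relative* statements are possible.
p_D = p_D_given_T1 * p_T1 + p_D_given_T2 * p_T2

p_T1_given_D = p_D_given_T1 * p_T1 / p_D
p_T2_given_D = p_D_given_T2 * p_T2 / p_D
print(p_T1_given_D, p_T2_given_D)  # approximately 0.167 and 0.833
```

The data shift the odds by the ratio of the likelihoods (here a factor of five), which is exactly the "Bayes factor" that appears in the analysis below.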

Actually, this is not the exact form that they used.  First, they followed a standard method to take into account the parameter space of the model, writing the probability of the data given the theory as an integral over the model's parameters:

$p(D | T) = \int p(D | \theta, T) \, p(\theta | T) \, d\theta$

where $\theta$ represents the five parameters of the cMSSM, and $p(\theta | T)$ is a prior over them.  This requires making some theoretical prediction for what the parameters should be, and two well-motivated examples were considered.  The first is a Logarithmic set of priors; these are scale-free, that is, the probability of a parameter being between 1 and 10 (in some units) is the same as it being between 10 and 100.⁴  The second choice was a Natural one, which corresponds to preferring parameters that don't require large cancellations to give the correct Higgs mass.
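To see how such an evidence integral can be handled in practice, here is a minimal Monte Carlo sketch in Python with a single made-up parameter: a log (scale-free) prior between assumed limits, and a toy Gaussian stand-in for the likelihood.  None of the numbers correspond to the actual analysis:

```python
import math
import random

random.seed(0)


def log_prior_sample(lo=1.0, hi=1000.0):
    """Draw from a scale-free (log-uniform) prior between lo and hi:
    equal probability per decade, made proper by the finite limits."""
    u = random.random()
    return math.exp(math.log(lo) + u * (math.log(hi) - math.log(lo)))


def likelihood(theta, mu=125.0, sigma=10.0):
    """Toy Gaussian stand-in for p(D | theta, T), for illustration only."""
    return math.exp(-0.5 * ((theta - mu) / sigma) ** 2) / (
        sigma * math.sqrt(2.0 * math.pi)
    )


# p(D|T) = integral of p(D|theta,T) p(theta|T) d(theta), estimated as
# the average of the likelihood over draws from the prior.
n = 100_000
evidence = sum(likelihood(log_prior_sample()) for _ in range(n)) / n
print(f"estimated evidence: {evidence:.2e}")
```

With five real parameters instead of one, and a likelihood that requires running a full spectrum calculator at each point, the same average becomes the computationally costly step mentioned below.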

By integrating over the parameters of the model, these authors could make a statement about the credibility of the cMSSM as a whole.  This is something that is difficult, if not impossible, to do with Frequentist methods.  However, the integral is computationally costly, which is the reason this is not done so often.  Also, because we can not calculate the denominator p(D), we can not make statements in an absolute sense; all we can do is compare models.  Balazs et al chose to compare the cMSSM (with the two different sets of priors) to the Standard Model, and also to compare the cMSSM with different sets of experimental data.  The essential results are given in this table:
[Table: Results of Balazs et al.  The most important columns are the two labelled B, which give the Bayes factor, both for each individual set of data and cumulatively; and the last column.  XENON is a Dark Matter search.]
It is simplest to look at the last column, which interprets the numbers using standard terminology.  What we see is that, for either choice of prior, the direct LHC searches don't have much effect on the credibility of the cMSSM.  In particular, the non-appearance of SUSY so far should have been expected based on the existing Higgs searches.  Oh, and taking into account both LEP and LHC Higgs searches, the cMSSM is indeed pretty badly ruled out.
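The "standard terminology" in that last column is typically a Jeffreys-style scale for the Bayes factor B.  Here is a sketch of such a mapping; the cut-offs below are one common convention, not necessarily the exact ones used in the paper:

```python
import math


def jeffreys_label(bayes_factor):
    """Verbal strength of evidence for a Bayes factor B, using one common
    set of Jeffreys-style cut-offs on |ln B|.  B < 1 disfavours the model,
    B > 1 favours it; the label describes the strength either way."""
    ln_b = abs(math.log(bayes_factor))
    if ln_b < 1.0:
        return "inconclusive"
    elif ln_b < 2.5:
        return "weak"
    elif ln_b < 5.0:
        return "moderate"
    else:
        return "strong"


print(jeffreys_label(0.5))    # odds barely shifted
print(jeffreys_label(0.005))  # odds shifted by a factor of 200
```

On a scale like this, a search only "rules out" a model once the cumulative Bayes factor has driven |ln B| past the strong-evidence threshold, which is why the individual LHC search columns move the needle so little.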

At least, that's how I see it.

1. And some extra Higgses for technical reasons I won't go into here.
2. I am brushing over some issues related to ensuring protons don't decay, that amount to setting a lot of new parameters to zero.
3. There are some additional assumptions about structure that I'm brushing over.  Also, all 105 parameters receive quantum corrections that are vital for the phenomenology; again, this is not the post to go into that.
4. This is modified by the need to include upper and lower limits, so that the integral is finite.