2:00 pm: Dark Matter at Colliders, Lian-Tao Wang
We start with an (invited) mini-review of what will serve as the topic for this particular parallel session. Like nearly everyone here, Lian-Tao is discussing WIMPs; like the longer talks, he's motivating them over other ideas. Of course, DM at colliders ultimately requires something WIMP-like, so that's a more practical motivation.
The standard story from ~10 years ago is that DM is embedded in SUSY (or a SUSY-like theory). While you have the characteristic MET signal, you also have production of coloured objects with large cross sections. Of course, that hasn't happened (yet).
Then we have the mono-X signals. These are arguably the modern standard search. Of course, the effective operator perspective is ... iffy at the LHC and similar machines. The momentum transfer at the LHC is large, so we can probe the structure of the effective operator unless the new physics is very heavy.
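A standard way to make this concrete (schematic, with generic quark and DM couplings g_q, g_χ; not the speaker's notation): integrating out an s-channel mediator of mass M gives

```latex
\[
  \frac{g_q\, g_\chi}{Q^2 - M^2}
  \;\xrightarrow{\;Q^2 \ll M^2\;}\;
  -\frac{g_q\, g_\chi}{M^2} \equiv -\frac{1}{\Lambda^2}\,,
\]
```

so the contact operator with scale Λ is only a good description while the momentum transfer Q stays well below M, and LHC events routinely violate that unless M is very heavy.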
Even when we UV complete to realistic models and study those, there is an issue. For example, with an s-channel mediator, old Tevatron constraints based on searching for the mediator (dijets) are stronger than the LHC monojet limits can ever be. t-channel mediators have their own issues, including FCNCs and additional monojet channels.
In general, mediator searches give the strongest limits.
Additionally, "simple" models pair DM with new forces as mediators. Why not use SM mediators instead?
Then SUSY offers a selection of candidates that have SM mediators (W, Z, Higgs). One notable thing is that all the SUSY scenarios that get the correct relic density tend to feature small mass splittings (compressed spectra), with correspondingly less MET. So we return to a mono-X search.
We now have mono-X but with no mediators to search for. Limits here aren't that good; we can't directly probe above a few hundred GeV at the LHC. Can do better with other searches, e.g. disappearing tracks from Winos.
2:30 pm: Collider Searches for Dark Matter in the Mono-Everything Search Channels, Linda Carpenter
"Exaggeration": Mono-(W,Z,H,γ) channels. These offer the best hope for correlations with astrophysical (ID) searches. An enumeration of effective operators followed by recasting of prior searches. Looking at production through an s-channel boson.
Gauge invariance relates different mono-EW boson channels. e.g. coupling to Higgs kinetic term gives interaction strengths for W and Z proportional to masses.
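To illustrate the mass-proportionality (standard electroweak symmetry breaking in unitarity gauge, not the talk's specific operators): the Higgs kinetic term contains

```latex
\[
  |D_\mu H|^2 \supset
  \frac{2 m_W^2}{v}\, h\, W^+_\mu W^{-\mu}
  + \frac{m_Z^2}{v}\, h\, Z_\mu Z^\mu\,,
\]
```

so a DM operator built from the Higgs kinetic term inherits hWW and hZZ couplings fixed in the ratio 2m_W² : m_Z², tying the mono-W and mono-Z rates together.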
Constraints come from interesting places, e.g. ZZ with one Z decaying invisibly.
Finally looking at combining LHC limits with Fermi excess. Can exclude some, not all models.
Strong constraints from mono-W models. Constraints from mono-Higgs (Higgs portal); some better, some worse than DD.
That talk ran late, so I'll need to skip the next talk to get to the session I want to go to. It turns out that the other session was also running slow!
3:00 pm: Checkmating Your Favourite BSM Model, Jamie Tattersall
Goal: a program that takes a model and tells you if it's ruled out or not. Not there yet: it needs event files and cross sections as input. Uses a modified Delphes as the detector simulator. Everything is tweaked to reproduce the experimental collaborations' results for tagging efficiencies, reconstruction etc.
Can be run on all or a subset of analyses. Automatically decides what final states are needed.
Written in C++ with common structure so new analyses can be added.
Trigger efficiencies do need to be set by hand. Output is number of signal events.
Currently biased towards ATLAS searches, because of need to match tunings. 11 searches so far, covering SUSY-type and MET-type searches.
Can directly compare to expt 95% limits or compute CLs explicitly if desired.
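A minimal sketch of what an explicit CLs computation involves for a one-bin counting experiment with no systematics (toy numbers; not CheckMATE's actual implementation):

```python
import math

def poisson_cdf(n, mu):
    """P(N <= n) for N ~ Poisson(mu)."""
    return sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n + 1))

def cls(n_obs, b, s):
    """CLs = CL_{s+b} / CL_b for a single counting bin.

    Each CL is the Poisson probability of observing <= n_obs events
    under the signal+background or background-only hypothesis.
    The signal point is excluded at 95% CL when CLs < 0.05.
    """
    return poisson_cdf(n_obs, b + s) / poisson_cdf(n_obs, b)
```

For example, with 5 expected background events, observing 3 while predicting 8 signal events gives CLs well below 0.05, i.e. exclusion; the ratio construction protects against excluding signals one has no sensitivity to when the background fluctuates low.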
The places where analyses do not quite match the experiments are noted in the manuals; agreement is typically within 10%.
3:15 pm: MadMax - Tracking Regions of Significance, Peter Schichtel
Given a model, is it observable and where?
3:30 pm: Model-Independent Searches with Background Matrix Elements, Jamie Gainer
Looking for signal when we don't know what it is. Are most LHC searches optimised for particular signatures that are not generic?
The optimal discriminant is the likelihood ratio (Neyman-Pearson lemma). If we don't know the signal, we can't do this. Perhaps we can use the background likelihood as a test statistic? Essentially looking for things that don't show up just from backgrounds. Isn't this the usual type of thing?
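A toy version of the contrast being drawn, for a hypothetical 1D observable (my example, not the speaker's): with a known signal model the likelihood ratio is the optimal discriminant; without one, the fallback is to rank events by the background likelihood alone.

```python
import math

# Toy observable: background ~ Exp(1), hypothetical signal bump at x = 3.
def p_bkg(x):
    return math.exp(-x)  # Exp(1) density, x >= 0

def p_sig(x, mu=3.0, sigma=0.3):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Neyman-Pearson: if the signal were known, cut on the likelihood ratio.
def lhr(x):
    return p_sig(x) / p_bkg(x)

# Signal unknown: use p_bkg(x) itself as the test statistic -- events with
# small background likelihood sit where backgrounds rarely fluctuate.
```

A signal-like event at x = 3 scores high on the likelihood ratio and low on the background likelihood, so both statistics flag it; the second needs no signal model.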
Try to make flat background distributions. Then signals become easy to see. "Rank" background simulations, possibly in multi-dimensional space. Or reweight by some background expectation.
So I think the take-home message of this talk is:
- We want to express signals in ways where the background is flat, to make signals obvious;
- There are several ways this can be done;
- Some toy examples to illustrate this, quite dramatically actually.
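The flattening idea above can be sketched with toy distributions (my own, not the talk's examples): applying the background CDF to each event — the probability integral transform — maps a background-only sample onto a uniform distribution, so a signal bump becomes a localized excess over a flat floor.

```python
import math
import random

random.seed(42)

def bkg_cdf(x):
    """CDF of the assumed Exp(1) background model."""
    return 1.0 - math.exp(-x)

# Background-only sample: after u = F_bkg(x), it is uniform on [0, 1].
bkg = [random.expovariate(1.0) for _ in range(20000)]
u_bkg = [bkg_cdf(x) for x in bkg]

# Background plus a narrow hypothetical signal bump at x ~ 3.
sig = [random.gauss(3.0, 0.1) for _ in range(1000)]
u_all = [bkg_cdf(x) for x in bkg + sig]

def frac_in(u, lo, hi):
    """Fraction of transformed events landing in (lo, hi)."""
    return sum(lo < v < hi for v in u) / len(u)
```

Each tenth of [0, 1] then holds ~10% of the flattened background, while the signal piles up near u = F_bkg(3) ≈ 0.95 as an obvious excess.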
A tool and a language. The tool aims to automate model comparison against LHC data, gluing onto the end of the usual event generator chain. Short standalone software (written in Perl). If Delphes is the theorist's GEANT, this aims to be the theorist's ROOT.
Reads LHCO files, filters kinematics etc, jet clustering, even more complex stuff like MT2.
A meta-language that describes cuts. It separates specific usage from general functionality; a cut list is ~20 to 30 lines. Importantly, it allows recursive definition of cuts, e.g. all leptons satisfy criterion 1 and the i-th hardest (i = 1, ..., N) satisfies an additional criterion 1_i. Or even more complex constructions. Cuts can be applied to any subset of detector objects, or to the event as a whole (MET etc.).
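The tiered lepton cut described above can be sketched in Python (a loose analogy — the actual tool is Perl with its own syntax, and the thresholds and field names here are hypothetical):

```python
def passes(leptons, base_pt=10.0, hard_pts=(50.0, 30.0)):
    """Toy tiered cut: every lepton must have pT > base_pt, and the
    i-th hardest lepton (i = 1..N) must additionally satisfy
    pT > hard_pts[i-1]. All numbers are illustrative."""
    pts = sorted((lep["pt"] for lep in leptons), reverse=True)
    if any(pt <= base_pt for pt in pts):
        return False          # base criterion on all leptons
    if len(pts) < len(hard_pts):
        return False          # not enough leptons for the tiered cuts
    return all(pt > thr for pt, thr in zip(pts, hard_pts))

evt_pass = [{"pt": 60.0}, {"pt": 35.0}, {"pt": 12.0}]
evt_fail = [{"pt": 60.0}, {"pt": 25.0}]  # second-hardest below 30
```

The point of the meta-language is that such nested conditions are declared in a few lines of cut list rather than coded by hand for each analysis.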