We come to the final session of the conference, which has only a single talk scheduled. (Plus final thanks I'm sure.)
4:00 pm: Collider Phenomenology, hypotheses and measurements, Michael Spannowsky
Several approaches that we can take right now: SUSY, composite Higgs, effective operators. Whatever we choose, modern analyses use simplified models.
The whole community is emotionally affected by the fact that our data show no sign of the naturalness we were promised! Plus, perhaps we have oversold what can be learned from the Higgs boson. We essentially found it sitting in a single bin, but that by itself tells us nothing about the underlying physics.
The interpretation of any measurement is model-dependent, and interpretation requires communication between different scales. Increasing model complexity also means increasing model flexibility. Our study of Higgs physics in Run 1 has mostly used the very simple κ framework, modelling the Higgs couplings as simple rescalings of their SM values.
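For concreteness, a minimal sketch of what the κ framework means (my notation, not from the talk): every coupling is rescaled by a factor κ, and signal strengths are built from products of these factors,

$$\kappa_i \equiv \frac{g_{hii}}{g_{hii}^{\rm SM}}, \qquad \mu(gg \to h \to \gamma\gamma) \simeq \frac{\kappa_g^2\,\kappa_\gamma^2}{\kappa_h^2},$$

where κ_h² rescales the total width. No new kinematic structures are allowed, which is exactly the sense in which the framework is "very simple".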
There has also been a (somewhat long-running) struggle to settle on a suitable unified language/basis for the Higgs EFT: different bases, how many operators, the validity of measurements and of the theory. Even in the flavour-blind case there are 59 independent dimension-six operators, though many are already constrained by EW physics. Focusing on the operators relevant to new Higgs physics leaves 8 operators.
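Schematically (my sketch of the standard dimension-six expansion, not the speaker's slides), the Higgs EFT is

$$\mathcal{L}_{\rm eff} = \mathcal{L}_{\rm SM} + \sum_i \frac{c_i}{\Lambda^2}\,\mathcal{O}_i + \mathcal{O}\!\left(\Lambda^{-4}\right), \qquad \text{e.g. } \mathcal{O}_H = \partial_\mu (H^\dagger H)\, \partial^\mu (H^\dagger H),$$

and the basis ambiguity is essentially which independent set of operators to keep after using field redefinitions and the equations of motion.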
Importantly, the kinematic distributions themselves set the range of validity of the EFT. Deviations set limits on the Wilson coefficients, while other observations more or less bound the new-physics scale Λ. This lets us properly understand where the constraints lie in the EFT parameter space.
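The underlying point, as I understand it: a dimension-six deviation typically grows with the energy probed, schematically

$$\frac{\delta\sigma}{\sigma_{\rm SM}} \sim \frac{c\,E^2}{\Lambda^2},$$

so the tail of a kinematic distribution both drives the sensitivity to c/Λ² and tells you the scale E at which the expansion must still make sense.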
However, the LHC probes a range of scales, generally low compared to Λ. We therefore need to run the Wilson coefficients down from Λ to the measurement scale, which is complicated precisely because different measurements sit at different scales. Compare flavour physics, where there is typically a single scale, e.g. the B meson mass. Importantly, the Wilson coefficients both run and mix with scale: for example, a contribution to the T parameter can be present at the high scale but absent at the low scale. There are already some efforts to place limits, though some of these constraints lie in a region where the EFT is dubious, at least as simply presented.
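Schematically (the standard one-loop RG form, not a result shown in the talk), the coefficients obey

$$\mu \frac{d c_i}{d\mu} = \frac{1}{16\pi^2}\,\gamma_{ij}\,c_j,$$

and the off-diagonal entries of γ are the mixing: an operator set to zero at Λ can be generated by the time you run down to the scale of the measurement.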
Simplified models are another approach, suitable when the new particles are relatively light. Examples include a singlet (Higgs portal), 2HDM variants, and triplet models (Georgi-Machacek). Constraints from direct searches already push these towards the alignment or decoupling limits. Georgi-Machacek is interesting because it has no decoupling limit, so it can be truly excluded at the LHC.
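As an example of the first case, a minimal Higgs-portal sketch (a real singlet S with a Z₂ symmetry, my notation) is just

$$\mathcal{L} \supset \frac{1}{2}(\partial_\mu S)^2 - \frac{1}{2} m_S^2 S^2 - \lambda_{hs}\, S^2\, H^\dagger H,$$

with λ_hs opening an invisible decay h → SS whenever m_S < m_h/2.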
Dark matter searches involve a similar question: EFT or simplified model?
The interpretation of results depends on the language used. Example: the CMS width measurement, which combines on-shell and off-shell cross-section measurements and claims a model-independent bound on the Higgs width. But this only actually works in the κ framework or an EFT. Simplified models can contribute on-shell only (e.g. a Higgs portal), or new light scalars can cancel the on-shell enhancement. In full UV models the Higgs width is not a free parameter, so the constraint is not of general use.
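The logic of that measurement, roughly:

$$\sigma_{\rm on\text{-}shell} \propto \frac{g_{\rm prod}^2\, g_{\rm dec}^2}{\Gamma_h}, \qquad \sigma_{\rm off\text{-}shell} \propto g_{\rm prod}^2\, g_{\rm dec}^2,$$

so the ratio pins down Γ_h only if the same couplings control both regimes, as in the κ framework or an EFT with no new light states. A portal decay inflates Γ_h without touching the off-shell rate, and extra light scalars can modify one regime independently of the other, which is why the bound is not model-independent.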
Also, LEP measurements interpreted in the same way give much stronger bounds than the LHC ever will.
The matrix element method as a tool for jet substructure, in contrast to its previous use as a hypothesis-testing tool. Computational challenges all over the place. It uses a parton shower with Sudakov factors and splitting functions; the idea seems to be to calculate a library of these for signals and backgrounds. This improves on tagging efficiency by a factor of 2 to 4, and is relatively insensitive to pile-up. Applications: discrimination of dijets, ditops and ditop resonances.
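As I understood the construction (my paraphrase, not the speaker's notation), each jet's constituent configuration is assigned probabilities under signal and background shower histories, and the discriminant is a likelihood ratio of the form

$$\chi(\{p\}) = \frac{\sum_{\rm histories} P(\{p\} \mid {\rm signal})}{\sum_{\rm histories} P(\{p\} \mid {\rm background})},$$

where each history's probability is built from the Sudakov factors and splitting functions mentioned above; the precomputed library is what makes summing over histories tractable.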
Summary: optimising data analysis and interpretation must be a primary goal at the LHC. There is always a trade-off between generality and precision.
Question
Why is it stable against pile-up? It uses smaller jets; pile-up contamination goes like R², so this makes things much better. There is also an inbuilt pruning procedure to assign subjets to ISR/FSR.
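That is, the pile-up contamination added to a jet scales with its catchment area, roughly

$$\Delta p_T^{\rm PU} \approx \rho\,\pi R^2,$$

with ρ the pile-up energy density per unit area, so a smaller R helps quadratically.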