Stefan Schaefer started the first session with a review talk about pion physics and progress in simulation algorithms in lattice QCD. The low-energy physics of pions is described by Chiral Perturbation Theory (χPT), which is an effective field theory with a number of "low-energy constants" that need to be determined from the underlying ("high-energy") theory, i.e. from QCD. In order to determine these constants from lattice simulations, one needs to be able to simulate at low pion masses, so that one can establish the link to the domain of validity of χPT. This is where algorithmic developments come in, since ten years ago simulations at those quark masses would have been considered impossible, even given today's computers. The reason is that the effort needed for a lattice simulation grows rapidly with decreasing quark masses; in order to curtail that growth, new algorithms were needed. One such algorithm is the domain-decomposed Hybrid Monte Carlo (DD-HMC) algorithm by Martin Lüscher; other choices include the Rational Hybrid Monte Carlo (RHMC) algorithm. Further improvements have been obtained using "deflated" solvers for the Dirac equation, which separate the low-lying (infrared) modes from the ultraviolet ones, and thus manage to speed up calculations considerably.
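The gain from deflation can be illustrated with a small toy problem (a numpy sketch, not any production lattice code; the matrix and sizes below are invented): if the few lowest eigenmodes of a positive-definite matrix are solved exactly, the iterative solver only ever sees the well-conditioned remainder.

```python
import numpy as np

rng = np.random.default_rng(0)

def cg(A, b, tol=1e-10, maxit=1000):
    """Plain conjugate gradient for a symmetric positive-definite A."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rr = r @ r
    for _ in range(maxit):
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

# Toy stand-in for the (Hermitian, positive-definite) normal Dirac
# operator: a random SPD matrix with a few very small eigenvalues
# playing the role of the infrared modes that make solvers slow at
# light quark masses.
n, k = 200, 8
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
eigs = np.concatenate([np.full(k, 1e-4), rng.uniform(1.0, 2.0, n - k)])
A = (Q * eigs) @ Q.T
b = rng.standard_normal(n)

# Deflation: handle the k lowest (infrared) modes exactly; CG then only
# sees the well-conditioned remainder and converges in a handful of
# iterations instead of thousands.
w, V = np.linalg.eigh(A)
Vk = V[:, :k]
x_low = Vk @ ((Vk.T @ b) / w[:k])   # exact solve in the low-mode subspace
r_perp = b - Vk @ (Vk.T @ b)        # right-hand side with low modes removed
x = x_low + cg(A, r_perp)

assert np.linalg.norm(A @ x - b) < 1e-6
```

In a real lattice code the low modes are of course not obtained by a full diagonalisation, but by an approximate (e.g. inexact or local-coherence) eigensolver; the split of the problem is the same.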

Next was Takashi Kaneko speaking about disconnected quark loops. Many quantities are difficult to measure on the lattice, because they involve not only quark lines going from source to sink, but also quark loops attached only to one side, so-called disconnected diagrams. These are very hard to measure, since they require an "all-to-all" quark propagator, i.e. in principle the complete inverse of the Dirac operator on a given gauge background. A number of techniques have been developed to deal with this problem, all of which revolve around the idea of estimating the inverse of the Dirac operator stochastically, with a variety of methods devised to beat down the resulting noise in order not to lose the signal completely.
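The basic stochastic trick can be sketched in a few lines of numpy (a random toy matrix stands in for the Dirac operator; no lattice specifics are implied): for noise vectors η with unit variance and independent components, the average of η†D^{-1}η converges to Tr D^{-1}, which is what a single disconnected loop requires.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "Dirac operator": any well-conditioned invertible matrix stands
# in for D on a given gauge background.
n = 100
D = np.eye(n) + 0.05 * rng.standard_normal((n, n))

# Stochastic (Hutchinson-type) estimate of Tr D^{-1}: for Z2 noise
# vectors eta, E[eta eta^T] = 1, so E[eta^T D^{-1} eta] = Tr D^{-1}.
# On the lattice one would solve D x = eta with an iterative solver
# rather than ever forming the inverse.
n_noise = 2000
est = 0.0
for _ in range(n_noise):
    eta = rng.choice([-1.0, 1.0], size=n)   # Z2 noise vector
    x = np.linalg.solve(D, eta)             # propagator applied to noise
    est += eta @ x
est /= n_noise

exact = np.trace(np.linalg.inv(D))
assert abs(est - exact) / abs(exact) < 0.05
```

The noise-reduction methods mentioned in the talk (dilution, hopping-parameter expansions, truncated solvers and the like) all aim to shrink the variance of exactly this kind of estimator.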

The last talk of the session was Dru Renner talking about the calculation of the leading-order hadronic contribution to the anomalous magnetic moment of the muon from lattice QCD. The anomalous magnetic moment of the muon shows a well-known and persistent 3σ discrepancy between theory and experiment; however, the theory error is dominated by the error on the hadronic contributions, which are currently estimated phenomenologically from the cross-section e^{+}e^{-} → hadrons. A lattice calculation would provide a first-principles prediction for the hadronic contribution. The leading hadronic contribution is given by the photon self-energy insertion, which can be measured as a current-current correlator on the lattice. Low-momentum contributions are dominant, however, and on a finite lattice there is a lower bound on the momenta, so that the accuracy of the lattice prediction is still limited at present. Nevertheless, its good agreement with the phenomenological number provides a valuable cross-check. Determining the next-to-leading hadronic contribution will involve measuring light-by-light scattering amplitudes on the lattice, which is as yet a completely unsolved problem.
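The momentum restriction is easy to quantify (the lattice size and spacing below are merely illustrative numbers, not from the talk): on a periodic lattice of extent L·a the smallest non-zero momentum is 2π/(L·a), while the integrand for the leading hadronic contribution peaks near Q² ~ m_μ².

```python
import numpy as np

hbar_c = 0.1973  # GeV*fm, converts inverse femtometres to GeV

def p_min(L, a):
    """Smallest non-zero momentum (in GeV) on a periodic lattice of
    L sites with spacing a (in fm): p_n = 2*pi*n/(L*a), n = 1, 2, ..."""
    return 2 * np.pi * hbar_c / (L * a)

m_mu = 0.10566              # muon mass in GeV
p = p_min(L=64, a=0.06)     # a fairly large lattice; sizes invented

# The integrand peaks around Q^2 ~ m_mu^2, well below the smallest
# accessible lattice Q^2 -- hence the currently limited accuracy.
print(f"p_min^2 = {p**2:.3f} GeV^2, m_mu^2 = {m_mu**2:.3f} GeV^2")
```

Even this generous choice of volume leaves the peak region of the integrand inaccessible, which is why the low-Q² behaviour has to be modelled or extrapolated.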

The second session had Owe Philipsen speaking on the QCD phase diagram at finite baryon density. At finite baryon density, i.e. finite baryon chemical potential, the fermionic determinant is in general no longer real and positive, and thus can no longer be interpreted as a probability measure, which prevents direct lattice simulations. A number of methods to work around this problem have been invented: one can Taylor-expand in the chemical potential and measure the Taylor coefficients at zero chemical potential, or one can simulate at a number of imaginary values of the chemical potential (where the determinant is positive) and analytically continue a fit to real values, or one can try to use reweighting techniques. Using a variety of these techniques, a number of groups are now exploring the QCD phase diagram in the region of low chemical potential, which, however, will probably be enough to clarify the existence, nature and location of the QCD critical point in the near future.
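The Taylor-expansion idea can be illustrated in a couple of lines (the coefficients below are invented purely for the demo, not real lattice data): one measures derivatives of the pressure at μ = 0 and reconstructs p(μ) as a truncated series, which is only trustworthy at small μ/T.

```python
# Sketch of the Taylor-expansion method with made-up coefficients:
# c_n = (1/n!) d^n (p/T^4) / d(mu/T)^n at mu = 0 are measured on the
# lattice; CP symmetry makes the odd coefficients vanish.
c = {0: 1.0, 2: 0.4, 4: 0.06}   # invented numbers for illustration

def pressure(mu_over_T):
    """Truncated Taylor series for p/T^4 as a function of mu/T."""
    return sum(cn * mu_over_T**n for n, cn in c.items())

# The truncation error grows with mu/T, which is exactly why these
# methods are restricted to the low-density region of the phase diagram.
for x in (0.0, 0.5, 1.0):
    print(f"mu/T = {x}: p/T^4 = {pressure(x):.5f}")
```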

George Fleming gave a talk on the subject of "walking" technicolor models and lattice simulations of gauge theories that are not QCD-like. Depending on what the LHC finds, this may become a very hot topic in the near future, as theorists may need to find a description of possible strong dynamics that exist at the TeV scale and give rise to electroweak symmetry breaking instead of a Standard Model Higgs.

Also related to Beyond-the-Standard-Model (BSM) physics was Philipp Gerhold's talk on the effects of a possible fourth generation on Higgs boson mass bounds. By simulating a Higgs-Yukawa model (i.e. the Higgs and fermions, but no gauge fields), he has derived upper and lower mass bounds on the Higgs in the presence of a fourth generation; the result is that if a fourth generation with a t'-mass of about 700 GeV exists, then the Higgs mass is bounded to lie between about 600 and 900 GeV, or conversely, if the Higgs is significantly lighter than that, there cannot be a fourth generation coupling to the Higgs in the same way as the first three generations.

For lunch, I met with some of the other ICHEP bloggers, which was very nice.

In the afternoon, the first session was started by Sinya Aoki with a talk about determining nuclear potentials from QCD, which is the programme of the HAL ("Hadrons to Atomic nuclei from the Lattice") collaboration. The potentials are obtained by measuring Bethe-Salpeter amplitudes on the lattice, determining the non-local potential from them, and expanding it in a derivative expansion (assuming an energy-independent potential); they agree quite well with the qualitative features of the phenomenological models used so far.
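The logic of extracting a potential from a measured wavefunction can be mimicked in one dimension (a pure toy: a Gaussian wavefunction with invented parameters, and only the leading, momentum-independent order of the derivative expansion): inverting the Schrödinger equation gives V(x) = E + ψ''(x)/(2μψ(x)).

```python
import numpy as np

# Toy 1d illustration of extracting a potential from a wavefunction.
# If psi satisfies -psi''/(2*mu) + V*psi = E*psi, then
#   V(x) = E + psi''(x) / (2*mu*psi(x)).
# For a Gaussian psi the exact answer is a harmonic potential, which
# we recover numerically; all parameters are invented for the demo.
mu, E, a = 1.0, 0.0, 0.5
x = np.linspace(-3, 3, 601)
h = x[1] - x[0]
psi = np.exp(-a * x**2)

# Second derivative by central finite differences (periodic wrap at
# the ends via np.roll, so we discard the two endpoints below).
d2psi = (np.roll(psi, -1) - 2 * psi + np.roll(psi, 1)) / h**2
V = E + d2psi / (2 * mu * psi)

V_exact = (4 * a**2 * x**2 - 2 * a) / (2 * mu)
interior = slice(1, -1)
assert np.max(np.abs(V[interior] - V_exact[interior])) < 1e-3
```

The actual HAL QCD analysis works with Bethe-Salpeter amplitudes in three dimensions and checks the convergence of the derivative expansion, but the inversion step is of this form.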

Colin Morningstar spoke about the calculation of the excited state spectrum using the "noisy LapH" or "distillation" method, which is a new method for improving the signal-to-noise ratio in lattice measurements of mass spectra. The method is based on realising a smearing operator for the quark fields as a projection on the space spanned by the lowest few eigenmodes of the 3d Laplace operator (hence the name Laplace-Heaviside or LapH), and on introducing a stochastic estimator of the inverse of the Dirac operator from this subspace.
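A minimal sketch of the LapH idea (using the free rather than the gauge-covariant 3d Laplacian, purely to keep the code short; lattice size and mode count are invented): the smearing operator is the projector onto the lowest few Laplacian eigenmodes, i.e. a Heaviside cutoff in the eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(2)

# Free 3d lattice Laplacian on an L^3 grid with periodic boundaries.
L = 4
N = L**3

def idx(x, y, z):
    return (x % L) * L * L + (y % L) * L + (z % L)

lap = -6.0 * np.eye(N)
for x in range(L):
    for y in range(L):
        for z in range(L):
            i = idx(x, y, z)
            for dx, dy, dz in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
                lap[i, idx(x + dx, y + dy, z + dz)] += 1.0

# LapH smearing: project onto the n_ev lowest eigenmodes of -Laplacian
# (a Heaviside cutoff in the eigenvalue, hence "Laplace-Heaviside").
n_ev = 7
w, V = np.linalg.eigh(-lap)
Vk = V[:, :n_ev]
S = Vk @ Vk.T                 # the smearing operator

psi = rng.standard_normal(N)
psi_smeared = S @ psi

# S is a projector: applying it twice changes nothing.
assert np.allclose(S @ psi_smeared, psi_smeared)
```

The "noisy" part of the method then introduces stochastic noise in this low-dimensional subspace rather than on the full lattice, which is what makes all-to-all estimates affordable.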

This was followed by a presentation on nucleon structure calculations results from ETMC by Constantia Alexandrou. Nucleon structure is quite hard on the lattice -- there are significant systematic effects from the unphysically heavy light quarks used in simulations, and disconnected diagrams also contribute in many cases. As a result, lattice predictions disagree with experiment in many cases, although there is a notable tendency towards the experimental data as more sources of error come under control.

Jose Rodriguez-Quintero gave a talk on determinations of the strong coupling α_{s}. This is a field in which a number of different methods have been used: the Schrödinger functional method pioneered by the ALPHA collaboration, calculations of Wilson loops within lattice perturbation theory as performed by the HPQCD collaboration, the use of moments of current-current correlators matched to continuum perturbation theory (also pursued by the HPQCD collaboration), and methods based on the determination of Landau-gauge propagators as presented by Jose.

The last session of the lattice track started off with Enno Scholz talking about results from 2+1 flavour QCD with domain wall fermions. Domain wall fermions are a formulation of fermions on the lattice that manages to realise chiral symmetry and evade the Nielsen-Ninomiya theorem by introducing a fictitious fifth coordinate in which the fermions are localised on a four-dimensional domain wall. The fifth dimension means that simulations are much more expensive than for other (e.g. Wilson or staggered) fermions that do not maintain the continuum chiral symmetry.

Michele Della Morte gave a review of heavy flavour physics, focussing on work done by the ALPHA collaboration. One important point he made was that in order for heavy flavours to be reliably simulated, fine lattices are needed (the Symanzik effective theory is an asymptotic expansion, and it only works for *ma* small enough) -- e.g. *a* < 0.08 fm for charm quarks. For bottom quarks, this means that one needs to resort to effective theories such as HQET, which has been studied non-perturbatively by the ALPHA collaboration in the quenched case, with *N*_{f} = 2 results forthcoming.

Another review of heavy flavour physics, this time focussing on work done by the HPQCD and FNAL/MILC collaborations, was given by Elvira Gamiz. A lot of work has gone into reducing the errors on *f*_{Ds}, and the previously reported tension between theory and experiment has vanished. Significant progress is also being made in the determination of semileptonic form factors.

The only thing I wasn't too happy with about the lattice track was that almost no non-lattice people attended, even though the talks were selected specifically to give an overview of recent progress for the benefit of a wider audience (most of the lattice folks have been to the LATTICE 2010 conference and already know a lot of these results). So why don't non-lattice people come to lattice talks -- is it that the lattice has a reputation for being highly technical?

In my opinion, the problem is that the lattice talks were pushed into a single track, which emphasizes the calculational technique over the physical consequences. Instead, the lattice review talks should have been slotted into the appropriate physics categories: the lattice heavy-quark review into the heavy-quark session, for example. If the other talks were organized the way the lattice ones were, there would have to be a CDF session, a D0 session, a CKM/UT-analysis session, and a FORM session, and you would have to change rooms to follow a single physics topic.
