Blogging ICHEP 2010


A collective forum about the 35th edition of
the International Conference on High Energy Physics (Paris, July 2010)

Tuesday, August 31, 2010

End of part two

All good things come to an end (some bad ones too). Among the many duties of the organisers of a conference, the last one is to close the conference "officially", as well as all the peripheral activities... like this blog.

Since all conferences are as much a human adventure as a scientific one, there are a couple of thanks that I would like to address before switching the lights off.

First of all, thanks to the LHC, and all four experiments, for having worked so hard to provide first results at the ICHEP conference. We had hoped for these data ever since we first thought of organising the conference in Paris, and we followed the ups and downs of the machine with mixed feelings. Would there be anything to show? It turned out that all the collaborations worked very hard during the few months of data-taking, and obtained quite remarkable results (I am still impressed by the mu-mu spectra of ATLAS and CMS, and their nice resonance peaks).

Thanks also to the Tevatron, CDF and D0 for providing us with data that keep us wondering whether something really new is just around the corner, helping us understand the Bs meson better, pushing the limits on the Higgs boson... The race is not finished, and we should have rather interesting discussions across the pond very soon.

Thanks to all the technicians, secretaries, and researchers (staff, post-docs, students) who helped in organising and running the conference. You cannot have a clear idea of how a conference of a thousand people will run... until it starts, with your fingers crossed, hoping for the best. Thanks to all of them, things ran very smoothly at each step (during the juggling of the parallel sessions, the ceremony of the plenary talks, and even, yes, the visit of N. Sarkozy).

Thanks to all my colleagues on this blog, who succeeded in giving a comprehensive (and sometimes very detailed) impression of ICHEP. I am quite convinced that people following the conference through the webcast and the website got a much more lively and interesting view of ICHEP thanks to this blog. Probably a lesson to remember for the next editions!

Thanks to the participants and speakers (which is approximately the same crew) for coming and taking part in this conference. Even though there were not many questions (in particular during the plenary talks), I caught many people at coffee breaks -- or during sessions -- in intense discussions not only about the food, the president, or the ICHEP bag, but also physics...

And finally, thanks to you, the readers of this blog, for your comments and suggestions all along this adventure. We hope that you enjoyed your time here. We definitely had a wonderful time sharing our two pence of knowledge with you...

"See" you in two years!

Monday, August 9, 2010

The calm after the storm

Marco, Barbara, Georg and Jester's summaries didn't leave much room for improvement. But here's my take - in the weeks and months following ICHEP, I'm sure we'll hear much more from the LHC. The collider's performance continues to improve (this weekend's milestone was the first inverse nanobarn delivered to the ATLAS and CMS experiments). Fermilab will continue to make the most of its last year(s?) of Tevatron data. New neutrino experiments will come online, existing ones will deliver some much-anticipated data, ditto for ground-based and space-based particle astrophysics experiments. And we'll all look forward to the next ICHEP, in 2012 in Melbourne, where we'll hear the latest in theoretical and experimental particle physics, see what the future looks like two years later, and....get a new ICHEP bag. As I've discovered, not only do they make excellent laptop carriers, but they also work pretty well as beach bags.

Sunday, August 8, 2010

Very Last Summary

This is my last entry on this forum: my summary of the conference...better late than sorry :-)
  • Most Important Result: The Higgs exclusion limits from the Tevatron, of course. Anytime now we may get the answer to one of the most important questions in particle physics. Not this time yet, but the thrill is on.
  • Most Intriguing Result: the forward-backward asymmetry in top quark pair production at CDF has been updated to $15 \pm 5$ percent and lingers 2 sigma away from the SM prediction of approximately 5 percent (indeed, $(15-5)/5 = 2$).
  • Most Relieving Result: the poster from the HARP collaboration saying that the LSND anomaly was due to an underestimated contamination of the beam with electron antineutrinos. If confirmed, that would solve the decade-long puzzle of what went wrong in LSND.
  • Best Presentation: Nicolas Sarkozy. Gee, this guy knows how to talk, especially when contrasted with mumbling physicists. What fervor, what expressions, what gestures (ok, forget the jokes).
  • Best Presentation, seriously: Ben Kilminster, Higgs limits from the Tevatron. Maybe it's because when holding the remote he looks just like Colin Farrell in the film In Bruges, or maybe because the presentation was clear, concise, and illuminating.
  • Worst Presentation: the summary of BSM searches. Unfortunately, good experimental talks are rare. The cardinal sins are too much material, overcrowded slides, superficiality, no attempt at explaining the presented results or methodology, and misleading theoretical interpretation.
  • Best Animation: the Planck satellite sweeping the sky while uncovering the temperature map. That was just lovely.
  • Best Music: given the number of cell phones in the audience, the competition is always fierce in this category. But if what I heard on the first day was really Genesis' Firth of Fifth, that obviously trumps anything.
  • Overall Impression: Even though Paris is always worth a mass, the conference was pretty well organized, and I had fun at times, my opinion about the ICHEP series has not changed. Conferences with 1000+ participants are dinosaurs; more a brontosaurus than a T. rex. Parallel sessions contain some interesting material, but the shortness of the talks and the lack of time for discussion preclude any deeper insight. Plenary sessions, on the other hand, are typically hasty and overloaded summaries of what we already heard in the parallels. Alas, one needs an asteroid strike for dinosaurs to be replaced by more flexible mammals... so maybe see you again in 2 years, upside down ;-)

Wednesday, August 4, 2010

My Bet? A Fourth Generation!

What picture should we draw of the quest for new phenomena after the presentation of a wealth of new results at the international conference on high-energy physics held in Paris last week? I am speaking in particular of results coming from the experiments at the Tevatron and LHC, which are all studying hadron collisions in search of still-unseen effects that would either confirm (with the discovery of the Higgs boson) or break down (with the observation of Supersymmetry, new particles, extra dimensions, or still other effects) the present theoretical understanding of fundamental physics which the standard model provides us with.

In short, my question today is: on which signal or phenomenon should we place our chips if we were to bet that the standard model is finally going to break down?

I have my own answer. But before I give it to you, I feel compelled to be extra careful in a couple of ways.

The first way is dictated by personal reasons: I want to state it here very clearly, because I often get fingered as a rumour-monger or overhyper these days. I do NOT believe that the standard model is breaking down any time soon. I have a feeling that we will have to live with it for a while longer. I do not believe in Supersymmetry at arm's reach or anywhere else, nor in other exotic signals that we might see with present-day machines.

(And, since I am going to talk about something like that in particular below: I do not believe we are going to discover a fourth generation of fermions any time soon; I believe the present 2-sigmaish excesses of CDF and DZERO searches for a new t' quark are not due to a signal. If you really want my opinion... they are due to a coherent underestimation of QCD backgrounds, whose root is the use of the same methodologies by the two experiments!)

The second precaution is my disclaimer, which I will state today as follows:

"The opinions expressed in this article are those of the author, and they do not reflect in any way those of the institutions to which he is affiliated. These include the CDF and CMS collaborations, as well as the Italian Institute of Nuclear Physics."

The above disclaimer is directed in particular at science reporters and other information recyclers... who should not mistake me for an official source of the experiments in which I work! Of course it is an insufficient shield, but at least nobody can say I have not been clear on the matter.

Okay, now I feel more free to discuss in enthusiastic terms what I think is the single most exciting and promising deviation from standard model predictions that we have in our hands at present: a tentative signal of a fourth-generation quark!

Can Fourth-Generation Quarks Really Exist?

I have kept my eyes open on searches for a new quark since 2008, when a CDF analysis showed some intriguing high-mass events and a vague deviation of data from backgrounds. (The post linked above is rather well written if you need some introduction to the physics!)

After CDF performed the same analysis with doubled statistics, again finding an excess of high-mass events, I thought things were really interesting and I said so here.

In the meantime, an enlightening paper came out on the Cornell arXiv. Titled "Four Statements About The Fourth Generation", and signed by distinguished theorists, it explained clearly that, contrary to what one might think (or read in the Review of Particle Properties, which makes several assumptions in order to state that a fourth generation is excluded by electroweak measurements), a fourth generation of fermions is not ruled out by experimental measurements, and might actually be useful to explain the amount of CP violation we observe in particle decays. I summarized the paper's highlights in another post which I think is worth reading, if you are interested in the topic.

Well, now DZERO has published the results of a quite similar analysis, and it looks like they too see some excess in the same kinematical distributions that CDF used to search for a fourth-generation quark. Again, this effect can be easily understood in terms of background fluctuations or a mismodeling of the high-mass tail of some of the contributing processes. Yet, the coincidence of the two search results warrants some additional thoughts. So let me first of all show what DZERO has just made public.


The DZERO Search For Fourth-Generation Quarks

DZERO has published, in time for ICHEP 2010, a new search for up-type fourth-generation quarks decaying to W bosons and down-type quarks. In a nutshell, the search considers events of the "lepton plus jets" type: the same kind of events on which all the most precise measurements of top quark physics at the Tevatron are based.

In the lepton-plus-jets topology, top quarks are produced in pairs, each decaying to a W and a b-quark; one W then yields two hadronic jets, while the other decays to an electron-neutrino or muon-neutrino pair. This results in one neutrino in the final state, which adds some complexity to the reconstruction of the kinematics (the neutrino is undetected, and only its momentum components transverse to the beam direction can be inferred); however, the advantage of having one high-momentum lepton in the event instead of purely hadronic jets is a more than adequate payoff. The events thus must feature a lepton, significant missing energy, and four hadronic jets: backgrounds are then small, the largest being the production of a W boson plus hadronic jets.
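To make the neutrino issue concrete, here is a minimal sketch (mine, not DZERO's analysis code; all names are illustrative) of the standard trick: impose the W mass on the lepton-neutrino pair and solve the resulting quadratic for the neutrino's longitudinal momentum.

    import math

    M_W = 80.4  # W boson mass in GeV

    def neutrino_pz(l_px, l_py, l_pz, l_e, met_x, met_y):
        """Solve M(lepton, neutrino) = M_W for the neutrino pz, taking the
        neutrino transverse momentum from the missing transverse energy.
        The constraint is a quadratic in pz with zero, one, or two real
        solutions (a kinematic fit can simply try both)."""
        mu = 0.5 * M_W**2 + l_px * met_x + l_py * met_y
        pt2 = l_px**2 + l_py**2
        a = mu * l_pz / pt2
        disc = a**2 - (l_e**2 * (met_x**2 + met_y**2) - mu**2) / pt2
        if disc < 0:                      # complex roots: keep the real part
            return [a]
        r = math.sqrt(disc)
        return [a - r, a + r]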

When searching for a fourth-generation quark, DZERO does exactly the same thing as in top searches: they assume that the t' quark is produced in pairs, and that it decays 100% of the time into a W boson and a quark (not necessarily a b-quark). The final state is the same as that of top searches, save for the fact that the larger mass of the t' allows a slightly tighter cut on the energy of the leading jet, a device which further reduces backgrounds.

In the end, the data allow the reconstruction of a tentative t' mass, assuming that each event is of the t'-pair-production kind. A kinematic fit searches for the combination of jet assignments to the decay partons which best matches the hypothesized process. One thus obtains a histogram of reconstructed t' mass:



In the figure, you can see in different colours how the predicted numbers of events coming from different processes (top pair production in red, W+jets production in green, and multi-jet production in grey) distribute in the reconstructed t' mass. The data are shown by black points with error bars, and they match very well the predicted shape of the backgrounds. An example of the contribution that a 300-GeV t' quark would give to the histogram is shown in yellow. Tiny, but not entirely undetectable. Mind you: the vertical axis has a logarithmic scale!

What is maybe not so immediate to discern from the figure is the fact that while backgrounds have a wide distribution in the reconstructed t' mass, the signal of a t' quark, if present, would populate a narrower region: the one around the real mass of the quark. This is entirely the point of having constructed this kinematic variable: discriminating signal from background.
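To illustrate the spirit of the kinematic fit mentioned above in the crudest possible terms, here is a toy sketch of the jet-assignment scan; the four-vector interface (objects supporting + and .mass()) is hypothetical, and the real DZERO fit is of course far more sophisticated.

    from itertools import permutations

    M_W, SIGMA_W = 80.4, 10.0  # GeV; SIGMA_W is an illustrative resolution

    def reconstruct_tprime_mass(jets, lep_w):
        """Toy fit for t' t'bar -> (W q)(W q) in lepton+jets: try every
        assignment of the four leading jets to the hadronic-W daughters
        and the two t'-decay quarks, score each by the hadronic W mass,
        and return the average reconstructed t' mass of the best one."""
        best_chi2, best_mass = float("inf"), None
        for wj1, wj2, q_had, q_lep in permutations(jets, 4):
            w_had = wj1 + wj2
            chi2 = ((w_had.mass() - M_W) / SIGMA_W) ** 2
            if chi2 < best_chi2:
                m_had = (w_had + q_had).mass()   # hadronic-side t' mass
                m_lep = (lep_w + q_lep).mass()   # leptonic-side t' mass
                best_chi2, best_mass = chi2, 0.5 * (m_had + m_lep)
        return best_mass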

A second discriminating variable is the sum of the transverse energies of all the observed final-state objects: jets, lepton, and missing energy. This is the so-called "Ht". Ht is large for processes that involve the production of massive states, and so it is a good means to separate t' production from the top and W+jets backgrounds. Below you can see how the data compare to backgrounds as a function of Ht; the color coding is the same as above.



DZERO performs a fit in the two-dimensional plane of the t' mass and Ht to extract the possible amount of signal present in the data. This is performed as a function of the unknown value of the t' mass: since the distributions of reconstructed mass and Ht of the signal depend on the true t' mass, several fits are performed in series, to extract a limit curve which depends on that parameter; the curve is sampled point by point, at 25-GeV intervals in the unknown t' mass.
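As an aside, the bare logic of such a mass-by-mass limit scan is easy to caricature with a one-bin counting experiment. In the sketch below every yield and efficiency is invented purely to show the mechanics; the real analysis fits two-dimensional shapes, not a single counter.

    import numpy as np
    from scipy.stats import poisson

    def upper_limit_95(n_obs, bkg):
        """Crude Bayesian 95% CL upper limit on the signal yield s in a
        counting experiment (flat prior on s >= 0)."""
        s = np.linspace(0.0, 100.0, 10001)
        like = poisson.pmf(n_obs, bkg + s)
        cdf = np.cumsum(like) / like.sum()
        return s[np.searchsorted(cdf, 0.95)]

    LUMI = 4300.0  # pb^-1, i.e. the 4.3 inverse femtobarns analyzed by DZERO

    # (mass, observed events, expected background, signal efficiency):
    # invented numbers, purely illustrative
    for m, n_obs, bkg, eff in [(300, 14, 10.0, 0.03), (325, 9, 8.0, 0.04)]:
        sigma_up = upper_limit_95(n_obs, bkg) / (eff * LUMI)
        print(f"m(t') = {m} GeV: sigma < {1000 * sigma_up:.1f} fb at 95% CL")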

The result of the fits is displayed in the figure below. The t' mass (this time the "true" one, not the reconstructed tentative mass of the kinematic fits) is on the horizontal axis, and on the vertical axis is the production rate of the fourth-generation quark pair. The black line shows the theoretical prediction for the rate, which falls quickly as the t' mass increases: fewer events are expected in the 4.3 inverse femtobarn dataset of analyzed collisions as the t' mass increases, because the higher the mass, the more energy is required to produce the heavy quark.



The theoretical curve of the signal cross section can be compared with the red curve, which shows the upper limit (at 95% confidence level) extracted from the data. The red curve lies below the black one for low masses: a light t' quark (of masses below 296 GeV) is excluded by the data, because it would have been copiously produced in the Tevatron collisions, and would have stuck out in the two tested distributions. For higher mass values, the limit is above the curve: these mass values are still possible.
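In code, the excluded range is just the set of scanned mass points where the observed limit dips below the theory curve; a trivial sketch (arrays are placeholders):

    import numpy as np

    def excluded_masses(masses, sigma_theory, sigma_limit):
        """Return the t' mass points excluded at 95% CL: those where the
        observed cross-section limit lies below the theory prediction.
        The quoted 296 GeV is where the two curves cross."""
        masses = np.asarray(masses)
        excluded = masses[np.asarray(sigma_limit) < np.asarray(sigma_theory)]
        return excluded  # an empty array means no exclusion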

Now observe the blue and yellow bands: these describe the rates of the sought quark that DZERO expected to be able to exclude, as a function of t' mass, given the amount of analyzed data and the analysis strategy. The blue band shows 1-sigma variations in the expected limit, and the yellow band shows the range of 2-sigma variations. In practice, the bands pictorially show what would result "on average" from the search if no signal were present in the data.

Now, the red curve stays on the edge of the 2-sigma band for masses above 300 GeV. What this means is that DZERO has a slight excess of events which distribute like t' production ones in their data. Not awfully exciting, I'll admit. But now compare the curve to the one found by CDF just a few months ago (the analysis which I have discussed in detail here, as already mentioned):



CDF found a strikingly similar result! True, CDF had more sensitivity, so their limit is slightly better; but the behavior of the CDF data and the DZERO data is indeed quite similar. A fortuitous coincidence between two 2-sigma results? That is surely a possibility; another one is that the two experiments, which rely on similar simulation tools, both underestimated the high-mass tail of top or W+jets production.

Yet a third possibility remains on the table: that both CDF and DZERO are seeing the first hint of pair production of a fourth-generation quark. The amount of data of the two experiments would be insufficient to see a clear signal yet, so the first hint is just that they both obtain a mass limit well below their expectations.

Now, suspend temporarily your disbelief and consider. If a 400-GeV t' quark exists, who is going to discover it first? For sure CDF and DZERO, with twice as much statistics (which they almost already have in their bags), would be likely to see those 2-sigma excesses grow close to 3-sigma ones. Maybe adding other search channels would further increase their reach; but they would probably be unable to conclusively discover the quark.

Instead, on the other side of the Atlantic Ocean... CMS and ATLAS would be very fast in finding conclusive evidence for such a quark! The reason is that producing a 400-GeV t' quark at the LHC is much, much easier, given the over 3.5 times higher energy of the LHC collisions. The cross section at the LHC is of several picobarns (a 3-picobarn cross section already means some 300 produced pairs in just 100 inverse picobarns), which means that well before collecting an inverse femtobarn of collisions, the CERN experiments will find the new quark!

Now, let me say something personal, deep down this long post. I have always said that, although I have been working more on the CMS experiment at CERN than on the CDF experiment at the Tevatron since 2008, my heart still beats stronger on the Tevatron side... That is still true in a sense: CDF is such a fantastic achievement for science that I will always be proud of having contributed to it for 18 years (and counting). But if you ask me which experiment I would prefer to see discovering a t' quark... I would say CMS!

The reason ? CMS and ATLAS deserve to become the focus of the next decade of high-energy physics research. Too much has been invested in human resources for these experiments to fall short of being a total success. I would love it if the adventure of the LHC experiments into the unknown were to start with a t' discovery, early next year! It would be just great!

... But now please go back and read my original disclaimer once more!

Tuesday, August 3, 2010

Things you see and things you don't see...

It's already more than a week ago that I saw my first president at a physics conference. Doesn't time fly? There were so many things to see (or not) during ICHEP that it really stands out from other conferences I have been to. After all, it was the first really big conference with first real LHC results, after Physics at the LHC at DESY, which didn't have quite as many participants. Last year, at the Lepton Photon conference, the main conclusion after every talk was: "We are looking forward to results from the LHC!" It's great to see that those times are over and that the community is buzzing over limits and cuts and simulations and candidates!
Of course what we didn't see was the Higgs. Many people thought we would (which meant we also saw more journalists than ever before at a physics conference), and now the next big question is: what's next? Will the Tevatron keep running for another three to four years? Will that mean it will see the Higgs? From what I hear, that's not a given, but it'll certainly be an exciting time.
Some people also saw the film Sunshine during the nuit des particules at the Grand Rex, and at the same time saw a lot of the actress Irene Jacob - that dress, and a story about balls of fire in a kitchen, will go down in particle physics history.
Now it's time to see what's next - for me, that's the global Particle Physics Photo Walk next Saturday. More than 200 amateur photographers from around the world will get an exclusive look behind the scenes of five physics labs (KEK, CERN, DESY, Fermilab, TRIUMF) and we are very much looking forward to seeing our labs through their eyes.

Monday, August 2, 2010

Summary of personal impressions

In looking back at ICHEP, what are my personal overall impressions? The conference was very well organised (with the minor exception of the dinner), the venue was great (as were the conference bag and its contents), the President felt obliged to attend -- so it was clearly a good and successful conference overall.

It was also pretty big (for a physics conference, not in the greater scheme of such things), in fact almost too big for my taste; perhaps it's just that as a theorist I'm naturally more introverted, but I find it difficult to meet people and start a conversation when there's a huge crowd. Smaller, more focussed conferences are probably better for discussions; there was also a notable lack of questions in the plenary talks -- perhaps also a symptom of excessive size.

On the other hand, the huge size means a very diverse set of speakers, which enables one to learn about all the things that have recently gone on in the wider field. Since the arXiv is getting so vast that it is well-nigh impossible to even read the titles of all papers that get posted to the hep-* sections (much less the abstracts, to say nothing of the papers -- even assuming that one had the exceptionally broad knowledge base to be able to make sense of all of them), this overview is perhaps the most important function of a large conference like ICHEP.

And the things to be learnt were of great interest: CMS and ATLAS have "rediscovered" the Standard Model; that in itself is no surprise, but the speed at which the LHC experiments have managed to get there is amazing, at least for this theorist. The arrival of the LHC hasn't rung the death-knell for the Tevatron quite yet, though: while rumours of a Higgs discovery turned out to have no foundation in fact (a 2σ deviation is hardly a basis even for a rumour), CDF and D0 combined could exclude a much larger mass region for the Higgs, further narrowing down the regions where it can hide. Also from the Tevatron comes the like-sign dimuon charge asymmetry that may be the first sign of new physics if it is confirmed by another experiment. Away from the big colliders, the neutrino physicists and cosmologists are also doing impressive work and chipping away at the Standard Model's plinth. The representation of my own field of research was perhaps not optimally suited to the audience, since the parallel sessions on lattice QCD were not very well-attended except by the lattice people, and the plenary talk concentrated on work that would likely have enraptured a nuclear physics audience, but probably not a HEP one. Overall, I got the impression that the experimentalists take ICHEP much more seriously as a forum than we theorists do -- there were a lot of new experimental results presented for the first time at ICHEP, whereas most of the theoretical results had been presented at other conferences or posted on the arXiv earlier.

Blogging a conference as part of a group rather than on my own blog was an interesting new experience; for a lone blogger, ICHEP would have been way too big!

Saturday, July 31, 2010

Meanwhile in the South: CoGeNT dark matter excluded

Parisians say that il n'y a que Paris ("there is only Paris"). This is roughly true; however, ICHEP'10 in Paris was not the only important conference in France last week. At the same time, down south in Montpellier, there was the IDM conference, where a number of results in dark matter searches were presented. One especially interesting result concerns the hunt for light dark matter particles.

Some time ago the CoGeNT experiment noted that the events observed in their detector are consistent with scattering of dark matter particles of mass 5-10 GeV. Although CoGeNT could not exclude that they are background, the dark matter interpretation was tantalizing because the same dark matter particle could also fit (with a bit of stretching) the DAMA modulation signal and the oxygen band excess from CRESST.

The possibility that dark matter particles could be so light caught experimenters with their trousers down. Most current experiments are designed to achieve the best sensitivity in the 100 GeV - 1 TeV ballpark, because of prejudices (weak scale supersymmetry) and some theoretical arguments (the WIMP miracle). In the low mass region the sensitivity of current techniques rapidly decreases, even though certain theoretical frameworks (e.g. asymmetric dark matter) predict dark matter sitting at a few GeV. For example, experiments with xenon targets detect scintillation (S1) and ionization (S2) signals generated by particles scattering in the detector. Measuring both S1 and S2 ensures very good background rejection; however, the scintillation signal is the main showstopper to lowering the detection threshold. Light dark matter particles can give only a tiny push to the much heavier xenon atoms, and the experiment is able to collect only a few resulting scintillation photons, if any. Besides, the precise number of photons produced at low recoils (described by the notorious Leff parameter) is poorly known, and the subject is currently fiercely debated with knives, guns, and replies-to-comments-on-rebuttals.

It turns out that this debate may soon be obsolete. Peter Sorensen in his talk at IDM argues that xenon experiments can be far more sensitive to light dark matter than previously thought. The idea is to drop the S1 discrimination, and use only the ionization signal. This allows one to lower the detection threshold down to ~1 keVr (it's a few times higher with S1) and gain sensitivity to light dark matter. Of course, dropping S1 also increases the background. Nevertheless, thanks to self-shielding, the number of events in the center of the detector (blue triangles on the plot above) is small enough to allow for setting strong limits. Indeed, using just 12.5 days of aged Xenon10 data, a preliminary analysis shows that one can improve on existing limits on the scattering cross section of a light dark matter particle. Most interestingly, the region explaining the CoGeNT signal (within the red boundaries) seems comfortably excluded. Hopefully, the bigger and more powerful Xenon100 experiment will soon be able to set even more stringent limits. Unless, of course, they find something...

Friday, July 30, 2010

Random collection of final impressions, and a tentative balance

ICHEP is over. After the last plenary session the few remaining brave souls streamed out of the auditorium, drained by conference fatigue, and headed back home. I must confess, I found a week-long conference, with six full days packed with presentations, pretty long and tiring. I'm not completely surprised that in the last days not many questions came from the (depleted) audience.

Since this is probably my last entry in this blog, I'll entertain you with a random collection of final impressions, and maybe a tentative balance on the blogging experience itself.

The conference itself

A lot has already been said and written, so let's simply put it this way: the conference was excellent. Superb location (Paris is always Paris), excellent venue (I was just astonished that the Palais des Congrès doesn't provide wireless microphones in the smaller rooms; everything else was perfect), very efficient organization (thanks!), and an optimal balance of contents. Ok, the catering was less than perfect, but why should we indulge in complaining about the little details? :-)

The LHC has entered the game

Again, not big news, but it's good to repeat it: we are beginning to see the first physics results from the LHC experiments! And even if this is not yet exciting new physics, those times are approaching fast: after more than 20 years of preparation, it's a nice sensation for the whole community.

Experiments vs theory

On the low side, I must say that I found the theory contributions in the first part of the conference a bit isolated. This is probably normal in the context of parallel sessions (and there were in any case good phenomenological contributions in the more experiment-oriented sessions), but as an experimentalist I probably missed the opportunity to learn something really new to me. For instance, I learned from Georg that:
the talks in the lattice session had actually been selected to be accessible and of interest also to people outside the lattice community (in particular there were a number of review talks), so it was a bit of a pity that the talks were attended almost exclusively by lattice theorists.
I agree: a pity! Maybe this should have been advertised more? The situation was of course different in the second part of the conference, and I really appreciated some of the more theory-oriented talks in the plenary sessions.

"Sliduments" vs nice talks

The quality of the talks was in general rather good, and of course peaked in the plenary sessions. I had nevertheless the impression that the non-LHC and non-Tevatron speakers gave the best talks in the parallel sessions. I have a theory, at least for the LHC talks. Nowadays we (the LHC experimental physicists) routinely use slides as documentation of the daily work we are doing. Most of us have taken the (bad!) habit of packing them with all the information we want to record, information that should really go into a written report, sacrificing the graphical quality - and the effectiveness when used as a visual support for an oral presentation - in favour of a hybrid object that the experts in the field call a slidument. Sure, it's perhaps easier to present: one can use the text on the slide as a reminder of what to say, maybe even avoiding rehearsal. But the quality of this kind of presentation will definitely be worse, that's guaranteed, and while such slides may fit a weekly collaboration meeting, they will certainly not meet the standard needed for a conference. Have a look at the slides of some of the presentations in Wednesday's plenary session, for instance the ones on dark matter or cosmology:

Almost no text, just the few words needed to stress the concept, clear figures, no clutter. Sure, the speaker must know what to say on such a slide! Now compare for instance with this one (taken from an ATLAS talk, so that nobody can say I only blame our competitors):
No excuses: we still have a lot to learn!

Blogging ICHEP 2010

I am still digesting the experience, and in this sense I'd really appreciate some feedback from the readers on this. On my side, I can say it has been interesting to blog a conference (a first for me), and to do it in a collective blog, with different voices and styles.

Some of the feedback I got tells me that the blog has been appreciated outside, especially by colleagues who were not attending the conference: apparently it helped them feel connected, more than the webcasts and slides alone can do. It might also have helped the journalists reporting the conference to the media: a blog like this can certainly act as a filter, and help the non-physicist grasp what's important, what gets us excited, and why.

This (semi)official blog of the conference was an experiment, and in this respect the organizers wanted to keep a low profile and see in the field what the reactions would be. It seems to me that, since the community does seem interested in the format, maybe next time something slightly more ambitious could be tried. For instance, with a bit more organization we could have had some video interviews at the conference (someone did that, and did it very well indeed), a dedicated Twitter stream, and especially more visibility at the conference itself. I had in fact the impression that, at least at the beginning of the conference, a large part of the participants had no idea that this project existed at all. And, since the most interesting and useful part of the blogging experience is the conversation with the readers, this could have been even more fun.

Anyway, I would probably do it again, should the occasion arise. See you in two years in Melbourne?

The ICHEP Effect

I created a tool that watches how many plots DZERO, CDF, ATLAS, and CMS release as a function of time. Here are the results for this year (each little square is a plot):


I’m going to call that bump in July there the ICHEP effect.
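I won't describe the actual tool here, but a bare-bones version of the bookkeeping could look like the sketch below, assuming the (experiment, release date) pairs have already been scraped from the experiments' public results pages; all entries shown are placeholders.

    from collections import Counter
    from datetime import date
    import matplotlib.pyplot as plt

    # Hypothetical input: one (experiment, release date) entry per public plot.
    releases = [
        ("CMS",   date(2010, 7, 20)),
        ("ATLAS", date(2010, 7, 22)),
        ("D0",    date(2010, 7, 19)),
        ("CDF",   date(2010, 7, 21)),
        ("CMS",   date(2010, 3, 5)),
        # ... in reality, thousands of entries scraped automatically
    ]

    per_month = Counter(d.strftime("%Y-%m") for _, d in releases)
    months = sorted(per_month)
    plt.bar(range(len(months)), [per_month[m] for m in months])
    plt.xticks(range(len(months)), months, rotation=45)
    plt.ylabel("public plots released")
    plt.title("The ICHEP effect")
    plt.tight_layout()
    plt.show()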

Thursday, July 29, 2010

The CMS Momentum Scale And Resolution

While the focus of the international conference in high-energy physics in Paris last week was on the search for new physics and the precise measurement of standard model quantities, I will offer you today something more technical, but in no way less physics-rich; it was presented in Paris, but with the many parallel sessions it may well have gone unnoticed... What I wish to explain to you is the procedure by means of which the CMS experiment calibrates the scale and resolution of its charged-particle momentum measurement.

The dull sound of the topic as stated above should not deceive you: this is a really exciting, interesting technology, which allows the measurement of physical quantities with high precision. Since the M in CMS stands for "muon", we certainly care about the precise measurement of muons, and muons are the particles used for the calibration procedure.

What happens when a charged particle leaves ionization deposits ("hits") in the silicon tracking system is that we can reconstruct its trajectory, forming a track. The track is curved in the plane transverse to the beam, because the S in "CMS" stands for "solenoid", a big cylinder that provides a B = 3.8 Tesla magnetic field within its volume. If you know what the Lorentz force is, you might also remember the formula P = 0.3 B R, expressing the proportionality between the momentum of a charged particle and its radius of curvature in a magnetic field. This means that within the CMS solenoid a P = 1.14 GeV muon follows a trajectory resembling a circle of radius R = 1 meter when observed in the plane transverse to the beam axis, the axis along which the solenoid is symmetrical. By measuring the curvature, we determine the transverse momentum!
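The numbers above follow directly from that formula; a two-line sanity check (units: Tesla, meters, GeV):

    def pt_from_radius(radius_m, b_tesla=3.8):
        """pT [GeV] = 0.3 * B [T] * R [m] for a charged track bending in
        the solenoidal field, in the plane transverse to the beam."""
        return 0.3 * b_tesla * radius_m

    print(pt_from_radius(1.0))    # 1.14 GeV, the example quoted above
    print(pt_from_radius(10.0))   # 11.4 GeV: stiffer tracks curve less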

Things are always complicated if you want perfection. We of course can measure the position of the silicon hits with extreme accuracy, but alignment and positioning errors may create imperfections in the measurement of the track curvature. We also know the magnetic field with high accuracy, through Hall probes and other means, but imprecisions will affect the momentum measurement. Finally, the amount of material of which the tracking detector is composed affects the trajectory, producing further imprecisions if our map of the material is not perfect.

In the end, all the effects and all the details of the geometry of our detector are encoded in a carefully crafted simulation. With the simulation we can figure out what a 1-GeV track would look like, given our reconstruction and our assumptions about geometry, material, and magnetic field. But we need real data to verify that our model is correct, and to tune it in case it is not!

Real data: we now have it. CMS uses resonance decays to opposite-charge particles for this business: they are easy to identify, have little background, and there are plenty to play with. In particular, we use J/Psi meson decays to muon pairs for some of the checks of the momentum scale and resolution. Other dimuon resonances are also used (there is a large number of such decays already available in the data collected so far), but here I will only discuss what CMS did with its J/Psi signal.

The dimuon mass spectrum in the vicinity of the nominal J/Psi mass value is shown in the picture below. A large number of signal events is observed. These events can be used to calibrate the momentum scale.



If one looks closely, one observes that the measured mass is very slightly lower than the nominal 3.097 GeV. This is already evidence for a very small underestimation of the momentum scale. To dig further, a simple thing one can do is to divide the J/Psi events according to the reconstructed momentum or rapidity of the particles, measuring the mass in all sub-samples to check whether there is a bias in particular kinematical regions. The bias, of course, would arise from the momentum reconstruction of the individual muons; but if one only measures the mass, which is a quantity constructed from the measurement of two muons, surely only an "average" bias can be detected, right?

Wrong. Each muon from the decay of each J/Psi has a different momentum, travels through different parts of the detector, and is subjected to different reconstruction biases: we can turn these differences to our advantage. What we can do is to assume we know the functional form of these biases, and plug them into a likelihood function.

A further benefit with respect to methods I have seen in the past for the correction of scale biases is that a well-written likelihood function is also capable of extracting the momentum resolution from the same set of data. One just needs to produce a functional form (whose exact shape is suggested by simulation studies) that describes how the resolution on the momentum depends on the track kinematics; then, the likelihood fit will take care of finding the best parameters of the resolution function as well, by comparing the expected lineshape of the resonance with the mass value measured for each particle decay.

The likelihood is very complicated, because it accounts for the dependence of the mass on the muon momenta and resolutions, and momenta and resolutions are in turn functional forms of the bias parameters. I know the code of this likelihood function very well, and I can tell you it is not for everybody! So I will abstain for once from finding a suitable analogy, lest I squeeze my brains for the rest of the evening. Let me just say that in the end, the likelihood maximization produces the most likely values of the parameters describing the bias functions, allowing a correction of the bias in the track momentum measurement!
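Without pretending to reproduce that code, the skeleton of the idea can be shown in toy form: a single multiplicative scale-bias parameter and a Gaussian resolution, fitted by maximizing an unbinned likelihood over the measured masses. Remember that the real CMS likelihood uses eta- and pT-dependent bias and resolution functions and a Crystal Ball lineshape; everything below is a deliberately simplified illustration.

    import numpy as np
    from scipy.optimize import minimize

    M_JPSI = 3.097  # nominal J/psi mass, GeV

    def nll(params, measured_masses):
        """Toy negative log-likelihood: k is a multiplicative momentum-scale
        correction (k > 1 if the scale is underestimated), sigma the mass
        resolution; the lineshape here is a plain Gaussian."""
        k, sigma = params
        z = (k * measured_masses - M_JPSI) / sigma
        return np.sum(0.5 * z**2 + np.log(sigma))  # up to an additive constant

    def fit_scale_and_resolution(measured_masses):
        result = minimize(nll, x0=(1.0, 0.03), args=(measured_masses,),
                          bounds=[(0.9, 1.1), (1e-3, 0.2)])
        return result.x  # best-fit (scale correction, resolution)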

Maybe it is best to show a couple of figures. The first one below shows the average mass of the J/Psi meson as a function of the pseudorapidity of the muons from its decay. The hatched red line shows the true value of the J/Psi mass; but more meaningful are the crosses, which show what should be measured with a perfect detector, given the fitting procedure (which, I am bound to specify, assumes that the lineshape follows a Crystal Ball form). The crosses are our "target": if we measure a mass in agreement with them, given our fitting procedure to extract the mass, our momentum scale is perfect.



In blue you can see that the mass, before corrections, is biased low, especially at high rapidity. Instead, after the likelihood maximization and the correction procedure, we obtain the purple crosses. The agreement with the black crosses is still not perfect, and the statistics is too poor to detect further small deviations, but the demonstration of the validity of the procedure is clear!
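The check behind this figure is conceptually simple. Here is a hedged sketch (array names are mine) of how one would bin the candidates in muon pseudorapidity and look for a mass bias bin by bin:

    import numpy as np

    M_JPSI = 3.097  # GeV

    def mean_mass_vs_eta(dimuon_mass, muon_eta,
                         edges=np.linspace(-2.4, 2.4, 9)):
        """Split J/psi candidates into muon-pseudorapidity bins and return
        the mean reconstructed mass per bin: a bin sitting significantly
        below M_JPSI flags a local underestimate of the momentum scale."""
        dimuon_mass = np.asarray(dimuon_mass)
        idx = np.digitize(np.asarray(muon_eta), edges)
        return {f"[{edges[i-1]:+.1f}, {edges[i]:+.1f})":
                dimuon_mass[idx == i].mean()
                for i in range(1, len(edges)) if np.any(idx == i)}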

And then, the resolution. This is also a function of rapidity in CMS, due to the way the detector is built and the decay geometry. The figure below shows what resolution we expected to measure as a function of rapidity, from simulated J/Psi decays (in black), given the measurement method.



In red the figure also shows what the true resolution is, from simulated muons that are then compared to reconstructed ones. In blue, the band shows what instead CMS measured. The agreement between data and simulation is encouraging, and the result demonstrates the validity of the method. This functional form and its parameters are extracted from the way the reconstructed masses of J/Psi decays distribute around the nominal mass, accounting for the fact that muons in those events have different rapidity: the likelihood knows all the details, and produces a very complete answer to our question.

I think the method is very powerful and I cannot wait to see it applied to all resonances together, with more data: the different dimuon resonances have different kinematics and produce muons of widely varied momenta, allowing a very complete picture of the calibration and resolution of the CMS detector!