Vertexing and Kinematic Fitting, Part I: Basic Theory

Lectures Given at SLAC, Aug. 5 - 7, 1998

Paul Avery
University of Florida
avery@phys.ufl.edu
http://www.phys.ufl.edu/~avery

Overview of plan

This is the first of several lectures on kinematic fitting; the focus in this lecture is on theory.

Plan of lectures:
Lecture 1: Basic theory
Lecture 2: Introduction to the KWFIT fitting package
Lecture 3: Vertex fitting
Lecture 4: Building virtual particles

References:
KWFIT: http://www.phys.ufl.edu/~avery/kwfit/ or http://w4.lns.cornell.edu/~avery/kwfit/
Several write-ups on fitting theory and constraints: http://www.phys.ufl.edu/~avery/fitting.html

What is Kinematic Fitting?

Kinematic fitting is a mathematical procedure in which one uses the physical laws governing a particle interaction or decay to improve the measurements of the process. For example, the fact that the tracks coming from a D⁰ → K⁻π⁺π⁺π⁻ decay must come from a common space point can be used to improve the 4-momenta and positions of the daughter particles, thus improving the mass resolution of the D⁰.

Physical information is supplied via constraints. Each constraint is expressed as an equation embodying some physical condition that the process must satisfy. In the example above, each track contributes 2 constraints (r-φ and z) to the vertex requirement, giving 8 constraints in all.

Vertexing is only one example. We can require instead that the invariant mass of the particles be equal to 1.8654 GeV. This is known as a mass constraint. We will discuss mass constraints later.

Implementation of Constraints

Constraints are generally implemented through a least-squares procedure. Each constraint equation is linearized and added, via the Lagrange multiplier technique, to the χ² equation of the tracks, using the covariance matrices of the tracks. Each track contributes a 7-parameter "measurement", and the 7×7 track covariance matrix is the generalization of the σ² of a single measurement. One then minimizes the χ² simultaneously with the constraint conditions.

The constraints "pull" the tracks away from their unconstrained values, and the resulting χ² obtained with n constraints is distributed like a standard χ² with n degrees of freedom, if gaussian errors apply. A histogram of fits to, say, 10,000 decays would clearly show this distribution. Of course, since track errors are only approximately gaussian, the actual distribution will have more events in the tail than predicted by theory. Still, knowledge of the distribution allows one to define reasonable χ² cuts.

For example, in the vertexing example D⁰ → K⁻π⁺π⁺π⁻ there are a total of 8 constraints, but 3 unknown parameters must be determined (the D⁰ vertex position). The total number of degrees of freedom is thus 2×4 − 3 = 5.

Trivial Example

Let's work out all the least-squares machinery for a simple example. Suppose we have two measurements, x₁ and x₂, with (independent) errors of 0.1. Now we impose the condition that the two variables sum to 6. Why 6? I don't know; just humor me for now.

[Figure: the constraint x₁ + x₂ = 6 drawn as a line in the (x₁, x₂) plane, crossing each axis at 6.]

Without the constraint condition, the total χ² of the measurements can be written

    \chi^2 = \frac{(x_1 - x_{10})^2}{\sigma_1^2} + \frac{(x_2 - x_{20})^2}{\sigma_2^2}

where x₁₀ and x₂₀ are the initial measurements of x₁ and x₂, and σ₁ = σ₂ = 0.1. Since there is no reason yet for the measurements to stray from their initial values, χ² = 0 initially.

The constraint is imposed using the Lagrange multiplier method, i.e.

    \chi^2 = \frac{(x_1 - x_{10})^2}{\sigma_1^2} + \frac{(x_2 - x_{20})^2}{\sigma_2^2} + 2\lambda (x_1 + x_2 - 6)

where λ is a Lagrange multiplier which must be determined (the factor of 2 is inserted to simplify the algebra). We minimize the χ² by setting the partial derivatives with respect to x₁, x₂ and λ to 0. Using σ₁ = σ₂ ≡ σ, this yields

    \tfrac{1}{2}\,\partial\chi^2/\partial x_1 = (x_1 - x_{10})/\sigma^2 + \lambda = 0
    \tfrac{1}{2}\,\partial\chi^2/\partial x_2 = (x_2 - x_{20})/\sigma^2 + \lambda = 0
    \tfrac{1}{2}\,\partial\chi^2/\partial\lambda = x_1 + x_2 - 6 = 0

We solve for the unknowns x₁, x₂ and λ:

    \lambda = \frac{1}{2\sigma^2}\,(x_{10} + x_{20} - 6)
    x_1 = 3 + \tfrac{1}{2}(x_{10} - x_{20})
    x_2 = 3 + \tfrac{1}{2}(x_{20} - x_{10})

Error Analysis and Covariance Matrix

The solution is only half the story, because we also care about the errors and correlations of the updated parameters. From the above discussion, we expect the constraint to reduce the errors of the original measurements. We calculate the errors for x₁ and x₂ directly from the definition of the standard deviation, by averaging over all possible measurements:

    \sigma_x^2 = \langle (x - \bar{x})^2 \rangle = \int (x - \bar{x})^2 f(x)\, dx

First write the deviations from the mean:

    \delta x_1 \equiv x_1 - \bar{x}_1 = \tfrac{1}{2}(\delta x_{10} - \delta x_{20})
    \delta x_2 \equiv x_2 - \bar{x}_2 = \tfrac{1}{2}(\delta x_{20} - \delta x_{10})

The error information for more than one variable is elegantly expressed in terms of the "covariance matrix". For example, let

    x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}

The covariance matrix of the two variables with respect to one another is (V_x)_{ij} = ⟨δx_i δx_j⟩, or in matrix form

    V_x = \langle \delta x\, \delta x^T \rangle =
    \begin{pmatrix} \langle\delta x_1\,\delta x_1\rangle & \langle\delta x_1\,\delta x_2\rangle \\
                    \langle\delta x_2\,\delta x_1\rangle & \langle\delta x_2\,\delta x_2\rangle \end{pmatrix}

It is clear from the definition that V_x is symmetric ((V_x)_{ij} = (V_x)_{ji}) and that the diagonal elements are just the squares of the standard deviations ((V_x)_{ii} = σ_i²).

For our toy problem, the initial and final covariance matrices are

    V_x^0 = \begin{pmatrix} \sigma^2 & 0 \\ 0 & \sigma^2 \end{pmatrix}
    \qquad
    V_x = \begin{pmatrix} \sigma^2/2 & -\sigma^2/2 \\ -\sigma^2/2 & \sigma^2/2 \end{pmatrix}

Thus the errors are

    \sigma_{x_1} = \sigma_{x_2} = \sigma/\sqrt{2} \approx 0.071

which are substantially smaller than before.

The correlation coefficient r_{x1x2} is commonly used to express the variation of one parameter with another. It is defined as

    r_{x_1 x_2} = \frac{V_{x_1 x_2}}{\sigma_{x_1}\sigma_{x_2}}

Our simple constraint leads to r_{x1x2} = −1, i.e., every fluctuation of x₁ upward is matched by an equal fluctuation of x₂ downward. This, of course, was expected. Other kinds of constraints lead to different correlations.
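As a quick numerical cross-check of these results, here is a minimal numpy sketch (the true values 2.5 and 3.5 and all variable names are chosen only for this illustration) that applies the constrained solution derived above to many simulated measurement pairs and verifies the expected error of σ/√2 ≈ 0.071, the correlation of −1, and the exact satisfaction of the constraint:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.1                   # measurement error on x1 and x2
n_toys = 10_000

# True values, chosen (for illustration) to satisfy x1 + x2 = 6 exactly.
x1_true, x2_true = 2.5, 3.5

# Simulated measurements x10, x20 for each toy experiment.
x10 = x1_true + sigma * rng.standard_normal(n_toys)
x20 = x2_true + sigma * rng.standard_normal(n_toys)

# Constrained solution derived above.
x1_fit = 3.0 + 0.5 * (x10 - x20)
x2_fit = 3.0 - 0.5 * (x10 - x20)

print("std(x1_fit)  =", x1_fit.std())                        # ~ 0.1/sqrt(2) = 0.071
print("std(x2_fit)  =", x2_fit.std())                        # ~ 0.071
print("corr(x1, x2) =", np.corrcoef(x1_fit, x2_fit)[0, 1])   # ~ -1
print("max |x1 + x2 - 6| =", np.abs(x1_fit + x2_fit - 6).max())  # constraint holds exactly
```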
General Constrained Fits with Tracks

Kinematic fitting involving tracks is straightforward, although a bit more complicated:
1. The initial tracks are defined by 7 parameters apiece
2. Each track has a 7×7 non-diagonal covariance matrix
3. There is typically more than one constraint
4. The constraints are generally non-linear

Non-linearity is not a difficult problem since we expand about a point close to the final answer anyway.

Suppose that there are m variables α and r constraints H(α) = 0:

    H_1(\alpha) = 0,\quad H_2(\alpha) = 0,\quad \ldots,\quad H_r(\alpha) = 0

The constraints can be expanded to first order about the point α_A:

    0 = H_i(\alpha) \simeq H_i(\alpha_A)
        + \sum_j \left.\frac{\partial H_i}{\partial \alpha_j}\right|_{\alpha_A} (\alpha_j - \alpha_{jA})
      \equiv \sum_j D_{ij}\,\delta\alpha_j + d_i

which can be written more naturally as a matrix equation

    0 = D\,\delta\alpha + d

where δα = α − α_A and

    D = \begin{pmatrix}
          \partial H_1/\partial\alpha_1 & \partial H_1/\partial\alpha_2 & \cdots & \partial H_1/\partial\alpha_m \\
          \partial H_2/\partial\alpha_1 & \partial H_2/\partial\alpha_2 & \cdots & \partial H_2/\partial\alpha_m \\
          \vdots & & & \vdots \\
          \partial H_r/\partial\alpha_1 & \partial H_r/\partial\alpha_2 & \cdots & \partial H_r/\partial\alpha_m
        \end{pmatrix}
      = \begin{pmatrix}
          D_{11} & D_{12} & \cdots & D_{1m} \\
          D_{21} & D_{22} & \cdots & D_{2m} \\
          \vdots & & & \vdots \\
          D_{r1} & D_{r2} & \cdots & D_{rm}
        \end{pmatrix},
    \qquad
    d = \begin{pmatrix} H_1(\alpha_A) \\ H_2(\alpha_A) \\ \vdots \\ H_r(\alpha_A) \end{pmatrix}

with the derivatives evaluated at α_A.

For our simple example of two variables satisfying the constraint x₁ + x₂ − 6 = 0, expanded about the point x_A = (3, 3), we get

    D = \begin{pmatrix} 1 & 1 \end{pmatrix}, \qquad d = 0

The constraint equation becomes

    D \begin{pmatrix} \delta x_1 \\ \delta x_2 \end{pmatrix} = 0

where δx₁ = x₁ − 3 and δx₂ = x₂ − 3.
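The matrices D and d can also be built numerically. The sketch below (the helper linearize and its argument names are invented for this illustration, not KWFIT routines; it assumes numpy) forms D by finite differences and d by evaluating the constraints at the expansion point, and reproduces D = (1 1), d = 0 for the toy constraint:

```python
import numpy as np

def linearize(H, alpha_A, eps=1e-6):
    """Linearize the constraints H(alpha) = 0 about the point alpha_A.

    Returns D (r x m matrix of derivatives dH_i/dalpha_j, estimated by
    forward differences) and d = H(alpha_A), so that the linearized
    constraints read  D (alpha - alpha_A) + d = 0.
    """
    alpha_A = np.asarray(alpha_A, dtype=float)
    d = np.atleast_1d(np.asarray(H(alpha_A), dtype=float))
    r, m = d.size, alpha_A.size
    D = np.zeros((r, m))
    for j in range(m):
        step = np.zeros(m)
        step[j] = eps
        D[:, j] = (np.atleast_1d(H(alpha_A + step)) - d) / eps
    return D, d

# Toy constraint x1 + x2 - 6 = 0, expanded about alpha_A = (3, 3):
D, d = linearize(lambda a: a[0] + a[1] - 6.0, [3.0, 3.0])
print(D)   # [[1. 1.]]
print(d)   # [0.]
```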
Matrix formulation of the χ² problem

Kinematic fits differ only in how the matrices D and d are specified. The complete χ² equation can be written compactly in matrix form:

    \chi^2 = (\alpha - \alpha_0)^T\, V_{\alpha_0}^{-1}\, (\alpha - \alpha_0) + 2\lambda^T (D\,\delta\alpha + d)

where α₀ are the unconstrained parameters, δα = α − α_A, and

    \alpha = \begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_m \end{pmatrix},
    \qquad
    \lambda = \begin{pmatrix} \lambda_1 \\ \lambda_2 \\ \vdots \\ \lambda_r \end{pmatrix}

The first term in the χ² expression is the general form for a set of m correlated variables. When the variables are uncorrelated, it collapses to the familiar expression

    \chi^2 = \sum_i \frac{(\alpha_i - \alpha_{i0})^2}{\sigma_i^2}

The second term is the sum of the products of each of the r Lagrange multipliers λ_i with its corresponding constraint. This works for the simple example:

    \chi^2 = \begin{pmatrix} x_1 - x_{10} & x_2 - x_{20} \end{pmatrix}
             \begin{pmatrix} 1/\sigma_1^2 & 0 \\ 0 & 1/\sigma_2^2 \end{pmatrix}
             \begin{pmatrix} x_1 - x_{10} \\ x_2 - x_{20} \end{pmatrix}
             + 2\lambda \left[ 1\cdot(x_1 - 3) + 1\cdot(x_2 - 3) \right]
           = \frac{(x_1 - x_{10})^2}{\sigma_1^2} + \frac{(x_2 - x_{20})^2}{\sigma_2^2} + 2\lambda (x_1 + x_2 - 6)

General solution of the χ² problem

We set to zero the partial derivatives of the χ² with respect to each of the m variables α and the r Lagrange multipliers λ, giving a total of m + r equations, enough to solve for all the unknowns. The solution is demonstrated in my first fitting note, CBX 91-72. Without going into details, the solution is

    \alpha = \alpha_0 - V_{\alpha_0} D^T \lambda,
    \qquad
    \lambda = V_D \left( D\,\delta\alpha_0 + d \right)

with covariance matrix

    V_\alpha = V_{\alpha_0} - V_{\alpha_0} D^T V_D\, D\, V_{\alpha_0}

The auxiliary matrix V_D and the χ² are

    V_D = \left( D\, V_{\alpha_0} D^T \right)^{-1}
    \chi^2 = \lambda^T V_D^{-1} \lambda = \lambda^T \left( D\,\delta\alpha_0 + d \right)

where δα₀ = α₀ − α_A. Physically, V_D is the covariance matrix of the Lagrange multipliers λ. (A short numerical illustration of these formulas is given after the notes below.)

The following points should be noted:
1. Kinematic fitting problems are fully specified by the D and d matrices. Once they are set, the solution is determined by the above equations.
2. The solution requires the inverse of only a single matrix, the r×r matrix D V_{α₀} Dᵀ, which is used to obtain V_D.
3. It can be shown that the new covariance matrix V_α has diagonal elements smaller than those of the initial covariance matrix V_{α₀}. Thus the constraints are doing their job.
4. The χ² does not require the evaluation of V_{α₀}⁻¹, although the formal definition uses that matrix. This is a great simplification and permits the use of track representations with non-invertible covariance matrices (such as that used in KWFIT).
5. The χ² can be written as a sum of r terms, one per constraint. It is then possible to look at each of these terms separately in order to get more discriminating power than from the overall χ².
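To tie the pieces together, here is a compact numerical sketch of the solution formulas above (assuming numpy; the function constrained_fit and its argument names are invented for this illustration and are not KWFIT routines). Applied to the toy constraint, it reproduces the analytic results: parameters pulled onto x₁ + x₂ = 6, diagonal errors of 0.071, correlation −1, and a χ² with one degree of freedom.

```python
import numpy as np

def constrained_fit(alpha0, V0, D, d, alpha_A):
    """One linearized constrained-fit step using the formulas above.

    alpha0  : unconstrained parameters (length m)
    V0      : their m x m covariance matrix
    D, d    : linearized constraints  D (alpha - alpha_A) + d = 0
    alpha_A : expansion point used to build D and d
    """
    alpha0 = np.asarray(alpha0, dtype=float)
    residual = D @ (alpha0 - alpha_A) + d     # D*dalpha_0 + d
    VD = np.linalg.inv(D @ V0 @ D.T)          # the only matrix inversion (r x r)
    lam = VD @ residual                       # Lagrange multipliers
    alpha = alpha0 - V0 @ D.T @ lam           # updated parameters
    V = V0 - V0 @ D.T @ VD @ D @ V0           # updated covariance matrix
    chi2 = float(lam @ residual)              # = lam^T VD^-1 lam
    return alpha, V, chi2

# Toy example: measurements (2.9, 3.2), sigma = 0.1, constraint x1 + x2 = 6.
alpha0 = np.array([2.9, 3.2])
V0 = np.diag([0.1**2, 0.1**2])
D = np.array([[1.0, 1.0]])
d = np.array([0.0])                           # constraint is exact at alpha_A = (3, 3)
alpha, V, chi2 = constrained_fit(alpha0, V0, D, d, alpha_A=np.array([3.0, 3.0]))

print(alpha)   # [2.85 3.15]  -> sums to 6, matching the analytic solution
print(V)       # diagonal 0.005 = (0.071)^2, off-diagonal -0.005
print(chi2)    # 0.5, distributed as chi^2 with 1 degree of freedom
```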