# Chapter 4: Interaction with External Sources

## 4.1 Classical Fields and Green's Functions

We start our discussion of sources with classical (non-quantized) fields. Consider

$$\mathcal{L} = \tfrac{1}{2}(\partial_\mu \phi(x))^2 - \tfrac{1}{2}m^2\phi(x)^2 + J(x)\phi(x)\,.$$

Here $J(x)$ is a non-dynamical field, the source. The equation of motion for the dynamical field $\phi(x)$ is

$$(\partial^2 + m^2)\phi(x) = J(x)\,. \tag{4.1}$$
The terminology comes from the more familiar case in electromagnetism. The non-homogeneous Maxwell equations,

$$\vec\nabla\cdot\vec E = \rho\,,\qquad \vec\nabla\times\vec B - \partial_0\vec E = \vec J\,,$$

have as sources of the electric and magnetic fields the charge and current densities, $\rho$ and $\vec J$, respectively. We can go a little further in pushing the analogy. In terms of the electric and vector potentials, $A^0$ and $\vec A$, respectively, the fields are

$$\vec E = -\partial_0\vec A - \vec\nabla A^0\,,\qquad \vec B = \vec\nabla\times\vec A\,,$$

so that Gauss's law becomes

$$-\partial_0\vec\nabla\cdot\vec A - \nabla^2 A^0 = \rho\,.$$

In Lorentz gauge, $\vec\nabla\cdot\vec A + \partial_0 A^0 = 0$, this takes the form

$$\partial^2 A^0 = \rho\,.$$
This is exactly of the form (4.1), for a massless field, with the identification of $A^0$ for $\phi$ and $\rho$ for $J$. In Lorentz gauge the equations satisfied by the vector potential are again of this form,

$$\partial^2 \vec A = \vec J\,.$$

The four-vector $J^\mu = (\rho, J^i)$ serves as the source of the four-vector potential $A^\mu$. Writing

$$\partial^2 A^\mu = J^\mu$$

is reassuringly covariant under Lorentz transformations, as it should be, since this is where relativity was discovered! Each of the four components of $A^\mu$ satisfies the massless KG equation with source.
Consider turning on and off a localized source. The source is "on," that is, non-vanishing, only for $-T < t < T$. Before $J$ is turned on, $\phi$ evolves as a free KG field (meaning free of sources or interactions), so we can think of the initial conditions as giving $\phi(x) = \phi_{\rm in}(x)$ for $t < -T$, with $\phi_{\rm in}(x)$ a solution of the free KG equation. We similarly have that for $t > T$, $\phi(x) = \phi_{\rm out}(x)$, where $\phi_{\rm out}(x)$ is a solution of the free KG equation. Both $\phi_{\rm in}(x)$ and $\phi_{\rm out}(x)$ are solutions of the KG equation for all $t$, but they agree with $\phi$ only for $t < -T$ and $t > T$, respectively. Our task is to find $\phi_{\rm out}(x)$ given $\phi_{\rm in}(x)$ (and, of course, the source $J(x)$).
Solve (4.1) using Green's functions:

$$\phi(x) = \phi_{\rm hom}(x) + \int d^4y\, G(x-y)J(y)\,, \tag{4.2}$$

where the Green's function satisfies

$$(\partial^2 + m^2)G(x) = \delta^{(4)}(x) \tag{4.3}$$

and $\phi_{\rm hom}(x)$ is a solution of the associated homogeneous equation, which is the KG equation,

$$(\partial^2 + m^2)\phi_{\rm hom}(x) = 0\,.$$

We can use the freedom in $\phi_{\rm hom}(x)$ to satisfy boundary conditions. We can determine the Green's function by Fourier transform,
$$G(x) = \int \frac{d^4k}{(2\pi)^4}\, e^{ik\cdot x}\, \tilde G(k)\,.$$

Then

$$(\partial^2 + m^2)\int \frac{d^4k}{(2\pi)^4}\, e^{ik\cdot x}\, \tilde G(k) = \int \frac{d^4k}{(2\pi)^4}\, e^{ik\cdot x}\,(-k^2 + m^2)\,\tilde G(k) = \delta^{(4)}(x) = \int \frac{d^4k}{(2\pi)^4}\, e^{ik\cdot x}\,,$$

so that

$$\tilde G(k) = -\frac{1}{k^2 - m^2}\,.$$
So preliminarily take

$$G(x) = -\int \frac{d^4k}{(2\pi)^4}\, \frac{e^{ik\cdot x}}{k^2 - m^2}\,.$$

However, note this is ill-defined, since the integrand diverges at $k^2 = m^2$. The integral over $k^0$ diverges at $k^0 = \pm\sqrt{\vec k^2 + m^2}$, or $k^0 = \pm E_{\vec k}$ for short:

$$\int dk^0\, \frac{e^{ik^0 t}}{(k^0 - E_{\vec k})(k^0 + E_{\vec k})}\,.$$
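It is worth contrasting this with the Euclidean analogue of (4.3), where the momentum-space denominator $k^2 + m^2$ never vanishes and the inversion is unambiguous. The sketch below is a one-dimensional Euclidean toy (grid parameters are made up for illustration, not taken from the text): it inverts $-d^2/dx^2 + m^2$ by Fourier transform on a periodic grid and checks the defining equation directly.

```python
import numpy as np

# Euclidean analogue of (4.3): (-d^2/dx^2 + m^2) G(x) = delta(x).
# In momentum space G~(k) = 1/(k^2 + m^2) has no poles, unlike the
# Minkowski kernel -1/(k^2 - m^2), so no contour prescription is needed.
N, L, m = 1024, 40.0, 1.0          # hypothetical grid size, box length, mass
dx = L / N
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
k2_disc = (2 - 2 * np.cos(k * dx)) / dx**2    # symbol of the discrete Laplacian
G = np.fft.ifft(1.0 / (k2_disc + m**2)).real / dx  # grid Green's function

# Apply the discrete operator (-D^2 + m^2) and compare with the grid delta
lap = (np.roll(G, -1) - 2 * G + np.roll(G, 1)) / dx**2
delta = -lap + m**2 * G
target = np.zeros(N); target[0] = 1.0 / dx    # discrete delta function
assert np.allclose(delta, target, atol=1e-8)

# Away from the source, G approaches the continuum result e^{-m|x|}/(2m)
x = np.arange(N) * dx
print(G[10], np.exp(-m * x[10]) / (2 * m))
```

In the Minkowski case there is no such well-defined inverse until a pole prescription is chosen, which is what the text turns to next.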
The divergent $k^0$ integral can be thought of as an integral over a complex variable $z$ along a contour on the real axis, $\mathrm{Re}(z) = k^0$, from $-\infty$ to $\infty$. Then the points $z = \pm E_{\vec k}$ are locations of simple poles of the integrand, and we can define the integral by deforming the contour to go either above or below these poles. For example, we can take the following contour:

[Figure: contour along the real axis in the complex $z$ plane, deformed to pass above the two poles at $z = \pm E_{\vec k}$.]

Regardless of which deformation of the contour we choose, the contour can be closed with a semicircle of infinite radius centered at the origin in the upper half-plane if $t > 0$ and in the lower half-plane if $t < 0$:
[Figure: the same contour closed in the upper half-plane for $t > 0$ (enclosing no poles) and in the lower half-plane for $t < 0$ (enclosing both poles at $z = \pm E_{\vec k}$).]

because the integral along the big semicircle vanishes as the radius of the circle is taken infinitely large. For the choice of contour in this figure no poles are enclosed for $t > 0$, so the integral vanishes. This defines the advanced Green's function, $G_{\rm adv}(x) = 0$ for $t > 0$. Alternatively, we can "displace" the poles by an infinitesimal amount $-i\epsilon$, with $\epsilon > 0$, so they lie just below the real axis:
[Figure: the contour along the real axis with the poles displaced to $z = \pm E_{\vec k} - i\epsilon$, just below the axis.]

Then we have

$$G_{\rm adv}(x) = -\int \frac{d^4k}{(2\pi)^4}\, e^{ik\cdot x}\, \frac{1}{(k^0 + i\epsilon)^2 - \vec k^2 - m^2}\,.$$

Similarly, the retarded Green's function is

$$G_{\rm ret}(x) = -\int \frac{d^4k}{(2\pi)^4}\, e^{ik\cdot x}\, \frac{1}{(k^0 - i\epsilon)^2 - \vec k^2 - m^2}\,,$$

with the contour below the two poles. It has $G_{\rm ret}(x) = 0$ for $t < 0$. For later use it is convenient to write

$$G_{\rm ret}(x) = -\theta(x^0)\int \frac{d^3k}{(2\pi)^4}\, e^{-i\vec k\cdot\vec x}\, 2\pi i\, \frac{1}{2E_{\vec k}}\left(e^{iE_{\vec k}t} - e^{-iE_{\vec k}t}\right) = -i\,\theta(x^0)\int (dk)\left(e^{ik\cdot x} - e^{-ik\cdot x}\right) \tag{4.4}$$

$$G_{\rm adv}(x) = \theta(-x^0)\int \frac{d^3k}{(2\pi)^4}\, e^{-i\vec k\cdot\vec x}\, 2\pi i\, \frac{1}{2E_{\vec k}}\left(e^{iE_{\vec k}t} - e^{-iE_{\vec k}t}\right) = i\,\theta(-x^0)\int (dk)\left(e^{ik\cdot x} - e^{-ik\cdot x}\right) \tag{4.5}$$
We also note that one may choose a contour that goes above one pole and below the other. For example, going below $-E_{\vec k}$ and above $+E_{\vec k}$ we have

$$G_F(x) = -\int \frac{d^4k}{(2\pi)^4}\, \frac{e^{ik\cdot x}}{(k^0 - E_{\vec k} + i\epsilon)(k^0 + E_{\vec k} - i\epsilon)} = -\int \frac{d^4k}{(2\pi)^4}\, \frac{e^{ik\cdot x}}{k^2 - m^2 + i\epsilon}$$

$$= i\left[\theta(x^0)\int (dk)\, e^{-ik\cdot x} + \theta(-x^0)\int (dk)\, e^{ik\cdot x}\right] \tag{4.6}$$
In the solution (4.2), for $x^0 < -T$ the integral over $y^0$ only has contributions from $x^0 - y^0 < 0$, and for $x^0 > T$ it has contributions only from $x^0 - y^0 > 0$. So we have

$$\phi(x) = \phi_{\rm in}(x) + \int d^4y\, G_{\rm ret}(x-y)J(y) = \phi_{\rm out}(x) + \int d^4y\, G_{\rm adv}(x-y)J(y)\,.$$

Hence we can write

$$\phi_{\rm out}(x) = \phi_{\rm in}(x) + \int d^4y \left[G_{\rm ret}(x-y) - G_{\rm adv}(x-y)\right]J(y) = \phi_{\rm in}(x) + \int d^4y\, G^{(-)}(x-y)J(y)\,. \tag{4.7}$$
$G^{(-)}$ can be obtained from the difference of (4.4) and (4.5). But more directly, we note that the retarded contour minus the advanced contour is equivalent to a pair of contours hugging the real axis from below and from above: the straight segments cancel, and we are left with two small circles encircling the poles at $z = \pm E_{\vec k}$.
This gives

$$G^{(-)}(x) = -\int \frac{d^3k}{(2\pi)^4}\, e^{-i\vec k\cdot\vec x}\, 2\pi i\, \frac{1}{2E_{\vec k}}\left(e^{iE_{\vec k}t} - e^{-iE_{\vec k}t}\right)$$
$$= -i\int (dk)\left(e^{iE_{\vec k}t - i\vec k\cdot\vec x} - e^{-iE_{\vec k}t - i\vec k\cdot\vec x}\right)$$
$$= -i\int \frac{d^4k}{(2\pi)^3}\, \theta(k^0)\,\delta(k^2 - m^2)\left(e^{ik\cdot x} - e^{-ik\cdot x}\right)$$
$$= -i\int \frac{d^4k}{(2\pi)^3}\, \varepsilon(k^0)\,\delta(k^2 - m^2)\, e^{ik\cdot x}\,,$$

where

$$\varepsilon(k^0) = \begin{cases} +1 & k^0 > 0 \\ -1 & k^0 < 0 \end{cases}$$
You may have noticed that this looks a lot like the function $\Delta_+(x) = [\phi^{(+)}(x), \phi^{(-)}(0)]$. In fact, computing the commutator of free fields at arbitrary times,

$$[\phi(x), \phi(y)] = \int (dk)(dk')\, [\alpha_{\vec k}\, e^{-ik\cdot x} + \alpha^\dagger_{\vec k}\, e^{ik\cdot x},\ \alpha_{\vec k'}\, e^{-ik'\cdot y} + \alpha^\dagger_{\vec k'}\, e^{ik'\cdot y}]$$
$$= \int (dk)\left(e^{ik\cdot(y-x)} - e^{-ik\cdot(y-x)}\right) = iG^{(-)}(y-x) = -iG^{(-)}(x-y)\,.$$
## 4.2 Quantum Fields

We now consider equation (4.1) for the KG field with a source in the case that the field is an operator on the Hilbert space $\mathcal F$. The source $J(x)$ is a c-number, a classical source, and we still take it to be localized in space-time. Suppose the system is in some state initially, well before the source is turned on. Let's take the vacuum state for definiteness, although any other state would be just as good. As the system evolves, the source is turned on and then off, and we end up with the system in some final state. Now, in the classical case, if we start from nothing and turn the source on and then off, radiation is produced, emitted out from the region of the localized source. In the quantum system we therefore expect to get a final state with any number of particles emitted from the localized source region.
Then $\phi(x) = \phi_{\rm in}(x)$ is the statement that for $t < -T$, $\phi$ is the same operator as a solution of the free KG equation. Likewise $\phi = \phi_{\rm out}$ for $t > T$. Does this mean $\phi_{\rm out} = \phi_{\rm in}$? Surely not; it is not true in the classical case. Since $\phi$ satisfies the equal-time commutation relations, $i[\partial_t\phi(x), \phi(y)] = \delta^{(3)}(\vec x - \vec y)$, etc., so do $\phi_{\rm out}$ and $\phi_{\rm in}$. Since the latter solve the free KG equation, they have expansions in creation and annihilation operators, and the same Hilbert space. More precisely, we can construct a Fock space $\mathcal F_{\rm in}$ out of $\alpha^\dagger_{\rm in}$, and a space $\mathcal F_{\rm out}$ out of $\alpha^\dagger_{\rm out}$. These spaces are just Hilbert spaces of the free KG equation, and therefore they are isomorphic, $\mathcal F_{\rm in} \approx \mathcal F_{\rm out} \approx \mathcal F_{\rm KG}$. So there is some linear, invertible operator $S: \mathcal F_{\rm KG} \to \mathcal F_{\rm KG}$, so that $|\psi\rangle_{\rm out} = S^{-1}|\psi\rangle_{\rm in}$. Since the states are normalized, $S$ must preserve normalization, so $S$ is unitary, $S^\dagger S = SS^\dagger = 1$. The operator $S$ is called the S-matrix.
Let's understand the meaning of $|\psi\rangle_{\rm out} = S^\dagger|\psi\rangle_{\rm in}$. The state of the system initially (far past) is $|\psi\rangle_{\rm in}$. It evolves into a state $|\psi\rangle_{\rm out} = S^\dagger|\psi\rangle_{\rm in}$ at late times, well after the source is turned off. We can expand it in a basis of the Fock space, the states $|\vec k_1, \ldots, \vec k_n\rangle$ for $n = 0, 1, 2, \ldots$ In particular, if the initial state is the vacuum, then the expansion of $S^\dagger|0\rangle$ in the Fock space basis tells us the probability amplitude for emitting any number of particles. More generally,

$$_{\rm out}\langle\psi'|\psi\rangle_{\rm in} = {}_{\rm in}\langle\psi'|S|\psi\rangle_{\rm in}$$
is the probability amplitude for starting in the state $|\psi\rangle$ and ending in the state $|\psi'\rangle$. Note that the left-hand side is not to be taken literally as an inner product in the same Hilbert space of free particles (else it would vanish except when the initial and final states are physically identical, i.e., when nothing happens).
Now, $S^\dagger\phi_{\rm in}(x)S$ is an operator acting on out states that satisfies the KG equation. Similarly, if $\pi_{\rm in}(x) = \partial_t\phi_{\rm in}(x)$, then $S^\dagger\pi_{\rm in}(x)S$ acts on out states. Moreover, the commutator $[S^\dagger\phi_{\rm in}(x)S, S^\dagger\pi_{\rm in}(y)S] = S^\dagger[\phi_{\rm in}(x), \pi_{\rm in}(y)]S = [\phi_{\rm in}(x), \pi_{\rm in}(y)]$, since the commutator is a c-number and $S^\dagger S = 1$, and similarly for the other commutators of $S^\dagger\phi_{\rm in}(x)S$ and $S^\dagger\pi_{\rm in}(x)S$. This means that, up to a canonical transformation,

$$\phi_{\rm out}(x) = S^\dagger\phi_{\rm in}(x)S\,.$$

Since we are free to choose the out fields in the class of canonically equivalent fields, we take the above relation to be our choice.
There is a simple way to determine $S$. We will present this now, but the method works only for the case of an external source and cannot be generalized to the case of interacting fields. So after presenting this method we will present a more powerful technique that can be generalized. From Eq. (4.7) we have

$$S^\dagger\phi_{\rm in}(x)S = \phi_{\rm in}(x) + \int d^4y\, G^{(-)}(x-y)J(y) = \phi_{\rm in}(x) + i\int d^4y\, [\phi_{\rm in}(x), \phi_{\rm in}(y)J(y)]\,.$$

Recall

$$e^A B e^{-A} = B + [A, B] + \tfrac{1}{2}[A, [A, B]] + \cdots = B + [A, B] \quad\text{if } [A, [A, B]] = 0\,.$$

It follows that

$$S = \exp\left(i\int d^4y\, \phi_{\rm in}(y)J(y)\right)\,. \tag{4.8}$$

It is convenient to normal-order $S$. Since $[\phi^{(+)}(x), \phi^{(-)}(y)]$ is a c-number we can use

$$e^A e^B = e^{A+B+\frac{1}{2}[A,B]}\,,$$

which is valid provided $[A, B]$ commutes with both $A$ and $B$. So using $i\int d^4x\, \phi^{(-)}(x)J(x)$ and $i\int d^4x\, \phi^{(+)}(x)J(x)$ for $A$ and $B$,

$$S = e^{i\int d^4x\, \phi^{(-)}(x)J(x)}\, e^{i\int d^4x\, \phi^{(+)}(x)J(x)}\, e^{\frac{1}{2}\int d^4x\, d^4y\, [\phi^{(-)}(x),\, \phi^{(+)}(y)]\, J(x)J(y)}\,.$$
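The $e^Ae^B = e^{A+B+\frac{1}{2}[A,B]}$ identity used above can be checked concretely in the smallest representation where the commutator is central: the $3\times 3$ Heisenberg-algebra matrices. This is only an illustration of the identity (the coefficients $0.7$ and $1.3$ are arbitrary), not a representation of the field operators:

```python
import numpy as np
from scipy.linalg import expm

# Check e^A e^B = e^{A + B + [A,B]/2} when [A,B] commutes with A and B.
A = 0.7 * np.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
B = 1.3 * np.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]], dtype=float)
C = A @ B - B @ A                      # the commutator [A, B]
# [A,B] is central here, which is the condition for the identity to hold:
assert np.allclose(C @ A, A @ C) and np.allclose(C @ B, B @ C)

lhs = expm(A) @ expm(B)
rhs = expm(A + B + 0.5 * C)
assert np.allclose(lhs, rhs, atol=1e-12)
```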
From (2.10),

$$\Delta_+(x_2 - x_1) \equiv [\hat\phi^{(+)}(x_1), \hat\phi^{(-)}(x_2)] = \int (dk)\, e^{-ik\cdot(x_2 - x_1)} = \int \frac{d^4k}{(2\pi)^3}\, \theta(k^0)\,\delta(k^2 - m^2)\, e^{-ik\cdot(x_2 - x_1)}\,,$$
and introducing the Fourier transform of the source,

$$J(x) = \int \frac{d^4k}{(2\pi)^4}\, e^{ik\cdot x}\, \tilde J(k)\,,$$

we have

$$\int d^4x\, d^4y\, [\phi^{(-)}_{\rm in}(x), \phi^{(+)}_{\rm in}(y)]\, J(x)J(y)$$
$$= -\int \frac{d^4k_1}{(2\pi)^3}\frac{d^4k_2}{(2\pi)^4}\frac{d^4k_3}{(2\pi)^4}\, \theta(k_1^0)\,\delta(k_1^2 - m^2)\, \tilde J(k_2)\tilde J(k_3) \int d^4x \int d^4y\, e^{ik_2\cdot x}\, e^{ik_3\cdot y}\, e^{ik_1\cdot(y-x)}$$
$$= -\int \frac{d^4k}{(2\pi)^3}\, \theta(k^0)\,\delta(k^2 - m^2)\, \tilde J(k)\tilde J(-k) = -\int (dk)\, |\tilde J(k)|^2\,,$$

where we have used $J^*(x) = J(x) \Rightarrow \tilde J(-k) = \tilde J^*(k)$, and it is understood that $k^0 = E_{\vec k}$.
Hence,

$$S = e^{i\int d^4x\, \phi^{(-)}(x)J(x)}\, e^{i\int d^4x\, \phi^{(+)}(x)J(x)}\, e^{-\frac{1}{2}\int (dk)\, |\tilde J(k)|^2}\,.$$
As an example of an application, we can compute the probability of finding particles in the final state if we start from no particles initially (the emission probability). Start from the probability of persistence of the vacuum,

$$|_{\rm out}\langle 0|0\rangle_{\rm in}|^2 = |_{\rm in}\langle 0|S|0\rangle_{\rm in}|^2 = \exp\left(-\int (dk)\, |\tilde J(k)|^2\right)\,.$$

Next compute the probability that one particle is produced with momentum $\vec k$:

$$|_{\rm out}\langle \vec k|0\rangle_{\rm in}|^2 = |_{\rm in}\langle \vec k|S|0\rangle_{\rm in}|^2 = |_{\rm in}\langle 0|\alpha_{\vec k}\, S|0\rangle_{\rm in}|^2\,,$$

where the "in" label on $\alpha_{\vec k}$ is implicit. To proceed we use

$$[\alpha_{\vec k}, \phi^{(-)}(x)] = \int (dk')\, [\alpha_{\vec k}, \alpha^\dagger_{\vec k'}]\, e^{ik'\cdot x} = e^{ik\cdot x} \quad (\text{with } k^0 = E_{\vec k})\,,$$

so that

$$e^{-i\int \phi^{(-)}J}\, \alpha_{\vec k}\, e^{i\int \phi^{(-)}J} = \alpha_{\vec k} - i\int [\phi^{(-)}, \alpha_{\vec k}]\, J = \alpha_{\vec k} + i\int d^4x\, e^{ik\cdot x} J(x) = \alpha_{\vec k} + i\tilde J(-k)\,.$$

Hence

$$|_{\rm out}\langle \vec k|0\rangle_{\rm in}|^2 = |\tilde J(k)|^2 \exp\left(-\int (dk)\, |\tilde J(k)|^2\right)\,.$$
### 4.2.1 Phase Space

Suppose we want to find the probability of finding a single particle as $t \to \infty$, regardless of its momentum, that is, in any state. We need to sum over all final states consisting of a single particle, once each. Since we have a continuum of states, we have to integrate over all $\vec k$ with some measure, $\int d\mu(\vec k)$. Clearly $d\mu(\vec k)$ must involve $d^3k$, possibly weighted by a function $f(\vec k)$. Presumably this function is rotationally invariant, $f = f(|\vec k|)$. But we suspect also that $d\mu(\vec k)$ is Lorentz invariant, so $d\mu(\vec k) \propto (dk)$, that is, it equals the invariant measure up to a constant. Let's figure this out by counting. Note that how we normalize states matters.
First we determine this via a shortcut, and later we repeat the calculation via a more physical approach (and obtain, of course, the same result). We want

$$\sum_n |_{\rm out}\langle n|0\rangle_{\rm in}|^2 = \sum_n {}_{\rm in}\langle 0|n\rangle_{\rm out}\; {}_{\rm out}\langle n|0\rangle_{\rm in} = {}_{\rm in}\langle 0|\left(\sum_n |n\rangle_{\rm out}\; {}_{\rm out}\langle n|\right)|0\rangle_{\rm in}\,,$$

where the sum is restricted over some states. The operator

$$\sum_n |n\rangle_{\rm out}\; {}_{\rm out}\langle n|$$

is a projection operator onto the space of "some states," the ones we want to sum in the final state. We have already discussed this projection operator for the case of one-particle states when $|\vec k\rangle$ is relativistically normalized. It is

$$\int (dk)\, |\vec k\rangle\langle\vec k|\,.$$

So we have

$$\text{1-particle emission probability} = \int (dk)\, |\tilde J(k)|^2\, \exp\left(-\int (dk)\, |\tilde J(k)|^2\right)\,.$$
Now we repeat the calculation by counting states. It is easier to count discrete sets of states. We can do so by placing the system in a box of volume $L_x L_y L_z$. Take, say, periodic boundary conditions. Then instead of $\int (dk)\, (\alpha_{\vec k}\, e^{i\vec k\cdot\vec x} + \text{h.c.})$, we have a Fourier sum, $\sum_{\vec k}(a_{\vec k}\, e^{i\vec k\cdot\vec x} + \text{h.c.})$. Here the $a_{\vec k}$ are annihilation operators with some normalization we will have to sort out. For periodic boundary conditions we must have $k_x L_x = 2\pi n_x$, $k_y L_y = 2\pi n_y$, $k_z L_z = 2\pi n_z$, with the $n_i$ integers. We label the one-particle states by these, $|\vec n\rangle = a^\dagger_{\vec k}|0\rangle$, where $\vec k = 2\pi\left(\frac{n_x}{L_x}, \frac{n_y}{L_y}, \frac{n_z}{L_z}\right)$. The probability of finding $|\psi\rangle$ in the state $|\vec n\rangle$ is $|\langle\vec n|\psi\rangle|^2$ if both $|\psi\rangle$ and $|\vec n\rangle$ are normalized to unity. (Note that this means $[a_{\vec k}, a^\dagger_{\vec k'}] = \delta_{n_x n'_x}\delta_{n_y n'_y}\delta_{n_z n'_z}$.) The probability of finding $|\psi\rangle$ in a 1-particle state is then $\sum_{\vec n} |\langle\vec n|\psi\rangle|^2$.

Let's be more specific. What is the probability of finding $|\psi\rangle$ in a 1-particle state with momentum in a box $(k_x, k_x + \Delta k_x) \times (k_y, k_y + \Delta k_y) \times (k_z, k_z + \Delta k_z)$?
There are

$$\Delta n_x\, \Delta n_y\, \Delta n_z = \frac{L_x L_y L_z}{(2\pi)^3}\, \Delta k_x\, \Delta k_y\, \Delta k_z$$

states in the box. For large volume the spacing between values of $\vec k$ becomes small, so in the limit of large volume $\Delta k_x \to dk_x$, etc., and the number of states in the momentum box is

$$\Delta n_x\, \Delta n_y\, \Delta n_z \to V\, \frac{d^3k}{(2\pi)^3}\,,$$

where $V = L_x L_y L_z$ is the volume of the box. As we switch from discrete to continuum labels for our states we must be careful with their normalization condition.
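The replacement $\Delta n_x \Delta n_y \Delta n_z \to V\, d^3k/(2\pi)^3$ can be checked by brute-force counting of lattice modes. The sketch below (with arbitrary toy values for the box side $L$ and momentum cutoff $K$, chosen only for illustration) counts the modes $\vec k = 2\pi\vec n/L$ with $|\vec k| < K$ and compares with $V$ times the volume of the $k$-ball divided by $(2\pi)^3$:

```python
import numpy as np

# With periodic boundary conditions the allowed momenta are k = 2*pi*n/L;
# the number with |k| < K should approach V * (4/3)pi K^3 / (2*pi)^3.
L, K = 30.0, 2.0                       # toy box side and momentum cutoff
nmax = int(K * L / (2 * np.pi)) + 1
n = np.arange(-nmax, nmax + 1)
nx, ny, nz = np.meshgrid(n, n, n, indexing="ij")
kk = (2 * np.pi / L) ** 2 * (nx**2 + ny**2 + nz**2)   # |k|^2 on the lattice
exact_count = int(np.count_nonzero(kk < K**2))

V = L**3
estimate = V * (4.0 / 3.0) * np.pi * K**3 / (2 * np.pi) ** 3
print(exact_count, estimate)
assert abs(exact_count / estimate - 1) < 0.03   # agreement at the few-% level
```

The residual discrepancy is a surface effect that vanishes as $L \to \infty$ at fixed $K$.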
Since on the space of 1-particle states we have

$$\sum_{\vec n} |\vec n\rangle\langle\vec n| = 1 \quad\Rightarrow\quad \sum_{\vec n} |\vec n\rangle\langle\vec n|\vec n'\rangle = |\vec n'\rangle\,,$$

then in the limit, denoting by $|\vec k\rangle$ the continuum-normalized states,

$$\sum_{\vec n} |\vec n\rangle\langle\vec n|\vec n'\rangle = |\vec n'\rangle \;\to\; \int \frac{V\, d^3k}{(2\pi)^3}\, |\vec k\rangle\langle\vec k|\vec k'\rangle = |\vec k'\rangle \quad\Rightarrow\quad \delta_{n_x n'_x}\delta_{n_y n'_y}\delta_{n_z n'_z} \to \frac{(2\pi)^3}{V}\, \delta^{(3)}(\vec k - \vec k')\,.$$

Putting these elements together, the probability of finding $|\psi\rangle$ in a 1-particle state is

$$\sum_{\vec n} |\langle\vec n|\psi\rangle|^2 \to \int d^3k\, |\langle\vec k|\psi\rangle|^2\,.$$
Finally, if we want to change the normalization of states so that $\langle\vec k'|\vec k\rangle = N_{\vec k}\, \delta^{(3)}(\vec k - \vec k')$, then the probability of finding $|\psi\rangle$ in a 1-particle state is

$$\int \frac{d^3k}{N_{\vec k}}\, |\langle\vec k|\psi\rangle|^2\,.$$

For relativistic normalization of states, $N_{\vec k} = (2\pi)^3\, 2E_{\vec k}$, and the probability is

$$\int (dk)\, |\langle\vec k|\psi\rangle|^2\,.$$

This is precisely our earlier result. Denoting the emission probability for $n$ particles in the presence of a localized source $J(x)$ by $p_n$, we have

$$p_0 = e^{-\xi}\,,\qquad p_1 = \xi\, e^{-\xi}\,,\qquad \text{where } \xi = \int (dk)\, |\tilde J(\vec k)|^2\,. \tag{4.9}$$
### 4.2.2 Poisson

We can now go on to compute $p_n$ for arbitrary $n$. Start with $n = 2$, that is, emission of two particles:

$$e^{-i\int \phi^{(-)}J}\, \alpha_{\vec k}\alpha_{\vec k'}\, e^{i\int \phi^{(-)}J} = e^{-i\int \phi^{(-)}J}\, \alpha_{\vec k}\, e^{i\int \phi^{(-)}J}\, e^{-i\int \phi^{(-)}J}\, \alpha_{\vec k'}\, e^{i\int \phi^{(-)}J} = \left(\alpha_{\vec k} + i\tilde J(-k)\right)\left(\alpha_{\vec k'} + i\tilde J(-k')\right)\,, \tag{4.10}$$

so that

$$_{\rm out}\langle \vec k\, \vec k'|0\rangle_{\rm in} = -\tilde J(-k)\,\tilde J(-k')\, e^{-\frac{1}{2}\xi}\,.$$

To get the emission probability we must sum over all distinguishable 2-particle states. Since $|\vec k\, \vec k'\rangle = |\vec k'\, \vec k\rangle$, we must not double count. When we integrate over a box in momentum space, we count the state $|\vec k\, \vec k'\rangle$ twice if we sum over $k_1$ and $k_2$ with values $k_1 = k$, $k_2 = k'$ and $k_1 = k'$, $k_2 = k$: the region of the $(k_x, k'_x)$ plane above the diagonal covers each pair once, that is, one half of the full square. Hence

$$p_2 = \tfrac{1}{2}\,\xi^2 e^{-\xi}\,.$$

The generalization is straightforward:

$$p_n = \frac{1}{n!}\,\xi^n \exp(-\xi)\,.$$

This is a Poisson distribution! Note that $\sum_n p_n = 1$, that is, there is certainty of finding anything (that is, either no particles or some particles). The mean of the distribution is $\xi = \int (dk)\, |\tilde J(\vec k)|^2$.
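The Poisson structure can be verified numerically in a single-mode toy model: for one momentum mode, $S$ reduces to a displacement operator $\exp(\lambda\alpha^\dagger - \lambda^*\alpha)$, with $\lambda$ playing the role of $i\tilde J(-k)$ for that mode and $\xi = |\lambda|^2$. The sketch below (truncated Fock space, arbitrary $\lambda$; a toy, not the full field theory) builds $S$ as a matrix and checks $p_n = \xi^n e^{-\xi}/n!$:

```python
import numpy as np
from scipy.linalg import expm
from math import factorial

N = 60                                  # truncated Fock-space dimension (toy)
a = np.diag(np.sqrt(np.arange(1, N)), 1)    # annihilation operator matrix
lam = 0.7 + 0.3j                        # arbitrary single-mode source strength
S = expm(lam * a.conj().T - np.conjugate(lam) * a)   # displacement operator

vac = np.zeros(N); vac[0] = 1.0
amps = S @ vac                          # amplitudes <n|S|0>
xi = abs(lam) ** 2                      # the mean of the distribution

# The emission probabilities should follow the Poisson distribution p_n:
for n in range(8):
    p_n = abs(amps[n]) ** 2
    assert abs(p_n - xi**n * np.exp(-xi) / factorial(n)) < 1e-10
```

Truncation errors are negligible here because the coherent-state amplitudes fall off much faster than the cutoff $N$.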
## 4.3 Evolution Operator

We now introduce a more general formalism to derive the same results, one that will be more useful when we consider interacting quantum fields. We want to construct an operator $U(t)$ that gives the connection between the field $\phi$ and the "in" field $\phi_{\rm in}$:

$$\phi(\vec x, t) = U^{-1}(t)\, \phi_{\rm in}(\vec x, t)\, U(t)\,. \tag{4.11}$$

Since we are assuming $\phi \to \phi_{\rm in}$ as $t \to -\infty$, we must have

$$U(t) \to 1 \quad\text{as}\quad t \to -\infty\,. \tag{4.12}$$
The S-matrix is then

$$S = \lim_{t\to\infty} U(t)\,.$$

The time evolutions of $\phi$ and $\phi_{\rm in}$ are given by

$$\frac{\partial\phi}{\partial t}(\vec x, t) = i[H(t), \phi(\vec x, t)] \qquad\text{and}\qquad \frac{\partial\phi_{\rm in}}{\partial t}(\vec x, t) = i[H_{0\,\rm in}(t), \phi_{\rm in}(\vec x, t)]\,. \tag{4.13}$$

Here $H = H_0 + H'$, where $H_0$ is the free Hamiltonian and $H'$ describes interactions. In the present case,

$$H_0 = \int d^3x \left[\tfrac{1}{2}\pi^2 + \tfrac{1}{2}(\vec\nabla\phi)^2 + \tfrac{1}{2}m^2\phi^2\right]\,,\qquad H'(t) = -\int d^3x\, J(\vec x, t)\, \phi(\vec x, t)\,.$$

Also, the subscript "in" means the argument has "in" fields. So $H = H(\phi, \pi, J)$ with $H_0 = H_0(\phi, \pi)$, while $H_{0\,\rm in} = H_0(\phi_{\rm in}, \pi_{\rm in})$. From Eq. (4.11) we have
$$U^{-1}(t)\, H(\phi(t), \pi(t), J(t))\, U(t) = H\!\left(U^{-1}(t)\phi(t)U(t),\ U^{-1}(t)\pi(t)U(t),\ U^{-1}(t)J(t)U(t)\right) = H(\phi_{\rm in}(t), \pi_{\rm in}(t), J(t))\,, \tag{4.14}$$

since $J$ is a c-number, and
$$\frac{\partial\phi_{\rm in}}{\partial t}(\vec x, t) = \frac{\partial}{\partial t}\left[U(t)\, \phi(\vec x, t)\, U^{-1}(t)\right]$$
$$= \frac{dU}{dt}\,\phi\, U^{-1} + U\phi\,\frac{dU^{-1}}{dt} + U\, i[H(t), \phi(\vec x, t)]\, U^{-1}$$
$$= \frac{dU}{dt}U^{-1}\,\phi_{\rm in} - \phi_{\rm in}\,\frac{dU}{dt}U^{-1} + i[H_{\rm in}(t), \phi_{\rm in}(\vec x, t)]$$
$$= i\left[-i\frac{dU}{dt}U^{-1} + H_{\rm in}(t),\ \phi_{\rm in}(\vec x, t)\right]$$
Comparing with Eq. (4.13), this is the commutator with $H_{0\,\rm in}$, so we must have

$$-i\frac{dU}{dt}U^{-1} + H_{\rm in}(t) = H_{0\,\rm in}(t)\,,$$

or

$$\frac{dU}{dt} = -i(H_{\rm in} - H_{0\,\rm in})\,U = -iH'_{\rm in}\,U\,. \tag{4.15}$$

The solution to this equation with the boundary condition (4.12) gives the S-matrix, $S = U(\infty)$. Note that the equation contains only "in" fields, which we know how to handle.

We can solve (4.15) by iteration. Integrating (4.15) from $-\infty$ to $t$ we have

$$U(t) - 1 = -i\int_{-\infty}^t dt'\, H'_{\rm in}(t')\, U(t')\,.$$
Now use this again, repeatedly,

$$U(t) = 1 - i\int_{-\infty}^t dt'\, H'_{\rm in}(t')\left[1 - i\int_{-\infty}^{t'} dt''\, H'_{\rm in}(t'')\, U(t'')\right]$$
$$= 1 - i\int_{-\infty}^t dt_1\, H'_{\rm in}(t_1) + (-i)^2\int_{-\infty}^t dt_1 \int_{-\infty}^{t_1} dt_2\, H'_{\rm in}(t_1)\, H'_{\rm in}(t_2)\, U(t_2)$$
$$= \cdots = 1 + \sum_{n=1}^\infty (-i)^n \int_{-\infty}^t dt_1 \int_{-\infty}^{t_1} dt_2 \cdots \int_{-\infty}^{t_{n-1}} dt_n\, H'_{\rm in}(t_1)\, H'_{\rm in}(t_2) \cdots H'_{\rm in}(t_n)\,.$$

Note that the product $H'_{\rm in}(t_1)\, H'_{\rm in}(t_2) \cdots H'_{\rm in}(t_n)$ is time-ordered, that is, the Hamiltonians appear ordered by $t \ge t_1 \ge t_2 \ge \cdots \ge t_n$. This is the main result of this section.
We can write the result more compactly. Note that

$$\int_{-\infty}^t dt_1 \int_{-\infty}^{t_1} dt_2\, H'_{\rm in}(t_1)\, H'_{\rm in}(t_2) = \int_{-\infty}^t dt_2 \int_{-\infty}^{t_2} dt_1\, H'_{\rm in}(t_2)\, H'_{\rm in}(t_1)\,.$$

The two integrals together cover the whole $t_1$ vs $t_2$ plane (restricted to $t_1, t_2 \le t$): the region of integration of the first integral is the half-plane below the diagonal $t_1 = t_2$, and that of the second is the half-plane above it.
For any two time-dependent operators, $A(t)$ and $B(t)$, we define the time-ordered product

$$T(A(t_1)B(t_2)) = \theta(t_1 - t_2)\, A(t_1)B(t_2) + \theta(t_2 - t_1)\, B(t_2)A(t_1) = \begin{cases} A(t_1)B(t_2) & t_1 > t_2 \\ B(t_2)A(t_1) & t_2 > t_1 \end{cases}$$

and similarly when there are more than two operators in the product. Then
$$\int_{-\infty}^t dt_1 \int_{-\infty}^{t_1} dt_2\, H'_{\rm in}(t_1)\, H'_{\rm in}(t_2) = \frac{1}{2}\int_{-\infty}^t dt_1 \int_{-\infty}^t dt_2\, T\!\left(H'_{\rm in}(t_1)\, H'_{\rm in}(t_2)\right)\,.$$

For the term with $n$ integrals there are $n!$ orderings of $t_1, \ldots, t_n$, so we obtain

$$U(t) = 1 + \sum_{n=1}^\infty \frac{(-i)^n}{n!}\int_{-\infty}^t dt_1 \int_{-\infty}^t dt_2 \cdots \int_{-\infty}^t dt_n\, T\!\left(H'_{\rm in}(t_1)\, H'_{\rm in}(t_2) \cdots H'_{\rm in}(t_n)\right)\,, \tag{4.16}$$

or, comparing with $e^z = \sum_{n=0}^\infty \frac{1}{n!}z^n$, we write

$$U(t) = T\!\left[\exp\left(-i\int_{-\infty}^t dt'\, H'_{\rm in}(t')\right)\right]\,.$$
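The content of the time-ordered exponential is that $U$ is built from short-time factors with later times to the left. A quick numerical check with a hypothetical non-commuting $2\times 2$ matrix $H'(t)$ (chosen arbitrarily here, purely for illustration) shows that this ordered product is unitary and differs from the naive, un-ordered exponential $\exp(-i\int dt\, H')$:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H = lambda t: np.cos(t) * sx + t * sz    # toy H'(t) with [H(t), H(t')] != 0

# T-ordered exponential as a product of short-time factors, later times LEFT:
ts = np.linspace(0.0, 2.0, 4001)
dt = ts[1] - ts[0]
U = np.eye(2, dtype=complex)
for t in ts[:-1]:
    U = expm(-1j * H(t + dt / 2) * dt) @ U

# Naive exponential of the plain integral, with no time ordering:
I1 = sum(H(t + dt / 2) * dt for t in ts[:-1])
U_naive = expm(-1j * I1)

assert np.allclose(U.conj().T @ U, np.eye(2), atol=1e-8)  # U is unitary
assert np.linalg.norm(U - U_naive) > 1e-3                 # ordering matters
```

The difference between the two is governed by the commutators $[H'(t_1), H'(t_2)]$, which is exactly what the expansion (4.16) keeps track of.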
You should keep in mind that the meaning of the time-ordered exponential is the explicit expansion in (4.16). Finally, taking $t \to \infty$,

$$S = T\!\left[\exp\left(-i\int_{-\infty}^\infty dt\, H'_{\rm in}(t)\right)\right] = T\!\left[\exp\left(-i\int d^4x\, \mathcal H'_{\rm in}\right)\right]$$

or, using $\mathcal L' = -\mathcal H'$,

$$S = T\!\left[\exp\left(i\int d^4x\, \mathcal L'_{\rm in}\right)\right]\,.$$

Let's use this general result for the specific example we have been discussing, and compare with previous results. Use $\mathcal L'_{\rm in} = J(x)\,\phi_{\rm in}(x)$. Then

$$S = T\!\left[\exp\left(i\int d^4x\, J(x)\,\phi_{\rm in}(x)\right)\right]\,.$$

But we had obtained

$$S = e^{-\frac{1}{2}\xi}\; {:}\exp\left(i\int d^4x\, J(x)\,\phi_{\rm in}(x)\right){:} \tag{4.17}$$

with $\xi = \int (dk)\, |\tilde J(k)|^2$, as in (4.9). Are these two expressions for $S$ the same? To show they are, we need some additional machinery: a relation between T-ordered and normal-ordered products.
## 4.4 Wick's Theorem

In order to answer the question of what the difference is between T-ordered and normal-ordered products, we consider

$$T(\phi(x_1)\phi(x_2)) - {:}\phi(x_1)\phi(x_2){:}$$

Here and below $\phi$ stands for an "in" field, that is, a free field satisfying the KG equation. For notational conciseness we write $\phi_i$ for $\phi(x_i)$, etc. Letting $\phi = \phi^{(+)} + \phi^{(-)}$ and taking $x_1^0 > x_2^0$, the difference is

$$\left(\phi_1^{(+)} + \phi_1^{(-)}\right)\left(\phi_2^{(+)} + \phi_2^{(-)}\right) - \left(\phi_1^{(+)}\phi_2^{(+)} + \phi_1^{(-)}\phi_2^{(+)} + \phi_2^{(-)}\phi_1^{(+)} + \phi_1^{(-)}\phi_2^{(-)}\right) = [\phi_1^{(+)}, \phi_2^{(-)}]\,.$$

This is a c-number (it equals $\Delta_+(x_2 - x_1)$). Taking the expectation value in the vacuum we obtain, restoring full notation momentarily,

$$T(\phi(x_1)\phi(x_2)) = {:}\phi(x_1)\phi(x_2){:} + \langle 0|T(\phi(x_1)\phi(x_2))|0\rangle\,.$$

Clearly this holds for the more general case of several fields,

$$T(\phi_n(x_1)\phi_m(x_2)) = {:}\phi_n(x_1)\phi_m(x_2){:} + \langle 0|T(\phi_n(x_1)\phi_m(x_2))|0\rangle\,.$$
Consider next the case of three fields, $T(\phi_1\phi_2\phi_3)$. Without loss of generality take $x_1^0 \ge x_2^0 \ge x_3^0$:

$$T(\phi_1\phi_2\phi_3) = \phi_1\, T(\phi_2\phi_3) = \phi_1\left({:}\phi_2\phi_3{:} + \langle 0|T(\phi_2\phi_3)|0\rangle\right)\,.$$

Now, $\phi_1\,{:}\phi_2\phi_3{:} = (\phi_1^{(+)} + \phi_1^{(-)})\,{:}\phi_2\phi_3{:}$. We need to move $\phi_1^{(+)}$ to the right of all the $(-)$ operators:

$$\phi_1^{(+)}\,{:}\phi_2\phi_3{:} = {:}\phi_1^{(+)}\phi_2\phi_3{:} + [\phi_1^{(+)}, \phi_2^{(-)}]\,{:}\phi_3{:} + [\phi_1^{(+)}, \phi_3^{(-)}]\,{:}\phi_2{:}\,,$$

where we have used the fact that $[\phi_1^{(+)}, \phi_n^{(-)}]$ is a c-number. In fact, it equals $\langle 0|T(\phi_1\phi_n)|0\rangle$. So we have

$$T(\phi_1\phi_2\phi_3) = {:}\phi_1\phi_2\phi_3{:} + {:}\phi_1{:}\,\langle 0|T(\phi_2\phi_3)|0\rangle + {:}\phi_2{:}\,\langle 0|T(\phi_1\phi_3)|0\rangle + {:}\phi_3{:}\,\langle 0|T(\phi_1\phi_2)|0\rangle\,.$$

Of course ${:}\phi{:} = \phi$, but the notation will more easily generalize below.
With more notation, rewrite the above as

$$T(\phi_1\phi_2\phi_3) = {:}\phi_1\phi_2\phi_3{:} + {:}\overline{\phi_1\phi_2}\,\phi_3{:} + {:}\overline{\phi_1\phi_3}\,\phi_2{:} + {:}\overline{\phi_2\phi_3}\,\phi_1{:}\,,$$

where the pairing $\overline{\phi_n\phi_m} = \langle 0|T(\phi_n\phi_m)|0\rangle$ is called a contraction.

Wick's theorem states that

$$T(\phi_1\cdots\phi_n) = {:}\phi_1\cdots\phi_n{:} + \sum_{\substack{\text{pairs}\\(i,j)}} {:}\phi_1\cdots\overline{\phi_i\cdots\phi_j}\cdots\phi_n{:} + \sum_{\substack{\text{2-pairs}\\(i,j),(k,l)}} {:}\phi_1\cdots\overline{\phi_i\cdots\phi_j}\cdots\overline{\phi_k\cdots\phi_l}\cdots\phi_n{:} + \cdots$$
$$\cdots + \begin{cases} {:}\overline{\phi_1\phi_2}\;\overline{\phi_3\phi_4}\cdots\overline{\phi_{n-1}\phi_n}{:} + \text{all pairings} & n\ \text{even} \\[4pt] {:}\overline{\phi_1\phi_2}\;\overline{\phi_3\phi_4}\cdots\overline{\phi_{n-2}\phi_{n-1}}\;\phi_n{:} + \text{all pairings} & n\ \text{odd} \end{cases} \tag{4.18}$$

In words, the right-hand side is the sum over all possible contractions in the normal-ordered product ${:}\phi_1\cdots\phi_n{:}$ (including the term with no contractions). The proof is by induction. We have already demonstrated this for $n = 2, 3$. Let $W(\phi_1, \ldots, \phi_n)$ stand for the right-hand side of (4.18), and assume the theorem is valid for $n-1$ fields. Then, assuming $x_1^0 \ge x_2^0 \ge \cdots \ge x_n^0$,
$$T(\phi_1\cdots\phi_n) = \phi_1\, T(\phi_2\cdots\phi_n) = \phi_1\, W(\phi_2, \ldots, \phi_n) = \left(\phi_1^{(+)} + \phi_1^{(-)}\right) W(\phi_2, \ldots, \phi_n)$$
$$= \phi_1^{(-)}\, W(\phi_2, \ldots, \phi_n) + W(\phi_2, \ldots, \phi_n)\, \phi_1^{(+)} + [\phi_1^{(+)}, W(\phi_2, \ldots, \phi_n)]\,.$$

The first two terms are normal-ordered and contain all contractions that do not involve $\phi_1$. The last term involves the contractions of $\phi_1$ with every field in every term in $W(\phi_2, \ldots, \phi_n)$, therefore all possible contractions. Hence the right-hand side contains all possible contractions, which is $W(\phi_1, \ldots, \phi_n)$.
### 4.4.1 Combinatorics

Let's go back to our computation of $S$. We'd like to use Wick's theorem to relate $T[\exp(i\int d^4x\, \phi_{\rm in}(x)J(x))]$ to ${:}\exp(i\int d^4x\, \phi_{\rm in}(x)J(x)){:}$, and for this we need a little combinatorics. To make the notation more compact we will continue dropping the "in" label on the "in" fields for the rest of this section. Now, $T[\exp(i\int d^4x\, \phi(x)J(x))]$ is a sum of terms of the form

$$\frac{i^n}{n!}\int \prod_{i=1}^n d^4x_i\, J_1\cdots J_n\, T(\phi_1\cdots\phi_n) = \frac{i^n}{n!}\int \prod_{i=1}^n d^4x_i\, J_1\cdots J_n\, W(\phi_1, \ldots, \phi_n)\,, \tag{4.19}$$

where $J_i = J(x_i)$.
Consider the term on the right-hand side with one contraction,

$$\frac{i^n}{n!}\int \prod_{i=1}^n d^4x_i\, J_1\cdots J_n \sum_{\substack{\text{pairs}\\(i,j)}} {:}\phi_1\cdots\overline{\phi_i\cdots\phi_j}\cdots\phi_n{:}$$
$$= \frac{i^n N_{\rm pairs}}{n!}\int \prod_{i=1}^{n-2} d^4x_i\, J_1\cdots J_{n-2}\; {:}\phi_1\cdots\phi_{n-2}{:} \int d^4x\, d^4x'\, J J'\, \overline{\phi\,\phi'}\,,$$

where $N_{\rm pairs}$ is the number of contractions in the sum, which is the same as the number of pairs $(i, j)$ in the list $1, \ldots, n$. That is, the number of ways of choosing two elements out of a list of $n$ objects:

$$N_{\rm pairs} = \binom{n}{2} = \frac{n!}{2!\,(n-2)!}\,.$$

Let

$$\zeta = \int d^4x\, d^4y\, J(x)J(y)\, \langle 0|T(\phi(x)\phi(y))|0\rangle\,. \tag{4.20}$$

Then

$$\text{1-pairing terms} = \frac{-\zeta}{2}\, \frac{i^{n-2}}{(n-2)!}\int \prod_{i=1}^{n-2} d^4x_i\, J_1\cdots J_{n-2}\; {:}\phi_1\cdots\phi_{n-2}{:}\,.$$
We want to repeat this calculation for the rest of the terms on the right-hand side of (4.19). For this we need to count the number of terms for a given number of contractions. Now, $k$ contractions involve $2k$ fields, and there are $\binom{n}{2k}$ ways of picking $2k$ fields out of $n$, so we need to determine how many distinct contractions one can make among them. We can figure this out by inspecting a few simple cases. For $k = 1$ there is one contraction, $\overline{\phi_1\phi_2}$; for $k = 2$ there are 3 contractions, $\overline{\phi_1\phi_2}\;\overline{\phi_3\phi_4}$, $\overline{\phi_1\phi_3}\;\overline{\phi_2\phi_4}$, and $\overline{\phi_1\phi_4}\;\overline{\phi_2\phi_3}$. Instead of continuing in this explicit way, we analyze the $k = 3$ case using inductive reasoning: $\phi_1$ can be contracted with any of the other 5 fields, and what remains is the $k = 2$ case, giving $5 \times 3$ contractions. By induction, the arbitrary-$k$ case has $(2k-1)!!$ pairings:

$$\left[(2k-1)\ \text{pairings}\ \overline{\phi_1\phi_j}\right] \times \left[(k-1)\text{-case:}\ (2k-3)!!\right] = (2k-1)!!$$
So the terms with $k$ contractions give

$$i^{n-2k}\, \underbrace{\frac{1}{n!}\, \frac{n!}{(n-2k)!\,(2k)!}\,(2k-1)!!}_{=\ \frac{1}{(n-2k)!}\,\frac{1}{2^k k!}}\, \int \prod_{i=1}^{n-2k} d^4x_i\, J_1\cdots J_{n-2k}\; {:}\phi_1\cdots\phi_{n-2k}{:} \left(i^2\int d^4x\, d^4x'\, J J'\, \overline{\phi\,\phi'}\right)^{k}$$
$$= \frac{(-\zeta/2)^k}{k!}\, \frac{i^m}{m!}\int \prod_{i=1}^m d^4x_i\, J_1\cdots J_m\; {:}\phi_1\cdots\phi_m{:}$$

with $m = n - 2k$.
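The counting used above, $\binom{n}{2k}$ choices of fields times $(2k-1)!!$ pairings among them, is easy to verify by explicit enumeration:

```python
from math import comb

# Enumerate all complete pairings of a list by recursion: pair the first
# element with each possible partner, then pair up whatever remains.
def pairings(items):
    if not items:
        return [[]]
    first, rest = items[0], items[1:]
    out = []
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for p in pairings(remaining):
            out.append([(first, partner)] + p)
    return out

def double_factorial(n):
    return 1 if n <= 0 else n * double_factorial(n - 2)

# The number of complete pairings of 2k objects is (2k-1)!!:
for k in range(1, 6):
    assert len(pairings(list(range(2 * k)))) == double_factorial(2 * k - 1)

# Terms with k contractions among n fields: C(n, 2k) * (2k-1)!!
n, k = 8, 2
print(comb(n, 2 * k) * double_factorial(2 * k - 1))
```

The recursion mirrors the inductive argument in the text: fix $\phi_1$, choose its partner in $2k-1$ ways, and what remains is the $(k-1)$-case.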
Aha! We recognize the expression with $m = n - 2k$ as a term in an exponential expansion. Considering all the terms in the expansion of $T[\exp(i\int d^4x\, \phi_{\rm in}(x)J(x))]$, the coefficient of $\frac{i^m}{m!}\int \prod_{i=1}^m d^4x_i\, J_1\cdots J_m\,{:}\phi_1\cdots\phi_m{:}$ (fixed $m$) is $\sum_{k=0}^\infty \frac{1}{k!}(-\zeta/2)^k = \exp(-\zeta/2)$, and since it is independent of $m$, it factors out. We are left with

$$e^{-\zeta/2}\sum_{m=0}^\infty \frac{i^m}{m!}\int \prod_{i=1}^m d^4x_i\, J_1\cdots J_m\; {:}\phi_1\cdots\phi_m{:}$$

That is,

$$S = T\!\left[\exp\left(i\int d^4x\, \phi_{\rm in}(x)J(x)\right)\right] = e^{-\zeta/2}\; {:}\exp\left(i\int d^4x\, J(x)\phi(x)\right){:}$$

This will equal our previous expression for $S$ in (4.17) if $\zeta = \xi$. To settle this we need to know more about the T-ordered product in the definition of $\zeta$ in (4.20).
## 4.5 Scalar Field Propagator

Let

$$\Delta_F(x, y) \equiv \langle 0|T(\phi(x)\phi(y))|0\rangle\,,$$

where $\phi(x)$ is a real, scalar, free field satisfying the Klein-Gordon equation. This is the quantity we need for the computation of $\zeta$, but it is also important for several other reasons, so we spend some time investigating it.

First of all, $i\Delta_F(x, y)$ is a Green's function for the Klein-Gordon equation; see (4.3). To verify this claim, compute directly. Taking $\partial_\mu$ to be the derivative with respect to $x^\mu$ keeping $y^\mu$ fixed, and using $(\partial^2 + m^2)\phi(x) = 0$, we have
$$(\partial^2 + m^2)\Delta_F(x, y) = (\partial^2 + m^2)\left[\theta(x^0 - y^0)\langle 0|\phi(x)\phi(y)|0\rangle + \theta(y^0 - x^0)\langle 0|\phi(y)\phi(x)|0\rangle\right]$$
$$= \left(\frac{\partial^2}{\partial x^{0\,2}}\theta(x^0 - y^0)\right)\langle 0|\phi(x)\phi(y)|0\rangle + \left(\frac{\partial^2}{\partial x^{0\,2}}\theta(y^0 - x^0)\right)\langle 0|\phi(y)\phi(x)|0\rangle$$
$$\qquad + 2\left(\frac{\partial}{\partial x^0}\theta(x^0 - y^0)\right)\langle 0|\frac{\partial\phi(x)}{\partial x^0}\phi(y)|0\rangle + 2\left(\frac{\partial}{\partial x^0}\theta(y^0 - x^0)\right)\langle 0|\phi(y)\frac{\partial\phi(x)}{\partial x^0}|0\rangle$$

Now using $d\theta(x)/dx = \delta(x)$ and $\partial_0\phi(x) = \pi(x)$, the last line above is

$$2\delta(x^0 - y^0)\langle 0|[\pi(x), \phi(y)]|0\rangle = -2i\,\delta^{(4)}(x - y)\,.$$

The line above that gives

$$\left(\frac{\partial}{\partial x^0}\delta(x^0 - y^0)\right)\langle 0|[\phi(x), \phi(y)]|0\rangle = -\delta(x^0 - y^0)\langle 0|[\pi(x), \phi(y)]|0\rangle = i\,\delta^{(4)}(x - y)\,.$$

Combining these we have

$$(\partial^2 + m^2)\Delta_F(x, y) = -i\,\delta^{(4)}(x - y)\,.$$
As we saw in Sec. 4.1, Green's functions are not unique, since one can always add solutions of the homogeneous Klein-Gordon equation to obtain a new Green's function. We must have

$$i\Delta_F(x, y) = -\int \frac{d^4k}{(2\pi)^4}\, e^{ik\cdot(x-y)}\, \frac{1}{k^2 - m^2}$$

with some prescription for the contour of integration. Note that $\Delta_F(x, y) = \Delta_F(x - y)$ depends only on the difference $x - y$, which is as expected from homogeneity of space-time.
Recall that $[\phi^{(+)}(x), \phi^{(-)}(y)] = \langle 0|T(\phi(x)\phi(y))|0\rangle$ for $x^0 > y^0$. More generally,

$$\langle 0|T(\phi(x)\phi(y))|0\rangle = \theta(x^0 - y^0)\,[\phi^{(+)}(x), \phi^{(-)}(y)] + \theta(y^0 - x^0)\,[\phi^{(+)}(y), \phi^{(-)}(x)]\,.$$

Recall also that

$$[\phi^{(+)}(x), \phi^{(-)}(y)] = \int (dk)\, e^{-iE_{\vec k}(x^0 - y^0) + i\vec k\cdot(\vec x - \vec y)}\,.$$
Without loss of generality, and to simplify notation, we set $y^\mu = 0$. We have

$$\langle 0|T(\phi(x)\phi(0))|0\rangle = \theta(x^0)\int (dk)\, e^{-iE_{\vec k}x^0 + i\vec k\cdot\vec x} + \theta(-x^0)\int (dk)\, e^{iE_{\vec k}x^0 - i\vec k\cdot\vec x}\,.$$

Comparing with Eq. (4.6) we see this is precisely $-iG_F(x)$, which was obtained by taking a contour that goes below $-E_{\vec k}$ and above $E_{\vec k}$, that is, with the poles displaced to $-E_{\vec k} + i\epsilon$ and $E_{\vec k} - i\epsilon$.
So we have

$$\langle 0|T(\phi(x)\phi(y))|0\rangle = \int \frac{d^4k}{(2\pi)^4}\, e^{-ik\cdot(x-y)}\, \frac{i}{k^2 - m^2 + i\epsilon}\,. \tag{4.21}$$

This will be of much use later. We will refer to this as the two-point function of the KG field, and to its Fourier transform, $1/(k^2 - m^2 + i\epsilon)$, as the KG propagator.
We can finally return to the question of relating $\zeta$ to $\xi$:

$$\zeta = \int d^4x\, d^4y\, J(x)J(y)\, \langle 0|T(\phi(x)\phi(y))|0\rangle$$
$$= \int d^4x\, d^4y \int \frac{d^4k_1}{(2\pi)^4}\frac{d^4k_2}{(2\pi)^4}\, e^{ik_1\cdot x}\, e^{ik_2\cdot y}\, \tilde J(k_1)\tilde J(k_2) \int \frac{d^4p}{(2\pi)^4}\, e^{-ip\cdot(x-y)}\, \frac{i}{p^2 - m^2 + i\epsilon}$$
$$= \int \frac{d^4p}{(2\pi)^4}\, \tilde J(p)\tilde J(-p)\, \frac{i}{p^2 - m^2 + i\epsilon}\,.$$

Perform the integral over $p^0$, assuming $\tilde J(p)\tilde J(-p) = |\tilde J(p)|^2$ vanishes as $|p^0| \to \infty$, which is justified by our assumption that the source is localized in space-time. Closing the contour on the upper half of the complex $p^0$ plane, we pick up the pole at $-E_{\vec p}$, so that

$$\zeta = 2\pi i \int \frac{d^3p}{(2\pi)^4}\, |\tilde J(p)|^2\, \frac{i}{-2E_{\vec p}} = \int (dk)\, |\tilde J(p)|^2 = \xi\,.$$
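The final contour integral can be checked symbolically: in the $\epsilon \to 0$ limit (where the $i\epsilon$ only records which pole the closed contour encloses), the $p^0$ integral reduces to $2\pi i$ times the residue at $p^0 = -E$:

```python
from sympy import symbols, I, pi, residue, simplify

# Verify that (1/2pi) * (closed contour integral of i/(p0^2 - E^2)) = 1/(2E),
# which is what converts d^4p/(2pi)^4 into the invariant measure
# (dk) = d^3p / ((2pi)^3 * 2E). The upper-half-plane contour encloses p0 = -E.
p0 = symbols("p0")
E = symbols("E", positive=True)
integrand = I / (p0**2 - E**2)

res = residue(integrand, p0, -E)        # residue at the enclosed pole
closed = 2 * pi * I * res / (2 * pi)    # (1/2pi) x (2*pi*i x residue)
assert simplify(closed - 1 / (2 * E)) == 0
```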