Fundamentals of Quantum Mechanics
Malkiel Shoshan
Contents

1. Introduction
2. Mathematics of Vector Spaces
   2.1 Linear Vector Spaces
   2.2 Inner Product Spaces
   2.3 Linear Operators
   2.4 Vector Components
   2.5 Matrix Representation: Vectors and Operators
   2.6 Matrix Representation of the Inner Product and Dual Spaces
   2.7 The Adjoint Operation: Kets, Scalars, and Operators
   2.8 The Projection Operator and the Completeness Relation
       The Kronecker Delta and the Completeness Relation
   2.9 Operators: Eigenvectors and Eigenvalues
   2.10 Hermitian Operators
   2.11 Functions as Vectors in Infinite Dimensions
   2.12 Orthonormality of Continuous Basis Vectors and the Dirac δ Function
   2.13 The Dirac δ and the Completeness Relation
   2.14 The Dirac δ Function: How It Looks
       The Derivative of the δ Function
       The δ Function and Fourier Transforms
   2.15 The Operators K and X
   2.16 The Propagator
3. The Postulates of Quantum Mechanics
   3.1 The Postulates
   3.2 The Continuous Eigenvalue Spectrum
   3.3 Collapse of the Wave Function and the Dirac Delta Function
   3.4 Measuring X and P
   3.5 Schrodinger’s Equation
4. Particle in a Box
Conclusion
Bibliography
1. Introduction
The highly confident and optimistic attitude during the era before the quantum revolution was that all known phenomena were explained by physics. The physicist Albert Michelson said in the 1890s: “…The more important fundamental laws and facts of physical science have all been discovered and these are now so firmly established that the possibility of their ever being supplanted in consequence of new discoveries is exceedingly remote.” And then quantum mechanics came and shook the world of physics at its very foundations.
Many different experiments were designed that demonstrated the inadequacy of classical
physics. They will not be enumerated here; we discuss only one. Most of the mysterious and peculiar
features are contained in the double slit experiment. We first do the (thought) experiment with bullets.
[Figure: a source, a wall with slits 1 and 2, and the single-slit detection curves P1 and P2]
We have a source (a gun) and two walls. Assume the walls are very far apart. The bullets shoot off in different directions. Some go through the slits and hit the next wall. We close slit 2 and measure the frequency of hits; we get a curve, P1. The curve has a maximum opposite slit 1; that is where the bullets hit most frequently. If we open both slits, we get the curve P12.
[Figure: the same setup with both slits open, showing the curves P1, P2, and the combined pattern P12]
Next, we repeat the experiment with light. With light, P1 and P2 interfere, so we get the wavy pattern. The bullets, which are “lumps” (as Feynman liked to call them), don’t interfere.
If we stopped here, everything would be fine. But we repeat the experiment a third time, this time with electrons. If we open only one slit, we get P1 or P2 as expected. However, if we open both slits, we get the wavy pattern we got with light! How is this possible? Let’s look at the point represented by the red dot in the figure. When only slit 2 was open, that point received many electrons. How is it that when another slit is opened, that point receives fewer electrons? According to classical physics, each electron has a definite trajectory. An electron going through slit 2 should follow that trajectory; it should not care whether slit 1 is open. Worse still, it should not even know that slit 1 is open. We will see that every particle has a wave function associated with it that contains all the information about the particle.
These are the conundrums that perplexed physicists in the early 1900s. Many scientists contributed to the development of quantum mechanics: Albert Einstein, Erwin Schrodinger, Paul Dirac, Werner Heisenberg, and Richard Feynman, to name just a few. At least three different but equivalent mathematical treatments were developed. We use a combination of two approaches, Schrodinger’s Wave Mechanics and Heisenberg’s Matrix Mechanics. First, we lay the mathematical foundation. Then the postulates of quantum mechanics will be explained, and a short example will be presented.
2. Mathematics of Vector Spaces
2.1 Linear Vector Spaces
Definition: A linear vector space, 𝕍, is a collection of objects called vectors.

A vector, V, is denoted by |V⟩, read “ket V”, and satisfies the following properties:

1. They have a rule for addition: |V⟩ + |W⟩
2. They have a rule for scalar multiplication: a|V⟩
3. (closure) The result of these operations is an element of the same vector space 𝕍.
4. (distributive property in vectors) a(|V⟩ + |W⟩) = a|V⟩ + a|W⟩
5. (distributive property in scalars) (a + b)|V⟩ = a|V⟩ + b|V⟩
6. (commutative) |V⟩ + |W⟩ = |W⟩ + |V⟩
7. (associative) |V⟩ + (|W⟩ + |U⟩) = (|V⟩ + |W⟩) + |U⟩
8. A null vector |0⟩ exists so that |V⟩ + |0⟩ = |V⟩
9. An additive inverse vector exists for every |V⟩ so that |V⟩ + |−V⟩ = |0⟩

The familiar arrows from physics obey the above axioms; they are definitely vectors, but not of the most general form: vectors need not have a magnitude or direction. In fact, matrices also satisfy the above properties, and thus they, too, are legitimate vectors.
Definition: If we have a set of vectors |V_1⟩, |V_2⟩, …, |V_n⟩ that are related linearly by

∑_{i=1}^n a_i|V_i⟩ = |0⟩,          (2.1.1)

and (2.1.1) is true only if all the scalars a_i = 0, then the set of vectors is linearly independent.
Consider two parallel vectors, |1⟩ and |2⟩, in the xy-plane. One can be written as a multiple of the other; accordingly, they are not linearly independent. If they were not parallel, they would be linearly independent. If we now bring a third vector, |3⟩, into the plane, then |3⟩ = a|1⟩ + b|2⟩ for some scalars a and b; thus, |1⟩, |2⟩, and |3⟩ do not form a linearly independent set.
Definition: The dimension, n, of a vector space is the maximum number of linearly independent
vectors it can contain.
Definition: A basis is a set of n linearly independent vectors in an n – dimensional space.
Theorem 1. Any vector |V⟩ in an n-dimensional space can be written linearly in terms of a basis in that space, so that if |1⟩, |2⟩, …, |n⟩ constitute a basis and the v_i are scalars, then

|V⟩ = ∑_{i=1}^n v_i|i⟩          (2.1.2)

Proof. If |V⟩ couldn’t be written in terms of a basis, the n-dimensional space would contain n + 1 linearly independent vectors: the n vectors of a basis and the vector |V⟩ itself. However, by definition, an n-dimensional space can only accommodate n linearly independent vectors.
Definition: The coefficients, v_i, in (2.1.2) are called the components of |V⟩ in the |i⟩ basis.
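The definitions above can be checked numerically. The following Python sketch (using numpy; the particular vectors are illustrative assumptions, not taken from the text) tests linear independence via matrix rank and finds the components of a vector in a basis, as in (2.1.2):

```python
import numpy as np

# Three vectors in the plane: two parallel ones and an independent one.
v1 = np.array([1.0, 2.0])
v2 = np.array([2.0, 4.0])   # parallel to v1
v3 = np.array([0.0, 1.0])

# A set is linearly independent iff the matrix with the vectors as
# columns has rank equal to the number of vectors.
assert np.linalg.matrix_rank(np.column_stack([v1, v2])) == 1  # dependent
assert np.linalg.matrix_rank(np.column_stack([v1, v3])) == 2  # independent

# In a 2-d space, any two independent vectors form a basis: solving
# B @ c = w gives the components of w in that basis, as in (2.1.2).
B = np.column_stack([v1, v3])
w = np.array([3.0, 8.0])
c = np.linalg.solve(B, w)
assert np.allclose(c[0] * v1 + c[1] * v3, w)
```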
Definition: A subset of a vector space, 𝕍, that forms a vector space is called a subspace.

A subspace of dimension n is written as 𝕍^n. We may give a subspace a label such as 𝕍_y^1. The subscript y is the label; it tells us that 𝕍_y^1 contains all vectors along the y axis. The superscript tells us that the subspace is one-dimensional. 𝕍_y^1 is a subspace of 𝕍^3(R), which represents 3-d space.
2.2 Inner Product Spaces

Recall that we have an operation between two arrows in the plane called the dot product, denoted by A·B, that produces a number and satisfies the following three properties:

• A·B = B·A (symmetry)
• A·A ≥ 0
• A·(bB + cC) = bA·B + cA·C (linearity)

We now define an operation called the inner product between any two vectors |V⟩ and |W⟩. It is denoted by ⟨V|W⟩, and satisfies three properties that are analogous to those of the dot product:

• ⟨V|W⟩ = ⟨W|V⟩* (skew symmetry; the * says: take the complex conjugate)
• ⟨V|V⟩ ≥ 0
• ⟨V|(a|W⟩ + b|Z⟩) = a⟨V|W⟩ + b⟨V|Z⟩ (linearity)

The first axiom says the inner product depends on the order of the vectors; the results are complex conjugates of each other. As with arrows, where a·a is the norm or length squared of the vector, we want ⟨V|V⟩ to also represent the length squared of the vector, so ⟨V|V⟩ had better be positive; the second axiom expresses this. In addition, ⟨V|V⟩ should be real. The first axiom ensures this:

⟨V|V⟩ = ⟨V|V⟩*
a + bi = a − bi          (2.2.1)

where i = √−1, and a, b are constants. (2.2.1) can only be true if b = 0, so that ⟨V|V⟩ = a is real.

The third axiom expresses the linearity of the inner product when a linear combination of vectors occurs in the second factor. The axiom says that the scalars a, b, …, can come out of the inner product. If a linear combination occurs in the first factor, we first invoke skew symmetry:

⟨aW + bZ|V⟩ = ⟨V|aW + bZ⟩*

and then we use the third axiom:

⟨V|aW + bZ⟩* = (a⟨V|W⟩ + b⟨V|Z⟩)* = a*⟨V|W⟩* + b*⟨V|Z⟩*
             = a*⟨W|V⟩ + b*⟨Z|V⟩          (2.2.2)

In short, when there is a linear combination in the first factor, the scalars in the first factor get complex conjugated and taken out of the inner product.

Definition: Two vectors are orthogonal if their inner product is zero.

Definition: The norm of a vector is expressed as √⟨V|V⟩ ≡ |V|
Definition: A set of basis vectors, all of unit norm and mutually orthogonal, is referred to as an orthonormal basis.

We can now derive a formula for the inner product. Given |V⟩ and |W⟩ with components v_i and w_j, respectively, written in terms of an orthonormal basis |1⟩, |2⟩, …, |n⟩,

|V⟩ = ∑_i v_i|i⟩          |W⟩ = ∑_j w_j|j⟩,

the inner product ⟨V|W⟩ is the inner product of these two sums. By (2.2.2), the components v_i of |V⟩ (the vector in the first factor) can come out of the inner product provided we take their complex conjugates. Thus,

⟨V|W⟩ = ∑_i ∑_j v_i* w_j ⟨i|j⟩          (2.2.3)

However, we still need to know the inner product between the basis vectors, ⟨i|j⟩. The inner product between an orthonormal basis vector and itself is the square of its norm, which, by definition, equals one. In addition, the inner product between two different orthonormal vectors is, by definition, zero. Thus, the value of ⟨i|j⟩ will be one or zero depending on whether i = j or i ≠ j:

⟨i|j⟩ = { 1 if i = j
        { 0 if i ≠ j          ≡ δ_ij          (2.2.4)

where δ_ij, called the Kronecker delta, is shorthand for the function in (2.2.4). Due to the Kronecker delta, only the terms with i = j survive. So (2.2.3) now collapses to

⟨V|W⟩ = ∑_i v_i* w_i          (2.2.5)

Accordingly, the inner product between a vector and itself will be

⟨V|V⟩ = ∑_i |v_i|²          (2.2.6)

We calculate the norm as defined before, √⟨V|V⟩. A vector is normalized if its norm equals one.

The Kronecker delta combines the orthogonality and normalization of orthonormal basis vectors into one entity. If i = j, we get the inner product between a vector and itself; for the basis vectors this inner product equals one, so we know they are normalized. If i ≠ j, we get an inner product of zero, so they are orthogonal. Simply put, if we know that ⟨i|j⟩ = δ_ij, the basis is orthonormal.
2.3 Linear Operators

An operator acts on a vector, |V⟩, and transforms it into another vector, |V′⟩. If Ω is an operator, we represent its action as follows:

Ω|V⟩ = |V′⟩          (2.3.1)

We will deal only with linear operators, operators that satisfy the following criterion:

Ω(a|V⟩ + b|W⟩) = aΩ|V⟩ + bΩ|W⟩          (2.3.2)

Two examples of operators are the identity operator, I, and the rotation operator R_π(k):

I|V⟩ = |V⟩
R|k⟩ = |k⟩
R|i⟩ = −|i⟩

The identity operator leaves the vector unchanged; the rotation operator rotates vectors in 3-d space around the z-axis by π.

By (2.1.2), any vector |V⟩ can be written as a linear combination of basis vectors. Therefore, if we know how an operator transforms the basis vectors, we will know how it transforms |V⟩. So, if

Ω|i⟩ = |i′⟩

where i is an index for the basis |1⟩, |2⟩, …, |n⟩, then for a vector |V⟩, which can be expressed as

|V⟩ = ∑_{i=1}^n v_i|i⟩

we have

Ω|V⟩ = Ω ∑_{i=1}^n v_i|i⟩ = ∑_{i=1}^n v_i Ω|i⟩ = ∑_{i=1}^n v_i|i′⟩          (2.3.4)
2.4 Vector Components
To find the components of any vector, |W⟩, in an orthonormal basis |1⟩, |2⟩, …, |n⟩, we take the inner product between |W⟩ and the basis vectors:

|W⟩ = ∑_{i=1}^n w_i|i⟩

⟨k|W⟩ = ∑_{i=1}^n w_i⟨k|i⟩ = ∑_{i=1}^n w_i δ_ki

⟨k|W⟩ = w_k          (2.4.1)
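A quick numerical check of (2.4.1): the sketch below (numpy; the rotated basis is an assumption chosen for illustration) extracts the components of a vector in a non-standard orthonormal basis by taking inner products, then reassembles the vector from them:

```python
import numpy as np

# Components in an orthonormal basis via (2.4.1): w_k = <k|W>.
# Use a rotated orthonormal basis of the plane rather than the standard one.
theta = 0.3
b1 = np.array([np.cos(theta), np.sin(theta)])
b2 = np.array([-np.sin(theta), np.cos(theta)])

W = np.array([2.0, -1.0])
w1, w2 = np.vdot(b1, W), np.vdot(b2, W)   # inner products with the basis

# Reassembling the vector from its components recovers W:
assert np.allclose(w1 * b1 + w2 * b2, W)
```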
2.5 Matrix Representation: Vectors and Operators

A vector |V⟩ in some basis |1⟩, |2⟩, …, |n⟩ can be represented by an n × 1 column matrix whose elements are the components of |V⟩ in the given basis. So, if we have 5 basis vectors and

|V⟩ = 5|1⟩ + 3|2⟩ − 4|3⟩ + 2|5⟩

then |V⟩ can be represented by the column

|V⟩ ↔ (5, 3, −4, 0, 2)ᵀ

(written here transposed, as a row, to save space)
and the basis vector |n⟩ will then be represented by an n × 1 column matrix whose nth element is 1. Then

|1⟩ ↔ (1, 0, 0, 0, 0)ᵀ     |2⟩ ↔ (0, 1, 0, 0, 0)ᵀ     …     |5⟩ ↔ (0, 0, 0, 0, 1)ᵀ

and

|V⟩ ↔ (5, 3, −4, 0, 2)ᵀ = 5(1, 0, 0, 0, 0)ᵀ + 3(0, 1, 0, 0, 0)ᵀ − 4(0, 0, 1, 0, 0)ᵀ + 2(0, 0, 0, 0, 1)ᵀ
Just as a vector can be represented by an n × 1 matrix, a linear operator can be represented by an n × n matrix. If we have a linear operator Ω acting on an orthonormal basis |1⟩, |2⟩, …, |n⟩ where

Ω|i⟩ = |i′⟩

then any vector V written in terms of this basis will be transformed, according to (2.3.4), into

Ω|V⟩ = ∑_{i=1}^n v_i|i′⟩          (2.5.1)

According to (2.4.1), to find the components of |i′⟩, the transformed basis vector, in terms of the original basis, we take the inner product:

⟨j|i′⟩ = ⟨j|Ω|i⟩ ≡ Ω_ji          (2.5.2)

where j and i are different indices referring to the same basis, and Ω_ji is a shorthand. We can then build an n × n matrix representing Ω. The Ω_ji from (2.5.2) are its n × n elements; Ω_ji is the element in the jth row and the ith column.
If we have

Ω|V⟩ = |V′⟩,

we can write the new components, v_i′, in terms of the original components, v_i, as in (2.5.2):

v_i′ = ⟨i|V′⟩ = ⟨i|Ω|V⟩

Since V can be written as a linear combination of the basis vectors, we have

v_i′ = ⟨i|Ω|V⟩ = ⟨i|Ω(∑_j v_j|j⟩) = ∑_j v_j⟨i|Ω|j⟩

and by our shorthand from (2.5.2) we get

v_i′ = ∑_j Ω_ij v_j          (2.5.3)
According to the rules of matrix multiplication, (2.5.3) is equivalent to the following:

[v_1′]   [Ω_11  Ω_12  ⋯  Ω_1n] [v_1]
[v_2′] = [Ω_21  Ω_22  ⋯   ⋮  ] [v_2]
[ ⋮  ]   [ ⋮          ⋱      ] [ ⋮ ]
[v_n′]   [Ω_n1        ⋯  Ω_nn] [v_n]          (2.5.4)

or, writing |V′⟩ = Ω|V⟩ in terms of the matrix elements ⟨i|Ω|j⟩ of (2.5.2),

[v_1′]   [⟨1|Ω|1⟩  ⟨1|Ω|2⟩  ⋯  ⟨1|Ω|n⟩] [v_1]
[v_2′] = [⟨2|Ω|1⟩  ⟨2|Ω|2⟩  ⋯    ⋮    ] [v_2]
[ ⋮  ]   [   ⋮               ⋱        ] [ ⋮ ]
[v_n′]   [⟨n|Ω|1⟩           ⋯  ⟨n|Ω|n⟩] [v_n]          (2.5.5)
These two equations say that the vector |V′⟩ resulting from the action of Ω on V can be represented by the product of the matrix representing Ω and the matrix representing V. Simply put, to see how an operator Ω transforms a vector V, multiply their matrices.
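The recipe of this section can be sketched numerically: build the matrix of an operator from its action on an orthonormal basis via (2.5.2), then apply it by matrix multiplication as in (2.5.5). The operator chosen below is the rotation R_π(k) from section 2.3 (its explicit matrix is an illustrative assumption consistent with the stated action on the basis):

```python
import numpy as np

# The rotation R_pi(k) sends i -> -i, j -> -j, k -> k.
def R(v):
    """Rotate a 3-d vector by pi about the z-axis."""
    return np.array([-v[0], -v[1], v[2]])

# Matrix elements Omega_ji = <j|Omega|i>  (2.5.2), built column by column
# from the operator's action on the basis vectors.
basis = np.eye(3)
Omega = np.array([[np.vdot(basis[j], R(basis[i])) for i in range(3)]
                  for j in range(3)])
assert np.allclose(Omega, np.diag([-1.0, -1.0, 1.0]))

# (2.5.5): the operator transforms any vector via a matrix product.
V = np.array([2.0, -3.0, 5.0])
assert np.allclose(Omega @ V, R(V))
```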
2.6 Matrix Representation of the Inner Product and Dual Spaces

If we have two vectors |V⟩ and |W⟩ represented by matrices as column vectors

|V⟩ ↔ (v_1, v_2, …, v_n)ᵀ     |W⟩ ↔ (w_1, w_2, …, w_n)ᵀ,

we can’t write the inner product ⟨V|W⟩ as a product of matrices, because we can’t multiply two column vectors together. However, if we take the adjoint or transpose conjugate of the first vector, so that the matrix for |V⟩ becomes the row

[v_1*  v_2*  ⋯  v_n*]

then we can write the inner product as

⟨V|W⟩ = [v_1*  v_2*  ⋯  v_n*] (w_1, w_2, …, w_n)ᵀ
This serves as motivation for the notation attributed to Paul Dirac. Just as we associate a ket with each column matrix, we will associate with each row matrix a new entity called a bra. Ket V is written as |V⟩, and bra V is written as ⟨V|. The two entities are adjoints of each other. That is, to get bra V, take the transpose conjugate of the column matrix representing V, as we did above.

Thus, we have two distinct vector spaces, the space of bras and the space of kets. The inner product is defined between a bra and a ket. Above we had an inner product between ket |W⟩ and bra ⟨V|; together, ⟨V|W⟩, they form a bra-ket (or bracket).
2.7 The Adjoint Operation: Kets, Scalars, and Operators

If a scalar a multiplies a ket |V⟩, so that

a|V⟩ = |aV⟩

then the corresponding bra will be

⟨aV| = ⟨V|a*

We can see why if we look at the matrix representations:

a|V⟩ = |aV⟩ ↔ (av_1, av_2, …, av_n)ᵀ

The adjoint of this matrix is

[a*v_1*  a*v_2*  ⋯  a*v_n*] = [v_1*  v_2*  ⋯  v_n*]a* ↔ ⟨V|a*

We now define the adjoint of an operator Ω as follows. If Ω acts on |V⟩, we have

Ω|V⟩ = |ΩV⟩          (2.7.1)

(where |ΩV⟩ is the vector that results after Ω acts on |V⟩). The corresponding bra is

⟨ΩV| = ⟨V|Ω†          (2.7.2)

Ω† is called the adjoint of Ω.

Equations (2.7.1) and (2.7.2) say that if Ω acts on ket V to produce |V′⟩, then Ω† acts on bra V to produce ⟨V′|. It follows that Ω† also has a matrix associated with it. We can calculate its elements in a basis as we did in (2.5.2):

(Ω†)_ij = ⟨i|Ω†|j⟩

By (2.7.2), this becomes

(Ω†)_ij = ⟨Ωi|j⟩ = ⟨j|Ωi⟩* = ⟨j|Ω|i⟩* = Ω_ji*

Thus, to find the adjoint of an operator, take the transpose conjugate of its matrix.
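This rule is easy to confirm numerically. The sketch below (numpy; the sample matrix and vectors are arbitrary assumptions) checks (Ω†)_ij = Ω_ji* entry by entry, and checks the defining relation ⟨ΩV|W⟩ = ⟨V|Ω†W⟩ implied by (2.7.2):

```python
import numpy as np

# The adjoint of an operator is the conjugate transpose of its matrix.
Omega = np.array([[1 + 1j, 2 - 1j],
                  [0 + 3j, 4 + 0j]])
Omega_dag = Omega.conj().T

for i in range(2):
    for j in range(2):
        assert Omega_dag[i, j] == Omega[j, i].conjugate()   # (Ω†)_ij = Ω_ji*

# Consistency with (2.7.2): <ΩV|W> = <V|Ω†W> for any kets V, W.
V = np.array([1.0 + 0j, 2j])
W = np.array([3.0 + 0j, 1 - 1j])
assert np.isclose(np.vdot(Omega @ V, W), np.vdot(V, Omega_dag @ W))
```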
2.8 The Projection Operator and the Completeness Relation

We know that for any vector |V⟩,

|V⟩ = ∑_{i=1}^n v_i|i⟩ = ∑_{i=1}^n |i⟩⟨i|V⟩          (2.8.1)

Consider the object in the summation: |i⟩⟨i|V⟩ gives us the component of V along the ith basis vector, multiplied by that basis vector. Alternatively, it is the projection of |V⟩ along |i⟩. The entity |i⟩⟨i| can be thought of as the projection operator. It acts on |V⟩ to give

|i⟩⟨i|V⟩ = |i⟩v_i
We call |i⟩⟨i| the projection operator ℙ_i, so |i⟩⟨i|V⟩ becomes ℙ_i|V⟩. Then we can write (2.8.1) as

|V⟩ = ∑_{i=1}^n ℙ_i|V⟩          (2.8.2)

Which operator does ∑_{i=1}^n ℙ_i in (2.8.2) correspond to? If we rewrite (2.8.2) as

|V⟩ = (∑_{i=1}^n ℙ_i)|V⟩          (2.8.3)

we see that after the operator acts on |V⟩, the result remains |V⟩; the operator must therefore be the identity operator I. Therefore, we get

I = ∑_{i=1}^n ℙ_i = ∑_{i=1}^n |i⟩⟨i|          (2.8.4)
Equation (2.8.4) is called the completeness relation. It says the sum of all the projections of a vector V along its basis vectors is the vector V itself. (2.8.4) expresses the fact that the basis vectors are complete, meaning any vector V can be written as a linear combination of the basis vectors.
This highlights the difference between a basis and any other linearly independent set of vectors. A vector cannot necessarily be expressed as a linear combination of an arbitrary linearly independent set, but it can always be expressed as a linear combination of basis vectors.

For example, in 3-d space the unit vectors i and j form a linearly independent set but not a basis, and we can’t write the unit vector k in terms of i and j. However, i, j, and k together form a basis, and any vector can be written as

v = ai + bj + ck

In short, all bases are complete.
We now look at the matrix representation of the projection operator, ℙ_i, and of (2.8.4). We have the basis ket

|i⟩ ↔ (0, …, 0, 1, 0, …, 0)ᵀ     (1 in the ith place)

and the basis bra

⟨i| ↔ [0  ⋯  0  1  0  ⋯  0]

The projection operator

ℙ_i = |i⟩⟨i|

is the product of these: an n × n matrix whose only nonzero element is a 1 in the ith row and ith column. The sum of all the projections will then be

∑_i |i⟩⟨i| ↔ [1  0  ⋯  0]
             [0  1  ⋯  0]
             [⋮     ⋱  ⋮]
             [0  0  ⋯  1]

the identity matrix, as expected.
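The matrix picture of (2.8.4) can be reproduced directly: each projector is an outer product |i⟩⟨i|, and their sum is the identity matrix. A sketch (numpy; the dimension and sample vector are illustrative assumptions):

```python
import numpy as np

# Projection operators P_i = |i><i| built as outer products; their sum
# is the identity, i.e. the completeness relation (2.8.4).
n = 4
basis = np.eye(n)
projectors = [np.outer(basis[i], basis[i].conj()) for i in range(n)]

assert np.allclose(sum(projectors), np.eye(n))

# P_i|V> picks out the component of V along |i>:
V = np.array([5.0, 3.0, -4.0, 2.0])
assert np.allclose(projectors[2] @ V, np.array([0.0, 0.0, -4.0, 0.0]))
```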
The Kronecker Delta and the Completeness Relation
We know that if the |x⟩ are basis vectors, then for any |Ψ⟩,

|Ψ⟩ = ∑_x |x⟩⟨x|Ψ⟩

If |Ψ⟩ itself is a basis vector, |y⟩, the completeness relation should still hold in an orthonormal basis:

|y⟩ = ∑_x |x⟩⟨x|y⟩

But |x⟩ and |y⟩ are basis vectors, so ⟨x|y⟩ = δ_xy and

|y⟩ = ∑_x |x⟩δ_xy

The δ_xy will kill all terms except the one that occurs when x = y, so we’re left with

|y⟩ = |y⟩

Thus, the Kronecker delta was defined in a way that is consistent with the completeness relation.
2.9 Operators: Eigenvectors and Eigenvalues

For every operator Ω, there are vectors that, when acted upon by Ω, only undergo a rescaling and no other transformation, so that

Ω|W⟩ = ω|W⟩          (2.9.1)

The vectors |W⟩ that satisfy (2.9.1) are called the eigenvectors of Ω, and the scale factors ω are its eigenvalues; (2.9.1) is known as an eigenvalue equation. In general, if |W⟩ is an eigenvector of Ω, a|W⟩ will also be an eigenvector:

Ωa|W⟩ = aΩ|W⟩ = aω|W⟩

Thus, the vector a|W⟩ has also been rescaled by the factor ω, so it, too, is an eigenvector. For now, we look at two examples of operators and their eigenvectors; later we will meet more.
Example 1. The identity operator I

The eigenvalue equation is

I|W⟩ = ω|W⟩          (2.9.2)

We know that if ω = 1, any vector will satisfy (2.9.2), because the vector just remains the same. Hence, I has an infinite number of eigenvectors; in fact, every vector is an eigenvector of I. I has only one eigenvalue, which is 1; i.e., the scale of the vectors doesn’t change.

Example 2. The rotation operator we mentioned before, R_π(k)

Once again, this operator rotates all vectors in 3-d space by π radians around the z-axis. Obviously, the only vectors in real space it doesn’t rotate are those that lie along the z-axis. Some of its eigenvectors include the unit vector k (or |3⟩, as we have called it before) and any multiple ak of k. For these eigenvectors its eigenvalue is one, since it doesn’t rescale them.
We now develop a method to find solutions to the eigenvalue equation. We first rearrange (2.9.1):

(Ω − ωI)|W⟩ = |0⟩          (2.9.3)

and

|W⟩ = (Ω − ωI)⁻¹|0⟩          (2.9.4)

We know that the inverse of a matrix A is

A⁻¹ = (cofactor A)ᵀ / det A

If the inverse in (2.9.4) existed, |W⟩ would be the null vector; hence, the only way to get a nontrivial |W⟩ is for the inverse not to exist, i.e.

det(Ω − ωI) = 0          (2.9.5)

After expanding the determinant of (Ω − ωI), we will get a polynomial in powers of ω with coefficients a_0, a_1, …, a_n; (2.9.5) will be of the form

∑_{m=0}^n a_m ω^m = 0          (2.9.6)

(2.9.6) is known as the characteristic equation. Solving the characteristic equation gives us the eigenvalues, after which we can find the eigenvectors.
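The procedure can be checked numerically: an eigensolver returns pairs satisfying (2.9.1), and the determinant of (2.9.5) vanishes at each eigenvalue. A sketch (numpy; the 2 × 2 matrix is an arbitrary assumption):

```python
import numpy as np

# Solve the eigenvalue problem (2.9.1) for a small matrix.
Omega = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
w, vecs = np.linalg.eig(Omega)

for k in range(2):
    # Each column of vecs is an eigenvector: Omega|W> = w|W>  (2.9.1)
    assert np.allclose(Omega @ vecs[:, k], w[k] * vecs[:, k])

# The characteristic determinant (2.9.5) vanishes at each eigenvalue:
for wk in w:
    assert np.isclose(np.linalg.det(Omega - wk * np.eye(2)), 0.0)
```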
2.10 Hermitian Operators

Definition: An operator, Ω, is hermitian if Ω = Ω†.

Because operators can be written as matrices, we also have the following definition.

Definition: A matrix, M, is hermitian if M = M†.

Theorem 2. The eigenvalues of a hermitian operator Ω are real.

Proof. If

Ω|W⟩ = ω|W⟩

we take the inner product of both sides with a bra ⟨W|:

⟨W|Ω|W⟩ = ω⟨W|W⟩          (2.10.1)

and then also take the adjoint of both sides:

⟨W|Ω†|W⟩ = ω*⟨W|W⟩

Since Ω is hermitian, we have Ω = Ω†, so

⟨W|Ω|W⟩ = ω*⟨W|W⟩          (2.10.2)

We now subtract (2.10.2) from (2.10.1):

⟨W|Ω|W⟩ − ⟨W|Ω|W⟩ = ω⟨W|W⟩ − ω*⟨W|W⟩

and we are left with

0 = (ω − ω*)⟨W|W⟩

For a nonzero eigenvector, ⟨W|W⟩ > 0, so

ω = ω*

The only way this is possible is if ω is real.
Theorem 3. The eigenvectors of every hermitian operator form an orthonormal basis; in this basis, Ω can be written as a diagonal matrix with its eigenvalues as the diagonal entries.

Proof. Following the method of solving eigenvalue equations developed earlier, we solve the characteristic equation of Ω. We will get at least one root ω_1 that corresponds to an eigenvector |W_1⟩. Assuming Ω operates in the vector space 𝕍^n, we look at the subspace 𝕍^{n−1} that is orthogonal to |W_1⟩. We then take any n − 1 orthonormal vectors from 𝕍^{n−1} and the original normalized |W_1⟩, and form a basis.

In terms of this basis, Ω looks like

     [ω_1  0  ⋯  0]
Ω ↔  [0            ]
     [⋮            ]          (2.10.3)
     [0            ]

The first element is ω_1 because (as was discussed in section 2.5 on operators)

Ω_11 = ⟨1|Ω|1⟩;

here the first basis vector is |W_1⟩, so we have

Ω_11 = ⟨W_1|Ω|W_1⟩ = ω_1⟨W_1|W_1⟩ = ω_1

The rest of the column is zero because the basis is orthogonal; thus, if |i⟩ is any of the other basis vectors,

⟨i|Ω|W_1⟩ = ω_1⟨i|W_1⟩ = 0

The rest of the row is zero because Ω is hermitian: Ω_1i = Ω_i1* = 0. The blank part of the matrix consists of elements about to be determined.
Consider the blank part of the matrix in (2.10.3), which we’ll call M; it is itself a matrix of dimension n − 1, and we can solve its eigenvalue equation as before. Once again, the characteristic equation yields a root, ω_2, corresponding to an eigenvector (which we normalize) |W_2⟩. Once again, we choose a subspace of one dimension less, 𝕍^{n−2}, that is orthogonal to |W_2⟩. We choose any n − 2 orthonormal vectors from 𝕍^{n−2} and our original |W_2⟩ to form a basis.

In terms of this basis, M looks like

     [ω_2  0  ⋯  0]
M ↔  [0            ]
     [⋮            ]          (2.10.4)
     [0            ]

Then the original operator, Ω, looks like

     [ω_1  0    0  ⋯  0]
     [0    ω_2  0  ⋯  0]
Ω ↔  [0    0           ]
     [⋮    ⋮           ]
     [0    0           ]
Ω ↔
๐œ”
โŽก 1
โŽข0
โŽข
โŽข0
โŽข
โŽข โ‹ฎ
โŽข
โŽข
โŽฃ0
0
๐œ”2
0
โ‹ฎ
0
0
โ‹ฏ
0
โ‹ฏ
0
โ‹ฑ
0
…
๐œ”3
โ‹ฎ
0
0
โ‹ฏ
๐œ”๐‘›−1
0
โŽค
0โŽฅ
โŽฅ
0โŽฅ
โŽฅ
โ‹ฎ โŽฅ
0โŽฅ
โŽฅ
๐œ”๐‘› โŽฆ
The eigenvectors |๐œ”1 โŸฉ, |๐œ”2 โŸฉ, … , |๐œ”๐‘› โŸฉ form an orthonormal basis because each subspace ๐•n-1,
๐• , … , ๐•1 was chosen to be orthogonal to the previous one.
n-2
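Theorems 2 and 3 together say a hermitian matrix has real eigenvalues and is diagonalized by an orthonormal eigenbasis; both claims can be checked numerically. A sketch (numpy; the 2 × 2 hermitian matrix is an arbitrary assumption):

```python
import numpy as np

# A hermitian operator: real eigenvalues and an orthonormal eigenbasis
# in which it is diagonal.
Omega = np.array([[2.0 + 0j, 1 - 1j],
                  [1 + 1j, 3.0 + 0j]])
assert np.allclose(Omega, Omega.conj().T)        # hermitian: Omega = Omega†

w, U = np.linalg.eigh(Omega)                     # eigh assumes hermiticity
assert np.allclose(np.imag(w), 0)                # Theorem 2: real eigenvalues

# Theorem 3: the eigenvectors (columns of U) are orthonormal ...
assert np.allclose(U.conj().T @ U, np.eye(2))
# ... and they diagonalize Omega, with the eigenvalues on the diagonal.
assert np.allclose(U.conj().T @ Omega @ U, np.diag(w))
```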
2.11 Functions as Vectors in Infinite Dimensions
Going back to our basic properties of vectors, we notice that there are functions that satisfy
these properties. If we have two functions, f(x) and g(x), there is a definite rule for addition, there is a
rule for scalar multiplication, we design functions that have inverses, and, in short follow all the
properties enumerated in section 2.1.
We first look at discrete functions. Consider a function f defined at x₁, x₂, …, x_n with values f(x₁), f(x₂), …, f(x_n), respectively. The basis vectors |xᵢ⟩ will be the functions that are nonzero at only one point, xᵢ, where they take the value 1.
The basis vectors are orthonormal, that is,
⟨xᵢ|xⱼ⟩ = δᵢⱼ
and obey the completeness relation
Σᵢ |xᵢ⟩⟨xᵢ| = I        (2.11.1)
To find the component of f along a basis vector |xᵢ⟩, we follow the typical method: we take the inner product between f and that basis vector,
⟨xᵢ|f⟩ = f(xᵢ).
We can, as in (2.8.1), write f as a linear combination of basis vectors:
|f⟩ = Σᵢ |xᵢ⟩⟨xᵢ|f⟩ = Σᵢ f(xᵢ) |xᵢ⟩        (2.11.2)
The inner product between functions will be the same as before:
⟨f|g⟩ = Σᵢ f*(xᵢ) g(xᵢ)        (2.11.3)
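The discrete picture above can be sketched numerically in a few lines (a toy example of mine; the grid points and test functions are arbitrary choices):

```python
# Sketch (my own toy grid and functions): a function sampled at n points
# behaves like an n-component vector (section 2.11).
xs = [0.0, 0.5, 1.0, 1.5]                 # the points x_1 ... x_n
f = [x ** 2 for x in xs]                  # components f(x_i) = <x_i|f>
g = [2 * x for x in xs]                   # a second function g

# inner product <f|g> = sum_i f*(x_i) g(x_i), eq. (2.11.3); f is real here
inner = sum(fi * gi for fi, gi in zip(f, g))
print(inner)   # 9.0

# completeness: |f> = sum_i f(x_i)|x_i> just reproduces the samples
basis = [[1.0 if j == i else 0.0 for j in range(len(xs))] for i in range(len(xs))]
f_rebuilt = [sum(f[i] * basis[i][j] for i in range(len(xs))) for j in range(len(xs))]
print(f_rebuilt == f)   # True
```

The basis vectors are just the standard unit vectors, which is exactly the statement that |xᵢ⟩ is 1 at xᵢ and 0 elsewhere.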
Now we would like to extend the above to continuous functions. The basis vectors are the same as before, only now there is a basis vector for every point x, i.e. there are infinitely many basis vectors. A function f will then be a vector in infinite dimensions. Continuous functions have relations similar to equations (2.11.1), (2.11.2), (2.11.3); in each case the summation is simply replaced by an integral.
The completeness relation will then be
∫ |x⟩⟨x| dx = I
(2.11.2) will become
|f⟩ = ∫ f(x) |x⟩ dx
and the inner product will be
⟨f|g⟩ = ∫ f*(x) g(x) dx
2.12. Orthonormality of Continuous Basis Vectors and the Dirac δ Function
Basis vectors are orthogonal, so for x ≠ y
⟨x|y⟩ = 0
What is the normalization of the basis vectors? Should ⟨x|x⟩ = 1? To answer this question we start with the completeness relation
∫_a^b |y⟩⟨y| dy = I
We then take the inner product with a ket |f⟩ and a bra ⟨x|:
∫_a^b ⟨x|y⟩⟨y|f⟩ dy = ⟨x|f⟩
∫_a^b ⟨x|y⟩ f(y) dy = f(x)
Let us call the unknown function ⟨x|y⟩ = δ(x, y):
∫_a^b δ(x, y) f(y) dy = f(x)        (2.12.1)
We know that δ(x, y) is zero wherever x ≠ y (since the basis vectors are orthogonal), so we can limit the integral to the infinitesimal region around x = y:
∫_{x−α}^{x+α} δ(x, y) f(y) dy = f(x)
We see that the integral of the delta function together with f(y) is f(x). If we take the limit as α → 0, then f(y) → f(x), and we can pull f(y) out of the integral as f(x):
f(x) ∫_{x−α}^{x+α} δ(x, y) dy = f(x)
∫_{x−α}^{x+α} δ(x, y) dy = 1        (2.12.2)
So ๐›ฟ(๐‘ฅ, ๐‘ฆ) is a very peculiar function. It’s zero everywhere except for one point, ๐‘ฅ = ๐‘ฆ. At that point
it can’t be finite because then the integral in (2.12.2), which is over an infinitesimal region, will be zero
not one. At ๐‘ฅ = ๐‘ฆ, ๐›ฟ(๐‘ฅ, ๐‘ฆ) must be infinite in such a way that the integral sums to one.
Nowhere did we care about the actual values of x and y, only about whether they were equal or not. This leads us to denote the delta function by δ(x − y). It's known as the Dirac delta function, with the following values:
δ(x − y) = 0    when x ≠ y        (2.12.3a)
∫_a^b δ(x − y) dy = 1    when a < x < b        (2.12.3b)
Thus, ⟨x|y⟩ = δ(x − y). In the continuous case, basis vectors are not normalized to one; rather, they are normalized to the Dirac delta function.
2.13. The Dirac δ and the Completeness Relation
If |x⟩ and |y⟩ are basis vectors, we have
|y⟩ = ∫_a^b |x⟩⟨x|y⟩ dx
|y⟩ = ∫_a^b |x⟩ δ(x − y) dx        (2.13.1)
If โŸจ๐‘ฅ|๐‘ฅโŸฉ would equal one, as in the discrete case, that is, if โŸจ๐‘ฅ|๐‘ฅโŸฉ = ๐›ฟ(๐‘ฅ − ๐‘ฅ) = ๐›ฟ(0) = 1, (2.13.1)
would not be true. However, now that ๐›ฟ(0) does not equal one, rather it follows (2.12.3b), (2.13.1) is of
the form of the following equation shown earlier:
๐‘ฅ+๐›ผ
๏ฟฝ ๐›ฟ(๐‘ฅ, ๐‘ฆ)๐‘“(๐‘ฆ)๐‘‘๐‘ฆ = ๐‘“(๐‘ฅ)
๐‘ฅ−๐›ผ
(2.12.1)
The Dirac delta collapses the integral to the one term when ๐‘ฅ = ๐‘ฆ. It is the counterpart of the Kronecker
delta. The Kronecker delta collapses sums; the Dirac delta collapses integrals.
2.14. The Dirac δ Function: How it looks
From the discussion above it should be clear that δ(x − x′) [where |x⟩ and |x′⟩ are basis vectors] is sharply peaked at x = x′ and zero elsewhere. The sharp peak comes from the fact that the area under it must total one (2.12.3b).
It helps to think of the delta function as the limit of a unit-area Gaussian of width Δ, such as
g(x − x′) = e^{−(x−x′)²/Δ²} / (Δ√π)
This is the familiar bell curve of width Δ; the area under it is one. The delta function can then be taken as the limit of the Gaussian as Δ → 0. The area is still one, and we have a very sharp peak at x′. The peak has to be high enough that the area remains one. The delta function thus takes only the values zero and infinity.
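This limit can be checked numerically (a sketch; the widths and the test function cos are my own arbitrary choices):

```python
import math

# Sketch: approximate delta(x - y) by a unit-area Gaussian of width D and
# check that integral g(x - y) f(y) dy -> f(x) as D shrinks.
def gauss_delta(u, D):
    return math.exp(-(u / D) ** 2) / (D * math.sqrt(math.pi))

def smeared(f, x, D, h=1e-3, span=1.0):
    # midpoint Riemann sum of g(x - y) f(y) over y in [x - span, x + span]
    n = int(2 * span / h)
    total = 0.0
    for i in range(n):
        y = x - span + (i + 0.5) * h
        total += gauss_delta(x - y, D) * f(y) * h
    return total

f = lambda y: math.cos(y)
for D in (0.5, 0.1, 0.02):
    print(D, smeared(f, 0.3, D))   # approaches f(0.3) = cos(0.3) as D -> 0
```

The narrower the Gaussian, the closer the smeared value comes to f(x), which is exactly the defining property (2.12.1).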
The Derivative of the δ Function
We examine the derivative of δ(x − x′) because it will be used later. The derivative is denoted by
(d/dx) δ(x − x′) = δ′(x − x′)
We first look at its graph, comparing it as before to the Gaussian. Graph A is the δ function, spiked at x′ = x; its Gaussian version has derivative dg(x′ − x)/dx, and taking the limit as Δ → 0 gives the graph of the delta derivative, Graph B. In B there are two rescaled δ functions: one negative at x′ = x − α, one positive at x′ = x + α.
We had earlier (2.12.1): when we integrate the delta function with another function, we get back the function itself. In particular, for a coordinate x,
∫_a^b δ(x − x′) f(x′) dx′ = f(x)
The δ function picks out the value f(x); we see this in Graph C.
The derivative of δ, δ′(x − x′), will pick out two values, one at x − α and one at x + α; one gives the value −f(x − α), the other gives the value f(x + α). We then have
∫ δ′(x − x′) f(x′) dx′ = [f(x + α) − f(x − α)] / 2α        (2.14.1)
The numerator is the amount the function f changes by over the interval [x − α, x + α]:
f(x + α) − f(x − α) = 2α df(x)/dx        (2.14.2)
Putting (2.14.2) into (2.14.1), we get
∫ δ′(x − x′) f(x′) dx′ = df(x)/dx        (2.14.3)
For comparison: the integral of δ(x − x′) with f(x′) gives us f(x); the integral of δ′(x − x′) with f(x′) gives us f′(x). Thus,
δ′(x − x′) f(x′) gives the same result under the integral as δ(x − x′) (d/dx′) f(x′), because
∫ δ(x − x′) (d/dx′) f(x′) dx′ = ∫ δ(x − x′) f′(x′) dx′ = f′(x)
so we can write
δ′(x − x′) = δ(x − x′) d/dx′
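Equation (2.14.3) can be checked numerically by modeling δ′ as the derivative of a narrow unit-area Gaussian (a sketch; the width and the test function sin are arbitrary choices of mine):

```python
import math

# Sketch: model delta'(x - x') by the derivative of a unit-area Gaussian and
# check eq. (2.14.3): integral delta'(x - x') f(x') dx' = df(x)/dx.
def ddelta(u, D):
    # derivative of exp(-u^2/D^2) / (D sqrt(pi)) with respect to u
    return (-2.0 * u / D ** 2) * math.exp(-(u / D) ** 2) / (D * math.sqrt(math.pi))

def apply_ddelta(f, x, D, h=1e-4, span=1.0):
    n = int(2 * span / h)
    total = 0.0
    for i in range(n):
        y = x - span + (i + 0.5) * h      # y plays the role of x'
        total += ddelta(x - y, D) * f(y) * h
    return total

f = lambda y: math.sin(y)
result = apply_ddelta(f, 0.4, 0.05)
print(result)   # close to f'(0.4) = cos(0.4)
```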
The δ Function and Fourier Transforms
From Fourier analysis we have Fourier's inversion theorem, which says
f(t) = (1/2π) ∫_−∞^∞ e^{iωt} dω ∫_−∞^∞ f(u) e^{−iωu} du
     = ∫_−∞^∞ f(u) du [ (1/2π) ∫_−∞^∞ e^{iω(t−u)} dω ]
Comparing this to (2.12.1), we see that
δ(t − u) = (1/2π) ∫_−∞^∞ e^{iω(t−u)} dω
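A numerical sketch of this representation: truncating the ω integral at ±W gives the kernel sin(W(t − u))/π(t − u), which already acts like δ on a smooth function for moderate W (the cutoff and test function below are arbitrary choices of mine):

```python
import math

# Sketch: truncate the omega integral at +-W, giving the kernel
# sin(W (t - u)) / (pi (t - u)); for smooth f this already acts like delta.
def kernel(v, W):
    if abs(v) < 1e-12:
        return W / math.pi
    return math.sin(W * v) / (math.pi * v)

def filtered(f, t, W, h=1e-3, span=8.0):
    n = int(2 * span / h)
    total = 0.0
    for i in range(n):
        u = t - span + (i + 0.5) * h
        total += kernel(t - u, W) * f(u) * h
    return total

f = lambda u: math.exp(-u * u)        # smooth, decaying test function
val = filtered(f, 0.2, 40.0)
print(val)   # close to f(0.2)
```

As W grows, the kernel narrows and the result converges to f(t), which is the content of the equation above.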
2.15. The Operators K and X
If a function f is a vector, we can have an operator Ω that operates on f to produce another function:
Ω|f⟩ = |f̃⟩
Later on, we will look at two important operators, X and K. For now, we look at the differentiation operator D:
D|f⟩ = |df/dx⟩ = |f′⟩        (2.15.1)
We know that we can compute the matrix elements Ω_ij of an operator Ω by taking the inner product with basis kets and bras:
Ω_ij = ⟨i|Ω|j⟩
Therefore, we can compute the matrix elements D_xx′ of D by
D_xx′ = ⟨x|D|x′⟩
We start with (2.15.1) and take the inner product with a basis bra ⟨x|:
⟨x|D|f⟩ = df(x)/dx
We then write the left side in terms of the completeness relation:
⟨x|D|f⟩ = ∫ ⟨x|D|x′⟩⟨x′|f⟩ dx′ = ∫ ⟨x|D|x′⟩ f(x′) dx′ = df(x)/dx        (2.15.2)
This equation, (2.15.2), is very similar to something we had earlier:
∫ δ′(x − x′) f(x′) dx′ = df(x)/dx        (2.14.3)
If we compare, we see that
⟨x|D|x′⟩ = δ′(x − x′)
or
D_xx′ = δ(x − x′) d/dx′
D, then, is a matrix of infinite dimensions. Is D Hermitian? If it were, we would have
D_xx′ = D*_x′x
Let us check. We know
D_xx′ = δ′(x − x′)        (2.15.3)
and
D*_x′x = δ′(x′ − x)*
But since the δ function is real,
δ′(x′ − x)* = δ′(x′ − x)
and since δ′ is an odd function,
δ′(x′ − x) = −δ′(x − x′)
so
D*_x′x = −δ′(x − x′)        (2.15.4)
Comparing equations (2.15.3) and (2.15.4), we see that
D_xx′ = −D*_x′x
Therefore, ๐ท is not Hermitian. However, we can make ๐ท Hermitian by multiplying it by −๐‘–. We
call the resulting operator ๐พ,
๐พ = −๐‘–๐ท = −๐‘–๐›ฟ′(๐‘ฅ − ๐‘ฅ′)
For ๐พ, we have
but, once again, δ is real so
∗
∗
๐พ๐‘ฅ'๐‘ฅ
= −๐‘–๐ท๐‘ฅ'๐‘ฅ
= −๐‘–๐›ฟ′(๐‘ฅ′ − ๐‘ฅ)*
and since it’s odd
∗
= ๐‘–๐›ฟ′(๐‘ฅ′ − ๐‘ฅ)
๐พ๐‘ฅ'๐‘ฅ
∗
๐พ๐‘ฅ'๐‘ฅ
= −๐‘–๐›ฟ′(๐‘ฅ − ๐‘ฅ′)
but the right side is what we had for K, then
∗
๐พ๐‘ฅ๐‘ฅ' = ๐พ๐‘ฅ'๐‘ฅ
So far it seems that K is Hermitian, but we now check with a different approach. If we have two vectors |f⟩ and |g⟩, we should have
⟨g|K|f⟩ = ⟨g|Kf⟩
By skew symmetry we have
⟨g|Kf⟩ = ⟨Kf|g⟩*        (2.15.5)
From section 2.7 we know that
⟨Kf| = ⟨f|K†
and
⟨Kf|g⟩* = ⟨f|K†|g⟩*
But if K is Hermitian,
⟨f|K†|g⟩* = ⟨f|K|g⟩*        (2.15.6)
Taking equations (2.15.5) and (2.15.6), we conclude that if K is Hermitian, we must have
⟨g|K|f⟩ = ⟨f|K|g⟩*
We rewrite both sides in terms of integrals: we interpose the completeness relation twice, so we get double integrals. Is the following equation true?
∫_a^b ∫_a^b ⟨g|x⟩⟨x|K|x′⟩⟨x′|f⟩ dx′ dx = [ ∫_a^b ∫_a^b ⟨f|x⟩⟨x|K|x′⟩⟨x′|g⟩ dx′ dx ]*        (2.15.7)
We now simplify the integrals, starting with the left side. First, by skew symmetry,
⟨g|x⟩ = ⟨x|g⟩*
this gives the component of |g⟩ along |x⟩:
⟨x|g⟩* = g*(x)
Also,
⟨x′|f⟩ = f(x′).
Next, we have
⟨x|K|x′⟩ = K_xx′ = −iD_xx′ = −iδ′(x − x′) = −iδ(x − x′) d/dx′
This operates on whatever follows it in the integral, i.e. f(x′). Then the inner integral on the left side of (2.15.7) looks like
∫_a^b g*(x)(−i) δ(x − x′) (d/dx′) f(x′) dx′ = g*(x)(−i) ∫_a^b δ(x − x′) f′(x′) dx′ = g*(x) [−i df(x)/dx]
Once again, the delta function kills off the integral. For the outer integral we have
∫_a^b g*(x) [−i df(x)/dx] dx
We do the same simplification to the right side of (2.15.7) and we're left with
∫_a^b g*(x) [−i df(x)/dx] dx = [ ∫_a^b f*(x) [−i dg(x)/dx] dx ]*        (2.15.8)
Due to the conjugation, the right side becomes
i ∫_a^b f(x) [dg*(x)/dx] dx
We now integrate the left side of (2.15.8) by parts with the following substitutions:
u = −i g*(x),    du = −i [dg*(x)/dx] dx,    dv = [df(x)/dx] dx,    v = f(x)
Finally, we have
[−i g*(x) f(x)]_a^b + i ∫_a^b f(x) [dg*(x)/dx] dx = i ∫_a^b f(x) [dg*(x)/dx] dx
or
[−i g*(x) f(x)]_a^b = 0        (2.15.9)
If this equation is true, K is Hermitian; that is, K is Hermitian in a space of functions that satisfy this equation. There are two sets of functions that obey (2.15.9): functions that are zero at the endpoints, and periodic functions that follow
f(θ) = f(θ + 2π)
For the first type, we have
[−i g*(x) f(x)]_a^b = 0 − 0 = 0
For the second type, with a = 0 and b = 2π, we have
[−i g*(x) f(x)]_0^{2π} = −i g*(2π) f(2π) + i g*(0) f(0) = 0
We now find the eigenvectors of K:
K|Ψ_k⟩ = k|Ψ_k⟩
To find components in the |x⟩ basis, we follow the typical procedure, taking the inner product with a bra ⟨x|:
⟨x|K|Ψ_k⟩ = k⟨x|Ψ_k⟩
∫ ⟨x|K|x′⟩⟨x′|Ψ_k⟩ dx′ = k⟨x|Ψ_k⟩
−i (d/dx) Ψ_k(x) = k Ψ_k(x)
The function whose derivative is proportional to itself is the exponential:
Ψ_k(x) = A e^{ikx}
Thus, K has an infinite number of eigenvectors |k⟩, one for every real number k. We choose A to normalize them to the Dirac δ:
⟨k|k′⟩ = ∫_−∞^∞ ⟨k|x⟩⟨x|k′⟩ dx = A² ∫_−∞^∞ e^{−i(k−k′)x} dx = A² 2π δ(k − k′)
so with A = 1/√2π,
Ψ_k(x) = (1/√2π) e^{ikx}    and    ⟨k|k′⟩ = δ(k − k′)
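A finite-dimensional sketch of K (my own discretization, not in the text): replacing −i d/dx by a periodic central difference on a grid gives a matrix that is explicitly Hermitian, and plane waves are its eigenvectors:

```python
import cmath
import math

# Sketch (my own discretization): a periodic central-difference version of
# K = -i d/dx on n grid points of spacing h.
n, h = 8, 0.5
K = [[0j] * n for _ in range(n)]
for j in range(n):
    K[j][(j + 1) % n] = -1j / (2 * h)   # from -i * ( f[j+1] ...
    K[j][(j - 1) % n] = +1j / (2 * h)   #        ... - f[j-1] ) / (2h)

# Hermiticity: K_jk = (K_kj)*
hermitian = all(K[j][k] == K[k][j].conjugate() for j in range(n) for k in range(n))
print(hermitian)   # True

# a plane wave exp(ikx) is an eigenvector, with eigenvalue sin(kh)/h (≈ k for small kh)
m = 1
k = 2 * math.pi * m / (n * h)
f = [cmath.exp(1j * k * j * h) for j in range(n)]
Kf = [sum(K[j][l] * f[l] for l in range(n)) for j in range(n)]
eig = math.sin(k * h) / h
print(max(abs(Kf[j] - eig * f[j]) for j in range(n)) < 1e-12)   # True
```

The periodic wrap-around plays the role of the boundary condition f(θ) = f(θ + 2π) that made K Hermitian above.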
We now look at the X operator. Its action on a vector is to multiply the corresponding function by x:
X|f⟩ = |xf⟩
By dotting both sides with a basis bra ⟨x|, we have, in the |x⟩ basis,
⟨x|X|f⟩ = x⟨x|f⟩ = x f(x)        (2.15.10)
We now find its eigenvectors. The eigenvalue equation is
X|x⟩ = x|x⟩
In the |x′⟩ basis,
⟨x′|X|x⟩ = x⟨x′|x⟩ = x δ(x′ − x)
Comparing this to (2.15.10), we see that the eigenvectors are
|x⟩ → δ(x′ − x)
X is a Hermitian operator because x is real:
∫ f*(x) x g(x) dx = ∫ [x f(x)]* g(x) dx
2.16. The Propagator
We now explain and demonstrate the concept of a propagator through an example. Suppose we have a vibrating string that's clamped at both ends, x = 0 and x = L, where L is the length of the string. We would like to know how the string will look in the future.
The function Ψ(x) gives the displacement of the string at the point x. It can be thought of as a snapshot of the string at one instant; Ψ(x, t) will then be a snapshot of the string at time t. Ψ(x, t) changes with time according to
∂²Ψ/∂t² = ∂²Ψ/∂x²
Since K = −i ∂/∂x, the right side can be written as −K²|Ψ⟩, so we have
|Ψ̈(t)⟩ = −K²|Ψ(t)⟩        (2.16.1)
Our goal is to find an operator U(t) such that
|Ψ(t)⟩ = U(t)|Ψ(0)⟩        (2.16.2)
Meaning, for any initial state |Ψ(0)⟩ we just apply U(t) and the result will be |Ψ(t)⟩; U(t) is called the propagator. The utility of the propagator comes from the fact that once we find it, we have completely solved the problem. The first step in constructing the propagator is solving the eigenvalue problem of the operator, here −K². The eigenvalue equation is
−K²|Ψ⟩ = −k²|Ψ⟩
or
K²|Ψ⟩ = k²|Ψ⟩        (2.16.3)
In the |x⟩ basis we have
⟨x|K²|Ψ⟩ = k²⟨x|Ψ⟩
−(d²/dx²) Ψ_k(x) = k² Ψ_k(x)
The solution to this differential equation gives us the eigenvectors
Ψ_k(x) = A cos kx + B sin kx        (2.16.4)
๐ด and ๐ต are constants about to be determined. We know that
Ψ๐‘˜ (0) = 0
then,
Also,
๐ด cos ๐‘˜(0) + ๐ต sin ๐‘˜(0) = 0
๐ด=0
Ψ๐‘˜ (๐ฟ) = 0
So
๐ต sin ๐‘˜๐ฟ = 0
If we want a solution other than ๐ต = 0, we must have ๐‘˜๐ฟ equal to a multiple of π.
then (2.16.4) becomes
๐‘˜๐ฟ = ๐‘š๐œ‹
๐‘š = 1, 2, 3, โ‹ฏ
๐‘š๐œ‹๐‘ฅ
Ψ๐‘š (๐‘ฅ) = ๐ต sin ๏ฟฝ
๏ฟฝ
๐ฟ
We choose ๐ต so that Ψ๐‘š (๐‘ฅ) is normalized:
๐ฟ
๐‘š๐œ‹๐‘ฅ 2
๏ฟฝ๏ฟฝ ๐‘‘๐‘ฅ = 1
๏ฟฝ ๏ฟฝ๐ต sin ๏ฟฝ
๐ฟ
0
๐ฟ
๐‘š๐œ‹๐‘ฅ
๏ฟฝ ๐‘‘๐‘ฅ = 1
๐ต ๏ฟฝ ๐‘ ๐‘–๐‘›2 ๏ฟฝ
๐ฟ
2
๐ฟ
0
2๐‘š๐œ‹๐‘ฅ
๐ต2
๏ฟฝ ๏ฟฝ1 − cos ๏ฟฝ
๏ฟฝ๏ฟฝ ๐‘‘๐‘ฅ = 1
2
๐ฟ
0
๐ฟ๐ต2
=1
2
โŸน
2
๐ต= ๏ฟฝ
๐ฟ
(2.16.5)
Then (2.16.5) is
Ψ_m(x) = √(2/L) sin(mπx/L)
We now have an eigenbasis made of the eigenvectors
|m⟩ → √(2/L) sin(mπx/L)
in the |x⟩ basis, with eigenvalues (mπ/L)². The eigenbasis is orthonormal, i.e. ⟨m|n⟩ = δ_mn. When m = n, ⟨m|n⟩ = 1, because we normalized the vectors above. If m ≠ n, we have
⟨m|n⟩ = (2/L) ∫_0^L sin(mπx/L) sin(nπx/L) dx
Making the substitution u = πx/L,
⟨m|n⟩ = (2/π) ∫_0^π sin(mu) sin(nu) du
Using the trigonometric identity
sin A sin B = (1/2)[cos(A − B) − cos(A + B)]
we get
⟨m|n⟩ = (1/π)[ ∫_0^π cos((m − n)u) du − ∫_0^π cos((m + n)u) du ]
      = (1/π)[ sin((m − n)u)/(m − n) ]_0^π − (1/π)[ sin((m + n)u)/(m + n) ]_0^π
⟨m|n⟩ = 0
Now that we have solved the eigenvalue problem of −K², we need to construct the propagator in terms of the eigenbasis. We write (2.16.1) in this eigenbasis; we dot both sides with the bra ⟨m|:
(d²/dt²) ⟨m|Ψ(t)⟩ = −⟨m|K²|Ψ(t)⟩
But by the eigenvalue equation we had before, the right side becomes
−⟨m|K²|Ψ(t)⟩ = −k² ⟨m|Ψ(t)⟩ = −(mπ/L)² ⟨m|Ψ(t)⟩
or
(d²/dt²) ⟨m|Ψ(t)⟩ = −(mπ/L)² ⟨m|Ψ(t)⟩
Solving the differential equation (for a string that starts at rest, so only the cosine term survives) we get
⟨m|Ψ(t)⟩ = ⟨m|Ψ(0)⟩ cos(mπt/L)
We can now write
|Ψ(t)⟩ = Σ_m |m⟩⟨m|Ψ(t)⟩
|Ψ(t)⟩ = Σ_m |m⟩⟨m|Ψ(0)⟩ cos(mπt/L)
Comparing this to the propagator equation we had earlier, (2.16.2),
|Ψ(t)⟩ = U(t)|Ψ(0)⟩
we see that
U(t) = Σ_m |m⟩⟨m| cos(mπt/L)
Finally, we write the propagator in the |x⟩ basis; we dot both sides with the bra ⟨x| and ket |x′⟩:
⟨x|U(t)|x′⟩ = Σ_m ⟨x|m⟩⟨m|x′⟩ cos(mπt/L)
⟨x|U(t)|x′⟩ = (2/L) Σ_m sin(mπx/L) sin(mπx′/L) cos(mπt/L)        (2.16.6)
We then go back to (2.16.2) and write it in the |x⟩ basis:
⟨x|Ψ(t)⟩ = Ψ(x, t) = ⟨x|U(t)|Ψ(0)⟩ = ∫_0^L ⟨x|U(t)|x′⟩⟨x′|Ψ(0)⟩ dx′        (2.16.7)
We substitute the propagator from (2.16.6) into (2.16.7) and perform the integral.
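The whole construction can be sketched in code: expand an initial shape in the modes, attach cos(mπt/L) to each, and resum. The initial shape below is a pure first mode (my choice), so the exact answer is known:

```python
import math

# Sketch of (2.16.6)-(2.16.7): expand the initial string shape in the modes
# sqrt(2/L) sin(m pi x / L), attach cos(m pi t / L), and resum.
L, M, N = 1.0, 50, 2000               # length, modes kept, grid points
h = L / N
xs = [(i + 0.5) * h for i in range(N)]

def mode(m, x):
    return math.sqrt(2.0 / L) * math.sin(m * math.pi * x / L)

psi0 = [math.sin(math.pi * x / L) for x in xs]     # initial shape: first mode

# components <m|Psi(0)> by midpoint integration
comp = [sum(mode(m, x) * p for x, p in zip(xs, psi0)) * h
        for m in range(1, M + 1)]

def psi(x, t):
    return sum(comp[m - 1] * mode(m, x) * math.cos(m * math.pi * t / L)
               for m in range(1, M + 1))

# for a pure first mode the exact answer is sin(pi x/L) cos(pi t/L)
print(psi(0.3, 0.4), math.sin(math.pi * 0.3) * math.cos(math.pi * 0.4))
```

Only the first component of `comp` is appreciably nonzero, so the sum collapses to the single oscillating mode, as it should.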
3. The Postulates of Quantum Mechanics
3.1. The Postulates
The postulates of quantum mechanics are now stated and explained. They are postulates in the sense that we can't derive them mathematically. They can be compared to Newton's second law, F = ma. Just as Newton's law can't be proven, only supported by experiment, these postulates are assumptions that yield the correct results when tested by experiment.
The postulates here deal with one particle in one dimension. The first three deal with the state of the particle at one instant; the fourth deals with how the state evolves with time.
Postulate I. The state of a particle is represented by a vector |Ψ⟩ in the physical Hilbert space.
We know that some vectors can be normalized to one, and some can be normalized to the δ function. The space of both of these types of vectors is called the physical Hilbert space.
In classical physics, we just need to know two dynamical variables, x(t) and p(t), i.e. position and momentum, to know everything about the particle. The state of the particle is represented by (x, p), or in 3-d by (r⃗, p⃗). All information about any dynamical variable W(x, p) can be determined from it. For example, kinetic energy and angular momentum will be
E = p²/2m    and    L⃗ = r⃗ × p⃗
In quantum mechanics, the state is represented by what we call a wave function |Ψ⟩. The wave function in the |x⟩ basis at one instant in time is a function of x: Ψ(x). This is a vector in infinite dimensions. Thus, while in classical mechanics we only needed two variables to express the state of the particle, in quantum mechanics we need an infinite number of them.
We saw how in classical physics we can easily extract information from the state (x, p). How do we do the same in quantum mechanics? The second and third postulates tell us just that. The second postulate defines the quantum equivalents of the classical variables x and p.
Postulate II. The Hermitian operators X and P represent the classical variables x and p. Other dynamical variables W(x, p) are represented by Ω(X, P).
The X operator is the same one discussed in section 2.15, with matrix elements
⟨x|X|x′⟩ = x δ(x − x′)
The operator P is
P = ℏK = −iℏD
where K was discussed in the same section and ℏ = h/2π. The matrix elements of P in the |x⟩ basis will then be
⟨x|P|x′⟩ = −iℏ δ′(x − x′)
Postulate III. The measurement of a variable Ω(X, P), with eigenvectors |ω⟩, will yield an eigenvalue ω of Ω with a probability proportional to |⟨ω|Ψ⟩|². The state will change from |Ψ⟩ to |ω⟩.
As was said earlier, postulates II and III tell us how to extract information from the state |Ψ⟩. First, we follow postulate II to find the operator Ω that corresponds to the dynamical variable we're measuring. Then we solve the eigenvalue problem of Ω, finding all eigenvectors |ωᵢ⟩ and eigenvalues ωᵢ of Ω. Since Ω is Hermitian, its eigenvectors form a basis, so we can expand |Ψ⟩ in this eigenbasis using the completeness relation. If the eigenvalues are discrete,
|Ψ⟩ = Σᵢ |ωᵢ⟩⟨ωᵢ|Ψ⟩
If they are continuous,
|Ψ⟩ = ∫ |ω⟩⟨ω|Ψ⟩ dω
If |Ψ⟩ is normalized, the probability of measuring the value ω is equal to |⟨ω|Ψ⟩|²; otherwise it is proportional to it.
We note the use of two theorems proved earlier: the eigenvalues of Hermitian operators are real, and the eigenvectors of a Hermitian operator form a basis. If we make a measurement, we expect the result to be real, and we can always write the state |Ψ⟩ as a linear combination of eigenvectors because they form a basis.
The quantum theory only allows probabilistic predictions, and the only possible values to be measured are the eigenvalues. We illustrate with an example. Suppose we have an operator Ω and a state |Ψ⟩ written in terms of the eigenvectors |1⟩, |2⟩, |3⟩ of Ω, and suppose that |Ψ⟩ is a vector in 3-d space, with |1⟩, |2⟩, |3⟩ the unit vectors i⃗, j⃗, k⃗. Say
|Ψ⟩ = √2|1⟩ + √3|2⟩ + |3⟩
If we normalize it, we get
|Ψ⟩ = (√2/√6)|1⟩ + (√3/√6)|2⟩ + (1/√6)|3⟩
The probabilities of measuring |1⟩, |2⟩, |3⟩ are 1/3, 1/2, 1/6, respectively. No other value is possible. According to the third postulate, if we make a measurement and obtain, say, the second eigenvalue, the state |Ψ⟩ changes and becomes the eigenvector |2⟩.
In terms of the projection operator, the measurement acts like one; here it acted like ℙ₂, i.e. the projection operator along the basis vector |2⟩:
ℙ₂|Ψ⟩ = |2⟩⟨2|Ψ⟩ = (√2/2)|2⟩
If |Ψ⟩ is in the eigenstate |2⟩,
|Ψ⟩ = |2⟩
subsequent measurements of the same operator Ω will yield the same state |2⟩; repeated applications of the projection operator don't have an added effect.
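The example above can be sketched numerically; the collapse step is just the projection ℙ₂ followed by renormalization:

```python
import math

# Sketch of the example above: |Psi> = sqrt(2)|1> + sqrt(3)|2> + |3>.
amps = [math.sqrt(2), math.sqrt(3), 1.0]
norm = math.sqrt(sum(a * a for a in amps))
state = [a / norm for a in amps]                  # normalized state
probs = [c * c for c in state]
print(probs)   # [1/3, 1/2, 1/6]

# measuring the second eigenvalue projects onto |2> and renormalizes:
outcome = 1                                       # index of |2>
projected = [state[i] if i == outcome else 0.0 for i in range(3)]
p = math.sqrt(sum(c * c for c in projected))
collapsed = [c / p for c in projected]            # exactly |2>
print(collapsed)
```

Applying the same projection to `collapsed` again returns it unchanged, which is the statement that repeated measurements give the same result.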
Thus, we have three physical states. Each state has a vector that corresponds to it. A
measurement filters out one of those vectors, and that is the result of the measurement.
3.2. The Continuous Eigenvalue Spectrum
In the previous example, the eigenvalues were discrete. If they are continuous, the probability needs a different interpretation.
|⟨ω|Ψ⟩|² can't be the probability of measuring |ω⟩, because there is an infinite number of values for ω, so each would have a probability of zero. Rather, we interpret it as a probability density. That means P(ω) dω = |⟨ω|Ψ⟩|² dω is the probability of getting a value between ω and ω + dω.
The two operators X and P both have continuous eigenvalue spectra. Earlier, we noted that while in classical mechanics we only needed the two variables x and p, in quantum mechanics we need an infinite number of variables, all contained in the wave function. It is now clear why: in classical mechanics a particle is in a state with exactly one definite value; in quantum mechanics it is in a state that can yield any value upon measurement, so we must have probabilities for all those results.
3.3. Collapse of the Wave Function and the Dirac Delta Function
According to postulate III, after a measurement of Ω the state |Ψ⟩ will collapse to the measured value |ω⟩. Before the measurement, |Ψ⟩ was a linear combination of the eigenvectors |ω⟩ of Ω:
|Ψ⟩ = Σᵢ |ωᵢ⟩⟨ωᵢ|Ψ⟩
or
|Ψ⟩ = ∫ |ω⟩⟨ω|Ψ⟩ dω
After the measurement, it collapses and becomes the measured vector:
|Ψ⟩ = |ω⟩
Suppose we measured the position of the particle. The corresponding operator is X. Prior to the measurement we had
|Ψ⟩ = ∫ |x⟩⟨x|Ψ⟩ dx
Now we have
|Ψ⟩ = |x′⟩
To see how Ψ(x) looks, we dot both sides with a basis bra:
⟨x|Ψ⟩ = ⟨x|x′⟩ = δ(x − x′)
The collapsed wave function is the delta function. This makes sense: when we make a measurement and find the particle, we expect it to be at the spot we found it, so all the probability is spiked at that spot.
3.4. Measuring X and P
In the previous example, information was extracted from |Ψ⟩ where the eigenbasis was discrete. In the following examples, we deal with a continuous eigenbasis.
We have a particle in the state |Ψ⟩; in the |x⟩ basis the wave function is a Gaussian:
Ψ(x) = A e^{−(x−a)²/2Δ²}
Let us first normalize it:
∫_−∞^∞ |Ψ(x)|² dx = ∫_−∞^∞ A² e^{−(x−a)²/Δ²} dx = 1
We set I equal to this integral. Then I² will be
I² = A⁴ ∫_−∞^∞ e^{−(x−a)²/Δ²} dx ∫_−∞^∞ e^{−(y−a)²/Δ²} dy
We make the substitutions u = (x − a)/Δ and v = (y − a)/Δ:
I² = A⁴Δ² ∫_−∞^∞ e^{−u²} du ∫_−∞^∞ e^{−v²} dv = A⁴Δ² ∫_−∞^∞ ∫_−∞^∞ e^{−(u²+v²)} dv du
Then we switch to polar coordinates, with du dv = r dr dθ and r² = u² + v²:
I² = A⁴Δ² ∫_0^∞ r e^{−r²} dr ∫_0^{2π} dθ
After another substitution, z = r², the integral becomes
I² = A⁴Δ²π ∫_0^∞ e^{−z} dz = A⁴Δ²π
and
I = A²Δ√π = 1
Thus, A = (πΔ²)^{−1/4} and the normalized state is
Ψ(x) = (πΔ²)^{−1/4} e^{−(x−a)²/2Δ²}
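A quick numerical check of this normalization (a and Δ below are arbitrary values of mine):

```python
import math

# Sketch: check that A = (pi D^2)^(-1/4) normalizes Psi = A exp(-(x-a)^2 / 2D^2).
a, D = 0.7, 1.3
A = (math.pi * D ** 2) ** (-0.25)

h, span = 1e-3, 12.0
n = int(2 * span / h)
total = 0.0
for i in range(n):
    x = a - span + (i + 0.5) * h
    total += (A * math.exp(-(x - a) ** 2 / (2 * D ** 2))) ** 2 * h
print(total)   # very close to 1
```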
Let's say we want to know where the particle is, i.e. what its position is. The operator corresponding to position is X. We follow the algorithm we developed. First, we expand |Ψ⟩ in the eigenbasis of X:
|Ψ⟩ = ∫_−∞^∞ |x⟩⟨x|Ψ⟩ dx = ∫_−∞^∞ |x⟩ Ψ(x) dx
The probability of finding the particle between x and x + dx is
|Ψ(x)|² dx = (πΔ²)^{−1/2} e^{−(x−a)²/Δ²} dx
The particle will most probably be found around a; that is where the probability density is largest.
Next, we look at the momentum of the particle. The operator corresponding to momentum is P. In this case the process is longer. We found the eigenvectors in the last section:
⟨x|p⟩ = Ψ_p(x) = (1/√2πℏ) e^{ipx/ℏ}
|⟨p|Ψ⟩|² will give us the probability density. We have
⟨p|Ψ⟩ = ∫_−∞^∞ ⟨p|x⟩⟨x|Ψ⟩ dx = ∫_−∞^∞ Ψ_p*(x) Ψ(x) dx
⟨p|Ψ⟩ = ∫_−∞^∞ [e^{−ipx/ℏ}/√2πℏ] [e^{−(x−a)²/2Δ²}/(πΔ²)^{1/4}] dx
To integrate, we expand the exponents, add the fractions, and make the following substitutions:
k = (1/√2πℏ)(πΔ²)^{−1/4},    α = 1/2Δ²,    β = (aℏ − ipΔ²)/ℏΔ²,    γ = −a²/2Δ²
⟨p|Ψ⟩ = k e^γ ∫_−∞^∞ e^{−αx² + βx} dx
Completing the square, we get
⟨p|Ψ⟩ = k e^γ e^{β²/4α} ∫_−∞^∞ e^{−α(x − β/2α)²} dx
Using the same method we used when we normalized the state, we get
⟨p|Ψ⟩ = k e^γ e^{β²/4α} √(π/α)
Substituting back the original values,
⟨p|Ψ⟩ = (Δ²/πℏ²)^{1/4} e^{−ipa/ℏ} e^{−p²Δ²/2ℏ²}
In both of these examples, position and momentum, the eigenvectors represented definite states. When we measure position, |Ψ⟩ changes from being a sum over the states |x⟩ to being one of the |x⟩. Before the measurement, the particle had no definite position; it was in a state where it could yield any |x⟩, i.e. any position, with a certain probability. After the measurement, it is forced into a definite state of position, |x⟩. It is the same with momentum; |p⟩ represents a state of definite momentum.
For energy, we first have to get the states of definite energy by solving this equation:
−(ℏ²/2m) d²Ψ_E/dx² + V(x)Ψ_E(x) = EΨ_E(x)
This is called the time-independent Schrodinger equation. The Ψ_E represent states of definite energy. For position and momentum the definite states are always the same; for energy they depend on the potential the particle is experiencing.
3.5. Schrodinger's Equation
We now move on to the fourth postulate. While the first three postulates dealt with the state |Ψ⟩ at one instant, the fourth deals with how the state |Ψ(x, t)⟩ evolves with time.
Postulate IV. The wave function |Ψ(t)⟩ follows the Schrodinger equation
iℏ (d/dt)|Ψ(t)⟩ = H|Ψ(t)⟩
Schrodinger's equation is the quantum mechanical counterpart of Newton's equation F⃗ = m d²x⃗/dt²; they both tell you how a particle evolves with time. H is the Hamiltonian; it gives the total energy of the particle,
H = T + V
where T is the kinetic energy and V is the potential energy. For every particle the potential may be different, so the Hamiltonian may be different.
The Schrodinger equation tells us how the state will change: |Ψ⟩ starts out looking perhaps like A and later like B. If we solve the Schrodinger equation, we will know how it will look.
We develop a solution using two different but equivalent approaches. We first rewrite H:
H = T + V = P²/2m + V = −(ℏ²/2m) ∂²/∂x² + V
So the Schrodinger equation can be written as
iℏ ∂Ψ/∂t = −(ℏ²/2m) ∂²Ψ/∂x² + VΨ
Shoshan 36
Assuming ๐‘‰ doesn’t depend on time, we look for solutions of the form
Ψ(๐‘ฅ, ๐‘ก) = Ψ(๐‘ฅ)๐‘“(๐‘ก)
๐‘‘๐‘“
๐œ•Ψ
= Ψ
๐‘‘๐‘ก
๐œ•๐‘ก
๐œ•2Ψ
๐‘‘2 Ψ
=
๐‘“
๐œ•๐‘ฅ 2
๐‘‘๐‘ฅ 2
and
Putting these into Schrodinger’s equation
๐‘‘๐‘“
โ„2 ๐‘‘2 Ψ
๐‘–โ„Ψ(๐‘ฅ, ๐‘ก)
= −
๐‘“(๐‘ก) + ๐‘‰Ψ(๐‘ฅ, ๐‘ก)๐‘“(๐‘ก)
2๐‘š ๐‘‘๐‘ฅ 2
๐‘‘๐‘ก
Dividing both sides by Ψf we get

iℏ (1/f) df/dt = −(ℏ²/2m)(1/Ψ) d²Ψ/dx² + V
The left side depends only on t, and the right side only on x. The equation can hold only if both sides equal a constant; otherwise, varying x or t while holding the other fixed would change one side but not the other, and the equality would fail. Thus,
iℏ (1/f) df/dt = E   or   df/dt = −i(E/ℏ)f		(3.5.1)

and

−(ℏ²/2m)(1/Ψ) d²Ψ/dx² + V = E   or   −(ℏ²/2m) d²Ψ/dx² + V(x)Ψ(x) = EΨ(x)		(3.5.2)

(3.5.1) is easily solved:

f(t) = e^(−iEt/ℏ)
We recognize (3.5.2) as the time-independent Schrodinger equation. This tells us that Ψ(x) is in a state
of definite energy. We conclude that the Schrodinger equation
does admit solutions of the form Ψ(๐‘ฅ, ๐‘ก) = Ψ(๐‘ฅ)๐‘“(๐‘ก), provided that ๐‘“ is the exponential written above
and that Ψ(๐‘ฅ) is in a state of definite energy. If these are true then Ψ(๐‘ฅ, ๐‘ก) evolves according to
Ψ(๐‘ฅ, ๐‘ก) = Ψ(๐‘ฅ)๐‘’ −๐‘–๐ธ๐‘ก⁄โ„
If not, Ψ(๐‘ฅ) will be a sum over definite states of energy and evolve in some complex manner.
Ψ(๐‘ฅ, ๐‘ก) = ๏ฟฝ ๐ด๐ธ Ψ๐ธ (๐‘ฅ)๐‘’ −๐‘–๐ธ๐‘ก⁄โ„
๐ธ
Where ๐ด๐ธ are constants. We now follow a different approach starting with Schrodinger’s equation
iℏ (d/dt)|Ψ⟩ = H|Ψ⟩

We solve the problem using the propagator: we look for U(t) that satisfies

|Ψ(t)⟩ = U(t)|Ψ(0)⟩		(3.5.3)

We rewrite the Schrodinger equation as

(iℏ d/dt − H)|Ψ⟩ = 0		(3.5.4)
As in the string problem (section 2.16), we need to write the propagator in terms of the eigenbasis of H:

H|E⟩ = E|E⟩

This is the time-independent Schrodinger equation. We expand |Ψ⟩ in the eigenbasis |E⟩:
|Ψ(๐‘ก)โŸฉ = ๏ฟฝ|๐ธโŸฉโŸจ๐ธ|Ψ(๐‘ก)โŸฉ = ๏ฟฝ๐ด๐ธ (๐‘ก)|๐ธโŸฉ
Where we made the substitution ๐ด๐ธ (๐‘ก) = โŸจ๐ธ|๐›น(๐‘ก)โŸฉ. We operate on both sides with ๏ฟฝ๐‘–โ„
๏ฟฝ๐‘–โ„
๐œ•
๐œ•๐‘ก
(3.5.5)
− ๐ป๏ฟฝ
๐œ•
๐œ•
− ๐ป๏ฟฝ |๐›น(๐‘ก)โŸฉ = ๏ฟฝ ๐‘–โ„ ๐ด๐ธ (๐‘ก)|๐ธโŸฉ − ๐ป๐ด๐ธ (๐‘ก)|๐ธโŸฉ = ๏ฟฝ๏ฟฝ๐‘–โ„๐ดฬ‡๐ธ (๐‘ก) − ๐ธ๐ด๐ธ ๏ฟฝ |๐ธโŸฉ
๐œ•๐‘ก
๐œ•๐‘ก
By (3.5.4) the left side is zero, so we have
๐‘–โ„๐ดฬ‡๐ธ = ๐ธ ๐ด๐ธ
This is easily solved as
๐ด๐ธ (๐‘ก) = ๐ด๐ธ (0)๐‘’ −๐‘–๐ธ๐‘ก⁄โ„
Substituting back ๐ด๐ธ (๐‘ก) = โŸจ๐ธ|๐›น(๐‘ก)โŸฉ, and plugging it into (3.5.5)
โŸจ๐ธ|๐›น(๐‘ก)โŸฉ = โŸจ๐ธ|๐›น(0)โŸฉ๐‘’ −๐‘–๐ธ๐‘ก⁄โ„
|๐›น(๐‘ก)โŸฉ = ๏ฟฝ|๐ธโŸฉโŸจ๐ธ|๐›น(0)โŸฉ ๐‘’ −๐‘–๐ธ๐‘ก⁄โ„
By comparing this to (3.5.3) we see that
๐‘ˆ(๐‘ก) = ๏ฟฝ|๐ธโŸฉโŸจ๐ธ| ๐‘’ −๐‘–๐ธ๐‘ก⁄โ„
๐ธ
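As a sanity check (not from the text), this sum over the eigenbasis of H can be built numerically. The 4×4 random Hermitian matrix below is an arbitrary toy Hamiltonian, an assumption made purely for illustration:

```python
import numpy as np

hbar = 1.0
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (A + A.conj().T) / 2               # any Hermitian matrix serves as a toy Hamiltonian

# Eigenbasis: H|E> = E|E>
E, kets = np.linalg.eigh(H)            # columns of `kets` are the |E>

def U(t):
    # U(t) = sum_E |E><E| exp(-iEt/hbar)
    return sum(np.exp(-1j * En * t / hbar) * np.outer(kets[:, n], kets[:, n].conj())
               for n, En in enumerate(E))

psi0 = np.array([1, 0, 0, 0], complex)
psi_t = U(2.0) @ psi0                  # state at t = 2

print(np.allclose(U(2.0).conj().T @ U(2.0), np.eye(4)))   # U is unitary: True
print(np.allclose(U(1.0) @ U(1.0), U(2.0)))               # U(t1)U(t2) = U(t1+t2): True
```

Unitarity of U(t) is what guarantees that the norm of |Ψ(t)⟩, and hence total probability, is preserved as the state evolves.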
4. Particle in a Box
We now consider a situation that features one of the amazing results of Quantum Physics. We
have a particle between 0 and ๐‘Ž in a potential ๐‘‰(๐‘ฅ) so that
V(x) = 0,   0 ≤ x < a
V(x) = ∞,   elsewhere
The particle can’t escape because it would need an infinite amount of energy. We look for states of definite energy, so we use the time-independent Schrodinger equation:
−(ℏ²/2m) d²Ψ/dx² = EΨ(x)

or

d²Ψ/dx² = −k²Ψ,   k ≡ √(2mE)/ℏ		(4.1)

The solution to this equation is
Ψ(๐‘ฅ) = ๐ด sin ๐‘˜๐‘ฅ + ๐ต cos ๐‘˜๐‘ฅ
Where the potential is infinite, Ψ(x) = 0. Then the wave function will be continuous only if
Ψ(0) = Ψ(๐‘Ž) = 0
Ψ(0) = ๐ด sin(0) + ๐ต cos(0) = ๐ต = 0
Ψ(๐‘Ž) = ๐ด sin ๐‘˜๐‘Ž = 0
If ๐ด is to be nonzero, we must have
๐‘˜๐‘Ž = ๐œ‹, 2๐œ‹, 3๐œ‹, โ‹ฏ
Or
๐‘˜=
๐‘›๐œ‹
๐‘Ž
We substitute back for ๐‘˜ from (4.1) and solve for ๐ธ
Replacing ๐‘˜ with the values we got,
๐‘› = 1, 2 ,3, โ‹ฏ
๐ธ=
๐ธ๐‘› =
๐‘›2 ๐œ‹ 2 โ„2
2๐‘š๐‘Ž2
โ„2 ๐‘˜ 2
2๐‘š
๐‘› = 1, 2, 3, โ‹ฏ
Thus, the definite states of energy can’t have just any energy. Energy is quantized to certain values.
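Plugging in numbers makes the quantization concrete. The choice of particle and box below is a hypothetical example (an electron in a 1 nm box), not a case treated in the text:

```python
import math

# Energy levels E_n = n^2 pi^2 hbar^2 / (2 m a^2) for a particle in a box.
hbar = 1.054571817e-34   # J s
m_e = 9.1093837015e-31   # kg (electron mass -- illustrative choice)
a = 1e-9                 # m  (box width of 1 nm -- illustrative choice)
eV = 1.602176634e-19     # J per eV

def E_n(n):
    return n**2 * math.pi**2 * hbar**2 / (2 * m_e * a**2)

for n in (1, 2, 3):
    print(n, E_n(n) / eV)   # roughly 0.38, 1.5, 3.4 eV; only these values occur
```

The gaps between levels grow as n², and no energy between the listed values is allowed, which is exactly what "quantized" means here.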
Conclusion
In the days of classical physics, everything was predictable: given all the forces and momenta of all the particles in a system, the entire future of the system could, in principle, be determined. In quantum physics, however, nothing can be determined definitely.
Consider a simple idealized example. We have a particle in a closed box (a real box, not the one from Section 4) that can have only one of two positions, A or B. In classical physics we
could say the particle is either definitely in A or definitely in B. If we then open the box and see
it in A, we know it was in A even prior to the measurement.
It is not so in quantum physics; when the box is closed the particle is not in A or in B. It is in a state such that, upon measurement, it can be found in either A or B. There is a common misconception that when the box is closed the particle is partly in A and partly in B. It is not. It is in a bizarre quantum state where it doesn’t make sense to ask “where is the particle?”
When we open the box and find the particle in A, it does not mean it was always in A. Rather, it means that upon measurement the particle was forced into position state A. Before the measurement it had some probability of being found in A or in B, say 50/50; when the measurement occurs, the particle suddenly finds itself in, say, A. Before the measurement, even the particle itself doesn’t know where it will end up. So we can’t make a definite prediction as in pre-quantum times; we can only say it has a 50% chance of being in A.
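A toy simulation of this situation is sketched below; the equal amplitudes for A and B are an assumption made for illustration. Repeated measurements on identically prepared boxes land in A about half the time, yet no single outcome can be predicted:

```python
import random

# Born-rule toy: a state with equal amplitudes for positions A and B.
# Probability of each outcome is |amplitude|^2.
amp = {"A": complex(1 / 2**0.5), "B": complex(1 / 2**0.5)}
probs = {k: abs(v)**2 for k, v in amp.items()}   # {"A": 0.5, "B": 0.5}

random.seed(1)
counts = {"A": 0, "B": 0}
for _ in range(10000):
    outcome = "A" if random.random() < probs["A"] else "B"
    counts[outcome] += 1
print(counts)   # roughly 5000 each, but each individual trial is unpredictable
```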
This may sound very weird, but that is only because we experience a Newtonian world, not a quantum world. Mass plays a role here: the more massive a particle is, the less pronounced the effect of its wave function. The objects we deal with every day are so enormously massive, compared to electrons, that we see no quantum effects.
If there were quantum people, i.e. people with the mass of electrons, they might be just as amazed by our Newtonian world. They would be astonished that we can make such accurate predictions using Newton’s laws.
Richard Feynman once said, “No one understands quantum mechanics.” Perhaps, that is
what makes the subject so fascinating.
Bibliography
Bence, S. J., M. P. Hobson, and K. F. Riley. Mathematical Methods for Physics and Engineering. Cambridge University Press, 2006.
Borowitz, Sidney. Fundamentals of Quantum Mechanics: Particles, Waves, and Wave Mechanics. Benjamin, 1967.
Feynman, Richard, R. Leighton, and M. Sands. The Feynman Lectures on Physics, Volume 3. Addison-Wesley, 1964.
Griffiths, David J. Introduction to Quantum Mechanics. New Jersey: Prentice Hall, 1995.
Shankar, Ramamurti. Principles of Quantum Mechanics. New York: Plenum Press, 1980.