Fundamentals of Quantum Mechanics

Malkiel Shoshan

Contents

1. Introduction
2. Mathematics of Vector Spaces
   2.1 Linear Vector Spaces
   2.2 Inner Product Spaces
   2.3 Linear Operators
   2.4 Vector Components
   2.5 Matrix Representation: Vectors and Operators
   2.6 Matrix Representation of the Inner Product and Dual Spaces
   2.7 The Adjoint Operation: Kets, Scalars, and Operators
   2.8 The Projection Operator and the Completeness Relation
       The Kronecker Delta and the Completeness Relation
   2.9 Operators: Eigenvectors and Eigenvalues
   2.10 Hermitian Operators
   2.11 Functions as Vectors in Infinite Dimensions
   2.12. Orthonormality of Continuous Basis Vectors and the Dirac δ Function
   2.13. The Dirac δ and the Completeness Relation
   2.14. The Dirac δ Function: How It Looks
       The Derivative of the δ Function
       The δ Function and Fourier Transforms
   2.15. The Operators K and X
   2.16. The Propagator
3. The Postulates of Quantum Mechanics
   3.1. The Postulates
   3.2. The Continuous Eigenvalue Spectrum
   3.3 Collapse of the Wave Function and the Dirac Delta Function
   3.4. Measuring X and P
   3.5. Schrodinger's Equation
4.
Particle in a Box
Conclusion
Bibliography

1. Introduction

The highly confident and optimistic attitude during the era before the quantum revolution was that all known phenomena had been explained by physics. The physicist Albert Michelson said in the 1800s:

"…The more important fundamental laws and facts of physical science have all been discovered and these are now so firmly established that the possibility of their ever being supplanted in consequence of new discoveries is exceedingly remote."

And then quantum mechanics came and shook the world of physics at its very foundations. Many different experiments were designed that demonstrated the inadequacy of classical physics. They will not be enumerated here; we discuss only one. Most of the mysterious and peculiar features are contained in the double slit experiment. We first do the (thought) experiment with bullets.

[Figure: a source (gun), a wall with slits 1 and 2, and a detection wall showing the curves P1 and P2.]

We have a source (a gun) and two walls. Assume the walls are very, very far apart. The bullets shoot off in different directions; some go through the slits and hit the far wall. We close slit 2 and measure the frequency of hits; we get a curve, P1. The curve has a maximum opposite slit 1: that is where the bullets are most frequent. If we open both slits, we get the curve P12.

[Figure: the same setup with both slits open, showing the interference pattern obtained with light.]

Next, we repeat the experiment with light. With light, P1 and P2 interfere, so we get the wavy pattern. The bullets, which are "lumps" (as Feynman liked to call them), don't interfere. If we stopped here, everything would be fine.
But we repeat the experiment a third time, this time with electrons. If we open only one slit, we get P1 or P2 as expected. However, if we open both slits, we get the wavy pattern we got with light! How is this possible? Let's look at the point represented by the red dot in the figure. When only slit 2 was open, that point received many electrons. How is it that when another slit is opened, that point receives fewer electrons? According to classical physics, each electron has a definite trajectory. An electron going through slit 2 should follow that trajectory; it should not care whether slit 1 is open. Worse still, it should not even know that slit 1 is open. We will see that every particle has a wave function associated with it that contains all the information about the particle. These are the conundrums that perplexed physicists in the early 1900s. Many scientists contributed to the development of quantum mechanics: Albert Einstein, Erwin Schrodinger, Paul Dirac, Werner Heisenberg, and Richard Feynman, to name just a few. (At least) three different but equivalent mathematical treatments were developed. We use a combination of two approaches, Schrodinger's Wave Mechanics and Heisenberg's Matrix Mechanics. First, we lay the mathematical foundation. Then the postulates of quantum mechanics will be explained, and a short example will be presented.

2. Mathematics of Vector Spaces

2.1 Linear Vector Spaces

Definition: A linear vector space, ๐•, is a collection of objects called vectors. A vector, V, is denoted by |V⟩, read "ket V", and satisfies the following properties:

1. There is a rule for addition: |V⟩ + |W⟩
2. There is a rule for scalar multiplication: a|V⟩
3. (closure) The result of these operations is an element of the same vector space ๐•.
4. (distributivity over vectors) a(|V⟩ + |W⟩) = a|V⟩ + a|W⟩
5. (distributivity over scalars) (a + b)|V⟩ = a|V⟩ + b|V⟩
6. (commutativity) |V⟩ + |W⟩ = |W⟩ + |V⟩
7.
(associativity) |V⟩ + (|W⟩ + |U⟩) = (|V⟩ + |W⟩) + |U⟩
8. A null vector |0⟩ exists such that |V⟩ + |0⟩ = |V⟩
9. An additive inverse exists for every |V⟩ such that |V⟩ + |−V⟩ = |0⟩

The familiar arrows from physics obey the above axioms; they are definitely vectors, but not of the most general form: vectors need not have a magnitude or a direction. In fact, matrices also satisfy the above properties, and thus they, too, are legitimate vectors.

Definition: If a set of vectors |V₁⟩, |V₂⟩, …, |Vₙ⟩ is related linearly by

Σᵢ₌₁ⁿ aᵢ|Vᵢ⟩ = |0⟩,   (2.1.1)

and (2.1.1) is true only if all the scalars aᵢ = 0, then the set of vectors is linearly independent.

Consider two parallel vectors, |1⟩ and |2⟩, in the xy-plane. One can be written as a multiple of the other; accordingly, they are not linearly independent. If they were not parallel, they would be linearly independent. If we now bring a third vector, |3⟩, into the plane, then |3⟩ = a|1⟩ + b|2⟩ for some scalars a and b; thus |1⟩, |2⟩, and |3⟩ do not form a linearly independent set.

Definition: The dimension, n, of a vector space is the maximum number of linearly independent vectors it can contain.

Definition: A basis is a set of n linearly independent vectors in an n-dimensional space.

Theorem 1. Any vector |V⟩ in an n-dimensional space can be written linearly in terms of a basis in that space, so that if |1⟩, |2⟩, …, |n⟩ constitute a basis and the vᵢ are scalars, then

|V⟩ = Σᵢ₌₁ⁿ vᵢ|i⟩   (2.1.2)

Proof. If |V⟩ could not be written in terms of a basis, the n-dimensional space would contain n + 1 linearly independent vectors: the n vectors of a basis and the vector |V⟩ itself. However, by definition an n-dimensional space can only accommodate n linearly independent vectors.

Definition: The coefficients vᵢ in (2.1.2) are called the components of |V⟩ in the |i⟩ basis.

Definition: A subset of a vector space ๐• that forms a vector space is called a subspace.
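The linear-independence criterion (2.1.1) can be checked numerically: a set of vectors is linearly independent exactly when the matrix having them as columns has full column rank. A minimal sketch (the specific vectors are made up for illustration, echoing the parallel-vectors example above):

```python
import numpy as np

# Vectors in the xy-plane (third component zero):
v1 = np.array([1.0, 2.0, 0.0])
v2 = np.array([2.0, 4.0, 0.0])   # parallel to v1 -> not independent
v3 = np.array([0.0, 1.0, 0.0])

def independent(*vecs):
    """Linearly independent iff the only solution of sum_i a_i v_i = 0
    is a_i = 0, i.e. the matrix of columns has full column rank."""
    A = np.column_stack(vecs)
    return np.linalg.matrix_rank(A) == len(vecs)

print(independent(v1, v2))   # False: parallel vectors
print(independent(v1, v3))   # True: non-parallel vectors
# Any three vectors confined to a plane are dependent:
print(independent(v1, v3, np.array([3.0, 1.0, 0.0])))   # False
```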
A subspace of ๐• of dimension n is written ๐•ⁿ. We may give a subspace a label such as ๐•¹_y. The subscript y is the label; it tells us that ๐•¹_y contains all vectors along the y axis. The superscript tells us that the subspace is one-dimensional. ๐•¹_y is a subspace of ๐•³(R), which represents 3-d space.

2.2 Inner Product Spaces

Recall that we have an operation between two arrows in the plane called the dot product, denoted A · B, that produces a number and satisfies the following three properties:

• A · B = B · A (symmetry)
• A · A ≥ 0
• A · (bB + cC) = b(A · B) + c(A · C) (linearity)

We now define an operation called the inner product between any two vectors |V⟩ and |W⟩. It is denoted ⟨V|W⟩ and satisfies three properties analogous to those of the dot product:

• ⟨V|W⟩ = ⟨W|V⟩* (skew symmetry; the * says: take the complex conjugate)
• ⟨V|V⟩ ≥ 0
• ⟨V|(a|W⟩ + b|U⟩) = a⟨V|W⟩ + b⟨V|U⟩ (linearity)

The first axiom says the inner product depends on the order of the vectors; the two results are complex conjugates of each other. Just as A · A is the norm or length squared of an arrow, we want ⟨V|V⟩ to represent the length squared of the vector, so ⟨V|V⟩ had better be positive; the second axiom expresses this. In addition, ⟨V|V⟩ should be real. The first axiom ensures this:

⟨V|V⟩ = ⟨V|V⟩*
a + ib = a − ib   (2.2.1)

where i = √−1 and a, b are real constants. (2.2.1) can only be true if b = 0, so that ⟨V|V⟩ = a is real.

The third axiom expresses the linearity of the inner product when a linear combination of vectors occurs in the second factor. The axiom says that the scalars a, b, …, can come out of the inner product.
If a linear combination occurs in the first factor, we first invoke skew symmetry:

⟨aW + bU|V⟩ = ⟨V|aW + bU⟩*

and then use the third axiom:

⟨V|aW + bU⟩* = (a⟨V|W⟩ + b⟨V|U⟩)* = a*⟨V|W⟩* + b*⟨V|U⟩* = a*⟨W|V⟩ + b*⟨U|V⟩   (2.2.2)

In short, when there is a linear combination in the first factor, the scalars in the first factor get complex conjugated and taken out of the inner product.

Definition: Two vectors are orthogonal if their inner product is zero.

Definition: The norm of a vector is √⟨V|V⟩ ≡ |V|.

Definition: A set of basis vectors, all of unit norm and mutually orthogonal, is referred to as an orthonormal basis.

We can now derive a formula for the inner product. Given |V⟩ and |W⟩ with components vᵢ and wⱼ, respectively, written in terms of an orthonormal basis |1⟩, |2⟩, …, |n⟩,

|V⟩ = Σᵢ vᵢ|i⟩,   |W⟩ = Σⱼ wⱼ|j⟩,

By (2.2.2), the components vᵢ of |V⟩ (the vector in the first factor) can come out of the inner product provided we take their complex conjugate. Thus,

⟨V|W⟩ = Σᵢ Σⱼ vᵢ* wⱼ ⟨i|j⟩   (2.2.3)

However, we still need to know the inner product between the basis vectors, ⟨i|j⟩. The inner product between an orthonormal vector and itself is, by definition, the norm of that vector, which, also by definition, equals one. The inner product between two different orthonormal vectors is, by definition, zero. Thus, the value of ⟨i|j⟩ will be one or zero depending on whether i = j or i ≠ j:

⟨i|j⟩ = { 1 if i = j; 0 if i ≠ j } ≡ δᵢⱼ   (2.2.4)

where δᵢⱼ, called the Kronecker delta, is shorthand for the function in (2.2.4). Due to the Kronecker delta, only the terms with i = j survive, so (2.2.3) collapses to

⟨V|W⟩ = Σᵢ vᵢ* wᵢ   (2.2.5)

Accordingly, the inner product between a vector and itself will be

⟨V|V⟩ = Σᵢ |vᵢ|²   (2.2.6)

We calculate the norm as defined before, √⟨V|V⟩.
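Formula (2.2.5) is exactly what `numpy.vdot` computes: it conjugates the components of the first vector before summing. A small sketch with made-up components:

```python
import numpy as np

# Made-up components of |V> and |W> in an orthonormal basis:
v = np.array([1 + 2j, 3 - 1j])
w = np.array([2 + 0j, 1 + 1j])

# (2.2.5): <V|W> = sum_i v_i* w_i  -- np.vdot conjugates its first argument
vw = np.vdot(v, w)
wv = np.vdot(w, v)
print(vw)                        # (4+0j)

# Skew symmetry (first inner-product axiom): <V|W> = <W|V>*
assert vw == np.conjugate(wv)

# (2.2.6): <V|V> = sum_i |v_i|^2 is real and nonnegative
print(np.vdot(v, v))             # (15+0j)
```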
A vector is normalized if its norm equals one. The Kronecker delta combines the orthogonality and normalization of orthonormal basis vectors into one entity. If i = j, we get the inner product between a vector and itself; for the basis vectors this inner product equals one, so we know they are normalized. If i ≠ j, we get an inner product of zero, so they are orthogonal. Simply put, if we know that ⟨i|j⟩ = δᵢⱼ, the basis is orthonormal.

2.3 Linear Operators

An operator acts on a vector |V⟩ and transforms it into another vector |V′⟩. If Ω is an operator, we represent its action as follows:

Ω|V⟩ = |V′⟩   (2.3.1)

We will deal only with linear operators, operators that satisfy the following criterion:

Ω(a|V⟩ + b|W⟩) = aΩ|V⟩ + bΩ|W⟩   (2.3.2)

Two examples of operators are the identity operator, I, and the rotation operator R_π(k):

I|V⟩ = |V⟩   R|3⟩ = |3⟩   R|1⟩ = −|1⟩

The identity operator leaves the vector unchanged; the rotation operator rotates vectors in 3-d space around the z-axis by π.

By (2.1.2), any vector |V⟩ can be written as a linear combination of basis vectors. Therefore, if we know how an operator transforms the basis vectors, we will know how it transforms |V⟩. So, if Ω|i⟩ = |i′⟩, where i is an index for the basis |1⟩, |2⟩, …, |n⟩, then for a vector |V⟩ = Σᵢ vᵢ|i⟩ we have

Ω|V⟩ = Ω Σᵢ₌₁ⁿ vᵢ|i⟩ = Σᵢ₌₁ⁿ vᵢ Ω|i⟩ = Σᵢ₌₁ⁿ vᵢ|i′⟩   (2.3.4)

2.4 Vector Components

To find the components of any vector |W⟩ in an orthonormal basis |1⟩, |2⟩, …, |n⟩, we take the inner product between |W⟩ and the basis vectors. With

|W⟩ = Σⱼ₌₁ⁿ wⱼ|j⟩,

⟨i|W⟩ = Σⱼ₌₁ⁿ wⱼ⟨i|j⟩ = Σⱼ wⱼ δᵢⱼ = wᵢ   (2.4.1)

2.5 Matrix Representation: Vectors and Operators

A vector |V⟩ in some basis |1⟩, |2⟩, …, |n⟩ can be represented by an n × 1 column matrix whose elements are the components of |V⟩ in the given basis.
So, if we have 5 basis vectors and

|V⟩ = 5|1⟩ + 3|2⟩ − 4|3⟩ + 2|5⟩,

then |V⟩ can be represented by

|V⟩ ↔ [5, 3, −4, 0, 2]ᵀ

and each basis vector |i⟩ will be represented by an n × 1 column matrix whose ith element is 1 and whose other elements are 0:

|1⟩ ↔ [1, 0, 0, 0, 0]ᵀ,   |2⟩ ↔ [0, 1, 0, 0, 0]ᵀ,   …,   |5⟩ ↔ [0, 0, 0, 0, 1]ᵀ

Then,

[5, 3, −4, 0, 2]ᵀ = 5[1, 0, 0, 0, 0]ᵀ + 3[0, 1, 0, 0, 0]ᵀ − 4[0, 0, 1, 0, 0]ᵀ + 2[0, 0, 0, 0, 1]ᵀ

Just as a vector can be represented by an n × 1 matrix, a linear operator can be represented by an n × n matrix. If we have a linear operator Ω acting on an orthonormal basis |1⟩, |2⟩, …, |n⟩, where Ω|i⟩ = |i′⟩, then any vector V written in terms of this basis will be transformed, according to (2.3.4), into

Ω|V⟩ = Σᵢ₌₁ⁿ vᵢ|i′⟩   (2.5.1)

According to (2.4.1), to find the components of |j′⟩, a transformed basis vector, in terms of the original basis, we take the inner product between them:

⟨i|j′⟩ = ⟨i|Ω|j⟩ ≡ Ω_ij   (2.5.2)

where i and j are different indices referring to the same basis, and Ω_ij is a shorthand. We can then build an n × n matrix representing Ω. The Ω_ij from (2.5.2) are the n × n elements; Ω_ij is the element in the jth column and the ith row.
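The recipe Ω_ij = ⟨i|Ω|j⟩ can be carried out concretely: apply the operator to each basis vector and read off the components of the result. A sketch for the rotation operator R_π(k) introduced above, in the standard basis of 3-d space (the helper function `R` is made up for illustration):

```python
import numpy as np

def R(v):
    """Action of R_pi(k): rotate by pi about the z-axis, (x, y, z) -> (-x, -y, z).
    The '+ 0.0' just normalizes -0.0 to 0.0 for cleaner printing."""
    x, y, z = v
    return np.array([-x, -y, z]) + 0.0

basis = np.eye(3)   # columns are |1>, |2>, |3>

# Omega_ij = <i|Omega|j>: the i-th component of the transformed j-th basis vector,
# so the j-th column of the matrix is R applied to |j>.
Omega = np.column_stack([R(basis[:, j]) for j in range(3)])
print(Omega)
# [[-1.  0.  0.]
#  [ 0. -1.  0.]
#  [ 0.  0.  1.]]
```

Note that |3⟩ (the z-axis direction) is sent to itself, anticipating that it is an eigenvector of R with eigenvalue 1.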
If we have Ω|V⟩ = |V′⟩, we can write the new components vᵢ′ in terms of the original components vᵢ, as in (2.5.2):

vᵢ′ = ⟨i|V′⟩ = ⟨i|Ω|V⟩

Since V can be written as a linear combination of the basis vectors, we have

vᵢ′ = ⟨i|Ω|V⟩ = ⟨i|Ω(Σⱼ vⱼ|j⟩) = Σⱼ vⱼ⟨i|Ω|j⟩

and by our shorthand from (2.5.2) we get

vᵢ′ = Σⱼ Ω_ij vⱼ   (2.5.3)

According to the rules of matrix multiplication, (2.5.3) is equivalent to the following:

[v₁′]   [Ω₁₁  Ω₁₂  ⋯  Ω₁ₙ] [v₁]
[v₂′] = [Ω₂₁  Ω₂₂  ⋯  Ω₂ₙ] [v₂]   (2.5.4)
[ ⋮ ]   [ ⋮    ⋮   ⋱   ⋮ ] [ ⋮ ]
[vₙ′]   [Ωₙ₁  Ωₙ₂  ⋯  Ωₙₙ] [vₙ]

where the entries of the matrix are Ω_ij = ⟨i|Ω|j⟩, or

|V′⟩ = Ω|V⟩   (2.5.5)

These two equations say that the vector |V′⟩, resulting from the action of Ω on V, can be represented by the product of the matrix representing Ω and the matrix representing V. Simply put, to see how an operator Ω transforms a vector V, multiply their matrices.

2.6 Matrix Representation of the Inner Product and Dual Spaces

If we have two vectors |V⟩ and |W⟩ represented by matrices as column vectors

|V⟩ ↔ [v₁, v₂, …, vₙ]ᵀ,   |W⟩ ↔ [w₁, w₂, …, wₙ]ᵀ,

we can't write the inner product ⟨V|W⟩ as a product of matrices, because we can't multiply two column vectors together. However, if we take the adjoint, or transpose conjugate, of the first vector, so that the matrix for |V⟩ becomes

[v₁*  v₂*  ⋯  vₙ*],

then we can write the inner product as

⟨V|W⟩ = [v₁*  v₂*  ⋯  vₙ*][w₁, w₂, …, wₙ]ᵀ

This serves as motivation for the notation attributed to Paul Dirac. Just as we associate with each column matrix a ket, we will associate with each row matrix a new entity called a bra. Ket V is written as |V⟩, and bra V is written as ⟨V|.
The two entities are adjoints of each other. That is, to get bra V, take the transpose conjugate of the column matrix representing V, as we did above. Thus we have two distinct vector spaces, the space of bras and the space of kets. The inner product is defined between a bra and a ket. Above we had an inner product between the bra ⟨V| and the ket |W⟩; together, ⟨V|W⟩, they form a bra-ket (or bracket).

2.7 The Adjoint Operation: Kets, Scalars, and Operators

If a scalar a multiplies a ket |V⟩, so that a|V⟩ = |aV⟩, then the corresponding bra will be

⟨aV| = ⟨V|a*

We can see why if we look at the matrix representations:

a|V⟩ = |aV⟩ ↔ [av₁, av₂, …, avₙ]ᵀ

The adjoint of this matrix is

[a*v₁*  a*v₂*  ⋯  a*vₙ*] = [v₁*  v₂*  ⋯  vₙ*]a* ↔ ⟨V|a*

We now define the adjoint of an operator Ω as follows. If Ω acts on |V⟩, we have

Ω|V⟩ = |ΩV⟩   (2.7.1)

(where |ΩV⟩ is the vector that results after Ω acts on |V⟩). The corresponding bra is

⟨ΩV| = ⟨V|Ω†   (2.7.2)

Ω† is called the adjoint of Ω. Equations (2.7.1) and (2.7.2) say that if Ω acts on ket V to produce |V′⟩, then Ω† acts on bra V to produce ⟨V′|. It follows that Ω† also has a matrix associated with it. We can calculate its elements in a basis as we did in (2.5.2):

(Ω†)_ij = ⟨i|Ω†|j⟩

By (2.7.2), this becomes

(Ω†)_ij = ⟨Ωi|j⟩ = ⟨j|Ωi⟩* = ⟨j|Ω|i⟩* = Ω_ji*

Thus, to find the adjoint of an operator, take the transpose conjugate of its matrix.

2.8 The Projection Operator and the Completeness Relation

We know that for any vector |V⟩,

|V⟩ = Σᵢ₌₁ⁿ vᵢ|i⟩ = Σᵢ₌₁ⁿ |i⟩⟨i|V⟩   (2.8.1)

The object in the summation, |i⟩⟨i|V⟩, gives us the component of V along the ith basis vector, multiplied by that basis vector. Alternatively, it is the projection of |V⟩ along |i⟩. The entity |i⟩⟨i| can be thought of as the projection operator. It acts on |V⟩ to give

|i⟩⟨i|V⟩ = |i⟩vᵢ

We call |i⟩⟨i| ℙᵢ, so |i⟩⟨i|V⟩ becomes ℙᵢ|V⟩.
Then we can write (2.8.1) as

|V⟩ = Σᵢ₌₁ⁿ ℙᵢ|V⟩   (2.8.2)

In (2.8.2), which operator does Σᵢ₌₁ⁿ ℙᵢ correspond to? If we rewrite (2.8.2) as

|V⟩ = (Σᵢ₌₁ⁿ ℙᵢ)|V⟩   (2.8.3)

we see that after the operator acts on |V⟩, the result remains |V⟩; the operator must therefore be the identity operator I. Therefore, we get

I = Σᵢ₌₁ⁿ ℙᵢ = Σᵢ₌₁ⁿ |i⟩⟨i|   (2.8.4)

Equation (2.8.4) is called the completeness relation. It says the sum of all the projections of a vector V along its basis vectors is the vector V itself. (2.8.4) expresses the fact that basis vectors are complete, meaning any vector V can be written as a linear combination of the basis vectors. This highlights the difference between a basis and any other linearly independent set of vectors. A vector can't necessarily be expressed as a linear combination of an arbitrary linearly independent set, but it can always be expressed as a linear combination of basis vectors. For example, in 3-d space, the unit vectors i and j form a linearly independent set but not a basis, and we can't write the unit vector k in terms of i and j. However, all three of i, j, k form a basis, and any vector can be written as

ai + bj + ck

In short, all bases are complete.

We now look at the matrix representation of the operator ℙ and of (2.8.4). We have the basis ket and bra

|i⟩ ↔ [0, …, 0, 1, 0, …, 0]ᵀ (1 in the ith place),   ⟨i| ↔ [0  ⋯  0  1  0  ⋯  0]

The projection operator ℙᵢ = |i⟩⟨i| is the product of these: the n × n matrix whose only nonzero element is a 1 in the ith row and ith column. The sum of all the projections will be

[1 0 ⋯ 0]   [0 0 ⋯ 0]         [0 0 ⋯ 0]   [1 0 ⋯ 0]
[0 0 ⋯ 0] + [0 1 ⋯ 0] + ⋯ + [0 0 ⋯ 0] = [0 1 ⋯ 0]
[⋮     ⋮]   [⋮     ⋮]         [⋮     ⋮]   [⋮   ⋱  ⋮]
[0 0 ⋯ 0]   [0 0 ⋯ 0]         [0 0 ⋯ 1]   [0 0 ⋯ 1]

the identity matrix, as expected.
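The completeness relation (2.8.4) is easy to verify numerically: summing the outer products |i⟩⟨i| over an orthonormal basis reproduces the identity matrix. A sketch in a 4-dimensional space (the vector components are made up for illustration):

```python
import numpy as np

n = 4
basis = [np.eye(n)[:, i] for i in range(n)]   # orthonormal basis kets |i>

# P_i = |i><i| as an outer product of a ket with its bra
P = [np.outer(e, e.conj()) for e in basis]

# (2.8.4): the sum of all projection operators is the identity
completeness = sum(P)
assert np.array_equal(completeness, np.eye(n))

# Each P_i projects out one component: P_i |V> = v_i |i>
v = np.array([5.0, 3.0, -4.0, 2.0])
print(P[2] @ v)    # [ 0.  0. -4.  0.]
```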
The Kronecker Delta and the Completeness Relation

We know that if |x⟩ is a basis vector,

|Ψ⟩ = Σₓ |x⟩⟨x|Ψ⟩

If |Ψ⟩ itself is a basis vector |y⟩, the completeness relation should still hold in an orthonormal basis:

|y⟩ = Σₓ |x⟩⟨x|y⟩

But |x⟩ and |y⟩ are basis vectors, so ⟨x|y⟩ = δₓᵧ and

|y⟩ = Σₓ |x⟩δₓᵧ

The δₓᵧ will kill all terms except the one that occurs when x = y, so we're left with

|y⟩ = |y⟩

Thus, the Kronecker delta was defined in a way that is consistent with the completeness relation.

2.9 Operators: Eigenvectors and Eigenvalues

For every operator Ω there are vectors that, when acted upon by Ω, undergo only a rescaling and no other transformation, so that

Ω|ω⟩ = ω|ω⟩   (2.9.1)

The vectors |ω⟩ that satisfy (2.9.1) are called the eigenvectors of Ω, and the scale factors ω are its eigenvalues; (2.9.1) is known as an eigenvalue equation. In general, if |ω⟩ is an eigenvector of Ω, then a|ω⟩ will also be an eigenvector:

Ωa|ω⟩ = aΩ|ω⟩ = aω|ω⟩

The vector a|ω⟩ has also been rescaled by the factor ω, so it, too, is an eigenvector. For now, we look at two examples of operators and their eigenvectors; later we will meet more.

Example 1. The identity operator I. The eigenvalue equation is

I|V⟩ = ω|V⟩   (2.9.2)

We know that if ω = 1, any vector will satisfy (2.9.2), because the vector just remains the same. Hence, I has an infinite number of eigenvectors; in fact, every vector is an eigenvector of I. I has only one eigenvalue, which is 1; i.e., the scale of the vectors doesn't change.

Example 2.
The rotation operator we mentioned before, R_π(k). Once again, this operator rotates all vectors in 3-d space by π radians around the z-axis. Obviously, the only vectors in real space it doesn't rotate are the vectors that lie along the z-axis. Its eigenvectors include the unit vector k (or |3⟩, as we have called it before) and any multiple ak of k. For these eigenvectors its eigenvalue is one, since it doesn't rescale them.

We now develop a method to find solutions of the eigenvalue equation. We first rearrange (2.9.1):

(Ω − ωI)|V⟩ = |0⟩   (2.9.3)

and

|V⟩ = (Ω − ωI)⁻¹|0⟩   (2.9.4)

We know that the inverse of a matrix A is

A⁻¹ = (cofactor A)ᵀ / det A

If the inverse in (2.9.4) existed, we would get only the trivial solution |V⟩ = |0⟩. Hence, the only way to get nonzero eigenvectors is for the inverse not to exist, which happens when

det(Ω − ωI) = 0   (2.9.5)

After expanding the determinant of (Ω − ωI), we will get a polynomial in ω with coefficients a₀, a₁, …, aₙ; (2.9.5) will be of the form

Σₘ₌₀ⁿ aₘωᵐ = 0   (2.9.6)

(2.9.6) is known as the characteristic equation. Solving the characteristic equation gives us the eigenvalues, after which we can find the eigenvectors.

2.10 Hermitian Operators

Definition: An operator Ω is Hermitian if Ω = Ω†.

Because operators can be written as matrices, we also have the following definition.

Definition: A matrix M is Hermitian if M = M†.

Theorem 2. The eigenvalues of a Hermitian operator Ω are real.

Proof. If Ω|ω⟩ = ω|ω⟩, we take the inner product of both sides with the bra ⟨ω|:

⟨ω|Ω|ω⟩ = ω⟨ω|ω⟩   (2.10.1)

We then also take the adjoint of both sides:

⟨ω|Ω†|ω⟩ = ω*⟨ω|ω⟩

Since Ω is Hermitian, we have Ω = Ω†, so

⟨ω|Ω|ω⟩ = ω*⟨ω|ω⟩   (2.10.2)

We now subtract (2.10.2) from (2.10.1):

⟨ω|Ω|ω⟩ − ⟨ω|Ω|ω⟩ = ω⟨ω|ω⟩ − ω*⟨ω|ω⟩

and are left with

0 = (ω − ω*)⟨ω|ω⟩
ω = ω*

The only way this is possible is if ω is real.

Theorem 3. The eigenvectors of every Hermitian operator form an orthonormal basis; in this basis, Ω can be written as a diagonal matrix with its eigenvalues as the diagonal entries.
Proof: Following the method of solving eigenvalue equations developed earlier, we solve the characteristic equation of Ω. We will get at least one root ω₁ that corresponds to an eigenvector |ω₁⟩. Assuming Ω operates in the vector space ๐•ⁿ, we look at the subspace ๐•ⁿ⁻¹ that is orthogonal to |ω₁⟩. We then take any n − 1 orthonormal vectors from ๐•ⁿ⁻¹ and the original normalized |ω₁⟩, and form a basis. In terms of this basis, Ω looks like

     [ω₁  0  ⋯  0]
     [0           ]
Ω ↔ [⋮   (blank) ]   (2.10.3)
     [0           ]

The first element is ω₁ because (as was discussed in section 2.5 on operators) Ω₁₁ = ⟨1|Ω|1⟩; here the first basis vector is |ω₁⟩, so we have

Ω₁₁ = ⟨ω₁|Ω|ω₁⟩ = ω₁⟨ω₁|ω₁⟩ = ω₁

The rest of the column is zero because the basis is orthogonal; thus, if |i⟩ is any of the other basis vectors,

⟨i|Ω|ω₁⟩ = ω₁⟨i|ω₁⟩ = 0

The rest of the row is zero because Ω is Hermitian and must be symmetric along the diagonal (up to complex conjugation). The blank part of the matrix consists of elements about to be determined. The blank block in (2.10.3), which we'll call M, is itself a matrix, of dimension n − 1; we can solve its eigenvalue problem as before, too. Once again, the characteristic equation yields a root ω₂ corresponding to an eigenvector |ω₂⟩ (which we normalize). Once again, we choose a subspace of one dimension less, ๐•ⁿ⁻², that is orthogonal to |ω₂⟩. We choose any n − 2 orthonormal vectors from ๐•ⁿ⁻² and our |ω₂⟩ to form a basis.
In terms of this basis, M looks like

     [ω₂  0  ⋯  0]
     [0           ]
M ↔ [⋮   (blank) ]   (2.10.4)
     [0           ]

Then the original operator Ω looks like

     [ω₁  0   0  ⋯  0]
     [0   ω₂  0  ⋯  0]
Ω ↔ [0   0           ]
     [⋮   ⋮   (blank) ]
     [0   0           ]

We then repeat the process on the blank part of Ω until we are finally left with

     [ω₁  0   ⋯   0 ]
     [0   ω₂  ⋯   0 ]
Ω ↔ [⋮       ⋱   ⋮ ]
     [0   0   ⋯  ωₙ ]

The eigenvectors |ω₁⟩, |ω₂⟩, …, |ωₙ⟩ form an orthonormal basis because each subspace ๐•ⁿ⁻¹, ๐•ⁿ⁻², …, ๐•¹ was chosen to be orthogonal to the previous eigenvectors.

2.11 Functions as Vectors in Infinite Dimensions

Going back to our basic properties of vectors, we notice that there are functions that satisfy these properties. If we have two functions f(x) and g(x), there is a definite rule for addition, there is a rule for scalar multiplication, every function has an additive inverse −f(x), and, in short, functions follow all the properties enumerated in section 2.1.

We first look at discrete functions. Let us look at a function f. It is defined at x₁, x₂, …, xₙ with values f(x₁), f(x₂), …, f(xₙ), respectively. The basis vectors |xᵢ⟩ will consist of functions that are nonzero at only one point, xᵢ, where they take the value 1. The basis vectors are orthonormal, that is, ⟨xᵢ|xⱼ⟩ = δᵢⱼ, and they obey the completeness relation

Σᵢ |xᵢ⟩⟨xᵢ| = I   (2.11.1)

To find the component of f along a basis vector |xᵢ⟩, we follow the typical method: we take the inner product between f and that basis vector, ⟨xᵢ|f⟩ = f(xᵢ). We can, as in (2.8.1), write f as a linear combination of basis vectors:

|f⟩ = Σᵢ |xᵢ⟩⟨xᵢ|f⟩ = Σᵢ f(xᵢ)|xᵢ⟩   (2.11.2)

The inner product between functions will be the same as before:

⟨f|g⟩ = Σᵢ f*(xᵢ)g(xᵢ)   (2.11.3)

Now we would like to extend the above to continuous functions. The basis vectors are the same as before, only now there is a basis vector for every point x, i.e.
there is an infinite number of basis vectors. A function f will then be a vector in infinite dimensions. Continuous functions have relations similar to equations (2.11.1), (2.11.2), (2.11.3); in each case the summation is replaced by an integral. The completeness relation will then be

∫ |x⟩⟨x| dx = I

(2.11.2) will become

|f⟩ = ∫ f(x)|x⟩ dx

and the inner product will be

⟨f|g⟩ = ∫ f*(x)g(x) dx

2.12. Orthonormality of Continuous Basis Vectors and the Dirac δ Function

Basis vectors are orthogonal, so for x ≠ y,

⟨x|y⟩ = 0

What is the normalization of the basis vectors? Should ⟨x|x⟩ = 1? To answer this question we start with the completeness relation,

∫ₐᵇ |y⟩⟨y| dy = I

We then take the inner product with a ket |f⟩ and a bra ⟨x|:

∫ₐᵇ ⟨x|y⟩⟨y|f⟩ dy = ⟨x|f⟩

∫ₐᵇ ⟨x|y⟩f(y) dy = f(x)

Let us call ⟨x|y⟩ an unknown function δ(x, y):

∫ₐᵇ δ(x, y)f(y) dy = f(x)   (2.12.1)

We know that δ(x, y) is zero wherever x ≠ y (since the basis vectors are orthogonal), so we can limit the integral to the infinitesimal region around x = y:

∫ from x−α to x+α of δ(x, y)f(y) dy = f(x)

We see that the integral of the delta function together with f(y) is f(x). If we take the limit as α → 0, then f(y) → f(x), and in the limit we can pull f(y) out of the integral as f(x):

f(x) ∫ from x−α to x+α of δ(x, y) dy = f(x)

∫ from x−α to x+α of δ(x, y) dy = 1   (2.12.2)

So δ(x, y) is a very peculiar function. It's zero everywhere except at the one point x = y. At that point it can't be finite, because then the integral in (2.12.2), which is over an infinitesimal region, would be zero, not one. At x = y, δ(x, y) must be infinite in such a way that the integral sums to one. Nowhere did we care about the actual values of x and y, only about whether they were equal or not. This leads us to denote the delta function by δ(x − y). It's known as the Dirac delta function, with the following values.
δ(x − y) = 0 when x ≠ y   (2.12.3a)

∫ₐᵇ δ(x − y) dy = 1 when the interval [a, b] contains the point y = x   (2.12.3b)

Thus, ⟨x|y⟩ = δ(x − y). In the continuous case, basis vectors are not normalized to one; rather, they are normalized to the Dirac delta function.

2.13. The Dirac δ and the Completeness Relation

If |x⟩ and |y⟩ are basis vectors, we have

|y⟩ = ∫ₐᵇ |x⟩⟨x|y⟩ dx

|y⟩ = ∫ₐᵇ |x⟩δ(x − y) dx   (2.13.1)

If ⟨x|x⟩ were equal to one, as in the discrete case — that is, if ⟨x|x⟩ = δ(x − x) = δ(0) = 1 — then (2.13.1) would not be true. However, since δ(0) does not equal one but instead follows (2.12.3b), (2.13.1) is of the form of the following equation shown earlier:

∫ δ(x, y)f(y) dy = f(x)   (2.12.1)

The Dirac delta collapses the integral to the one term with x = y. It is the counterpart of the Kronecker delta: the Kronecker delta collapses sums; the Dirac delta collapses integrals.

2.14. The Dirac δ Function: How It Looks

From the discussion above, it should be clear that δ(x − x′) [where |x⟩ and |x′⟩ are basis vectors] is sharply peaked at x′ = x and zero elsewhere. The sharp peak comes from the fact that the area under it must total one (2.12.3b). It helps to think of the delta function as a limit of a Gaussian of width Δ centered at x,

g(x′) = (1/(Δ√(2π))) e^(−(x′ − x)²/(2Δ²)),

the familiar bell curve, whose area is one. The delta function can then be viewed as the limit of the Gaussian as Δ → 0. The area would still be one, and we'd have a very sharp peak at x; the peak has to be high "enough" so that the area remains one. The delta function has a value of either zero or infinity.

The Derivative of the δ Function

We examine the derivative of δ(x − x′) because it will be used later. The derivative is denoted by

(d/dx) δ(x − x′) = δ′(x − x′)

We first look at its graph, and as before we compare it to the Gaussian.
The derivative of the Gaussian, graph A, is dg(x′ − x)/dx. When we take the limit as Δ → 0, we get the delta derivative, graph B: two rescaled δ functions, one negative at x′ = x − α and one positive at x′ = x + α.

We had earlier (2.12.1): when we integrate the delta function with another function, we get back the function itself. In particular, for a coordinate x,

∫_a^b δ(x − x′)f(x′) dx′ = f(x)

The δ function picks out the value f(x); we see this in graph C. The derivative of δ, δ′(x − x′), will pick out two values, one at x + α and one at x − α; one gives the value f(x + α), the other gives −f(x − α), each rescaled by 1/2α. We then have

∫ δ′(x − x′)f(x′) dx′ = [f(x + α) − f(x − α)] / 2α    (2.14.1)

The numerator is the amount the function f changes by over the interval [x − α, x + α], so in the limit α → 0,

[f(x + α) − f(x − α)] / 2α = df(x)/dx    (2.14.2)

Putting (2.14.2) into (2.14.1), we get

∫ δ′(x − x′)f(x′) dx′ = df(x)/dx    (2.14.3)

For comparison: the integral of δ(x − x′) with f(x′) gives us f(x); the integral of δ′(x − x′) with f(x′) gives us f′(x). Thus δ′(x − x′)f(x′) gives the same result as δ(x − x′) df(x′)/dx′, because

∫ δ(x − x′) (df(x′)/dx′) dx′ = ∫ δ(x − x′)f′(x′) dx′ = f′(x)

so we can write

δ′(x − x′) = δ(x − x′) d/dx′

The δ Function and Fourier Transforms

From Fourier analysis we have Fourier's inversion theorem, which says

f(t) = (1/2π) ∫_{−∞}^{∞} e^{iωt} dω ∫_{−∞}^{∞} f(u)e^{−iωu} du

  = ∫_{−∞}^{∞} f(u) du [ (1/2π) ∫_{−∞}^{∞} e^{iω(t−u)} dω ]

Comparing this to (2.12.1), we see that

δ(t − u) = (1/2π) ∫_{−∞}^{∞} e^{iω(t−u)} dω

2.15. The Operators K and X

If a function f is a vector, we can have an operator Ω that operates on f to produce another function:

Ω|f⟩ = |f̃⟩

Later on, we will look at two important operators, X and K.
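The Fourier representation of δ can likewise be checked numerically (again a sketch of my own; the Gaussian test function and the truncated ω and u ranges are assumptions): carrying out the double integral of the inversion theorem at one point t₀ should reproduce f(t₀).

```python
import numpy as np

# Apply the double integral of Fourier's inversion theorem to a Gaussian test
# function; the inner kernel is the truncated (1/2 pi) integral of e^{i w (t-u)}.
f = lambda u: np.exp(-u ** 2)
t0 = 0.5

u = np.linspace(-6.0, 6.0, 1601)
w = np.linspace(-16.0, 16.0, 801)
du, dw = u[1] - u[0], w[1] - w[0]

# Inner integral: F(w) = integral of f(u) e^{-i w u} du
F = (f(u)[None, :] * np.exp(-1j * np.outer(w, u))).sum(axis=1) * du
# Outer integral, evaluated at t = t0
val = (np.exp(1j * w * t0) * F).sum() * dw / (2.0 * np.pi)
print(val.real)               # ~ f(t0) = e^{-0.25}
```

Truncating the ω integral turns the δ into a narrow sinc-like kernel, but for a rapidly decaying test function the sifted value already matches f(t₀) closely.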
For now, we look at the differentiation operator D:

D|f⟩ = |df/dx⟩ = |f′⟩    (2.15.1)

We know that we can compute the matrix elements Ω_ij of an operator Ω by taking the inner product with basis kets and bras:

Ω_ij = ⟨i|Ω|j⟩

Therefore, we can compute the matrix elements D_xx′ of D by

D_xx′ = ⟨x|D|x′⟩

We start with (2.15.1) and take the inner product with a basis bra ⟨x|:

⟨x|D|f⟩ = df(x)/dx

We then write the left side in terms of the completeness relation ∫|x′⟩⟨x′| dx′:

⟨x|D|f⟩ = ∫ ⟨x|D|x′⟩⟨x′|f⟩ dx′ = ∫ ⟨x|D|x′⟩f(x′) dx′ = df(x)/dx    (2.15.2)

Equation (2.15.2) is very similar to something we had earlier:

∫ δ′(x − x′)f(x′) dx′ = df(x)/dx    (2.14.3)

If we compare, we see that

⟨x|D|x′⟩ = δ′(x − x′)

or

D_xx′ = δ(x − x′) d/dx′

D, then, is a matrix of infinite dimensions. Is D Hermitian? If it were, we would have

D_xx′ = D*_x′x

Let us check. We know

D_xx′ = δ′(x − x′)    (2.15.3)

and

D*_x′x = δ′(x′ − x)*

But since the δ function is real,

δ′(x′ − x)* = δ′(x′ − x)

and since δ′ is an odd function,

δ′(x′ − x) = −δ′(x − x′)

so

D*_x′x = −δ′(x − x′)    (2.15.4)

Comparing equations (2.15.3) and (2.15.4), we see that

D_xx′ = −D*_x′x

Therefore, D is not Hermitian. However, we can make it Hermitian by multiplying it by −i. We call the resulting operator K:

K = −iD,  with matrix elements  K_xx′ = −i δ′(x − x′)

For K we have

K*_x′x = [−i δ′(x′ − x)]* = i δ′(x′ − x)*

but, once again, the δ function is real, so

K*_x′x = i δ′(x′ − x)

and since δ′ is odd,

K*_x′x = −i δ′(x − x′)

But the right side is just what we had for K_xx′, so

K_xx′ = K*_x′x

As of now it seems that K is Hermitian, but we now check more carefully using a different approach.
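These matrix statements can be made concrete on a finite grid (my own sketch, not from the text): replacing d/dx by a central difference with periodic boundary conditions makes D a real antisymmetric matrix, so K = −iD comes out Hermitian, mirroring the δ′ argument above. The grid size and spacing are arbitrary choices.

```python
import numpy as np

# Central-difference version of D on n periodic grid points with spacing h.
n, h = 200, 0.05
D = np.zeros((n, n))
for j in range(n):
    D[j, (j + 1) % n] = +1.0 / (2 * h)   # coefficient of f(x + h)
    D[j, (j - 1) % n] = -1.0 / (2 * h)   # coefficient of f(x - h)

K = -1j * D

print(np.allclose(D.T, -D))              # True: D is antisymmetric, not Hermitian
print(np.allclose(K.conj().T, K))        # True: K = -iD is Hermitian

# K acts on a periodic plane wave e^{ikx} as multiplication by (sin kh)/h.
L = n * h
x = h * np.arange(n)
k = 2 * np.pi * 3 / L                    # a wavenumber that fits the periodic grid
fk = np.exp(1j * k * x)
print(np.allclose(K @ fk, (np.sin(k * h) / h) * fk))   # True
```

The last check shows K acting on a grid plane wave as multiplication by (sin kh)/h, which tends to the continuum eigenvalue k as h → 0.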
If we have two vectors |f⟩ and |g⟩, we have

⟨g|K|f⟩ = ⟨g|Kf⟩

and by skew symmetry,

⟨g|Kf⟩ = ⟨Kf|g⟩*    (2.15.5)

From section 2.7 we know that

⟨Kf| = ⟨f|K†

so

⟨Kf|g⟩* = ⟨f|K†|g⟩*

But if K is Hermitian,

⟨f|K†|g⟩* = ⟨f|K|g⟩*    (2.15.6)

Taking equations (2.15.5) and (2.15.6) together, we conclude that if K is Hermitian, we must have

⟨g|K|f⟩ = ⟨f|K|g⟩*

We rewrite both sides in terms of integrals: we interpose the completeness relation twice, so we get double integrals. Is the following equation true?

∫_a^b ∫_a^b ⟨g|x⟩⟨x|K|x′⟩⟨x′|f⟩ dx′ dx = [ ∫_a^b ∫_a^b ⟨f|x⟩⟨x|K|x′⟩⟨x′|g⟩ dx′ dx ]*    (2.15.7)

We now simplify the integrals, starting with the left side. First, by skew symmetry,

⟨g|x⟩ = ⟨x|g⟩*

and this is the component of |g⟩ along |x⟩:

⟨x|g⟩* = g*(x)

Also, ⟨x′|f⟩ = f(x′). Next, we have

⟨x|K|x′⟩ = K_xx′ = −iD_xx′ = −i δ′(x − x′) = −i δ(x − x′) d/dx′

which operates on whatever follows it in the integral, i.e. f(x′). The inner integral on the left side of (2.15.7) then looks like

∫_a^b g*(x)(−i) δ(x − x′) (d/dx′)f(x′) dx′ = g*(x)(−i) ∫_a^b δ(x − x′)f′(x′) dx′ = g*(x)[−i df(x)/dx]

Once again, the delta function kills off the integral.
For the outer integral we then have

∫_a^b g*(x)[−i df(x)/dx] dx

We do the same simplification to the right side of (2.15.7), and we're left with

∫_a^b g*(x)[−i df(x)/dx] dx = [ ∫_a^b f*(x)[−i dg(x)/dx] dx ]*    (2.15.8)

Due to the conjugation, the right side becomes

∫_a^b f(x)[ i dg*(x)/dx ] dx

We now integrate the left side of (2.15.8) by parts, with the substitutions

u = −i g*(x),  du = −i (dg*(x)/dx) dx,  dv = (df(x)/dx) dx,  v = f(x)

Finally, we have

[−i g*(x)f(x)]_a^b + i ∫_a^b f(x) (dg*(x)/dx) dx = i ∫_a^b f(x) (dg*(x)/dx) dx

or

[−i g*(x)f(x)]_a^b = 0    (2.15.9)

If this equation is true, K is Hermitian; meaning, K is Hermitian in a space of functions that satisfy this equation. There are two sets of functions that obey (2.15.9): functions that are zero at the endpoints, and periodic functions, for example functions on [0, 2π] with f(0) = f(2π). For the first type we have

[−i g*(x)f(x)]_a^b = 0 − 0 = 0

For the second type we have

[−i g*(x)f(x)]_0^{2π} = −i g*(2π)f(2π) + i g*(0)f(0) = 0

We now find the eigenvectors of K:

K|ψ_k⟩ = k|ψ_k⟩

To find components in the |x⟩ basis, we follow the typical procedure of taking the inner product with a bra ⟨x|:

⟨x|K|ψ_k⟩ = k⟨x|ψ_k⟩

∫ ⟨x|K|x′⟩⟨x′|ψ_k⟩ dx′ = k⟨x|ψ_k⟩

−i (d/dx) ψ_k(x) = k ψ_k(x)

We know that the function whose derivative is proportional to itself is the exponential:

ψ_k(x) = A e^{ikx} = (1/√(2π)) e^{ikx}

Thus K has an infinite number of eigenvectors |k⟩, one for every real number k. We choose A to normalize them to the Dirac δ:

⟨k|k′⟩ = ∫_{−∞}^{∞} ⟨k|x⟩⟨x|k′⟩ dx = A² ∫_{−∞}^{∞} e^{−i(k−k′)x} dx = A² · 2π δ(k − k′)

which equals δ(k − k′) for A = 1/√(2π).

We now look at the X operator. Its action on a vector, in the |x⟩ basis, is to multiply the function by x. Dotting with a basis bra ⟨x|, we have

⟨x|X|f⟩ = x⟨x|f⟩ = x f(x)    (2.15.10)

We now find its eigenvectors.
The eigenvalue equation is

X|x⟩ = x|x⟩

In the |x⟩ basis,

⟨x′|X|x⟩ = x⟨x′|x⟩ = x δ(x′ − x)

Comparing this to (2.15.10), we see that the eigenvectors are

|x⟩ → δ(x′ − x)

X is a Hermitian operator because x is real:

∫ f*(x) x f(x) dx = ∫ [x f(x)]* f(x) dx

2.16. The Propagator

We now explain and demonstrate the concept of a propagator through the use of an example. Suppose we have a vibrating string that's clamped at both ends, x = 0 and x = L, where L is the length of the string. We would like to know how the string will look in the future. The function Ψ(x) gives the displacement of the string at the point x; it can be thought of as a snapshot of the string at one instant. Ψ(x, t) will then be a snapshot of the string at time t. Ψ(x, t) changes with time according to

∂²Ψ/∂t² = ∂²Ψ/∂x²

Since K = −i ∂/∂x, the right side can be written as −K²|Ψ⟩, so we have

|Ψ̈(t)⟩ = −K²|Ψ(t)⟩    (2.16.1)

Our goal is to find an operator U(t) such that

|Ψ(t)⟩ = U(t)|Ψ(0)⟩    (2.16.2)

Meaning, for any initial state |Ψ(0)⟩, we just apply U(t) and the result will be |Ψ(t)⟩; U(t) is called the propagator. The utility of the propagator comes from the fact that once we find it, we have completely solved the problem.

The first step in constructing the propagator is solving the eigenvalue problem of the operator, here −K². The eigenvalue equation is

−K²|Ψ⟩ = −k²|Ψ⟩

or

K²|Ψ⟩ = k²|Ψ⟩    (2.16.3)

In the |x⟩ basis we have

⟨x|K²|Ψ⟩ = k²⟨x|Ψ⟩

−d²Ψ_k(x)/dx² = k² Ψ_k(x)

The solution to this differential equation gives us the eigenvectors:

Ψ_k(x) = A cos kx + B sin kx    (2.16.4)

A and B are constants about to be determined. We know that Ψ_k(0) = 0; then

A cos k(0) + B sin k(0) = 0  ⟹  A = 0

Also, Ψ_k(L) = 0, so

B sin kL = 0

If we want a solution other than B = 0, we must have kL equal to a multiple of π.
That is,

kL = mπ,  m = 1, 2, 3, ⋯

and (2.16.4) becomes

Ψ_m(x) = B sin(mπx/L)

We choose B so that Ψ_m(x) is normalized:

∫_0^L [B sin(mπx/L)]² dx = 1

B² ∫_0^L sin²(mπx/L) dx = 1

(B²/2) ∫_0^L [1 − cos(2mπx/L)] dx = 1

LB²/2 = 1  ⟹  B = √(2/L)

Then (2.16.4) is

Ψ_m(x) = √(2/L) sin(mπx/L)    (2.16.5)

We now have an eigenbasis made of the eigenvectors

|m⟩ → √(2/L) sin(mπx/L)

in the |x⟩ basis, with eigenvalues (mπ/L)². The eigenbasis is orthonormal, i.e. ⟨m|n⟩ = δ_mn. When m = n, ⟨m|n⟩ = 1, because we normalized the vectors above. If m ≠ n, we have

⟨m|n⟩ = (2/L) ∫_0^L sin(mπx/L) sin(nπx/L) dx

Making the substitution u = πx/L,

⟨m|n⟩ = (2/π) ∫_0^π sin(mu) sin(nu) du

and using the trigonometric identity

sin A sin B = ½[cos(A − B) − cos(A + B)]

we get

⟨m|n⟩ = (1/π)[ ∫_0^π cos((m − n)u) du − ∫_0^π cos((m + n)u) du ]

  = (1/π)[ sin((m − n)u)/(m − n) ]_0^π − (1/π)[ sin((m + n)u)/(m + n) ]_0^π

⟨m|n⟩ = 0

Now that we have solved the eigenvalue problem of −K², we need to construct the propagator in terms of the eigenbasis.
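The orthonormality just shown can also be confirmed numerically (a sketch of mine; the value of L, the grid, and the number of modes checked are arbitrary choices):

```python
import numpy as np

# Gram matrix G[m-1, n-1] = <m|n> for the first five modes
# psi_m(x) = sqrt(2/L) sin(m pi x / L); it should come out as the identity.
L = 2.0
x = np.linspace(0.0, L, 20001)
dx = x[1] - x[0]

def psi(m):
    return np.sqrt(2.0 / L) * np.sin(m * np.pi * x / L)

G = np.array([[np.sum(psi(m) * psi(n)) * dx for n in range(1, 6)]
              for m in range(1, 6)])
print(np.round(G, 6))         # ~ 5x5 identity matrix
```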
We write (2.16.1) in this eigenbasis by dotting both sides with a bra ⟨m|:

d²⟨m|Ψ(t)⟩/dt² = ⟨m|(−K²)|Ψ(t)⟩

But by the eigenvalue equation we had before, the right side becomes

⟨m|(−K²)|Ψ(t)⟩ = −k²⟨m|Ψ(t)⟩ = −(mπ/L)²⟨m|Ψ(t)⟩

so

d²⟨m|Ψ(t)⟩/dt² = −(mπ/L)²⟨m|Ψ(t)⟩

Solving the differential equation (taking the string to start at rest, so that only the cosine term survives), we get

⟨m|Ψ(t)⟩ = ⟨m|Ψ(0)⟩ cos(mπt/L)

We can now write

|Ψ(t)⟩ = Σ_m |m⟩⟨m|Ψ(t)⟩

|Ψ(t)⟩ = Σ_m |m⟩⟨m|Ψ(0)⟩ cos(mπt/L)

Comparing this to the propagator equation we had earlier,

|Ψ(t)⟩ = U(t)|Ψ(0)⟩    (2.16.2)

we see that

U(t) = Σ_m |m⟩⟨m| cos(mπt/L)

Finally, we write the propagator in the |x⟩ basis by dotting both sides with a bra ⟨x| and a ket |x′⟩:

⟨x|U(t)|x′⟩ = Σ_m ⟨x|m⟩⟨m|x′⟩ cos(mπt/L)

⟨x|U(t)|x′⟩ = (2/L) Σ_m sin(mπx/L) sin(mπx′/L) cos(mπt/L)    (2.16.6)

We then go back to (2.16.2) and write it in the |x⟩ basis:

Ψ(x, t) = ⟨x|Ψ(t)⟩ = ⟨x|U(t)|Ψ(0)⟩ = ∫_0^L ⟨x|U(t)|x′⟩⟨x′|Ψ(0)⟩ dx′    (2.16.7)

We substitute the propagator from (2.16.6) into (2.16.7) and perform the integral.

3. The Postulates of Quantum Mechanics

3.1. The Postulates

The postulates of quantum mechanics are now stated and explained. They are postulates in the sense that we can't derive them mathematically. They can be compared to Newton's second law, F = ma. Just as Newton's law can't be proven, only supported by experiment, these postulates are assumptions that yield the correct results when tested by experiment. The postulates here deal with one particle in one dimension. The first three deal with the state of the particle at one instant; the fourth deals with how the state evolves with time.

Postulate I. The state of a particle is represented by a vector |Ψ⟩ in the physical Hilbert space.

We know that some vectors are normalizable to one, and some can be normalized to the δ function.
The space of both of these types of vectors is called the physical Hilbert space. In classical physics, we just need to know two dynamical variables, x(t) and p(t), i.e. position and momentum, to know everything about the particle. The state of the particle is represented by (x, p); in 3-d, by the position and momentum vectors. All information about any dynamical variable ω(x, p) can be determined from it. For example, the kinetic energy and angular momentum are

E = p²/2m  and  L = r × p

In quantum mechanics, the state is represented by what we call a wave function |Ψ⟩. The wave function, in the |x⟩ basis, at one instant in time is a function of x: Ψ(x). This is a vector in infinite dimensions. Thus, while in classical mechanics we only needed two variables to express the state of the particle, in quantum mechanics we need an infinite number of them.

We saw how in classical physics we can easily extract information from the state (x, p). How do we do the same in quantum mechanics? The second and third postulates tell us just that. The second postulate defines the quantum equivalents of the classical variables x and p.

Postulate II. The Hermitian operators X and P represent the classical variables x and p. Other dynamical variables ω(x, p) are represented by Ω(X, P).

The X operator is the same one discussed in section 2.15, with matrix elements

⟨x|X|x′⟩ = x δ(x − x′)

The operator P is

P = ℏK = −iℏD

where K was discussed in the same section and ℏ = h/2π. The matrix elements of P in the |x⟩ basis will then be

⟨x|P|x′⟩ = −iℏ δ′(x − x′)

Postulate III. The measurement of a variable ω(x, p), represented by the operator Ω with eigenvectors |ω⟩, will yield an eigenvalue ω with a probability proportional to |⟨ω|Ψ⟩|². The state will change from |Ψ⟩ to |ω⟩.

As was said earlier, postulates II and III tell us how to extract information from the state |Ψ⟩.
First, we follow postulate II to find the operator Ω that corresponds to the dynamical variable we're measuring. Then we solve the eigenvalue problem of Ω and find all of its eigenvectors |ω_i⟩ and eigenvalues ω_i. Since Ω is Hermitian, its eigenvectors form a basis, so we can expand |Ψ⟩ in this eigenbasis using the completeness relation. If the eigenvalues are discrete,

|Ψ⟩ = Σ_i |ω_i⟩⟨ω_i|Ψ⟩

If they are continuous,

|Ψ⟩ = ∫ |ω⟩⟨ω|Ψ⟩ dω

If |Ψ⟩ is normalized, the probability of measuring a value ω is equal to |⟨ω|Ψ⟩|²; otherwise it is proportional to it.

We note the use of two theorems proved earlier: the eigenvalues of Hermitian operators are real, and the eigenvectors of Hermitian operators form a basis. If we make a measurement, we expect the result to be real, and we can always write the state |Ψ⟩ as a linear combination of eigenvectors because they form a basis. The quantum theory only allows probabilistic predictions, and the only possible values to be measured are the eigenvalues.

We illustrate with an example. Suppose we have an operator Ω and a state |Ψ⟩ written in terms of the eigenvectors of Ω: |1⟩, |2⟩, |3⟩, and suppose that |Ψ⟩ is a vector in 3-d space, with |1⟩, |2⟩, |3⟩ the unit vectors. Say

|Ψ⟩ = √2|1⟩ + √3|2⟩ + |3⟩

If we normalize it, we get

|Ψ⟩ = (√2/√6)|1⟩ + (√3/√6)|2⟩ + (1/√6)|3⟩

The probabilities of measuring the values belonging to |1⟩, |2⟩, |3⟩ are 1/3, 1/2, 1/6 respectively. No other value is possible.

According to the third postulate, if we make a measurement and obtain, say, the second eigenvalue, the state |Ψ⟩ changes and becomes the eigenvector |2⟩. The measurement acts like a projection operator; here it acted like ℙ₂, the projection operator along the basis vector |2⟩:

ℙ₂|Ψ⟩ = |2⟩⟨2|Ψ⟩ = (1/√2)|2⟩

If |Ψ⟩ is already in the eigenstate |2⟩,

ℙ₂|Ψ⟩ = |2⟩⟨2|2⟩ = |2⟩

Subsequent measurements of the same operator Ω will yield the same state |2⟩; repeated applications of the projection operator don't have an added effect.

Thus, we have three physical states. Each state has a vector that corresponds to it. A measurement filters out one of those vectors, and that is the result of the measurement.

3.2. The Continuous Eigenvalue Spectrum

In the previous example the eigenvalues were discrete. If they are continuous, the probability needs a different interpretation: |⟨ω|Ψ⟩|² can't be the probability of measuring ω, because there is an infinite number of values of ω, so each would have a probability of zero. Rather, we interpret it as a probability density. That means

P(ω) dω = |⟨ω|Ψ⟩|² dω

is the probability of getting a value between ω and ω + dω. The two operators X and P both have continuous eigenvalue spectra.

Earlier, we noted that while in classical mechanics we only needed the two variables x and p, in quantum mechanics we need an infinite number of variables, all contained in the wave function. It is now clear why: in classical mechanics a particle is in a state where it has exactly one definite value. In quantum mechanics, it is in a state that can yield any value upon measurement, so we must have probabilities for all those possible results.

3.3. Collapse of the Wave Function and the Dirac Delta Function

According to postulate III, after a measurement of Ω the state |Ψ⟩ will collapse to the eigenvector |ω⟩ belonging to the measured value. Before the measurement, |Ψ⟩ was a linear combination of the eigenvectors of Ω:

|Ψ⟩ = Σ_i |ω_i⟩⟨ω_i|Ψ⟩

or

|Ψ⟩ = ∫ |ω⟩⟨ω|Ψ⟩ dω

After the measurement, it collapses and becomes the measured vector:

|Ψ⟩ = |ω⟩

Suppose we measured the position of the particle. The corresponding operator is X.
Prior to the measurement we had

|Ψ⟩ = ∫ |x⟩⟨x|Ψ⟩ dx

Now we have

|Ψ⟩ = |x′⟩

To find what Ψ(x) looks like, we dot both sides with a basis bra:

⟨x|Ψ⟩ = ⟨x|x′⟩ = δ(x − x′)

The collapsed wave function becomes the delta function. This makes sense: when we make a measurement and find the particle, we expect it to be at the spot where we found it, so all the probability is spiked at that spot.

3.4. Measuring X and P

In the previous example, information was extracted from |Ψ⟩ where the eigenbasis was discrete. In the following examples we deal with a continuous eigenbasis. We have a particle in a state |Ψ⟩ whose wave function is a Gaussian:

Ψ(x) = A e^{−(x−a)²/2Δ²}

Let us first normalize it:

∫_{−∞}^{∞} |Ψ(x)|² dx = ∫_{−∞}^{∞} A² e^{−(x−a)²/Δ²} dx = 1

We set I equal to this integral. Then I² will be

I² = A⁴ ∫_{−∞}^{∞} e^{−(x−a)²/Δ²} dx ∫_{−∞}^{∞} e^{−(y−a)²/Δ²} dy

We make the substitutions u = (x − a)/Δ and v = (y − a)/Δ:

I² = A⁴Δ² ∫_{−∞}^{∞} e^{−u²} du ∫_{−∞}^{∞} e^{−v²} dv = A⁴Δ² ∫_{−∞}^{∞} ∫_{−∞}^{∞} e^{−(u²+v²)} du dv

Then we switch to polar coordinates, with du dv = r dr dθ and r² = u² + v²:

I² = A⁴Δ² ∫_0^∞ r e^{−r²} dr ∫_0^{2π} dθ

After another substitution, z = r², the integral becomes

I² = A⁴Δ² π ∫_0^∞ e^{−z} dz = A⁴Δ²π

and

I = A²Δ√π = 1

Thus A = (πΔ²)^{−1/4}, and the normalized state is

Ψ(x) = (πΔ²)^{−1/4} e^{−(x−a)²/2Δ²}

Let's say we want to know where the particle is, i.e. what its position is. The operator corresponding to position is X. We follow the algorithm we developed. First, we expand |Ψ⟩ in the eigenbasis of X:

|Ψ⟩ = ∫_{−∞}^{∞} |x⟩⟨x|Ψ⟩ dx = ∫_{−∞}^{∞} |x⟩Ψ(x) dx

The probability of finding the particle between x and x + dx is

|Ψ(x)|² dx = (πΔ²)^{−1/2} e^{−(x−a)²/Δ²} dx

The particle will probably be found around a; that is where the probability density is largest.
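A short numerical check of this normalization (my own sketch; the values of a and Δ are arbitrary):

```python
import numpy as np

# Check that A = (pi Delta^2)^(-1/4) normalizes the Gaussian state, and that
# the probability density peaks at x = a.
a, Delta = 1.0, 0.5
x = np.linspace(a - 10.0, a + 10.0, 100001)
dx = x[1] - x[0]

psi = (np.pi * Delta ** 2) ** (-0.25) * np.exp(-((x - a) ** 2) / (2 * Delta ** 2))
prob = np.abs(psi) ** 2

print(np.sum(prob) * dx)      # ~ 1.0: the state is normalized
print(x[np.argmax(prob)])     # ~ 1.0: the density peaks at x = a
```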
Next, we look at the momentum of the particle. The operator corresponding to momentum is P. In this case the process is longer. We found the eigenvectors in the last section:

|p⟩ → ψ_p(x) = (1/√(2πℏ)) e^{ipx/ℏ}

and |⟨p|Ψ⟩|² will give us the probability density.

⟨p|Ψ⟩ = ∫_{−∞}^{∞} ⟨p|x⟩⟨x|Ψ⟩ dx = ∫_{−∞}^{∞} ψ_p*(x) Ψ(x) dx

⟨p|Ψ⟩ = ∫_{−∞}^{∞} (1/√(2πℏ)) e^{−ipx/ℏ} (πΔ²)^{−1/4} e^{−(x−a)²/2Δ²} dx

To integrate, we expand the square, collect the terms in x, and make the following substitutions:

c = (1/√(2πℏ)) (πΔ²)^{−1/4},  α = 1/2Δ²,  β = (aℏ − ipΔ²)/ℏΔ²,  γ = −a²/2Δ²

so that

⟨p|Ψ⟩ = c e^γ ∫_{−∞}^{∞} e^{−αx² + βx} dx

Completing the square, we get

⟨p|Ψ⟩ = c e^γ e^{β²/4α} ∫_{−∞}^{∞} e^{−α(x − β/2α)²} dx

Using the same method we used when we normalized the state, we get

⟨p|Ψ⟩ = c e^γ e^{β²/4α} √(π/α)

Substituting back the original values,

⟨p|Ψ⟩ = (Δ²/πℏ²)^{1/4} e^{−ipa/ℏ} e^{−p²Δ²/2ℏ²}

In both of these examples, position and momentum, the eigenvectors represented definite states. When we measure position, |Ψ⟩ changes from being a sum over the states |x⟩ to taking the value of one of the |x⟩. Before the measurement, the particle had no definite position; it was in a state where it could yield any |x⟩, i.e. any position, with a certain probability. After the measurement, it is forced into a definite state of position, |x⟩. It is the same with momentum: |p⟩ represents a state of definite momentum. For energy, we first have to get the states of definite energy by solving this equation:

−(ℏ²/2m) d²Ψ_E/dx² + V(x)Ψ_E(x) = EΨ_E(x)

This is called the time-independent Schrodinger equation. The Ψ_E represent states of definite energy. For position and momentum the definite states are always the same; for energy they depend on the potential the particle is experiencing.

3.5. Schrodinger's Equation

We now move on to the fourth postulate.
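Before turning to the fourth postulate, the closed form just obtained for ⟨p|Ψ⟩ can be verified by doing the x integral numerically (my own sketch; ℏ is set to 1 and the parameter values are arbitrary):

```python
import numpy as np

# Compare the numerical integral <p|Psi> = integral psi_p*(x) Psi(x) dx with the
# closed form (Delta^2/(pi hbar^2))^(1/4) e^{-i p a/hbar} e^{-p^2 Delta^2/(2 hbar^2)}.
hbar, a, Delta = 1.0, 1.0, 0.5
x = np.linspace(a - 12.0, a + 12.0, 60001)
dx = x[1] - x[0]
psi = (np.pi * Delta ** 2) ** (-0.25) * np.exp(-((x - a) ** 2) / (2 * Delta ** 2))

for p in (0.0, 1.0, 2.5):
    num = np.sum(np.exp(-1j * p * x / hbar) * psi) * dx / np.sqrt(2 * np.pi * hbar)
    ana = ((Delta ** 2 / (np.pi * hbar ** 2)) ** 0.25
           * np.exp(-1j * p * a / hbar)
           * np.exp(-p ** 2 * Delta ** 2 / (2 * hbar ** 2)))
    print(p, abs(num - ana))  # ~ 0 for each p
```

The agreement for several values of p confirms both the phase factor e^{−ipa/ℏ} and the Gaussian fall-off in p.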
While the first three postulates dealt with the state |Ψ⟩ at one instant, the fourth deals with how the state |Ψ(t)⟩ evolves with time.

Postulate IV. The wave function |Ψ(t)⟩ follows the Schrodinger equation

iℏ (d/dt)|Ψ(t)⟩ = H|Ψ(t)⟩

Schrodinger's equation is the quantum mechanical counterpart of Newton's equation F = m d²x/dt²; they both tell you how a particle evolves with time. H is the Hamiltonian; it gives the total energy of the particle,

H = T + V

where T is the kinetic energy and V is the potential energy. For every particle the potential may be different, so the Hamiltonian may be different. The Schrodinger equation tells us how the state will change: |Ψ⟩ starts out looking perhaps like A and later like B. If we solve the Schrodinger equation, we will know how it will look.

We develop a solution using two different but equivalent approaches. We first rewrite H:

H = T + V = P²/2m + V = −(ℏ²/2m) ∂²/∂x² + V

So the Schrodinger equation can be written as

iℏ ∂Ψ/∂t = −(ℏ²/2m) ∂²Ψ/∂x² + VΨ(x)

Assuming V doesn't depend on time, we look for solutions of the form

Ψ(x, t) = Ψ(x)f(t)

Then

∂Ψ(x, t)/∂t = Ψ(x) df/dt  and  ∂²Ψ(x, t)/∂x² = f(t) d²Ψ/dx²

Putting these into Schrodinger's equation,

iℏ Ψ(x) df/dt = −(ℏ²/2m) f(t) d²Ψ/dx² + VΨ(x)f(t)

Dividing both sides by Ψf, we get

iℏ (1/f) df/dt = −(ℏ²/2m) (1/Ψ) d²Ψ/dx² + V

The left side depends only on t, and the right side only on x. The equation can hold only if both sides equal a constant; otherwise, if we vary either x or t while keeping the other side fixed, the equation would fail. Thus

iℏ (1/f) df/dt = E    (3.5.1)

and

−(ℏ²/2m) (1/Ψ) d²Ψ/dx² + V = E

or

−(ℏ²/2m) d²Ψ/dx² + V(x)Ψ(x) = EΨ(x)    (3.5.2)

(3.5.1) is easily solved:

(1/f) df/dt = −iE/ℏ  ⟹  f(t) = e^{−iEt/ℏ}

We recognize (3.5.2) as the time-independent Schrodinger equation. This tells us that Ψ(x) is a state of definite energy.
We conclude that the Schrodinger equation does admit solutions of the form Ψ(x, t) = Ψ(x)f(t), provided that f is the exponential written above and that Ψ(x) is a state of definite energy. If these hold, then Ψ(x, t) evolves according to

Ψ(x, t) = Ψ(x) e^{−iEt/ℏ}

If not, Ψ(x) will be a sum over definite states of energy and will evolve in some more complex manner:

Ψ(x, t) = Σ_E A_E Ψ_E(x) e^{−iEt/ℏ}

where the A_E are constants.

We now follow a different approach, starting again with Schrodinger's equation

iℏ (d/dt)|Ψ⟩ = H|Ψ⟩

We solve the problem using the propagator: we look for a U(t) that satisfies

|Ψ(t)⟩ = U(t)|Ψ(0)⟩    (3.5.3)

We rewrite the Schrodinger equation as

[iℏ (d/dt) − H] |Ψ⟩ = 0    (3.5.4)

As in the string problem (section 2.16), we need to write the propagator in terms of the eigenbasis of H:

H|E⟩ = E|E⟩

This is the time-independent Schrodinger equation. We expand |Ψ⟩ in the eigenbasis |E⟩:

|Ψ(t)⟩ = Σ |E⟩⟨E|Ψ(t)⟩ = Σ A_E(t)|E⟩    (3.5.5)

where we made the substitution A_E(t) = ⟨E|Ψ(t)⟩. We operate on both sides with [iℏ (d/dt) − H]:

[iℏ (d/dt) − H] |Ψ(t)⟩ = Σ [iℏ Ȧ_E(t)|E⟩ − E A_E(t)|E⟩] = Σ [iℏ Ȧ_E(t) − E A_E(t)] |E⟩

By (3.5.4) the left side is zero, so we have

iℏ Ȧ_E = E A_E

This is easily solved:

A_E(t) = A_E(0) e^{−iEt/ℏ}

Substituting back A_E(t) = ⟨E|Ψ(t)⟩ and plugging into (3.5.5),

⟨E|Ψ(t)⟩ = ⟨E|Ψ(0)⟩ e^{−iEt/ℏ}

|Ψ(t)⟩ = Σ |E⟩⟨E|Ψ(0)⟩ e^{−iEt/ℏ}

By comparing this to (3.5.3), we see that

U(t) = Σ_E |E⟩⟨E| e^{−iEt/ℏ}

4. Particle in a Box

We now consider a situation that features one of the amazing results of quantum physics. We have a particle between 0 and a, in a potential V(x) such that

V(x) = 0 for 0 ≤ x ≤ a,  and V(x) = ∞ elsewhere

The particle can't escape, because it would need an infinite amount of energy to do so. We look for states of definite energy, so we use the time-independent Schrodinger equation.
Inside the box, where V = 0,

−(ℏ²/2m) d²Ψ/dx² = EΨ(x)

d²Ψ/dx² = −k²Ψ,  k ≡ √(2mE)/ℏ    (4.1)

The solution to this equation is

Ψ(x) = A sin kx + B cos kx

Where the potential is infinite, Ψ(x) = 0. The wave function will then be continuous only if

Ψ(0) = Ψ(a) = 0

Ψ(0) = A sin(0) + B cos(0) = B = 0

Ψ(a) = A sin ka = 0

If A is to be nonzero, we must have

ka = π, 2π, 3π, ⋯

or

k = nπ/a,  n = 1, 2, 3, ⋯

We substitute back for k from (4.1) and solve for E:

E = ℏ²k²/2m

Replacing k with the values we got,

E_n = n²π²ℏ²/2ma²,  n = 1, 2, 3, ⋯

Thus, the definite states of energy can't have just any energy: energy is quantized to certain values.

Conclusion

In the days of classical physics everything was predictable. Given all the forces and momenta of all the particles in a system, the entire future of the system can, in principle, be determined. In quantum physics, however, nothing can be determined definitely. Consider a simple idealized example. We have a particle in a closed box (a real box, not the one from Section 4) that can have only one of two positions, A or B. In classical physics we could say the particle is either definitely in A or definitely in B. If we then open the box and see it in A, we know it was in A even prior to the measurement. It is not so in quantum physics; when the box is closed, the particle is not in A or in B. It is in a state in which, upon measurement, it can be found either in A or in B. There is a common misconception that when the box is closed the particle is partly in A and partly in B. It is not. It is in a bizarre quantum state in which it doesn't make sense to ask "where is the particle?" When we open the box and find it in A, it does not mean it was always in A. Rather, it means that upon measurement the particle was forced into position state A. Before the measurement it had some probability of being found in A or in B, say 50/50. When the measurement occurs, the particle suddenly finds itself in, say, A.
Before the measurement, the particle itself doesn't know where it will end up, so we can't make a definite prediction as in pre-quantum times. We can only say it has a 50% chance of being in A.

This may sound very weird, but that is only because we experience a Newtonian world, not a quantum world. Mass plays a role here: the more massive a particle is, the smaller the effect of its wave function. The objects we deal with every day are so enormously massive, compared to electrons, that we see no quantum effects. If there were quantum people, i.e. people with the mass of electrons, they might be just as amazed by our Newtonian world; they would be astonished at how we can make such accurate predictions using Newton's laws.

Richard Feynman once said, "No one understands quantum mechanics." Perhaps that is what makes the subject so fascinating.

Bibliography

Bence, S.J., M.P. Hobson, and K.F. Riley. Mathematical Methods for Physics and Engineering. Cambridge University Press, 2006.

Borowitz, Sidney. Fundamentals of Quantum Mechanics: Particles, Waves, and Wave Mechanics. Benjamin, 1967.

Feynman, Richard, R. Leighton, and M. Sands. The Feynman Lectures on Physics, Volume 3. Addison-Wesley, 1964.

Griffiths, David J. Introduction to Quantum Mechanics. New Jersey: Prentice Hall, 1995.

Shankar, Ramamurti. Principles of Quantum Mechanics. New York: Plenum Press, 1980.