P2 Operators#
What you need to know
For every experimental observable there is a corresponding operator in quantum mechanics.
Operators must be linear, because they are derived from the Schrödinger equation, which is itself linear.
Operators must be Hermitian, because only Hermitian operators produce real eigenvalues.
Operators must produce real eigenvalues, because eigenvalues are the only possible values that can be measured in experiments.
Commutation relations between operators show whether two experimental observables can be measured simultaneously, e.g., whether one can simultaneously and precisely determine the position and momentum of an electron.
Commuting operators share eigenfunctions; non-commuting operators have different eigenfunctions.
Operators#
In quantum mechanics, operators represent physical observables and are denoted by a hat symbol (\(\hat{}\)), which indicates a mathematical operation on functions.
For example, the momentum operator differentiates a function with respect to \(x\) and then multiplies the result by \(-i\hbar\).
\[ \hat{p}_x = -i\hbar\frac{d}{dx} \]When this operates on a function, it returns \(-i\hbar\) times the derivative of that function.
The position operator simply multiplies the function by \(x\).
\[ \hat{x} = x \]In quantum mechanics we use a simple recipe to find operators: take expressions from classical mechanics and replace positions and momenta by their respective operator expressions.
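As a quick illustration, here is a minimal SymPy sketch (the symbols `x`, `k`, and `hbar` are defined only for this example) that applies \(\hat{p}_x\) to a plane wave \(e^{ikx}\) and recovers the same function multiplied by \(\hbar k\):

```python
# Minimal SymPy sketch: apply p_x = -i*hbar*d/dx to a plane wave exp(i*k*x)
import sympy as sp

x, k, hbar = sp.symbols('x k hbar', positive=True)
psi = sp.exp(sp.I * k * x)               # test function: a plane wave

p_psi = -sp.I * hbar * sp.diff(psi, x)   # action of the momentum operator
print(sp.simplify(p_psi / psi))          # -> hbar*k, i.e. psi is an eigenfunction of p_x
```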
Linearity of Operators#
Operators in quantum mechanics are linear, meaning they satisfy:
\[ \hat{A}(c\psi) = c\hat{A}\psi, \qquad \hat{A}(\psi_1 + \psi_2) = \hat{A}\psi_1 + \hat{A}\psi_2 \]
Where \(c\) is a constant, and \(\psi_1\), \(\psi_2\), and \(\psi\) are wavefunctions.
\(\hat{x}\), \(\hat{p}_x\), and \(\hat{H}\) all satisfy this property.
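A short SymPy sketch can confirm this for \(\hat{p}_x\) acting on an arbitrary combination \(c\psi_1 + \psi_2\) (the names `psi1` and `psi2` stand for arbitrary well-behaved functions and are used only in this example):

```python
# SymPy sketch: check linearity of p_x = -i*hbar*d/dx on c*psi1 + psi2
import sympy as sp

x, c, hbar = sp.symbols('x c hbar')
psi1, psi2 = sp.Function('psi1')(x), sp.Function('psi2')(x)

p = lambda f: -sp.I * hbar * sp.diff(f, x)     # momentum operator as a small helper
lhs = p(c * psi1 + psi2)                       # operator applied to the combination
rhs = c * p(psi1) + p(psi2)                    # combination of the operator results
print(sp.simplify(sp.expand(lhs - rhs)))       # -> 0, confirming linearity
```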
Commutation of operators#
Commutator of \(\hat{A}\) and \(\hat{B}\)
From linear algebra we know that the order of matrix multiplication matters and that, in general, \(AB\neq BA\) for two matrices \(A\) and \(B\).
Thus we also generally expect \(\hat{A}\hat{B} \neq \hat{B}\hat{A}\) for any two operators.
We can quantify the relationship between two operators by computing the commutator \(\left[\hat{A}, \hat{B}\right] = \hat{A}\hat{B} - \hat{B}\hat{A}\).
If the commutator is zero, the order of multiplication of the operators (or matrices) can be interchanged.
If the commutator is non-zero, the order matters and cannot be changed!
Example
Prove that operators \(\hat{A} = x\) and \(\hat{B} = d/dx\) do not commute (i.e., \(\left[\hat{A}, \hat{B}\right] \ne 0\)).
Solution
Let \(f\) be an arbitrary well-behaved function. We need to calculate both \(\hat{A}\hat{B}f\) and \(\hat{B}\hat{A}f\):
\[ \hat{A}\hat{B}f = x\frac{df}{dx}, \qquad \hat{B}\hat{A}f = \frac{d}{dx}(xf) = f + x\frac{df}{dx} \]
Subtracting the two results gives
\[ \left[\hat{A}, \hat{B}\right]f = \hat{A}\hat{B}f - \hat{B}\hat{A}f = -f \neq 0, \]
so the two operators do not commute.
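The same computation can be reproduced symbolically; the SymPy sketch below (with `f` standing for an arbitrary function) acts with \(\hat{A}\hat{B} - \hat{B}\hat{A}\) on \(f\):

```python
# SymPy sketch: act with (AB - BA) on an arbitrary function f
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)

AB_f = x * sp.diff(f, x)                 # A B f = x * (df/dx)
BA_f = sp.diff(x * f, x)                 # B A f = d/dx (x f) = f + x * (df/dx)
print(sp.simplify(AB_f - BA_f))          # -> -f(x), so [x, d/dx] != 0
```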
Simple rules for commutators#
This shows that an operator always commutes with itself and with its powers. Now let's apply this: would the kinetic energy operator commute with the momentum operator?
This shows that order matters inside a commutator: swapping the two operators changes the sign, \(\left[\hat{A}, \hat{B}\right] = -\left[\hat{B}, \hat{A}\right]\).
Commutators and experimental measurements#
We have seen previously that operators may not always commute (i.e., \([A, B] \ne 0\)). An example of such an operator pair is position \(\hat{x}\) and momentum \(\hat{p}_x\):
\[ \left[\hat{x}, \hat{p}_x\right] = i\hbar \]
In contrast, the kinetic energy operator and the momentum operator commute:
\[ \left[\hat{T}, \hat{p}_x\right] = \left[\frac{\hat{p}_x^2}{2m}, \hat{p}_x\right] = 0 \]
We had the uncertainty principle for the position and momentum operators:
\[ \Delta x \, \Delta p_x \geq \frac{\hbar}{2} \]
In general, it turns out that for operators \(\hat{A}\) and \(\hat{B}\) that do not commute, the uncertainty principle applies in the following form:
\[ \Delta A \, \Delta B \geq \frac{1}{2}\left|\left\langle \left[\hat{A}, \hat{B}\right] \right\rangle\right| \]
Let’s check this relation on the example of momentum and position operators
Denote \(\hat{A} = \hat{x}\) and \(\hat{B} = \hat{p}_x\).
We find that we cannot measure precise values of position and momentum simultaneously.
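As a numerical illustration, the sketch below evaluates \(\Delta x\,\Delta p_x\) for a Gaussian wavepacket on a grid (assuming \(\hbar = 1\) and a grid wide enough that the wavefunction vanishes at the edges); the product comes out close to \(\hbar/2\), the minimum allowed by the uncertainty principle:

```python
# NumPy sketch: check Delta_x * Delta_p >= hbar/2 for a Gaussian wavepacket (hbar = 1)
import numpy as np

hbar = 1.0
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]

psi = np.exp(-x**2 / 2)                          # Gaussian wavefunction (real)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)      # normalize on the grid

def expval(op_psi):
    """<psi| op |psi> evaluated by summation over the grid."""
    return np.real(np.sum(np.conj(psi) * op_psi) * dx)

dpsi = np.gradient(psi, dx)                      # d(psi)/dx
d2psi = np.gradient(dpsi, dx)                    # d^2(psi)/dx^2

delta_x = np.sqrt(expval(x**2 * psi) - expval(x * psi)**2)
delta_p = np.sqrt(expval(-hbar**2 * d2psi) - expval(-1j * hbar * dpsi)**2)

print(delta_x * delta_p, ">= hbar/2 =", hbar / 2)   # ~0.5: the Gaussian saturates the bound
```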
Commuting operators and simultaneous measurements#
Commuting operators share eigenfunctions
Proof that shared eigenfunctions imply commutation
We will show that if all eigenfunctions of operators \(\hat{A}\) and \(\hat{B}\) are identical, \(\hat{A}\) and \(\hat{B}\) commute with each other.
Denote the eigenvalues of \(\hat{A}\) and \(\hat{B}\) by \(a_i\) and \(b_i\) and the common eigenfunctions by \(\psi_i\). For both operators we then have:
\[ \hat{A}\psi_i = a_i\psi_i, \qquad \hat{B}\psi_i = b_i\psi_i \]
By using these two equations and expressing the general wavefunction \(\psi\) as a linear combination of the eigenfunctions, \(\psi = \sum_i c_i\psi_i\), the commutator can be evaluated as:
\[ \left[\hat{A},\hat{B}\right]\psi = \left(\hat{A}\hat{B} - \hat{B}\hat{A}\right)\sum_i c_i\psi_i = \sum_i c_i\left(a_i b_i - b_i a_i\right)\psi_i = 0 \]
Note that the commutation relation must apply to all well-behaved functions and not just for some given subset of functions!
If operators commute, it means we can simultaneously measure the corresponding observables in a single experiment.
For instance, the kinetic energy and momentum operators commute, so we can measure kinetic energy and momentum simultaneously. We cannot do the same for momentum and position.
If we measure observables \(A\) and \(B\) described by a common eigenfunction \(\phi_k\), we find the observables to be the corresponding eigenvalues \(a_k\) and \(b_k\).
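A small NumPy sketch illustrates this with two hand-picked commuting Hermitian matrices (chosen only for this example): their commutator vanishes, and the eigenvectors of one also diagonalize the other:

```python
# NumPy sketch: commuting Hermitian matrices are diagonalized by the same eigenvectors
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # Hermitian (real symmetric)
B = np.array([[5.0, 3.0],
              [3.0, 5.0]])            # Hermitian and commutes with A

print("Commutator zero:", np.allclose(A @ B - B @ A, 0))

eigvals_A, vecs = np.linalg.eigh(A)   # eigenvectors of A (columns of vecs)
print("A eigenvalues:", eigvals_A)
# The same eigenvectors also diagonalize B:
print("B in A's eigenbasis:\n", np.round(vecs.T @ B @ vecs, 10))
```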
Expectation expression#
The expectation value of an observable \(\hat{A}\), which gives the average outcome of measurements, is computed as:
\[ \langle A \rangle = \int \psi^* \hat{A} \psi \, d\tau \]Special Case: If the wavefunction \(\psi\) is an eigenfunction of the operator \(\hat{A}\), with eigenvalue \(a\):
\[ \hat{A}\psi = a\psi \]Then the expectation value simplifies to:
\[ \langle A \rangle = \int \psi^* a \psi \, d\tau = a \int \psi^*\psi \, d\tau = a \]Since \(\int \psi^*\psi \, d\tau = 1\) (normalization), the expectation value is simply the eigenvalue \(a\).
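As a numerical sketch of both cases, consider the particle-in-a-box ground state (units chosen so that \(\hbar = m = 1\) and \(L = 1\)): \(\psi_1\) is not an eigenfunction of \(\hat{x}\), so \(\langle x\rangle\) is just an average, but it is an eigenfunction of \(\hat{H}\), so \(\langle H\rangle\) equals the eigenvalue \(E_1\):

```python
# NumPy sketch: expectation values for the particle-in-a-box ground state
# (units with hbar = 1, m = 1, box length L = 1)
import numpy as np

L = 1.0
x = np.linspace(0, L, 4001)
dx = x[1] - x[0]
psi = np.sqrt(2 / L) * np.sin(np.pi * x / L)     # normalized ground-state wavefunction

# <x>: psi is NOT an eigenfunction of x, so we get an average value (L/2)
x_avg = np.sum(psi * x * psi) * dx

# <H>: psi IS an eigenfunction of H = -(1/2) d^2/dx^2, so <H> equals E1 = pi^2/2
d2psi = np.gradient(np.gradient(psi, dx), dx)
H_avg = np.sum(psi * (-0.5 * d2psi)) * dx

print(x_avg, "expected:", L / 2)
print(H_avg, "expected E1 = pi^2/2 =", np.pi**2 / 2)
```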
Dirac Notation#
To express quantum states and operators more compactly, we use Dirac (bra–ket) notation.
A state is written as a ket, \(|\psi\rangle\), and its complex conjugate (dual) is the bra, \(\langle\psi|\).
The inner product between two states corresponds to the integral over space:
\[ \langle\psi_1|\psi_2\rangle = \int \psi_1^* \psi_2 \, d\tau \]
In this notation, the expectation value of an operator \(\hat{A}\) becomes simply:
\[ \langle A \rangle = \langle\psi|\hat{A}|\psi\rangle \]
This form is elegant and general—it applies to all quantum systems, independent of the particular representation (position, momentum, etc.).
Hermitian Property of Operators#
In quantum mechanics, operators often act on complex-valued functions, so we need a notion of “complex conjugate” that applies not just to numbers, but to operators. This leads to the concept of the adjoint operator.
The Adjoint (Conjugate Transpose)#
For complex numbers we take the complex conjugate: \((3 + 2i)^* = 3 - 2i\).
For matrices or linear operators, the corresponding operation is the adjoint, denoted by the dagger symbol \((\dagger)\).
Definition of Adjoint
For an operator, the adjoint is defined through the inner product:
\[ \langle\psi_j|\hat{A}\psi_k\rangle = \langle\hat{A}^\dagger\psi_j|\psi_k\rangle \]
This means that moving an operator from one side of an inner product to the other requires taking its adjoint (and thus a complex conjugate).
For a matrix, the adjoint is its conjugate transpose:
That is, swap rows and columns, then take the complex conjugate of every entry:
In matrix-element form, taking the adjoint swaps the indices and conjugates each entry: \(\left(A^\dagger\right)_{jk} = A_{kj}^{*}\).
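In NumPy the adjoint of a matrix is obtained with `.conj().T`; the sketch below uses an arbitrary complex matrix chosen just to show the operation:

```python
# NumPy sketch: the adjoint (conjugate transpose) of a complex matrix
import numpy as np

A = np.array([[1 + 2j, 3],
              [4j,     5 - 1j]])

A_dagger = A.conj().T            # swap rows/columns, then complex-conjugate each entry
print(A_dagger)
# Element-wise check: (A_dagger)[j, k] == conj(A[k, j])
print(A_dagger[0, 1] == np.conj(A[1, 0]))   # True
```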
Hermitian (Self-Adjoint) Operators#
An operator is Hermitian if it equals its own adjoint: \(\hat{A}^\dagger = \hat{A}\).
This means the operator behaves the same way when acting on either side of the inner product.
Hermitian Matrix
Hermitian Operator
In integral form:
\[ \int \psi_j^*\,\hat{A}\psi_k \, d\tau = \int \left(\hat{A}\psi_j\right)^{*}\psi_k \, d\tau \]
Why Hermitian Operators Matter#
Eigenvalues are real: Observables in quantum mechanics (energy, momentum, position, etc.) are represented by Hermitian operators, ensuring all measurement outcomes are real numbers.
Proof of real eigenvalues
Let \(\psi\) be an eigenfunction of \(\hat{A}\) with eigenvalue \(a\). Choose \(\psi_j = \psi_k = \psi\). Then we can write the result of the left-hand and right-hand sides of the Hermitian condition:
\[ \int \psi^*\hat{A}\psi \, d\tau = a\int \psi^*\psi \, d\tau, \qquad \int \left(\hat{A}\psi\right)^{*}\psi \, d\tau = a^*\int \psi^*\psi \, d\tau \]
Since the operator is Hermitian, the two sides must be equal, so \(a = a^*\): the eigenvalue is real.
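A quick NumPy check with an arbitrary \(2\times 2\) Hermitian matrix (chosen only for illustration) shows that its eigenvalues indeed come out real:

```python
# NumPy sketch: eigenvalues of a Hermitian matrix are real
import numpy as np

H = np.array([[2.0,     1 - 1j],
              [1 + 1j,  3.0   ]])           # H equals its conjugate transpose
print("Hermitian:", np.allclose(H, H.conj().T))

eigvals = np.linalg.eigvals(H)              # general eigenvalue routine (complex output)
print(eigvals)                              # imaginary parts are zero (up to round-off)
print("Imaginary parts ~ 0:", np.allclose(eigvals.imag, 0))
```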
Eigenfunctions are orthogonal:
Proof of orthogonal eigenfunctions
The Hermitian property can also be used to show that eigenfunctions \(\psi_j\) and \(\psi_k\), corresponding to different eigenvalues \(a_j\) and \(a_k\) (with \(a_j \neq a_k\), i.e., “non-degenerate”), are orthogonal to each other:
\[ \int \psi_j^*\hat{A}\psi_k \, d\tau = a_k\int \psi_j^*\psi_k \, d\tau, \qquad \int \left(\hat{A}\psi_j\right)^{*}\psi_k \, d\tau = a_j\int \psi_j^*\psi_k \, d\tau \]
where we used the fact, proven above, that \(a_j\) is real. Since the operator is Hermitian, we require that LHS = RHS. This results in:
\[ \left(a_k - a_j\right)\int \psi_j^*\psi_k \, d\tau = 0 \]
If \(a_j \neq a_k\), then we have:
\[ \int \psi_j^*\psi_k \, d\tau = 0 \]
This shows that \(\psi_j\) and \(\psi_k\) are orthogonal.
Note: If \(a_j = a_k\), meaning the eigenvalues are degenerate, this result does not hold.
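The same example matrix as above can be used to check orthogonality numerically (`np.linalg.eigh` returns the eigenvalues and eigenvectors of a Hermitian matrix):

```python
# NumPy sketch: eigenvectors of a Hermitian matrix for distinct eigenvalues are orthogonal
import numpy as np

H = np.array([[2.0,     1 - 1j],
              [1 + 1j,  3.0   ]])           # same illustrative Hermitian matrix as above

eigvals, eigvecs = np.linalg.eigh(H)        # columns of eigvecs are the eigenvectors
v1, v2 = eigvecs[:, 0], eigvecs[:, 1]

overlap = np.vdot(v1, v2)                   # inner product <v1|v2> (conjugates v1)
print(eigvals)                              # distinct eigenvalues (1 and 4 here)
print("Overlap:", np.round(overlap, 12))    # ~ 0, i.e. orthogonal
```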
Example of Hermitian Matrix
Which of these matrices is Hermitian?
\(\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}\), \(\begin{pmatrix} i & 0 \\ 0 & 1 \end{pmatrix}\), \(\begin{pmatrix} -1 & -3i \\ 3i & 8 \end{pmatrix}\), \(\begin{pmatrix} 1 & 2i \\ 2i & 3 \end{pmatrix}\)
Solution
For the first matrix we have \(a_{12}=2\neq a^{*}_{21}=3\), non-Hermitian
For the second matrix \(a_{11}=i\neq a^{*}_{11}=-i\) (a diagonal element must be real), non-Hermitian
For the third matrix \(a_{12}=-3i =a^{*}_{21} = (3i)^{*}=-3i\), Hermitian
For the fourth matrix \(a_{12}=2i \neq a^{*}_{21} = (2i)^{*} = -2i\), non-Hermitian
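The same test can be done programmatically by comparing each matrix with its conjugate transpose (a short NumPy sketch):

```python
# NumPy sketch: test each matrix against the Hermitian condition A == A^dagger
import numpy as np

matrices = {
    "first":  np.array([[1, 2], [3, 4]], dtype=complex),
    "second": np.array([[1j, 0], [0, 1]]),
    "third":  np.array([[-1, -3j], [3j, 8]]),
    "fourth": np.array([[1, 2j], [2j, 3]]),
}

for name, A in matrices.items():
    print(name, "Hermitian:", np.allclose(A, A.conj().T))
# Only the third matrix passes the test.
```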
Seeing that differential operators such as the momentum operator are Hermitian requires a little more work.
A trick that helps is integration by parts, where the boundary term is zero because the wavefunction decays to zero at the boundaries (postulate 1, keeping the probability finite)!
Example of Hermitian Operator
Prove that the momentum operator (in one dimension) is Hermitian.
Solution
\[
\int\limits_{-\infty}^{\infty}\psi_j^*(x)\left(-i\hbar\frac{d\psi_k(x)}{dx}\right)dx
= -i\hbar\int\limits_{-\infty}^{\infty}\psi_j^*(x)\frac{d\psi_k(x)}{dx}dx
\overset{\text{integration by parts}}{=} \int\limits_{-\infty}^{\infty}\psi_k(x)\left(i\hbar\frac{d\psi_j^*(x)}{dx}\right)dx
= \int\limits_{-\infty}^{\infty}\psi_k(x)\left(-i\hbar\frac{d\psi_j(x)}{dx}\right)^{*}dx
\;\Rightarrow\; \hat{p}_x \text{ is Hermitian}.
\]
Geometric Intuition
Hermitian operators are the analog of symmetric matrices in real vector spaces. They represent linear transformations that do not rotate vectors into complex directions—only stretch or compress them along real axes.
Problems#
Problem-1: Is \(xd/dx\) operator Hermitian?#
Check whether the operator \(\hat{A} = x\,d/dx\) is Hermitian.
You can test whether the following condition holds:
\[ \int \psi_1^*(x)\left(x\frac{d\psi_2(x)}{dx}\right)dx = \int \left(x\frac{d\psi_1(x)}{dx}\right)^{*}\psi_2(x)\,dx \]
Note how the complex conjugation applies to the whole expression with the operator inside!
But since our operator contains no imaginary numbers, the complex conjugation applies only to the wavefunction.
Solution
Step 1: Left-hand side
The left-hand side is:
Step 2: Integration by parts
We apply integration by parts to simplify this expression. Using the product rule for differentiation, we get:
The boundary term \(\left[ x \psi_1^*(x) \psi_2(x) \right]_{a}^{b}\) can be discarded if the wavefunctions vanish at the boundaries (such as in the case of bound states in a box).
Now, for the remaining integral, we apply the derivative to the product \(x \psi_1^*(x)\):
Thus, the left-hand side becomes:
Step 3: Right-hand side
The right-hand side is:
Step 4: Comparison
Now, we compare the two expressions. The left-hand side contains the extra term:
which is not present in the right-hand side. This means:
Since the two sides are not equal, we conclude that the operator \(x \frac{d}{dx}\) is non-Hermitian.
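A symbolic spot-check supports this conclusion: with two illustrative test functions that vanish at \(\pm\infty\) (Gaussians chosen just for this example), the two integrals in the Hermitian condition come out different:

```python
# SymPy sketch: test the Hermitian condition for A = x d/dx with two concrete
# (illustrative) test functions that vanish at +/- infinity.
import sympy as sp

x = sp.symbols('x', real=True)
psi1 = sp.exp(-x**2)               # real test functions, so conjugation is trivial
psi2 = sp.exp(-2 * x**2)

A = lambda f: x * sp.diff(f, x)    # the operator x d/dx

lhs = sp.integrate(psi1 * A(psi2), (x, -sp.oo, sp.oo))
rhs = sp.integrate(A(psi1) * psi2, (x, -sp.oo, sp.oo))
print(lhs, rhs, sp.Eq(lhs, rhs))   # the two integrals differ -> x d/dx is not Hermitian
```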
Problem-2: Is \(d^2/dx^2\) operator Hermitian?#
You can test whether the following condition holds.
Note how the complex conjugation applies to an expression with the operator inside. But since our operator contains no imaginary numbers, it will only apply to the wavefunction.
Solution
To show that the operator \(\hat{A} = \frac{d^2}{dx^2}\) is Hermitian, we need to check whether the following condition holds:
\[ \int \psi_1^*(x)\frac{d^2\psi_2(x)}{dx^2}\,dx = \int \left(\frac{d^2\psi_1(x)}{dx^2}\right)^{*}\psi_2(x)\,dx \]
Step 1: Left-hand side
The left-hand side is:
Step 2: Integration by parts
We apply integration by parts twice. First, applying integration by parts to the term \(\psi_1^*(x) \frac{d^2}{dx^2} \psi_2(x)\), we get:
The boundary term \(\left[ \psi_1^*(x) \frac{d}{dx} \psi_2(x) \right]_{a}^{b}\) can be discarded if the wavefunctions vanish at the boundaries (as for bound states in a box).
We now apply integration by parts again to the remaining term:
Again, the boundary term \(\left[ \frac{d}{dx} \psi_1^*(x) \psi_2(x) \right]_{a}^{b} \) vanishes if the wavefunctions vanish at the boundaries. This leaves us with:
Step 3: Conclusion
Since the two sides are equal, we conclude that the operator \(\frac{d^2}{dx^2}\) is Hermitian:
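Repeating the same symbolic spot-check for \(d^2/dx^2\) (with the same illustrative Gaussian test functions as in Problem-1) now gives equal integrals, consistent with the proof above:

```python
# SymPy sketch: the same test with A = d^2/dx^2 (illustrative test functions as before)
import sympy as sp

x = sp.symbols('x', real=True)
psi1 = sp.exp(-x**2)
psi2 = sp.exp(-2 * x**2)

A = lambda f: sp.diff(f, x, 2)     # the operator d^2/dx^2

lhs = sp.integrate(psi1 * A(psi2), (x, -sp.oo, sp.oo))
rhs = sp.integrate(A(psi1) * psi2, (x, -sp.oo, sp.oo))
print(sp.simplify(lhs - rhs))      # -> 0, consistent with d^2/dx^2 being Hermitian
```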
Problem-3: Is \(id^2/dx^2\) operator Hermitian?#
You can test whether the following condition holds:
\[ \int \psi_1^*(x)\left(i\frac{d^2\psi_2(x)}{dx^2}\right)dx = \int \left(i\frac{d^2\psi_1(x)}{dx^2}\right)^{*}\psi_2(x)\,dx \]
Solution
From the last problem we learned that the following condition holds, which makes the second-derivative operator \(d^2/dx^2\) Hermitian:
\[ \int \psi_1^*(x)\frac{d^2\psi_2(x)}{dx^2}\,dx = \int \left(\frac{d^2\psi_1(x)}{dx^2}\right)^{*}\psi_2(x)\,dx \]
Now if we have \(i\,d^2/dx^2\), complex conjugation on the right-hand side produces a minus sign, which breaks the Hermitian equality:
\[ \int \psi_1^*(x)\left(i\frac{d^2\psi_2(x)}{dx^2}\right)dx = i\int \psi_1^*\frac{d^2\psi_2}{dx^2}\,dx, \qquad \int \left(i\frac{d^2\psi_1(x)}{dx^2}\right)^{*}\psi_2(x)\,dx = -i\int \left(\frac{d^2\psi_1}{dx^2}\right)^{*}\psi_2\,dx \]
Hence the two sides differ by a sign, and \(i\,d^2/dx^2\) is not Hermitian (it is anti-Hermitian).
Problem-4: Identify Hermitian Matrices#
Solution
A Matrix
To check if a matrix is Hermitian, it must satisfy the condition \(A = A^\dagger\), where \(A^\dagger\) is the conjugate transpose of \(A\). Since this matrix has real entries, the conjugate transpose is just the transpose.
The transpose of \(A\) is:
\( A^\dagger = \begin{pmatrix} 1 & 2 \\ 2 & 3 \end{pmatrix} \)
Since \(A = A^\dagger\), matrix \(A\) is Hermitian.
B Matrix
Now, let’s compute the conjugate transpose of \(B\). We first take the transpose and then take the complex conjugate of each entry:
Clearly, \(B \neq B^\dagger\), so matrix \(B\) is not Hermitian.
C Matrix
The conjugate transpose of \(C\) is:
Since \(C = C^\dagger\), matrix \(C\) is Hermitian.
Problem-5: Momentum Matrix#
Show how the momentum operator looks in matrix form using a finite-dimensional example where you evaluate the wavefunction on 4 points, which corresponds to a \(4 \times 4\) matrix.
Solution
We can represent the momentum operator \(\hat{p} = -i \hbar \frac{d}{dx}\) in a discrete basis, such as using a position basis. In this case, the matrix elements of the momentum operator can be approximated using finite differences.
For simplicity, let’s assume we are working in a discrete system, where we approximate the derivative \(\frac{d}{dx}\) with finite differences. The central finite-difference approximation for the derivative at point \(x_n\) is:
\[ \frac{d\psi}{dx}\bigg|_{x_n} \approx \frac{\psi(x_{n+1}) - \psi(x_{n-1})}{2\Delta x} \]
where \(\Delta x\) is the spacing between the discrete points.
The central-difference derivative is represented by a skew-symmetric matrix, and multiplying it by \(-i\hbar\) gives the momentum operator matrix in this finite-dimensional space.
Here is an example of a \(4 \times 4\) momentum operator matrix \(P\), assuming \(\hbar = 1\) for simplicity:
Explanation:
The non-diagonal entries correspond to the finite difference approximation of the derivative.
The factor of \(\frac{1}{2 \Delta x}\) comes from the central-difference approximation of the derivative (and carries the correct units), while the imaginary prefactor comes from the \(-i\hbar\) in the momentum operator.
The derivative matrix itself is anti-Hermitian (skew-symmetric); after multiplication by \(-i\hbar\), the momentum matrix is Hermitian (\(P^\dagger = P\)), as expected for the momentum operator.
This \(4 \times 4\) matrix represents the momentum operator in a discrete system with 4 grid points. The matrix elements link neighboring points, reflecting the nature of the derivative approximation.
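A minimal NumPy sketch of one common convention (central differences on 4 grid points, \(\hbar = 1\), \(\Delta x = 1\), simple non-periodic boundaries) builds the matrix and verifies these properties:

```python
# Minimal NumPy sketch (one common convention): central-difference momentum matrix
# Assumes hbar = 1, grid spacing dx = 1, and simple (non-periodic) boundaries.
import numpy as np

N, dx, hbar = 4, 1.0, 1.0

D = np.zeros((N, N))                 # central-difference derivative matrix
for n in range(N - 1):
    D[n, n + 1] = 1.0 / (2 * dx)     # +1/(2*dx) above the diagonal
    D[n + 1, n] = -1.0 / (2 * dx)    # -1/(2*dx) below the diagonal

P = -1j * hbar * D                   # momentum operator p = -i*hbar*d/dx

print(P)
print("D is antisymmetric:", np.allclose(D.T, -D))
print("P is Hermitian:    ", np.allclose(P.conj().T, P))
```

With periodic boundaries the corner elements would also be filled in, but the Hermiticity argument is unchanged.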