# Linear Variational Method

## What You Need to Know

- The variational method is a powerful tool for estimating upper bounds and approximations for ground-state energies in a wide range of quantum mechanical problems.
- It plays a key role in electronic structure theories, such as Hartree-Fock.
- The linear variational method seeks solutions to the Schrödinger equation by representing trial wavefunctions as linear combinations of simple, computationally efficient functions, such as Gaussians or exponentials.
- By applying the linear variational method, the Schrödinger equation is transformed into a linear algebra problem, where the goal is to find eigenvalues (representing energies) and eigenvectors (providing coefficients for the linear combination).
## Linearizing the problem
How does linearization help us in practice? In a typical QM problem, we are attempting to solve for the wavefunction(s) and energy(ies) for a given Hamiltonian. If the Hamiltonian is such that solving the problem exactly is too challenging (e.g. any problem with two or more electrons), we can expand the wavefunction in a basis and attempt to solve the problem that way. We start with

$$ \psi = \sum_{i=1}^{\infty} c_i f_i $$

Truncating this expansion at any finite $N$,

$$ \psi \approx \sum_{i=1}^{N} c_i f_i, $$

leads to an approximate solution. Also, what the variational principle tells us is that we can minimize the energy with respect to the variational parameters $c_i$ and still have $E \geq E_0$.
## Smallest example
We will illustrate this idea and the general matrix construction with a simple example of two basis functions ($N = 2$). There is currently no need to define these functions explicitly, so we will leave them as generic functions $f_1$ and $f_2$ (taken to be real):

$$ \psi = c_1 f_1 + c_2 f_2 $$

We now solve for the energy

$$ E = \frac{\langle \psi | \hat{H} | \psi \rangle}{\langle \psi | \psi \rangle} = \frac{c_1^2 H_{11} + 2 c_1 c_2 H_{12} + c_2^2 H_{22}}{c_1^2 S_{11} + 2 c_1 c_2 S_{12} + c_2^2 S_{22}} $$

where

- $H_{ij} = \langle f_i | \hat{H} | f_j \rangle$ is a Hamiltonian matrix element expressed in the basis of $f_1$ and $f_2$;
- $S_{ij} = \langle f_i | f_j \rangle$ is an S-matrix element, or overlap integral, expressed in the basis of $f_1$ and $f_2$. This one measures how similar $f_i$ is to $f_j$.
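To make the variational bound concrete, here is a minimal numerical sketch. The $2\times2$ matrices `H` and `S` below are hypothetical, chosen only for illustration: for any trial coefficient vector $\mathbf{c}$, the energy expression above stays at or above the lowest energy available in the basis.

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical 2x2 Hamiltonian and overlap matrices (illustration only)
H = np.array([[1.0, 0.2], [0.2, 2.0]])
S = np.array([[1.0, 0.1], [0.1, 1.0]])

# Lowest energy in this basis: smallest generalized eigenvalue of (H, S)
E0 = eigh(H, S, eigvals_only=True)[0]

# The energy of any random trial vector c is bounded below by E0
rng = np.random.default_rng(0)
for _ in range(5):
    c = rng.normal(size=2)
    E_trial = (c @ H @ c) / (c @ S @ c)
    assert E_trial >= E0 - 1e-12  # variational bound: E[c] >= E0
print("All random trial energies lie above E0 =", round(E0, 6))
```
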
## Eigenvalue Problem
Since $E \geq E_0$ for any trial function $\psi$, we can minimize the energy by varying the parameters $c_1$ and $c_2$. To minimize with respect to $c_1$, we differentiate with respect to $c_1$ and set the derivative equal to zero:

$$ \frac{\partial E}{\partial c_1} = 0 \quad \Rightarrow \quad (H_{11} - E S_{11}) c_1 + (H_{12} - E S_{12}) c_2 = 0 $$

Similarly, minimizing with respect to $c_2$ gives:

$$ (H_{21} - E S_{21}) c_1 + (H_{22} - E S_{22}) c_2 = 0 $$

These two coupled linear equations can be expressed compactly as a matrix equation:

$$ \begin{pmatrix} H_{11} - E S_{11} & H_{12} - E S_{12} \\ H_{21} - E S_{21} & H_{22} - E S_{22} \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} $$

The matrix on the left can be rewritten as the difference between two matrices:

$$ \left[ \begin{pmatrix} H_{11} & H_{12} \\ H_{21} & H_{22} \end{pmatrix} - E \begin{pmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{pmatrix} \right] \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} $$

Rearranging, we can write:

$$ \begin{pmatrix} H_{11} & H_{12} \\ H_{21} & H_{22} \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = E \begin{pmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} $$

In more compact matrix notation, this becomes:

$$ \mathbf{H} \mathbf{c} = E \mathbf{S} \mathbf{c} $$

By left-multiplying both sides by $\mathbf{S}^{-1}$, we transform this into a standard eigenvalue problem:

$$ \mathbf{S}^{-1} \mathbf{H} \mathbf{c} = E \mathbf{c} $$

Therefore, the minimum energies correspond to the eigenvalues of $\mathbf{S}^{-1}\mathbf{H}$, and the variational parameters that minimize the energies are the eigenvectors of $\mathbf{S}^{-1}\mathbf{H}$.
### Breaking the problem down to a matrix eigenvalue/eigenvector problem
In the equation

$$ \mathbf{S}^{-1}\mathbf{H}\,\mathbf{c} = E\,\mathbf{I}\,\mathbf{c} $$

the identity matrix $\mathbf{I}$ is sometimes written explicitly. To see why, it helps to recall the standard eigenvalue problem from linear algebra.

**Eigenvalue Problem Form:**

In linear algebra, a standard eigenvalue problem is written as:

$$ \mathbf{A}\mathbf{v} = \lambda\,\mathbf{I}\,\mathbf{v} $$

where $\mathbf{A}$ is a square matrix, $\lambda$ is a scalar eigenvalue, $\mathbf{I}$ is the identity matrix, and $\mathbf{v}$ is the corresponding eigenvector. The identity matrix $\mathbf{I}$ ensures that $\lambda$ scales the eigenvector without altering its direction. The eigenvalue problem is about finding the values of $\lambda$ and their associated $\mathbf{v}$.

**Connecting to $\mathbf{S}^{-1}\mathbf{H}\,\mathbf{c} = E\,\mathbf{c}$:**

Here, $\mathbf{S}^{-1}\mathbf{H}$ acts as the operator $\mathbf{A}$ in the standard eigenvalue problem; it is the matrix resulting from left-multiplying $\mathbf{H}$ by the inverse of $\mathbf{S}$. The coefficient vector $\mathbf{c}$ represents the eigenvector, and $E$ is the eigenvalue (corresponding to the energy in the quantum mechanical system). The identity matrix $\mathbf{I}$ is explicitly included to highlight that $E$ is a scalar multiplying the vector $\mathbf{c}$. This ensures that the left-hand side (a matrix operation) matches the right-hand side (a scaled vector).

**Why $\mathbf{S}^{-1}$ Appears:**

Initially, we had:

$$ \mathbf{H}\mathbf{c} = E\,\mathbf{S}\,\mathbf{c} $$

Left-multiplying both sides by $\mathbf{S}^{-1}$ and using $\mathbf{S}^{-1}\mathbf{S} = \mathbf{I}$ gives

$$ \mathbf{S}^{-1}\mathbf{H}\,\mathbf{c} = E\,\mathbf{c} $$

**How to Interpret This as an Eigenvalue Problem:**

The equation now has the form of a standard eigenvalue problem:

$$ \mathbf{A}\mathbf{v} = \lambda\mathbf{v} $$

where $\mathbf{A} = \mathbf{S}^{-1}\mathbf{H}$ is the effective matrix to diagonalize, $\lambda = E$ are the eigenvalues, corresponding to the energy levels, and $\mathbf{v} = \mathbf{c}$ are the eigenvectors, containing the coefficients of the trial wavefunctions.

**Physical Interpretation:**

Solving the eigenvalue problem gives the approximate energies ($E$) of the quantum system as eigenvalues and the corresponding variational parameters ($c_1, c_2$) as eigenvectors. The identity matrix is crucial for preserving the standard form of the eigenvalue problem, ensuring proper mathematical and physical interpretation.
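The two routes can be checked numerically: diagonalizing $\mathbf{S}^{-1}\mathbf{H}$ with a general eigensolver gives the same energies as solving the generalized problem $\mathbf{H}\mathbf{c} = E\mathbf{S}\mathbf{c}$ directly. A minimal sketch, using hypothetical $2\times2$ matrices chosen only for illustration:

```python
import numpy as np
from scipy.linalg import eig, eigh, inv

# Hypothetical symmetric H and overlap S (illustration only)
H = np.array([[2.0, 0.5], [0.5, 3.0]])
S = np.array([[1.0, 0.2], [0.2, 1.0]])

# Standard-form route: diagonalize A = S^{-1} H
A = inv(S) @ H
evals_std, evecs_std = eig(A)

# Generalized route: solve H c = E S c directly
evals_gen = eigh(H, S, eigvals_only=True)

# Both routes give the same energies E
print(np.sort(evals_std.real), evals_gen)

# Each eigenpair satisfies H c = E S c
for E, c in zip(evals_std.real, evecs_std.T):
    assert np.allclose(H @ c, E * (S @ c))
```
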
## Example: Particle in a Box
Let's consider a free particle in 1D bounded to $0 \leq x \leq a$, with Hamiltonian

$$ \hat{H} = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} $$

inside the box and $\psi(0) = \psi(a) = 0$ at the walls. While we could have solved this problem analytically, it will be instructive to see how the variational solution works. We start by approximating

$$ \psi \approx c_1 f_1 + c_2 f_2, $$

where the basis functions are

$$ f_1(x) = x(a-x) $$

and

$$ f_2(x) = x^2(a-x)^2. $$

Both basis functions vanish at $x = 0$ and $x = a$, as the boundary conditions require. In this problem, we have two basis functions, so $\mathbf{H}$ and $\mathbf{S}$ are $2\times2$ matrices whose elements we must compute.
### Computing the matrix elements
$$ H_{11} = -\frac{\hbar^2}{2m}\int_0^a x(a-x)\,\frac{d^2}{dx^2}\left[x(a-x)\right]dx = \frac{\hbar^2}{m}\int_0^a x(a-x)\,dx = \frac{\hbar^2 a^3}{6m} $$

where the last equality holds because the second derivative of $f_1 = x(a-x)$ is the constant $-2$. Similarly we should get

$$ H_{12} = H_{21} = -\frac{\hbar^2}{2m}\int_0^a x(a-x)\left(2a^2 - 12ax + 12x^2\right)dx = \frac{\hbar^2 a^5}{30m} $$

and

$$ H_{22} = -\frac{\hbar^2}{2m}\int_0^a x^2(a-x)^2\left(2a^2 - 12ax + 12x^2\right)dx = \frac{\hbar^2 a^7}{105m} $$

Thus we can now complete the Hamiltonian matrix,

$$ \mathbf{H} = \frac{\hbar^2}{m}\begin{pmatrix} a^3/6 & a^5/30 \\ a^5/30 & a^7/105 \end{pmatrix} $$

It can also be shown that

$$ \mathbf{S} = \begin{pmatrix} a^5/30 & a^7/140 \\ a^7/140 & a^9/630 \end{pmatrix} $$
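As a sanity check, the matrix elements can be evaluated by direct numerical integration with `scipy.integrate.quad` (a small sketch, assuming the simplified units $\hbar = m = a = 1$):

```python
from scipy.integrate import quad

# basis functions and their second derivatives (hbar = m = a = 1)
a = 1.0
f1 = lambda x: x * (a - x)
f2 = lambda x: x**2 * (a - x)**2
d2f1 = lambda x: -2.0
d2f2 = lambda x: 2*a**2 - 12*a*x + 12*x**2

# H_ij = -(1/2) * integral of f_i * f_j''
H11 = -0.5 * quad(lambda x: f1(x) * d2f1(x), 0, a)[0]
H12 = -0.5 * quad(lambda x: f1(x) * d2f2(x), 0, a)[0]
H22 = -0.5 * quad(lambda x: f2(x) * d2f2(x), 0, a)[0]

# S_ij = integral of f_i * f_j
S11 = quad(lambda x: f1(x)**2, 0, a)[0]
S12 = quad(lambda x: f1(x) * f2(x), 0, a)[0]
S22 = quad(lambda x: f2(x)**2, 0, a)[0]

print(H11, H12, H22)  # 1/6, 1/30, 1/105
print(S11, S12, S22)  # 1/30, 1/140, 1/630
```
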
```python
import numpy as np
from scipy.linalg import eig, inv, eigh

# Let's adopt simpler units
a = 1.0
hbar = 1.0
m = 1.0

# Overlap (S) and Hamiltonian (H) matrices for a = hbar = m = 1
S = np.array([[1.0/30.0, 1.0/140.0], [1.0/140.0, 1.0/630.0]])
H = np.array([[1.0/6.0, 1.0/30.0], [1.0/30.0, 1.0/105.0]])

# Solve S^{-1} H c = E c; eigh(H, S) solves the same generalized
# (Hermitian) problem directly and gives the same eigenvalues
e, v = eig(inv(S) @ H)
print("Eigenvalues:", e)
print("First eigenvector:", v[:, 0])
```

```
Eigenvalues: [ 4.93487481+0.j 51.06512519+0.j]
First eigenvector: [-0.66168489 -0.74978204]
```
So we see that the smallest energy in this basis is $E_1^{\text{var}} \approx 4.93487\,\hbar^2/(ma^2)$. How does this compare to the analytic solution? We need to recall that for the particle in a box we have:

$$ E_n = \frac{n^2 \pi^2 \hbar^2}{2 m a^2} $$

Plugging in for the ground state, $n = 1$:

$$ E_1 = \frac{\pi^2 \hbar^2}{2 m a^2} \approx 4.93480\,\frac{\hbar^2}{m a^2} $$

So we can see that our variational solution worked out well for the energy. Now how about the wavefunction?
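A quick way to quantify the agreement (a small sketch; `E_var` is the smallest eigenvalue printed above, in units where $\hbar = m = a = 1$):

```python
import numpy as np

# variational vs. exact ground-state energy (hbar = m = a = 1)
E_var = 4.93487481       # smallest eigenvalue from the calculation above
E_exact = np.pi**2 / 2   # n^2 pi^2 hbar^2 / (2 m a^2) with n = 1

# variational estimate lies slightly above the exact value
rel_error = (E_var - E_exact) / E_exact
print(f"Exact: {E_exact:.6f}  Variational: {E_var:.6f}  relative error: {rel_error:.2e}")
```
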
```python
# plot wavefunction
import matplotlib.pyplot as plt
from scipy import integrate
%matplotlib inline

# x values
x = np.arange(0, 1, 0.01)

# exact ground-state wavefunction
psi1Exact = np.sqrt(2) * np.sin(np.pi * x)
plt.plot(x, psi1Exact, 'k-', label="Exact", lw=3)

# variational wavefunction built from the two basis functions
psi1Var = v[0, 0] * x * (1 - x) + v[1, 0] * x**2 * (1 - x)**2
# normalize numerically; SciPy's simps has been renamed simpson
norm = np.sqrt(integrate.simpson(np.power(psi1Var, 2), x=x))
# the overall sign of an eigenvector is arbitrary, so flip it to
# match the sign convention of the exact wavefunction
plt.plot(x, -psi1Var / norm, 'r--', label="Variational")
plt.legend()
```