Interestingly, the Gaussian distribution also turns out to be the maximum entropy
distribution on infinite support under a finite second moment constraint[1].
The differential entropy of a random variable $x \sim p(x)$ is defined as the expected information,

$$H[x] = -\int_{-\infty}^{\infty} p(x)\log p(x)\,dx.$$
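As a quick numerical sanity check of this definition (a sketch assuming NumPy and SciPy are available; the parameters are arbitrary), a quadrature estimate of the entropy of a Gaussian matches the known closed form $\tfrac{1}{2}\log(2\pi e\sigma^2)$:

```python
import numpy as np
from scipy.stats import norm

# Numerically evaluate H[x] = -∫ p(x) log p(x) dx for a Gaussian and compare it
# against the known closed form 0.5 * log(2 * pi * e * sigma^2).
mu, sigma = 1.0, 2.0
xs = np.linspace(mu - 12 * sigma, mu + 12 * sigma, 400_001)
p = norm.pdf(xs, loc=mu, scale=sigma)
numeric = -np.sum(p * np.log(p)) * (xs[1] - xs[0])     # simple Riemann sum
closed_form = 0.5 * np.log(2 * np.pi * np.e * sigma**2)
print(numeric, closed_form)                            # both ≈ 2.1121
```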
To find the maximum entropy distribution, we formally write the constrained optimization
problem stated before,

$$\max_{p}\ -\int_{-\infty}^{\infty} p(x)\log p(x)\,dx \quad \text{subject to} \quad \int_{-\infty}^{\infty} p(x)\,dx = 1,\quad \int_{-\infty}^{\infty} x\,p(x)\,dx = \mu,\quad \int_{-\infty}^{\infty} (x-\mu)^2\,p(x)\,dx = \sigma^2.$$
The constraints correspond to the normalization of the probability distribution, a
finite mean (first moment), and a finite variance (second moment). This
can be converted into an unconstrained optimization problem using Lagrange multipliers[2].
The complete objective becomes

$$\tilde{H}[p] = -\int p\log p\,dx + \lambda_1\left(\int p\,dx - 1\right) + \lambda_2\left(\int x\,p\,dx - \mu\right) + \lambda_3\left(\int (x-\mu)^2\,p\,dx - \sigma^2\right).$$

Setting the functional derivative with respect to $p(x)$ to zero and solving gives

$$p(x) = \exp\left\{-1 + \lambda_1 + \lambda_2 x + \lambda_3 (x-\mu)^2\right\}.$$
To recover the precise values of the Lagrange multipliers, we substitute this form of $p(x)$ back
into the constraints. The derivation is involved but straightforward. We first manipulate the exponent by
completing the square, noting that for any general quadratic

$$\alpha x^2 - \beta x + \gamma = \alpha\left(x - \frac{\beta}{2\alpha}\right)^2 - \frac{\beta^2 - 4\alpha\gamma}{4\alpha},$$

which will allow us to re-use results from the previous section. Matching the exponent of $p(x)$ to this quadratic gives $\alpha = \lambda_3$, $\beta = 2\lambda_3\mu - \lambda_2$, and $\gamma = -1 + \lambda_1 + \lambda_3\mu^2$. Throughout, we also make use of the substitution $x' = x - \frac{\beta}{2\alpha}$.
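A small symbolic check of this completing-the-square identity, assuming SymPy is available (purely illustrative, not part of the derivation):

```python
import sympy as sp

# Symbolic check of  alpha*x**2 - beta*x + gamma
#   == alpha*(x - beta/(2*alpha))**2 - (beta**2 - 4*alpha*gamma)/(4*alpha)
x, alpha, beta, gamma = sp.symbols('x alpha beta gamma', real=True)
lhs = alpha * x**2 - beta * x + gamma
rhs = alpha * (x - beta / (2 * alpha))**2 - (beta**2 - 4 * alpha * gamma) / (4 * alpha)
print(sp.simplify(lhs - rhs))   # 0, so the identity holds (for alpha != 0)
```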
Putting back these new definitions and writing $\delta = \frac{\beta^2 - 4\alpha\gamma}{4\alpha}$ for the (negated) constant term, so that the exponent of $p(x)$ becomes $\alpha(x')^2 - \delta$, the normalization constraint is

$$\int_{-\infty}^{\infty}\exp\left\{\alpha (x')^2\right\}dx' = \exp\{\delta\}.$$
Using another change of variable $y = \sqrt{-\alpha}\,x'$ (note that $\alpha < 0$ for $p(x)$ to be normalizable), we reduce this integral to a familiar form and can evaluate it using the polar coordinate transformation trick as earlier.
$$\begin{aligned}
\frac{1}{\sqrt{-\alpha}}\int_{-\infty}^{\infty}\exp\{-y^2\}\,dy &= \exp\{\delta\} \\
\implies \exp\{\delta\} &= \sqrt{\frac{\pi}{-\alpha}}
\end{aligned} \tag{1}$$
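This Gaussian integral can be confirmed numerically; a minimal sketch assuming SciPy, with an arbitrary negative $\alpha$:

```python
import numpy as np
from scipy.integrate import quad

# Confirm  ∫ exp{alpha * x'^2} dx' = sqrt(pi / (-alpha))  for a negative alpha.
alpha = -0.7
numeric, _ = quad(lambda xp: np.exp(alpha * xp**2), -np.inf, np.inf)
print(numeric, np.sqrt(np.pi / -alpha))   # both ≈ 2.118
```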
Similarly, we put this into the finite first moment constraint and re-apply
the substitution $y = \sqrt{-\alpha}\,x'$. The first term is an integral
of an odd function over the full domain and vanishes, leaving

$$\frac{\beta}{2\alpha}\int_{-\infty}^{\infty}\exp\left\{\alpha (x')^2\right\}dx' = \mu\exp\{\delta\}. \tag{2}$$
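The vanishing of the odd term can be seen concretely with a tiny numerical check (assuming SciPy; the value of $\alpha$ is arbitrary):

```python
import numpy as np
from scipy.integrate import quad

# The first (odd) term, x' * exp{alpha * x'^2}, integrates to zero over the real line.
alpha = -0.5
odd_term, _ = quad(lambda xp: xp * np.exp(alpha * xp**2), -np.inf, np.inf)
print(odd_term)   # ≈ 0 (up to quadrature error)
```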
Combining (1) with (2), we get $\beta = 2\mu\alpha$. Substituting $\alpha$ and $\beta$
as defined earlier, we get

$$\lambda_2 = 0.$$
With similar approaches and substitutions, the finite second moment constraint becomes

$$\int_{-\infty}^{\infty}\left(x' + \frac{\beta}{2\alpha} - \mu\right)^2\exp\left\{\alpha (x')^2\right\}dx' = \sigma^2\exp\{\delta\}.$$
Focusing on the remaining term, we first apply the change of variable $y = \sqrt{-\alpha}\,x'$ and note
that the integrand is an even function, symmetric around $0$. This allows us to restrict the limits of
integration to positive values (doubling the result) and apply a further change of variable $y^2 = z$:

$$\int_{-\infty}^{\infty}(x')^2\exp\left\{\alpha (x')^2\right\}dx' = \frac{2}{(-\alpha)^{3/2}}\int_{0}^{\infty} y^2 e^{-y^2}\,dy = \frac{1}{(-\alpha)^{3/2}}\int_{0}^{\infty} z^{1/2} e^{-z}\,dz = \frac{\Gamma(3/2)}{(-\alpha)^{3/2}} = \frac{\sqrt{\pi}}{2(-\alpha)^{3/2}},$$

where we utilize the fact that $\Gamma(x+1) = x\Gamma(x)$, where $\Gamma(x) = \int_0^{\infty}u^{x-1}e^{-u}\,du$ is the Gamma function, and $\Gamma(1/2) = \sqrt{\pi}$.
Plugging everything back and using $\beta = 2\mu\alpha$ (which makes the remaining constant term vanish), we get

$$\frac{\sqrt{\pi}}{2(-\alpha)^{3/2}} = \sigma^2\exp\{\delta\} = \sigma^2\sqrt{\frac{\pi}{-\alpha}} \implies \lambda_3 = \alpha = -\frac{1}{2\sigma^2},$$

with $\lambda_1$ then fixed by the normalization result (1).
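The Gamma-function evaluation above can also be checked numerically; a sketch assuming SciPy, with $\sigma$ an arbitrary example value:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# Check the second-moment integral against its Gamma-function evaluation.
sigma = 1.5
alpha = -1.0 / (2 * sigma**2)
numeric, _ = quad(lambda xp: xp**2 * np.exp(alpha * xp**2), -np.inf, np.inf)
closed = gamma(1.5) / (-alpha)**1.5            # Gamma(3/2) / (-alpha)^(3/2)
rhs = sigma**2 * np.sqrt(np.pi / -alpha)       # sigma^2 * exp{delta} from (1)
print(numeric, closed, rhs)                    # all ≈ 8.460
```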
Substituting $\lambda_1, \lambda_2, \lambda_3$ back into $p(x)$ gives us the familiar form $p(x) = \mathcal{N}(\mu, \sigma^2)$. For maximum entropy distributions under other constraints, see the examples on this Wikipedia entry.
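To see the maximum entropy property concretely, we can compare the differential entropies of a few distributions constrained to the same variance; a minimal sketch using SciPy's closed-form entropies (the particular distributions and $\sigma$ are arbitrary choices):

```python
import numpy as np
from scipy.stats import norm, laplace, uniform

# Differential entropies of a few distributions sharing the same variance sigma^2.
sigma = 1.3
dists = {
    "gaussian": norm(0.0, sigma),                                     # var = sigma^2
    "laplace": laplace(0.0, sigma / np.sqrt(2)),                      # var = 2*b^2 = sigma^2
    "uniform": uniform(-sigma * np.sqrt(3), 2 * sigma * np.sqrt(3)),  # var = width^2 / 12 = sigma^2
}
for name, dist in dists.items():
    print(f"{name:9s} var={float(dist.var()):.3f}  entropy={float(dist.entropy()):.4f}")
# The Gaussian attains the largest differential entropy, as the derivation predicts.
```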
Gaussian Conditionals
Conditional Gaussian distributions are common in machine learning, with Gaussian Processes
and Gaussian linear dynamical systems being typical examples. Here
we discuss the algebraic manipulations on a joint Gaussian distribution. Consider
a joint distribution $p(y, x)$ over two random variables, where both $x$ and $y$ may be vectors.
$$p(y, x) = \mathcal{N}\left(\begin{bmatrix}\mu_y \\ \mu_x\end{bmatrix}, \begin{bmatrix}\Sigma_{yy} & \Sigma_{yx} \\ \Sigma_{xy} & \Sigma_{xx}\end{bmatrix}\right)$$
We are interested in $p(y \mid x)$ ($x$ and $y$ are exchangeable throughout). Noting
by the chain rule of probability that $p(y, x) = p(x)\,p(y \mid x)$, we focus on the quadratic
part of the exponent.
We take a brief digression and discuss the Schur complement[3].
Consider a matrix decomposed into blocks
$$M = \begin{bmatrix}A & B \\ C & D\end{bmatrix}$$
We want to (block) diagonalize this so that inversion is easy.
$$\begin{bmatrix}I & -BD^{-1} \\ 0 & I\end{bmatrix}\begin{bmatrix}A & B \\ C & D\end{bmatrix}\begin{bmatrix}I & 0 \\ -D^{-1}C & I\end{bmatrix} = \begin{bmatrix}A - BD^{-1}C & 0 \\ 0 & D\end{bmatrix}$$
The inverse is now easy to compute, using the property that the inverse of a
block diagonal matrix is simply the block diagonal matrix of the inverses of its diagonal blocks. We also define
the Schur complement of $M$ with respect to $D$ as

$$M/D = A - BD^{-1}C.$$
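A small NumPy sketch, with arbitrary random blocks, that numerically confirms the block diagonalization above:

```python
import numpy as np

# Numerically verify the block diagonalization and the Schur complement M/D.
rng = np.random.default_rng(0)
p, q = 3, 2
M = rng.standard_normal((p + q, p + q)) + (p + q) * np.eye(p + q)   # keep D invertible
A, B = M[:p, :p], M[:p, p:]
C, D = M[p:, :p], M[p:, p:]

left = np.block([[np.eye(p), -B @ np.linalg.inv(D)], [np.zeros((q, p)), np.eye(q)]])
right = np.block([[np.eye(p), np.zeros((p, q))], [-np.linalg.inv(D) @ C, np.eye(q)]])
schur = A - B @ np.linalg.inv(D) @ C                                 # M/D

expected = np.block([[schur, np.zeros((p, q))], [np.zeros((q, p)), D]])
print(np.allclose(left @ M @ right, expected))                       # True
```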
Applying this decomposition to the inverse covariance inside the exponent, let's start by resolving
the first and last two terms of the resulting product against the deviation vector. Remember
that we are working with block vectors and matrices: transpose operators must be
preserved, but block matrix algebra otherwise works like the usual matrix multiplication.
We recover the familiar pattern of quadratic terms in the Gaussian exponent. The
exponential can now be written as the product of two exponentials such that
$$p(y, x) = \mathcal{N}\!\left(y;\ \mu_y + \Sigma_{yx}\Sigma_{xx}^{-1}(x - \mu_x),\ \Sigma/\Sigma_{xx}\right)\,\mathcal{N}\!\left(x;\ \mu_x,\ \Sigma_{xx}\right),$$

where $\Sigma/\Sigma_{xx} = \Sigma_{yy} - \Sigma_{yx}\Sigma_{xx}^{-1}\Sigma_{xy}$ is the Schur complement of the joint covariance with respect to $\Sigma_{xx}$. The first factor is precisely the conditional $p(y \mid x)$ and the second is the marginal $p(x)$.
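The factorization can be verified numerically; the sketch below (assuming NumPy/SciPy, with an arbitrary SPD covariance and test point) compares the joint density against the product of the conditional and marginal factors:

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

# Numerical check of p(y, x) = p(y | x) * p(x) for a joint Gaussian.
# Block layout follows the text: the first block is y, the second is x.
rng = np.random.default_rng(1)
mu_y, mu_x = np.array([1.0, -0.5]), np.array([0.3])
L = rng.standard_normal((3, 3))
Sigma = L @ L.T + 3 * np.eye(3)                       # random SPD joint covariance
S_yy, S_yx = Sigma[:2, :2], Sigma[:2, 2:]
S_xy, S_xx = Sigma[2:, :2], Sigma[2:, 2:]

y, x = np.array([0.2, 0.9]), np.array([-1.1])
cond_mean = mu_y + S_yx @ np.linalg.solve(S_xx, x - mu_x)
cond_cov = S_yy - S_yx @ np.linalg.solve(S_xx, S_xy)  # Sigma / Sigma_xx

joint = mvn(np.concatenate([mu_y, mu_x]), Sigma).pdf(np.concatenate([y, x]))
factored = mvn(cond_mean, cond_cov).pdf(y) * mvn(mu_x, S_xx).pdf(x)
print(np.isclose(joint, factored))                    # True
```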
Linear Dynamical Systems
The classic results on a linear dynamical system defined by

$$\begin{aligned}
p(x) &= \mathcal{N}(x;\ \mu_x, \Sigma_x) \\
p(y \mid x) &= \mathcal{N}(y;\ Ax + b, \Sigma_y)
\end{aligned}$$
can be derived pretty easily now by reverse engineering the above equations. For
instance, the full joint of this linear dynamical system $p(x, y)$ is given by
$$p(x, y) = \mathcal{N}\left(\begin{bmatrix}\mu_x \\ A\mu_x + b\end{bmatrix}, \begin{bmatrix}\Sigma_x & \Sigma_x A^T \\ A\Sigma_x & \Sigma_y + A\Sigma_x A^T\end{bmatrix}\right)$$
Computing the marginals and the other conditional (for instance, the posterior $p(x \mid y)$) should now be a matter of
plugging the right blocks into the earlier results.
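A quick Monte Carlo sketch (with arbitrary example parameters) that recovers the mean and covariance blocks of this joint:

```python
import numpy as np

# Monte Carlo sanity check of the joint p(x, y) for the linear system above.
rng = np.random.default_rng(2)
mu_x, Sigma_x = np.array([0.5, -1.0]), np.array([[1.0, 0.3], [0.3, 0.8]])
A, b = np.array([[2.0, -1.0]]), np.array([0.7])
Sigma_y = np.array([[0.4]])

n = 200_000
x = rng.multivariate_normal(mu_x, Sigma_x, size=n)                        # x ~ N(mu_x, Sigma_x)
y = x @ A.T + b + rng.multivariate_normal(np.zeros(1), Sigma_y, size=n)   # y | x ~ N(Ax + b, Sigma_y)

samples = np.hstack([x, y])
print(samples.mean(axis=0))   # ≈ [mu_x, A mu_x + b] = [0.5, -1.0, 2.7]
print(np.cov(samples.T))      # ≈ [[Sigma_x, Sigma_x A^T], [A Sigma_x, Sigma_y + A Sigma_x A^T]]
```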
Footnotes
1. Bishop, C.M. (2006). Pattern Recognition and Machine Learning. Springer.
2. Boyd, S., & Vandenberghe, L. (2004). Convex Optimization. Cambridge University Press.
3. Murphy, K.P. (2012). Machine Learning: A Probabilistic Perspective. MIT Press.