The Gibbs distribution and general Bayes
Connections between the Gibbs distribution and the general problem of Bayesian inference.
The Gibbs distribution keeps coming up everywhere in machine learning. In this post, I'm going to focus on a general optimization problem and work through its various connections. Towards the end, I'm going to touch upon general Bayes, which goes beyond classic posterior inference.
Optimization problem
The optimization problem we are interested in is formulated as below:

$$
q^\star(\theta) = \underset{q \in \mathcal{Q}}{\arg\min}\; \mathbb{E}_{q(\theta)}\left[\ell(\theta)\right] + \mathrm{KL}\left(q(\theta) \,\|\, p(\theta)\right)
$$

We aim to find a distribution $q^\star(\theta)$ that minimizes the expected loss $\mathbb{E}_{q(\theta)}\left[\ell(\theta)\right]$ over a family of distributions $\mathcal{Q}$, regularized by the prior $p(\theta)$ via the KL-divergence. $\ell(\theta)$ is a loss function to our liking.
Optimal Gibbs distribution
We show two proofs of the fact that the distribution minimizing the optimization problem above is a Gibbs distribution of a specific form.
Variational Calculus
The variational calculus approach to solving for $q^\star$ would be to simply solve a constrained optimization problem. The only constraint we have is that $q$ is a probability distribution, i.e. $\int q(\theta)\, d\theta = 1$.
Converting this to an unconstrained optimization problem via the method of Lagrange multipliers [1], for a multiplier $\lambda$, we have

$$
\mathcal{L}(q, \lambda) = \mathbb{E}_{q(\theta)}\left[\ell(\theta)\right] + \mathrm{KL}\left(q(\theta) \,\|\, p(\theta)\right) + \lambda \left( \int q(\theta)\, d\theta - 1 \right)
$$

We take the functional derivative (see PRML Appendix D [2]) w.r.t. $q$ and use the fact that it must equal zero at a minimum:

$$
\frac{\partial \mathcal{L}}{\partial q} = \ell(\theta) + \log \frac{q(\theta)}{p(\theta)} + 1 + \lambda = 0
\quad\Longrightarrow\quad
q(\theta) = p(\theta) \exp\left(-\ell(\theta)\right) \exp\left(-1 - \lambda\right)
$$

To eliminate $\lambda$, we put this back into the constraint and get

$$
\exp\left(-1 - \lambda\right) = \frac{1}{\int p(\theta) \exp\left(-\ell(\theta)\right) d\theta}
$$

We get the complete form of our optimal Gibbs distribution:

$$
q^\star(\theta) = \frac{p(\theta) \exp\left(-\ell(\theta)\right)}{\int p(\theta) \exp\left(-\ell(\theta)\right) d\theta}
$$
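To make this concrete, here is a minimal numerical sanity check (an illustrative sketch, not from the derivation above): on a discrete parameter grid with an arbitrary prior and loss, a generic optimizer over the probability simplex lands on the same distribution as the closed-form Gibbs solution.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import softmax

rng = np.random.default_rng(0)

# Discrete parameter space with K states: arbitrary prior p and loss ell.
K = 10
p = rng.dirichlet(np.ones(K))   # prior p(theta)
ell = rng.normal(size=K)        # loss ell(theta)

def objective(logits):
    """E_q[ell] + KL(q || p), with q parameterized as a softmax of free logits."""
    q = softmax(logits)
    return q @ ell + np.sum(q * (np.log(q) - np.log(p)))

# Closed-form Gibbs minimizer: q*(theta) ∝ p(theta) exp(-ell(theta)).
q_star = p * np.exp(-ell)
q_star /= q_star.sum()

# Generic numerical minimization, with no knowledge of the closed form.
res = minimize(objective, x0=np.zeros(K), method="BFGS")
q_hat = softmax(res.x)

print(np.max(np.abs(q_hat - q_star)))  # small: both minimizers agree
```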
KL-divergence minimizer
Alternatively, we can also make some manipulations to our original object of interest as follows

$$
\mathbb{E}_{q(\theta)}\left[\ell(\theta)\right] + \mathrm{KL}\left(q(\theta) \,\|\, p(\theta)\right)
= \mathbb{E}_{q(\theta)}\left[\log \frac{q(\theta)}{p(\theta) \exp\left(-\ell(\theta)\right)}\right]
$$

Note that we have sneaked in a normalizer $Z = \int p(\theta) \exp\left(-\ell(\theta)\right) d\theta$, as it does not change the optimization problem. This ensures the denominator remains a probability distribution and allows us to arrive at the following expression,

$$
\mathbb{E}_{q(\theta)}\left[\ell(\theta)\right] + \mathrm{KL}\left(q(\theta) \,\|\, p(\theta)\right)
= \mathrm{KL}\left(q(\theta) \,\Big\|\, \frac{1}{Z}\, p(\theta) \exp\left(-\ell(\theta)\right)\right) - \log Z
$$

Things are much simpler now because we know that the KL-divergence is non-negative and zero only when its two arguments are equal. Since $\log Z$ does not depend on $q$, the minimum is attained when the KL term hits zero, i.e. when

$$
q^\star(\theta) = \frac{1}{Z}\, p(\theta) \exp\left(-\ell(\theta)\right)
$$

The normalizer $Z$ is simply arrived at by integrating the numerator over $\theta$, and we again arrive at the optimal Gibbs distribution $q^\star(\theta)$.
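A useful corollary of this rewriting: at the optimum the KL term vanishes, so the optimal value of the objective is the negative log-partition function,

$$
\min_{q} \; \mathbb{E}_{q(\theta)}\left[\ell(\theta)\right] + \mathrm{KL}\left(q(\theta) \,\|\, p(\theta)\right) = -\log Z = -\log \int p(\theta) \exp\left(-\ell(\theta)\right) d\theta,
$$

which is the Donsker-Varadhan variational formula [3] with $f(\theta) = -\ell(\theta)$.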
Connections
Bayesian inference can be seen as an infinite-dimensional generalization of the optimization problem described above [3] [4] [5]. This result can also be extended to build all sorts of new schemes based on generalized divergences like the β-divergence [6]. Generalized variational inference [7] encompasses a large family of inference schemes, for instance power likelihoods when we use a tempered KL-divergence $\frac{1}{\beta}\, \mathrm{KL}\left(q(\theta) \,\|\, p(\theta)\right)$.
Below we discuss a few connections to show the broad coverage of this formulation of the optimization problem.
Bayesian posterior
As an example, the classic Bayesian posterior inference can be recovered by assuming $\ell(\theta) = -\log p(\mathcal{D} \mid \theta)$, the negative log-likelihood of the observed data $\mathcal{D}$. Plugging this back into the optimal Gibbs distribution we get,

$$
q^\star(\theta) = \frac{p(\theta)\, p(\mathcal{D} \mid \theta)}{\int p(\theta)\, p(\mathcal{D} \mid \theta)\, d\theta}
$$

which is precisely the classic posterior recovered by Bayes' theorem, $p(\theta \mid \mathcal{D}) = \frac{p(\mathcal{D} \mid \theta)\, p(\theta)}{p(\mathcal{D})}$.
Further, instead of optimizing over the universal family of all distributions $\mathcal{P}$, using a restricted family of distributions $\mathcal{Q} \subset \mathcal{P}$, we recover variational inference. For instance, when $\mathcal{Q}$ is the family of diagonal-covariance Gaussians, we recover mean-field variational inference (MFVI).
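As a quick check of this correspondence, here is a small sketch (my own illustrative setup, not from the original argument) on a grid over a coin's bias: the Gibbs distribution built from the negative log-likelihood loss matches the conjugate Beta posterior.

```python
import numpy as np
from scipy.stats import beta

# Coin-flip example on a grid: Beta(2, 2) prior over the bias theta,
# Bernoulli likelihood for the observed flips. Values are illustrative.
theta = np.linspace(1e-3, 1 - 1e-3, 2000)
dtheta = theta[1] - theta[0]
a0, b0 = 2.0, 2.0
prior = beta.pdf(theta, a0, b0)

flips = np.array([1, 1, 0, 1, 0, 1, 1])   # observed data D
heads, tails = flips.sum(), len(flips) - flips.sum()

# Loss = negative log-likelihood, ell(theta) = -log p(D | theta).
ell = -(heads * np.log(theta) + tails * np.log1p(-theta))

# Gibbs distribution q*(theta) ∝ p(theta) exp(-ell(theta)), normalized on the grid.
q_star = prior * np.exp(-ell)
q_star /= q_star.sum() * dtheta

# Conjugate Bayesian posterior: Beta(a0 + heads, b0 + tails).
posterior = beta.pdf(theta, a0 + heads, b0 + tails)

print(np.max(np.abs(q_star - posterior)))  # ~0 up to grid discretization error
```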
Entropy-Regularized RL
The Gibbs distribution also comes up in the formulation of Soft Q-Learning. We define an entropy-augmented return [8] as

$$
\sum_{t=0}^{\infty} \gamma^t \left( r_t - \tau\, \mathrm{KL}\left(\pi(\cdot \mid s_t) \,\|\, \bar{\pi}(\cdot \mid s_t)\right) \right)
$$

for a discount factor $\gamma$, instantaneous reward $r_t$, a scalar coefficient $\tau$ effectively balancing the explore/exploit dichotomy, and the KL-divergence measuring the instantaneous distance between our policy $\pi$ and a reference policy $\bar{\pi}$ at state $s_t$. Without getting into the details, this definition leads us to the state value function at a state $s$ under policy $\pi$ as

$$
V_\pi(s) = \mathbb{E}_{a \sim \pi(\cdot \mid s)}\left[Q_\pi(s, a)\right] - \tau\, \mathrm{KL}\left(\pi(\cdot \mid s) \,\|\, \bar{\pi}(\cdot \mid s)\right)
$$

$Q_\pi(s, a)$ is the action value function under the policy $\pi$ at state $s$ under action $a$. A natural optimal policy to define would be the one that greedily maximizes this state value function. Hence, putting this in the form of our minimization problem earlier, we want to minimize the objective

$$
\mathbb{E}_{a \sim \pi(\cdot \mid s)}\left[-\frac{1}{\tau} Q_\pi(s, a)\right] + \mathrm{KL}\left(\pi(\cdot \mid s) \,\|\, \bar{\pi}(\cdot \mid s)\right)
$$

over a family of policies $\Pi$ given a reference policy $\bar{\pi}$.
By symmetry with our earlier problem, $\ell(a) = -\frac{1}{\tau} Q_\pi(s, a)$, and the optimal distribution comes out to be a Gibbs distribution of the form

$$
\pi^\star(a \mid s) = \frac{\bar{\pi}(a \mid s) \exp\left(Q_\pi(s, a) / \tau\right)}{\int \bar{\pi}(a' \mid s) \exp\left(Q_\pi(s, a') / \tau\right) da'}
$$

This formulation forms the foundation of Soft Q-Learning and its derivatives. Intuitively, it provides a straightforward path to build an optimal policy once we've arrived at the correct action value function. In the limit $\tau \to \infty$, we recover our reference policy $\bar{\pi}$, which basically amounts to always exploring under $\bar{\pi}$ and never exploiting the learned action values.
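As a small illustration (names and numbers are mine, not from the Soft Q-Learning paper), the optimal policy above is just a reference-weighted softmax over Q-values, and the two limits of $\tau$ behave as described:

```python
import numpy as np

def soft_policy(q_values, ref_policy, tau):
    """Gibbs policy: pi(a|s) ∝ ref_policy(a|s) * exp(Q(s,a) / tau)."""
    # Subtract the max Q-value for numerical stability; it cancels on normalization.
    w = ref_policy * np.exp((q_values - q_values.max()) / tau)
    return w / w.sum()

q_values = np.array([1.0, 2.0, 0.5, 3.0])    # illustrative Q(s, a) over 4 actions
ref = np.full(4, 0.25)                       # uniform reference policy

print(soft_policy(q_values, ref, tau=0.1))   # ~greedy: nearly all mass on argmax Q
print(soft_policy(q_values, ref, tau=100.0)) # ~reference: close to uniform
```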
General Bayes
We make a small final note. The traditional paradigm of Bayesian inference forces us to define a likelihood model $p(\mathcal{D} \mid \theta)$ and a prior $p(\theta)$ to arrive at the posterior as

$$
p(\theta \mid \mathcal{D}) = \frac{p(\mathcal{D} \mid \theta)\, p(\theta)}{\int p(\mathcal{D} \mid \theta)\, p(\theta)\, d\theta}
$$

If one believes in Cox's axioms, this formulation is very principled and effective, backed by decades of evidence. However, as with all inference, Bayesian inference also breaks under a violation of its assumptions, such as a misspecified model likelihood or a misspecified prior, more so in some problems than others. For instance, it is still a hard problem to come up with a prior for neural networks, as we don't have a complete understanding of what such a prior would really mean: does a Gaussian prior over each unit mean something?
A broader family of work, inspired by Bayesian inference, instead avoids the terms model likelihood and prior altogether. Connecting this back to our original optimization problem, instead of viewing $p(\theta)$ as a prior, it serves as a way for us to represent favorable predictors of the data in the space of distributions $\mathcal{P}$. Similarly, instead of a likelihood, we describe the loss $\ell(\theta)$ as merely an instrument which guides us towards a better predictive algorithm. $q^\star(\theta)$ doesn't really remain a posterior in the faithful Bayesian sense but is often referred to as a pseudo-posterior. This approach certainly sounds promising, although it is still in the early stages of development [9] [7] for the modern machine learning world.
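To make the idea of a pseudo-posterior slightly more concrete, here is a toy sketch under my own illustrative setup (a 1-D threshold classifier with a misclassification-count loss, which is not the negative log of any likelihood): we can still form $q^\star(\theta) \propto p(\theta) \exp\left(-\ell(\theta)\right)$ and treat it as a distribution over predictors.

```python
import numpy as np
from scipy.stats import norm

# Toy setup: classify points by a threshold, y = +1 if x >= theta else -1.
# The loss is the number of training mistakes; it is not a log-likelihood.
x = np.array([-2.1, -1.3, -0.4, 0.2, 0.9, 1.7, 2.5])
y = np.array([-1, -1, -1, 1, 1, 1, 1])

theta = np.linspace(-4.0, 4.0, 4001)
dtheta = theta[1] - theta[0]
prior = norm.pdf(theta, loc=0.0, scale=2.0)   # a preference over predictors

# ell(theta) = number of training mistakes made by threshold theta.
preds = np.where(x[None, :] >= theta[:, None], 1, -1)   # shape (grid, data)
ell = np.sum(preds != y[None, :], axis=1).astype(float)

# Pseudo-posterior: q*(theta) ∝ p(theta) exp(-ell(theta)), normalized on the grid.
q = prior * np.exp(-ell)
q /= q.sum() * dtheta

print(theta[np.argmax(q)])   # mass concentrates on thresholds that make few mistakes
```

This is the flavor of object studied in the PAC-Bayesian literature [9].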
Footnotes
1. Boyd, S., & Vandenberghe, L. (2004). Convex Optimization. Cambridge University Press.
2. Bishop, C.M. (2006). Pattern Recognition and Machine Learning (Information Science and Statistics). Springer.
3. Donsker, M., & Varadhan, S. (1975). Asymptotic evaluation of certain Markov process expectations for large time.
4. Zellner, A. (1988). Optimal Information Processing and Bayes's Theorem. The American Statistician, 42, 278-280.
5. Csiszár, I. (1975). I-Divergence Geometry of Probability Distributions and Minimization Problems. Annals of Probability, 3, 146-158.
6. Futami, F., Sato, I., & Sugiyama, M. (2018). Variational Inference based on Robust Divergences. AISTATS.
7. Knoblauch, J., Jewson, J., & Damoulas, T. (2019). Generalized Variational Inference. arXiv:1904.02063.
8. Schulman, J., Abbeel, P., & Chen, X. (2017). Equivalence Between Policy Gradients and Soft Q-Learning. arXiv:1704.06440.
9. Guedj, B. (2019). A Primer on PAC-Bayesian Learning. arXiv:1901.05353.