Topics in Bayesian Machine Learning

A resource document of entry points into Bayesian inference.

I want this to be a helpful resource for newcomers to the field of Bayesian machine learning. The objective here is to collect relevant literature that brings insights into modern inference methods. Of course, this requires me to extract insights myself to be sure that the papers I include are meaningful. Therefore, this post remains a living document.

I will post commentary, when I can, on what to expect when reading the material. Often, however, I will simply list materials in what I consider a recommended reading order. A recommendation for the overall sequence in which topics should be studied is harder to prescribe. I used to suggest that this not be your first excursion into machine learning, but I now encourage this perspective to be your first foray into the field.

The Big Picture

When diving deep into a topic, we often find ourselves too close to the action. It is important to start with and keep the bigger picture in mind. I recommend the following to get a feel for the fundamental thesis around being Bayesian. It is not a silver bullet, but a set of common-sense principles to abide by.

Less so now, but arguments around the subjectivity of the prior are often raised. This is unfortunately a misdirected argument, because without subjectivity, “learning” cannot happen and is in general an ill-defined problem to tackle. Subjective priors are not, however, the only thing that being Bayesian brings to the table.

Many people, including seasoned researchers, have the wrong idea of what it means to be Bayesian. Imposing prior assumptions does not by itself make one a Bayesian; in that sense, everyone is a Bayesian, because every algorithm is built starting from priors, whether its designer knows it or not. I die a little when people equate Bayesian methods with simply regularizing using the prior; that effect is often misconstrued. For instance, take a look at this fun post by Dan Simpson, “The king must die”, on why simply assuming a Laplace prior does not imply sparse solutions, unlike its popular maximum a-posteriori variant known as the Lasso.

When explaining the data using a model, we usually have many competing hypotheses available, which naturally leads to the model selection problem. Occam’s razor advocates choosing the simplest explanation that accounts for the data. Bayesian inference shines here as well, automatically embodying this “principle of parsimony”.
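
The mechanism behind this is the marginal likelihood (evidence). As a reminder, in generic notation not tied to any particular reference above:

$$
p(\mathcal{D} \mid \mathcal{M}_i) = \int p(\mathcal{D} \mid \theta, \mathcal{M}_i)\, p(\theta \mid \mathcal{M}_i)\, \mathrm{d}\theta, \qquad p(\mathcal{M}_i \mid \mathcal{D}) \propto p(\mathcal{D} \mid \mathcal{M}_i)\, p(\mathcal{M}_i).
$$

A very flexible model must spread its prior predictive mass over many possible datasets and therefore assigns lower evidence to any particular one, so simpler models that still explain the data are preferred automatically.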

Bayesian model averaging (BMA) is another perk enjoyed by Bayesians, which allows for soft model selection. See Bayesian Model Averaging: A Tutorial for a classic reference. Andrew G. Wilson clarifies the value it adds in a technical report titled The Case for Bayesian Deep Learning. Unfortunately, BMA is often misconstrued as model combination. Minka dispels any misunderstandings in this regard in his technical note Bayesian model averaging is not model combination.
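
For quick reference, and in the same generic notation as above, BMA is simply the law of total probability applied across models:

$$
p(y \mid \mathcal{D}) = \sum_{i} p(y \mid \mathcal{M}_i, \mathcal{D})\, p(\mathcal{M}_i \mid \mathcal{D}).
$$

Predictions are weighted by the posterior over models, and as data accumulates this posterior typically concentrates on a single model, which is the sense in which BMA is soft model selection rather than model combination.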

The Frequentist-vs-Bayesian debate has unfortunately occupied more minds than it should have. Any new entrant to the field will undoubtedly come across this debate and be forced to take a stand (make sure you don’t fall for the trap). Christian Robert’s answer on Cross Validated is the best technical introduction to start with. Then, I highly recommend this talk by a dominant figure in the field, Michael Jordan, titled Bayesian or Frequentist, Which Are You? (Part I, Part II). Having read and listened to all this, one should always keep Robert E. Kass’s excellent exposition Statistical Inference: The Big Picture on their reading list. Every time someone starts this debate again, ask them to read that first.

Gelman and Yao describe Holes in Bayesian Statistics, which may be a worthwhile read at a later stage.

The literature many times erroneously claims that Bayes does not overfit. This is false. It is prudent to keep in mind that Bayesian models are prone to overfitting just like any other statistical model; only the degree of overfitting varies. See Yao’s post for a simple argument, where overfitting is defined as a positive generalization gap (the difference between test error and train error).3

On a concluding note, I would refrain from labeling anyone, or any algorithm, as exclusively Bayesian. If one is still hell-bent on being labeled, remember that keeping an open mind is the hallmark of a true Bayesian.

Utility References

References for those tricky identities that come up commonly, so that one doesn’t always have to remember them.
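
As an example of the kind of identity meant here, the standard Gaussian conditioning result (stated from memory, not drawn from any specific reference below):

$$
\begin{bmatrix} \mathbf{x} \\ \mathbf{y} \end{bmatrix} \sim \mathcal{N}\!\left( \begin{bmatrix} \boldsymbol{\mu}_x \\ \boldsymbol{\mu}_y \end{bmatrix}, \begin{bmatrix} \mathbf{A} & \mathbf{C} \\ \mathbf{C}^\top & \mathbf{B} \end{bmatrix} \right) \;\Longrightarrow\; \mathbf{x} \mid \mathbf{y} \sim \mathcal{N}\!\left( \boldsymbol{\mu}_x + \mathbf{C}\mathbf{B}^{-1}(\mathbf{y} - \boldsymbol{\mu}_y),\; \mathbf{A} - \mathbf{C}\mathbf{B}^{-1}\mathbf{C}^\top \right).
$$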

Topics

Gaussian Processes

Gaussian Process (GP) research interestingly started as a consequence of the popularity and early success of neural networks.

Sparse Gaussian Processes

The non-parametric nature of Gaussian Processes is slightly at odds with scalability, but considerable progress has been made from first principles in this regard as well.
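
To make the tension concrete, in standard GP regression notation (a generic reminder, not specific to any one paper): the exact posterior predictive mean requires solving against the full n × n kernel matrix,

$$
\mathbb{E}[f_\star \mid \mathcal{D}] = \mathbf{k}_\star^\top \left( \mathbf{K} + \sigma^2 \mathbf{I} \right)^{-1} \mathbf{y}, \qquad \mathbf{K} \in \mathbb{R}^{n \times n},
$$

which costs $\mathcal{O}(n^3)$; sparse approximations instead summarize the data through $m \ll n$ inducing points, bringing the cost down to roughly $\mathcal{O}(nm^2)$.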

Covariance Functions

Covariance functions are the way we describe our inductive biases in a Gaussian Process model and hence deserve a separate section altogether.
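
As a small, generic illustration of how a covariance function encodes inductive biases (a minimal NumPy sketch, not taken from any reference here), consider the squared-exponential (RBF) kernel, where the lengthscale controls how quickly function values decorrelate with distance:

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance: k(x, x') = variance * exp(-||x - x'||^2 / (2 * lengthscale^2))."""
    # Pairwise squared Euclidean distances between rows of X1 and X2.
    sqdist = np.sum(X1**2, axis=1)[:, None] + np.sum(X2**2, axis=1)[None, :] - 2 * X1 @ X2.T
    return variance * np.exp(-0.5 * sqdist / lengthscale**2)

# A short lengthscale encodes rapidly varying functions, a long one smooth, slowly varying ones.
X = np.linspace(0, 1, 5)[:, None]
K_wiggly = rbf_kernel(X, X, lengthscale=0.1)
K_smooth = rbf_kernel(X, X, lengthscale=2.0)
```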

Monte Carlo algorithms

Monte Carlo algorithms are used for exact inference in scenarios where closed-form inference is not possible.
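
As a minimal sketch of the idea (a generic example, not from any reference here): given samples from the target distribution, any expectation is approximated by a sample average, with error shrinking at the usual $\mathcal{O}(1/\sqrt{S})$ rate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a posterior we can sample from but not integrate in closed form.
samples = rng.normal(loc=1.5, scale=0.7, size=10_000)

# Monte Carlo estimate of E[theta^2]; the true value here is 1.5^2 + 0.7^2 = 2.74.
estimate = np.mean(samples**2)
std_error = np.std(samples**2) / np.sqrt(len(samples))  # Monte Carlo standard error
```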

Markov Chain Monte Carlo

Simple Monte Carlo algorithms rely on independent samples from the target distribution to be useful. Relaxing the independence assumption leads to correlated samples via the Markov Chain Monte Carlo (MCMC) family of algorithms.
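
A minimal random-walk Metropolis sketch, assuming only that we can evaluate an unnormalized log-density log_prob (an illustrative example, not drawn from any reference here):

```python
import numpy as np

def random_walk_metropolis(log_prob, x0, n_samples=5000, step_size=0.5, seed=0):
    """Generate correlated samples whose stationary distribution is proportional to exp(log_prob)."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_prob(x0)
    samples = []
    for _ in range(n_samples):
        proposal = x + step_size * rng.standard_normal()  # symmetric Gaussian proposal
        lp_prop = log_prob(proposal)
        if np.log(rng.uniform()) < lp_prop - lp:          # Metropolis acceptance rule
            x, lp = proposal, lp_prop
        samples.append(x)                                 # a rejected move repeats the current state
    return np.array(samples)

# Example: sample from an unnormalized standard Gaussian.
chain = random_walk_metropolis(lambda x: -0.5 * x**2, x0=0.0)
```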

The following readings are only worthwhile after one has played more closely with MCMC algorithms.

Variational Inference

Pathologies

PRML Chapter 10 1 shows the zero-forcing behavior of the KL term involved in variational inference, which results in underestimating the uncertainty when unimodal approximations are used for multimodal true distributions. This, however, should not be considered a law of the universe, but only a rule of thumb, as clarified by Turner et al. in Counterexamples to variational free energy compactness folk theorems. Rainforth et al. show that tighter variational bounds are not necessarily better.
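
In standard notation (a generic reminder, not specific to the references above), variational inference minimizes the “reverse” KL divergence

$$
\mathrm{KL}\left( q \,\|\, p \right) = \int q(\mathbf{z}) \log \frac{q(\mathbf{z})}{p(\mathbf{z} \mid \mathbf{x})} \, \mathrm{d}\mathbf{z},
$$

whose integrand blows up wherever $q$ places mass where the true posterior has essentially none. A unimodal $q$ minimizing this objective therefore tends to lock onto a single mode rather than spread across all of them, which is the zero-forcing behavior above.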

Modeling with Bayes

This is a topic that is often treated as an implicit skill, but one of the benefits of Bayesian inference is its explicit approach to defining which variables we care about and how those variables connect.
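
As a toy illustration of what this looks like (a generic Bayesian linear regression written in standard notation, not drawn from any reference here), every modeling choice is stated explicitly as a variable and a distribution tying it to the others:

$$
\mathbf{w} \sim \mathcal{N}(\mathbf{0}, \alpha^2 \mathbf{I}), \qquad \sigma \sim \mathrm{HalfNormal}(1), \qquad y_i \mid \mathbf{x}_i, \mathbf{w}, \sigma \sim \mathcal{N}(\mathbf{w}^\top \mathbf{x}_i, \sigma^2), \quad i = 1, \dots, n.
$$

Inference then answers queries about the unobserved variables given the observed ones.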

Model-Based Machine Learning provides a very nice introductory resource through real-world examples. I recommend reading this book after a first pass through all the other basics.

Research Venues

Cutting-edge research is a good way to sense where the field is headed. Here are a few venues that I occasionally sift through.

Books

Think Bayes by Allen B. Downey is an excellent book for beginners.

Acknowledgements

I’m inspired by Yingzhen Li’s resourceful document on Topics in Approximate Inference (2017). Many of the interesting references also come from discussions with my advisor, Andrew Gordon Wilson.

Footnotes

  1. Bishop, Christopher M. “Pattern Recognition and Machine Learning (Information Science and Statistics).” (2006). https://www.microsoft.com/en-us/research/uploads/prod/2006/01/Bishop-Pattern-Recognition-and-Machine-Learning-2006.pdf

  2. MacKay, D. (2003). Information Theory, Inference, and Learning Algorithms. Cambridge University Press. https://www.inference.org.uk/mackay/itila/

  3. Clarke, Bertrand S. and Yuling Yao. “A Cheat Sheet for Bayesian Prediction.” (2023). https://arxiv.org/abs/2304.12218

  4. MacKay, D. (1998). Introduction to Gaussian Processes.

  5. Rasmussen, Carl Edward and Christopher K. I. Williams. “Gaussian Processes for Machine Learning.” Adaptive Computation and Machine Learning (2006). https://gaussianprocess.org/gpml/

  6. Schölkopf, B., & Smola, A. (2001). Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press. https://direct.mit.edu/books/book/1821/Learning-with-KernelsSupport-Vector-Machines