Additional Strategies for Confronting the Partition Function

In the previous post, we introduced Boltzmann machines and the infeasibility of computing the gradient of their log-partition function (\nabla_{\theta}\log{Z}). To this end, we explored one strategy for its approximation: Gibbs sampling. Gibbs sampling is a viable alternative because the expression for this gradient simplifies to an expectation over the model distribution, which can be approximated with Monte Carlo samples.

In this post, we’ll highlight the imperfections of even this approach, then present some preferable alternatives.

Pitfalls of Gibbs sampling

To refresh, the two gradients we seek to compute in a reasonable amount of time are:
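Restating them, for the weights (w_{i, j}) and biases (b_i) (the second follows the same pattern as the first, with (x_i) in place of (x_i x_j)):

$$
\nabla_{w_{i, j}}\log{\mathcal{L}} = \mathop{\mathbb{E}}_{x \sim p_{\text{data}}} [x_i x_j] - \mathop{\mathbb{E}}_{x \sim p_{\text{model}}} [x_i x_j]
$$

$$
\nabla_{b_{i}}\log{\mathcal{L}} = \mathop{\mathbb{E}}_{x \sim p_{\text{data}}} [x_i] - \mathop{\mathbb{E}}_{x \sim p_{\text{model}}} [x_i]
$$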

Via Gibbs sampling, we approximate each by:

  1. Burning in a Markov chain w.r.t. our model, then selecting (n) samples from this chain

  2. Evaluating both functions, (x_i x_j) and (x_i), at these samples

  3. Taking the average of each

Concretely:
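With (x^{(1)}, \ldots, x^{(n)}) denoting the samples selected from the burned-in chain:

$$
\mathop{\mathbb{E}}_{x \sim p_{\text{model}}} [x_i x_j] \approx \frac{1}{n}\sum_{k=1}^{n} x_i^{(k)} x_j^{(k)}
\qquad
\mathop{\mathbb{E}}_{x \sim p_{\text{model}}} [x_i] \approx \frac{1}{n}\sum_{k=1}^{n} x_i^{(k)}
$$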

We perform this sampling process at each gradient step.

The cost of burning in each chain

Initializing a Markov chain at a random sample incurs a “burn-in” process, which comes at non-trivial cost. Paying this cost at every gradient step adds up quickly. How can we do better?

In the remainder of the post, we’ll explore two new directives for approximating the negative phase more cheaply, and the algorithms they birth.

Directive #1: Cheapen the burn-in process

Stochastic maximum likelihood

SML starts from a simple premise: initialize our chain at a point already close to the model’s true distribution, thereby reducing, or perhaps eliminating, the cost of burn-in altogether. Given this, at what sample do we initialize the chain?

In SML, we simply initialize at the terminal value of the previous chain (i.e. the one we manufactured to compute the gradients of the previous mini-batch). As long as the model has not changed significantly since, i.e. as long as the previous parameter update (gradient step) was not too large, this sample should exist in a region of high probability under the current model.

In code, this might look like:
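A minimal sketch of this training loop, with hypothetical helpers (init_model, gibbs_steps, grad_step) standing in for the machinery from the previous post; this is not the post’s original code:

```python
import numpy as np

def train_sml(data, init_model, gibbs_steps, grad_step, n_epochs=10, batch_size=64, n_samples=64):
    """Stochastic maximum likelihood (a.k.a. persistent contrastive divergence).

    Hypothetical helpers:
      - init_model() -> initial model parameters
      - gibbs_steps(model, chain, n_steps) -> chain advanced by n_steps Gibbs sweeps
      - grad_step(model, data_batch, model_samples) -> model after one gradient update
    """
    model = init_model()
    # Initialize the persistent chain once, at random binary configurations.
    chain = np.random.binomial(1, .5, size=(n_samples, data.shape[1]))

    for _ in range(n_epochs):
        for start in range(0, len(data), batch_size):
            batch = data[start:start + batch_size]
            # Resume the chain where the previous gradient step left off; since the
            # model has only moved a little, a few Gibbs steps suffice (little burn-in).
            chain = gibbs_steps(model, chain, n_steps=1)
            model = grad_step(model, batch, chain)
    return model
```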

Implications

Per the expression for the full log-likelihood gradient, e.g. (\nabla_{w_{i, j}}\log{\mathcal{L}} = \mathop{\mathbb{E}}_{x \sim p_{\text{data}}} [x_i x_j] - \mathop{\mathbb{E}}_{x \sim p_{\text{model}}} [x_i x_j]), the negative phase works to “reduce the probability of the points in which the model strongly, yet wrongly, believes”. Since we approximate this term at each parameter update with samples roughly from the current model’s true distribution, we do not encroach on this foundational task.

Contrastive divergence

Alternatively, in the contrastive divergence algorithm, we initialize the chain at each gradient step with a sample from the data distribution.

Implications

With no guarantee that the data distribution resembles the model distribution, we may systematically fail to sample, and thereafter “suppress,” points that are incorrectly likely under the latter (as they do not appear in the former!). This allows the growth of aptly named “spurious modes” in our model.

In code, this might look like:
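A corresponding sketch for contrastive divergence, with the same hypothetical helpers as above; the only change is how we initialize each chain:

```python
import numpy as np

def train_cd(data, init_model, gibbs_steps, grad_step, n_epochs=10, batch_size=64, n_steps=1):
    """Contrastive divergence: re-initialize the chain at data samples at each step."""
    model = init_model()
    for _ in range(n_epochs):
        for start in range(0, len(data), batch_size):
            batch = data[start:start + batch_size]
            # Start the chain at the data itself (no persistent state between steps),
            # then run a handful of Gibbs steps to obtain "model" samples.
            chain = gibbs_steps(model, batch.copy(), n_steps=n_steps)
            model = grad_step(model, batch, chain)
    return model
```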

Cheapening the burn-in phase indeed gives us a more efficient training routine. Moving forward, what are some even more aggressive strategies we can explore?

Directive #2: Skip the computation of (Z) altogether

Noise contrastive estimation

Canonically, we write the log-likelihood of our Boltzmann machine as follows:
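In the notation of the previous post, where (H(x)) gives the (negative) energy of a configuration (x):

$$
\log{\mathcal{L}(x)} = H(x) - \log{Z}
$$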

Instead, what if we simply wrote this as:

$$
\log{\mathcal{L}(x)} = H(x) - c
$$

or, more generally:

$$
\log{p_{\text{model}}(x)} = \log{\tilde{p}_{\text{model}}(x; \theta)} - c
$$

Here, (c) is a new parameter, trained jointly with (\theta), that stands in for (\log{Z}) itself. Of course, we cannot estimate it by maximum likelihood alone: nothing would stop the optimizer from simply driving (c) down without bound, inflating the likelihood of every point. Noise contrastive estimation (NCE) sidesteps this problem by recasting density estimation as a binary classification task: given a sample (x), did it come from our model, or from a “noise” distribution (p_{\text{noise}}) from which we can easily sample?

Under NCE, we’re going to replace two pieces so as to perform this binary classification task (with 1 = “model”, and 0 = “noise”).

First, let’s swap (\log{p_{\text{model}}}(x)) with (\log{p_{\text{joint}}}(y = 0\vert x)), where:

$$
p_{\text{joint}}(x) = \sum_{y} p_{\text{joint}}(x, y) = p_{\text{joint}}(y = 0)\, p_{\text{noise}}(x) + p_{\text{joint}}(y = 1)\, p_{\text{model}}(x)
$$
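By Bayes’ rule, the conditional we’ve swapped in is then:

$$
p_{\text{joint}}(y = 0 \vert x) = \frac{p_{\text{joint}}(y = 0)\, p_{\text{noise}}(x)}{p_{\text{joint}}(y = 0)\, p_{\text{noise}}(x) + p_{\text{joint}}(y = 1)\, p_{\text{model}}(x)}
$$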

Finally:

$$
\theta, c = \underset{\theta, c}{\arg\max}\ \mathbb{E}_{x \sim p_{\text{data}}} [\log{p_{\text{joint}}(y = 0\vert x)}]
$$

This equation:

  1. Builds a classifier that discriminates between samples generated from the model distribution and those generated from the noise distribution, yet trains it only on samples from the latter. (Clearly, this will not make for an effective classifier.)

  2. To train this classifier, we note that the equation asks us to maximize the likelihood of the noise samples under the noise distribution, where the noise distribution itself has no actual parameters we intend to train!

In solution, we trivially expand our expectation to one over both noise samples and data samples. In doing so, in predicting (p_{\text{joint}}(y = 1\vert x) = 1 - p_{\text{joint}}(y = 0\vert x)), we’ll be maximizing the likelihood of the data under the model.

$$
\theta, c = \underset{\theta, c}{\arg\max}\ \mathbb{E}_{x, y \sim p_{\text{train}}} [\log{p_{\text{joint}}(y \vert x)}]
$$

$$
p_{\text{train}}(x \vert y) =
\begin{cases}
p_{\text{noise}}(x) & y = 0 \\
p_{\text{data}}(x) & y = 1
\end{cases}
$$

As a final step, we’ll expand our objective into something more elegant:

Assuming a priori that (p_{\text{joint}}(x, y)) is (k) times more likely to generate a noise sample, i.e. (\frac{p_{\text{joint}}(y = 1)}{p_{\text{joint}}(y = 0)} = \frac{1}{k}):
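Applying Bayes’ rule with this prior, and assuming (p_{\text{train}}) shares the same prior over (y) (i.e. each data sample is paired with (k) noise samples), the posterior and the expanded objective become:

$$
p_{\text{joint}}(y = 1 \vert x) = \frac{p_{\text{model}}(x)}{p_{\text{model}}(x) + k\, p_{\text{noise}}(x)}
$$

$$
\mathbb{E}_{x, y \sim p_{\text{train}}} [\log{p_{\text{joint}}(y \vert x)}]
= \frac{1}{1 + k}\, \mathbb{E}_{x \sim p_{\text{data}}} [\log{p_{\text{joint}}(y = 1 \vert x)}]
+ \frac{k}{1 + k}\, \mathbb{E}_{x \sim p_{\text{noise}}} [\log{p_{\text{joint}}(y = 0 \vert x)}]
$$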

Given a joint training distribution over ((X_{\text{data}}, y=1)) and ((X_{\text{noise}}, y=0)), this is the target we’d like to maximize.

Implications

For our training data, we require the ability to sample from our noise distribution.

For our target, we require the ability to compute the likelihood of some data under our noise distribution.

Therefore, these criteria do place practical restrictions on the types of noise distributions that we’re able to consider.

Extensions

We briefly alluded to the fact that our noise distribution has no trainable parameters. However, there is nothing stopping us from giving it some, then updating these parameters such that it generates increasingly “optimal” samples.

Of course, we would have to design what “optimal” means. One interesting approach is called Adversarial Contrastive Estimation, wherein the authors adapt the noise distribution to generate increasingly “harder negative examples, which forces the main model to learn a better representation of the data.”

Negative sampling

Negative sampling is the same as NCE except:

  1. We consider noise distributions whose likelihood we cannot evaluate

  2. To accommodate, we simply set (p_{\text{noise}}(x) = 1)

Therefore:
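Substituting (p_{\text{noise}}(x) = 1) into the posterior above:

$$
p_{\text{joint}}(y = 1 \vert x) = \frac{p_{\text{model}}(x)}{p_{\text{model}}(x) + k}
$$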

In code

Since I learn best by implementing things, let’s play around. Below, we train Boltzmann machines via noise contrastive estimation and negative sampling.

For this exercise, we’ll fit a Boltzmann machine to the Fashion MNIST dataset.

Below, as opposed to in the previous post, I offer a vectorized implementation of the Boltzmann energy function.

This said, the code is still imperfect: especially re: the line in which I iterate through data points individually to compute the joint likelihood.

Finally, in Model._H, I divide by 1000 to get this thing to train. The following is only a toy exercise (like many of my posts); I did not spend much time tuning parameters.
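As a rough sketch of what such a vectorized energy function might look like (the Model class below, its sizes, and its initialization are guesses at the shape of the real thing, not the post’s actual code; note the division by 1000 mentioned above and the trainable stand-in c for (\log{Z})):

```python
import torch

class Model(torch.nn.Module):

    def __init__(self, n_units):
        super().__init__()
        # Pairwise weights and per-unit biases of a fully visible Boltzmann machine
        self.W = torch.nn.Parameter(.01 * torch.randn(n_units, n_units))
        self.b = torch.nn.Parameter(torch.zeros(n_units))
        self.c = torch.nn.Parameter(torch.zeros(1))  # trainable stand-in for log Z

    def _H(self, x):
        # Vectorized (negative) energy for a batch of configurations x: (batch, n_units).
        # Computes sum_{i < j} w_ij x_i x_j + sum_i b_i x_i, divided by 1000 for stability.
        W = torch.triu(self.W, diagonal=1)        # keep i < j terms only
        pairwise = ((x @ W) * x).sum(dim=1)
        linear = x @ self.b
        return (pairwise + linear) / 1000.

    def log_p_tilde(self, x):
        # Unnormalized log-likelihood: log p~(x) = H(x); log p(x) = H(x) - c
        return self._H(x)
```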

Train a model using noise contrastive estimation. For our noise distribution, we’ll start with a diagonal multivariate Gaussian, from which we can sample, and whose likelihood we can evaluate (as of PyTorch 0.4!).

Train model
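A minimal sketch of what that training might look like, assuming the hypothetical Model above; the batch source (data_loader), k, and the learning rate are placeholders, not the post’s actual code:

```python
import math
import torch

k = 5                                  # noise samples per data sample (assumed)
n_pixels = 28 * 28                     # flattened Fashion MNIST images
model = Model(n_pixels)                # hypothetical Model from the sketch above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Diagonal multivariate Gaussian noise: easy to sample, easy to score via .log_prob
noise = torch.distributions.Normal(loc=torch.zeros(n_pixels), scale=torch.ones(n_pixels))

def nce_loss(x_data):
    x_noise = noise.sample((k * len(x_data),))
    x = torch.cat([x_data, x_noise])
    y = torch.cat([torch.ones(len(x_data)), torch.zeros(len(x_noise))])

    # p_joint(y = 1 | x) = sigmoid(log p_model(x) - log(k * p_noise(x)))
    log_p_model = model._H(x) - model.c              # log p~(x) - c
    log_p_noise = noise.log_prob(x).sum(dim=1)       # diagonal Gaussian: sum over pixels
    logits = log_p_model - log_p_noise - math.log(k)
    return torch.nn.functional.binary_cross_entropy_with_logits(logits, y)

for x_batch, _ in data_loader:                       # data_loader: assumed Fashion MNIST batches
    optimizer.zero_grad()
    loss = nce_loss(x_batch.reshape(-1, n_pixels).float())
    loss.backward()
    optimizer.step()
```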

Next, we’ll try negative sampling, using some actual images as negative samples.

Train model
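A corresponding sketch of the loss, continuing the NCE sketch above (it reuses model, k, and math from that block); with (p_{\text{noise}}(x) = 1), the (\log{p_{\text{noise}}(x)}) term drops out, and the negative samples come from a pool of real images left unspecified here:

```python
def negative_sampling_loss(x_data, x_negative):
    # x_negative: a batch of actual images serving as negative samples (k per data point)
    x = torch.cat([x_data, x_negative])
    y = torch.cat([torch.ones(len(x_data)), torch.zeros(len(x_negative))])

    # With p_noise(x) = 1: p_joint(y = 1 | x) = sigmoid(log p_model(x) - log k)
    logits = model._H(x) - model.c - math.log(k)
    return torch.nn.functional.binary_cross_entropy_with_logits(logits, y)
```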

Sampling

Once more, the (ideal) goal of this model is to fit a function (p(x)) to some data, such that we can:

  1. Evaluate its likelihood (wherein it actually tells us that data to which the model was fit is more likely than data to which it was not)

  2. Draw realistic samples

From a Boltzmann machine, our primary strategy for drawing samples is via Gibbs sampling. It’s slow, and I do not believe it’s meant to work particularly well. Let’s draw 5 samples and see how we do.
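A sketch of that sampling loop, reusing the hypothetical Model from above; each Gibbs sweep resamples every pixel in turn from its conditional, computed here as an energy difference so it stays consistent with however _H is scaled:

```python
import torch

def gibbs_sample(model, n_units, n_sweeps=500):
    x = torch.bernoulli(torch.full((1, n_units), .5))        # random initial configuration
    with torch.no_grad():
        for _ in range(n_sweeps):
            for i in range(n_units):
                x_on, x_off = x.clone(), x.clone()
                x_on[0, i], x_off[0, i] = 1., 0.
                # p(x_i = 1 | x_{-i}) = sigmoid(H(x with x_i=1) - H(x with x_i=0))
                p_on = torch.sigmoid(model._H(x_on) - model._H(x_off))
                x[0, i] = torch.bernoulli(p_on)[0]
    return x
```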

Takes forever!

Nothing great. These samples are highly correlated, if not perfectly identical, as expected.

To generate better images, we’ll have to let this run for a lot longer and “thin” the chain (taking every every_n-th sample, where every_n is on the order of 1, 10, or 100, roughly).

Summary

In this post, we discussed four additional strategies for both speeding up, as well as outright avoiding, the computation of the gradient of the log-partition function (\nabla_{\theta}\log{Z}).

While we only presented toy models here, these strategies see successful application in larger undirected graphical models, as well as directed conditional models for (p(y\vert x)). One key example of the latter is a language model; though the partition function is a sum over distinct values of (y) (labels) instead of configurations of (x) (inputs), it can still be intractable to compute! This is because there are as many distinct values of (y) as there are tokens in the given language’s vocabulary, which is typically on the order of millions.

Thanks for reading.

Code

The repository and rendered notebook for this project can be found at their respective links.

References