The Mathematics Behind: Rejection Sampling

Suppose that we have a probability density function (PDF) $latex f(x)$ that we cannot handle analytically. How can we ever draw samples from this PDF? Luckily, there are many techniques out there, and this time I will highlight rejection sampling: a simple-to-implement (but not always efficient) sampling method.


The goal

Suppose we have a probability density function that is not computationally tractable. How can we ever draw samples from it? One of the many methods out there is rejection sampling. In this article, we will try to draw samples from the following distribution:

The secret function in this plot is $latex f(x) = \frac{3}{5} \mathcal{N}(x \mid \frac{7}{20}, \frac{1}{20}) + \frac{2}{5} \mathcal{N}(x \mid \frac{13}{20}, \frac{2}{25})$, which is a bimodal Gaussian mixture. (Note: this function is mathematically tractable, but it is used here for demonstration purposes.)

Approximation

Now we have to come up with a probability density function $latex g(x)$ such that $latex M g(x) > f(x)$ for all $latex x$. In other words, we need a PDF that, when multiplied by a constant $latex M$, lies above $latex f(x)$ everywhere. In our example, one such function is $latex g(x)=\mathcal{N}(x \mid \frac{9}{20}, \frac{1}{5})$ with $latex M=3$. How did I find this function? First of all, Gaussians are easy to compute. Therefore, I chose a Gaussian with its mean around the "middle" of $latex f(x)$. I then tuned its variance and the constant $latex M$ by simply trying out some values. The result is the following $latex M \cdot g(x)$, which indeed always lies above $latex f(x)$:
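As a quick sanity check, you can verify this inequality numerically on a grid. The sketch below assumes the second parameter of each Gaussian denotes a variance (hence the square roots) and uses SciPy's norm for the densities:

```python
import numpy as np
from scipy.stats import norm

def f(x):
    # Target density: mixture of two Gaussians (second N(.) parameter read as a variance)
    return (3/5) * norm.pdf(x, 7/20, np.sqrt(1/20)) + (2/5) * norm.pdf(x, 13/20, np.sqrt(2/25))

def g(x):
    # Proposal density: a single Gaussian centred roughly between the two modes of f
    return norm.pdf(x, 9/20, np.sqrt(1/5))

M = 3

# Check that M * g(x) > f(x) on a dense grid
xs = np.linspace(-2, 3, 10_000)
assert np.all(M * g(xs) > f(xs)), "Envelope is violated somewhere on the grid"
```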

The mathematics

So, what do we do next? We sample pairs $latex (x, u)$, where $latex x$ is drawn from the approximating Gaussian $latex g(x)$ and $latex u$ is drawn from a uniform distribution on the interval $latex [0, 1]$. We then check whether $latex u < \frac{f(x)}{M \cdot g(x)}$: if that is the case, the point $latex (x, u \cdot M \cdot g(x))$ lies below the function we want to sample from, and we accept $latex x$ as a sample of $latex f(x)$; otherwise we reject it.
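Why does this work? For a fixed $latex x$ drawn from $latex g$, the probability of acceptance is $latex \frac{f(x)}{M \cdot g(x)}$, so the density of the accepted values is proportional to $latex g(x) \cdot \frac{f(x)}{M \cdot g(x)} = \frac{f(x)}{M}$, which is exactly $latex f(x)$ up to a constant. The overall acceptance rate is $latex \frac{1}{M}$, so the smaller we can make $latex M$, the fewer samples we waste.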

Results

The theory can be summarized in a short Python script.
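A minimal, self-contained sketch of such a script (reusing $latex f(x)$, $latex g(x)$, and $latex M$ as defined above, and again reading the second parameter of each Gaussian as a variance) could look like this:

```python
import numpy as np
from scipy.stats import norm

def f(x):
    # Target density: mixture of two Gaussians
    return (3/5) * norm.pdf(x, 7/20, np.sqrt(1/20)) + (2/5) * norm.pdf(x, 13/20, np.sqrt(2/25))

def g(x):
    # Proposal density
    return norm.pdf(x, 9/20, np.sqrt(1/5))

M = 3            # scaling constant such that M * g(x) > f(x) everywhere
n_samples = 10_000
rng = np.random.default_rng(42)

accepted_x, accepted_u = [], []
for _ in range(n_samples):
    x = rng.normal(9/20, np.sqrt(1/5))   # draw x from the proposal g
    u = rng.uniform(0, 1)                # draw u uniformly on [0, 1]
    if u < f(x) / (M * g(x)):            # accept: (x, u * M * g(x)) lies below f
        accepted_x.append(x)
        accepted_u.append(u)

print(f"Acceptance rate: {len(accepted_x) / n_samples:.2f} (roughly 1/M = {1 / M:.2f})")
```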

The resulting samples are the following:

These samples clearly lie below the function from which we wished to sample:

So the method works. However, note that it is not efficient: we throw away many points and therefore waste a lot of computation. A better method is, for example, Markov Chain Monte Carlo (MCMC). Rejection sampling works well for simple probability density functions, but you will run into trouble (read: you throw away a lot of points) if your function has many peaks.

Exercise

Try to draw samples from the following PDF:

$latex f(x) = \frac{1}{5} \mathcal{N}(x \mid \frac{5}{20}, \frac{4}{20}) + \frac{2}{5} \mathcal{N}(x \mid \frac{19}{20}, \frac{6}{25}) + \frac{2}{5} \mathcal{N}(x \mid \frac{24}{20}, \frac{1}{25})$.
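If you want to try this in code, the exercise density can be set up just like before (again reading the second parameter of each Gaussian as a variance); picking a suitable $latex g(x)$ and $latex M$ is left to you:

```python
import numpy as np
from scipy.stats import norm

def f_exercise(x):
    # Exercise target: a three-component Gaussian mixture (weights 1/5 + 2/5 + 2/5 = 1)
    return (1/5) * norm.pdf(x, 5/20, np.sqrt(4/20)) \
         + (2/5) * norm.pdf(x, 19/20, np.sqrt(6/25)) \
         + (2/5) * norm.pdf(x, 24/20, np.sqrt(1/25))
```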