1. Introduction

Restricted Boltzmann Machines are a variant of neural networks. Once trained, they can be used to classify input data. Essentially, an RBM (Restricted Boltzmann Machine) learns the probability distribution of a data set during the training phase and can then classify unknown data using this distribution. The idea behind Restricted Boltzmann Machines is not new; they were first proposed in 1986, but only recently attracted the interest of researchers, after Geoffrey Hinton and his colleagues devised fast learning algorithms that make use of RBMs [2].

RBMs are the foundation for the different layers in Deep Belief Networks (DBN). In these networks, several RBMs are stacked on top of each other to form multiple layers. Each layer learns features from the features extracted by the layer beneath it, resulting in a high-level representation of the data.

In contrast to deterministic networks, an RBM is a stochastic network, meaning that its neurons are not activated by fixed thresholds, but according to probabilities.

[Figure: undirected graph of an RBM]

An RBM consists of visible units $v_i$ and hidden units $h_j$. Each visible unit is connected to every hidden unit and vice versa, forming a bipartite undirected graph. Each connection has a weight $w_{ij}$ associated with it, which is a parameter that gets optimized through training. Furthermore, each visible and hidden unit in the network is biased with a bias term $b_i$ or $c_j$. The visible units represent the observable data, while the hidden units describe the dependencies between the observed variables.

In order to get the probability distribution of the input data, an energy function has to be defined.

1.1 Definition

RBMs are often described with the help of an Energy-Based Model, which means that a scalar energy is associated with each configuration of the variables of interest. For RBMs, the goal is to get low levels of energy for trained values of the variables (i.e. vectors that should be recognized) and high levels of energy for all other values. The energy function of the RBM can be written as follows [6]:

$$E(v, h) = -\sum_i b_i v_i - \sum_j c_j h_j - \sum_{i,j} v_i w_{ij} h_j$$
Where $w_{ij}$ is the weight of the connection between $v_i$ and $h_j$, and $b$ and $c$ are the bias vectors for the visible and hidden layers. To obtain a probability distribution from this function, the exponential function is used:

$$p(v, h) = \frac{e^{-E(v, h)}}{Z}$$
Where $Z = \sum_{v, h} e^{-E(v, h)}$ is the partition function: the sum of $e^{-E(v, h)}$ over all possible configurations of $v$ and $h$. Because $v$ and $h$ are binary in this example, the number of configurations is exponential in the number of units, and $Z$ is therefore intractable to compute exactly (although there are methods to approximate it). From the formula it can be seen that configurations with low energy, i.e. real data which the neural network should recognize, lead to high probabilities, and other data leads to low probabilities.
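To make the energy function and the partition function concrete, here is a minimal sketch in Python (using NumPy) of a toy RBM with 3 visible and 2 hidden units. All sizes and parameter values are illustrative assumptions; $Z$ is computed by brute force here, which is exactly what becomes intractable for realistically sized models.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Toy RBM: 3 visible units, 2 hidden units (illustrative sizes).
n_v, n_h = 3, 2
W = rng.normal(scale=0.1, size=(n_v, n_h))  # weights w_ij
b = np.zeros(n_v)                           # visible biases b_i
c = np.zeros(n_h)                           # hidden biases c_j

def energy(v, h):
    # E(v, h) = -b.v - c.h - v.W.h
    return -(b @ v) - (c @ h) - (v @ W @ h)

def configs(n):
    # All binary vectors of length n (there are 2^n of them).
    return (np.array(x) for x in itertools.product([0, 1], repeat=n))

# Partition function Z: sum of e^{-E(v, h)} over every configuration.
# Feasible only because the model is tiny.
Z = sum(np.exp(-energy(v, h)) for v in configs(n_v) for h in configs(n_h))

v = np.array([1, 0, 1])
h = np.array([0, 1])
print("p(v, h) =", np.exp(-energy(v, h)) / Z)
```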

1.2 Probabilities

The conditional distributions of $h$ given $v$ and of $v$ given $h$ do not depend on the partition function $Z$ and factorize into a product over the individual units:

$$p(h \mid v) = \prod_j p(h_j \mid v)$$
Because the units are binary, the probability of each $h_j$ being active can be written as:

$$p(h_j = 1 \mid v) = \sigma\left(c_j + \sum_i w_{ij} v_i\right)$$

where $\sigma(x) = 1 / (1 + e^{-x})$ is the logistic sigmoid function.
So calculating the probability of each $h_j$ is simple. Because the Restricted Boltzmann Machine is symmetric (an undirected graph), the analogous equations hold in the other direction:

$$p(v \mid h) = \prod_i p(v_i \mid h)$$

and

$$p(v_i = 1 \mid h) = \sigma\left(b_i + \sum_j w_{ij} h_j\right)$$
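Continuing the toy model from the sketch above, these conditionals translate directly into code: each direction is a sigmoid of a weighted sum plus a bias, and sampling a layer is one Bernoulli draw per unit.

```python
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def p_h_given_v(v):
    # p(h_j = 1 | v) = sigmoid(c_j + sum_i w_ij v_i), for all j at once.
    return sigmoid(c + v @ W)

def p_v_given_h(h):
    # p(v_i = 1 | h) = sigmoid(b_i + sum_j w_ij h_j), for all i at once.
    return sigmoid(b + W @ h)

def sample(p):
    # One Bernoulli draw per unit, given its activation probability.
    return (rng.random(p.shape) < p).astype(float)
```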

The unconditional probability $p(v)$ can be written by summing out the hidden units:

$$p(v) = \sum_h p(v, h) = \frac{e^{-F(v)}}{Z}, \qquad F(v) = -\sum_i b_i v_i - \sum_j \log\left(1 + e^{\,c_j + \sum_i w_{ij} v_i}\right)$$

where $F(v)$ is called the free energy.
From the formula for $p(v)$ it can be seen that, in order to get a high probability for an input vector $v$, the corresponding parameters $w$, $b$ and $c$ have to be aligned with $v$.
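In code, $p(v)$ can be computed from the free energy; as a sanity check on the toy model, the result matches the explicit sum of $p(v, h)$ over all hidden configurations (again, only feasible because the model is tiny):

```python
def free_energy(v):
    # F(v) = -b.v - sum_j log(1 + e^{c_j + sum_i w_ij v_i})
    return -(b @ v) - np.sum(np.log1p(np.exp(c + v @ W)))

p_v = np.exp(-free_energy(v)) / Z
p_v_check = sum(np.exp(-energy(v, h)) for h in configs(n_h)) / Z
print(np.isclose(p_v, p_v_check))  # True
```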

1.3 Training objective

In order to obtain the parameters of the RBM, it has to be trained. As shown in the last section, the probability of real input data has to be maximized. To do that, the average negative log-likelihood over the $T$ training vectors $v^{(t)}$ has to be minimized:

$$\frac{1}{T} \sum_{t=1}^{T} -\log p(v^{(t)})$$
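Since $-\log p(v) = F(v) + \log Z$, this objective can be evaluated exactly on the toy model, using $Z$ from above. The two training vectors here are hypothetical:

```python
data = np.array([[1, 0, 1], [1, 1, 0]])  # hypothetical training vectors
nll = np.mean([free_energy(x) + np.log(Z) for x in data])
print("average negative log-likelihood:", nll)
```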
For the minimization, stochastic gradient descent is used. The gradient of the negative log-likelihood for a single training vector decomposes into two terms:

$$\frac{\partial \left(-\log p(v^{(t)})\right)}{\partial \theta} = \mathbb{E}_h\!\left[\frac{\partial E(v^{(t)}, h)}{\partial \theta} \,\middle|\, v^{(t)}\right] - \mathbb{E}_{v, h}\!\left[\frac{\partial E(v, h)}{\partial \theta}\right]$$
Where $\theta$ is the parameter of the RBM that is being optimized ($w$, $b$ or $c$). The positive phase

$$\mathbb{E}_h\!\left[\frac{\partial E(v^{(t)}, h)}{\partial \theta} \,\middle|\, v^{(t)}\right]$$

of the formula is easily calculated, but the negative phase

$$\mathbb{E}_{v, h}\!\left[\frac{\partial E(v, h)}{\partial \theta}\right]$$

depends on the model distribution $p(v, h)$, and thus on $Z$, and therefore has to be approximated. For this, the Contrastive Divergence algorithm can be used [3].
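The following sketch shows a single CD-1 update step (one Gibbs step), reusing the helpers defined above. The learning rate and the use of hidden probabilities instead of samples for the statistics are common choices, not a definitive implementation; see [3] for the practical details.

```python
def cd1_update(v0, lr=0.1):
    global W, b, c  # update the toy model's parameters in place
    # Positive phase statistics: hidden probabilities given the data.
    ph0 = p_h_given_v(v0)
    # One Gibbs step: sample h, reconstruct v, recompute hidden probabilities.
    h0 = sample(ph0)
    v1 = sample(p_v_given_h(h0))
    ph1 = p_h_given_v(v1)
    # CD-1 gradient approximation: data statistics minus reconstruction
    # statistics, in place of the intractable negative phase.
    W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
    b += lr * (v0 - v1)
    c += lr * (ph0 - ph1)
```

Calling `cd1_update` repeatedly on the training vectors should decrease the average negative log-likelihood computed above (recomputing $Z$ after the updates, since it depends on the parameters).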

 

2. Example

To understand more clearly how an RBM works, the following video [1] can be used as an example. In this video, digits from the MNIST database [5] are presented to a Restricted Boltzmann Machine. So in this case, the RBM is given an image recognition task, which is not the topic of this website; instead of images, voice (or voice features) could be used just as well, but images are better suited for visualization. The video shows the input digit on the left and the RBM's reconstruction on the right.

The video starts with random weights. As can be seen quite clearly, the output is only noise and not usable. Next, an RBM that has been pre-trained is used. In this case, the regenerated output is quite good, even though the Restricted Boltzmann Machine sometimes confuses digits that look alike (e.g. 2 and 7). In the last part, the weights have been fine-tuned with the back-propagation algorithm, showing good results but still occasional ambiguities.

 

References

[1] AIdemos. (2010). RBM demo regenerating images. Retrieved from https://www.youtube.com/watch?v=0LTG64s6Xuc.

[2] Hinton, G., Osindero, S., & Teh, Y. W. (2006). A fast learning algorithm for deep belief nets. Neural Computation, 18(7), 1527-1554.

[3] Hinton, G. (2010). A practical guide to training restricted Boltzmann machines. Technical Report UTML TR 2010-003, University of Toronto.

[4] Larochelle, H. (2013). Neural networks (online course material). Retrieved from http://info.usherbrooke.ca/hlarochelle/neural_networks/content.html.

[5] LeCun, Y., Cortes, C., & Burges, C. J. C. (1998). The MNIST database of handwritten digits. Retrieved from http://yann.lecun.com/exdb/mnist/.

[6] Salakhutdinov, R., & Larochelle, H. (2010). Efficient learning of deep Boltzmann machines. In International Conference on Artificial Intelligence and Statistics (pp. 693-700).

