1. General concept

The previous page describes the concept of using Hidden Markov Models (HMMs) for speech recognition. One important part of the model is the state output distribution, which is a statistical description of the acoustic feature vector. The scope of this article is to explain how Gaussian Mixture Models (GMMs) can be applied for that purpose. A Gaussian Mixture Model is a highly flexible statistical distribution which is able to model almost any set of data; a general introduction can be found in the basics section. The distribution of the acoustic feature vectors obtained by feature extraction is in practice often multimodal, for example due to speaker, accent and gender differences. GMMs are therefore well suited to represent the feature vector and are applied in many Automatic Speech Recognition (ASR) systems. In order to implement such a system, it is necessary to learn the model parameters; for GMMs this is typically done with the Expectation Maximization (EM) algorithm.
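
As a concrete illustration of EM training (not part of the original text; the data and all parameter choices are purely illustrative), the following sketch fits a two-component GMM to a synthetic bimodal data set with scikit-learn's GaussianMixture:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Toy bimodal "feature" data, standing in e.g. for two speaker groups
    x = np.concatenate([rng.normal(-2.0, 0.5, size=(500, 2)),
                        rng.normal(+2.0, 0.8, size=(500, 2))])

    # fit() runs Expectation Maximization; 'diag' restricts every component
    # to a diagonal covariance matrix (see the complexity analysis below)
    gmm = GaussianMixture(n_components=2, covariance_type="diag").fit(x)
    print(gmm.weights_)   # learned mixture weights
    print(gmm.means_)     # learned component means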

2. Complexity Analysis

The feature vector dimension in ASR systems is typically around 40. If we assume for instance 12 MFC coefficients plus the signal energy, the static feature vector has 13 dimensions. Taking time dependency into account and adding delta and delta-delta parameters as well, we get a dynamic feature vector with d = 3 · 13 = 39 dimensions. For a given number of mixture components M we can now calculate the parameter count of the covariance matrices (a small counting sketch follows the list below).

  • Full covariance matrix: M·d² parameters
    For M = 10 one already gets more than 15,000 parameters per state (15,210 for d = 39), certainly too much for a practical system that should operate in real time.

  • Diagonal covariance matrix: M·d parameters
    If we take into account that the components of the feature vector are, at least ideally, uncorrelated, then diagonal covariance matrices are sufficient to describe the GMM. With this approach the number of mixture components is typically on the order of 10-20. This approach was for instance used by "The 1998 HTK system for transcription of conversational telephone speech" developed at the University of Cambridge [2], which applied 16-component GMMs.

  • Tied (same) diagonal covariance matrix for all components: d parameters
    In order to further reduce the number of parameters, one can also use the same covariance matrix for all components. However, this requires increasing the number of components (typically >100) to get a good representation of the feature vector. Since each state is then described by a vector in a low-dimensional subspace of the full parameter space, such a model is called a subspace GMM.
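
As mentioned above, the counting can be made concrete in a few lines. The sketch below is my own illustration; the values of M are taken from the bullet points and d = 39 from the paragraph above.

    def covariance_params(d, M, structure):
        """Number of covariance parameters per state for the three cases above."""
        if structure == "full":
            return M * d * d      # one full d x d matrix per component
        if structure == "diag":
            return M * d          # one diagonal per component
        if structure == "tied_diag":
            return d              # a single diagonal shared by all components
        raise ValueError(structure)

    d = 39
    print(covariance_params(d, M=10, structure="full"))        # 15210 (> 15,000)
    print(covariance_params(d, M=16, structure="diag"))        # 624
    print(covariance_params(d, M=100, structure="tied_diag"))  # 39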

3. Subspace GMM

The subspace approach has become more popular in recent years. We therefore analyse a very basic form of a subspace GMM. The conditional distribution of the feature vector x given the state j can be written as

    p(x | j) = Σ_{i=1}^{I} w_ji · N(x; μ_ji, Σ_i)

where I is the number of mixture components.

The mean of component i in state j is derived from the state-specific vector v_j:

    μ_ji = M_i · v_j

The same applies for the mixture weights:

    w_ji = exp(w_i^T v_j) / Σ_{i'=1}^{I} exp(w_i'^T v_j)

The matrices M_i and the vectors w_i are globally shared constants (the covariances Σ_i are shared as well); x is the feature vector and v_j a state-specific vector of dimension S. One can see that the difference between this subspace GMM and a usual GMM is that the means and weights of the GMM are not themselves parameters of the overall model. Instead there exists a vector v_j for each state, and the means and weights of its GMM are defined by a globally shared mapping from v_j. Counting the parameters describing the GMM, namely means and weights, yields I·(d+1) parameters per state, whereas each state is described by only S parameters. The dimension S is typically much smaller than I·(d+1). This is exactly why the model is called "subspace GMM".
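
The mapping from the state vector to the per-state GMM parameters can be spelled out in a few lines. The following sketch is my own illustration; all dimensions and the randomly drawn global parameters are toy values:

    import numpy as np

    I, d, S = 4, 39, 10                  # components, feature dim, subspace dim (toy values)
    rng = np.random.default_rng(0)
    M = rng.standard_normal((I, d, S))   # globally shared projection matrices M_i
    w = rng.standard_normal((I, S))      # globally shared weight vectors w_i
    v_j = rng.standard_normal(S)         # state-specific vector (S parameters per state)

    mu_j = M @ v_j                       # means: mu_ji = M_i v_j, shape (I, d)
    logits = w @ v_j                     # unnormalised log-weights w_i^T v_j
    w_j = np.exp(logits) / np.exp(logits).sum()   # softmax gives the mixture weights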

The model given here can be extended. For instance, speaker adaptation can be included in the model by adding a speaker-dependent offset to the means; to avoid excessive speaker-dependent computation, the weights can be left unchanged. A subspace GMM system typically has 2-4 times fewer parameters than a standard GMM system, yet it outperforms the standard system [3].
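
Continuing the sketch above, the speaker-adapted means could then look as follows; the names N and s for the speaker projections and the speaker vector are my own choice, and the mixture weights are reused unchanged:

    T = 10                               # speaker-subspace dimension (toy value)
    N = rng.standard_normal((I, d, T))   # globally shared speaker projection matrices
    s = rng.standard_normal(T)           # vector estimated once per speaker
    mu_js = M @ v_j + N @ s              # speaker-adapted means, shape (I, d)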

References

[1] Gales, Mark, and Steve Young. "The application of hidden Markov models in speech recognition." Foundations and Trends in Signal Processing 1.3 (2008): 195-304.

[2] Hain, Thomas, et al. "The 1998 HTK system for transcription of conversational telephone speech." Proceedings of the 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Vol. 1. IEEE, 1999.

[3] Povey, Daniel, et al. "Subspace Gaussian mixture models for speech recognition." Proceedings of the 2010 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2010.

