The n-gram model is an approach in language modelling to determine the most probable word sequence among several candidate word sequences. By means of a probability model, the probability of each possible word sequence can be computed. The desired word sequence is the one with the greatest probability.

 

1 Motivation

 

Generally, the language model works in two steps. In the first step, the phoneme sequence generated by the acoustic model is used to determine very probable word sequences. Since there are several possible words for each time step, there are many possible word combinations which might have been spoken. The task of the second step is to choose, from all possible word sequences, the sentence which is most probable. For this, there are different approaches in the literature. One approach is based on the underlying grammatical structure and is known as the grammar model. The grammatical rules of a language are used to choose the word sequence which is grammatically correct. Suppose there are two possible word sequences, "I ball" and "I play". The grammar model then decides for the word sequence "I play", since it knows that it is more probable that a verb follows the word "I" than a noun. A more common approach is the so-called stochastic language model. In this case, the probability of each possible word sequence is computed, and the word sequence with the greatest probability was most probably spoken by the user of the speech recognition software. The probability of each word sequence is determined by means of n-grams, where an n-gram is a sequence of n words. The basic concept of n-grams is described in this article.

 

2 The n-gram Model

 

Before providing more information on the n-gram model, we first want to consider how to calculate the probability of a sentence. Assume that the sentence W consists of N words w_1, w_2, \dots, w_N. Then the probability P(W) can be determined as follows:

P(W) = P(w_1, w_2, \dots, w_N) = P(w_1) \cdot P(w_2 | w_1) \cdot P(w_3 | w_1, w_2) \cdot \ldots \cdot P(w_N | w_1, \dots, w_{N-1})

The probability P(w_i | w_1, \dots, w_{i-1}) is very hard or even impossible to determine, since many word sequences do not appear often or are unique. It is therefore reasonable to assume that the word w_i only depends on its last (n-1) predecessor words:

P(w_i | w_1, \dots, w_{i-1}) \approx P(w_i | w_{i-n+1}, \dots, w_{i-1})

This approach is called the n-gram model in the literature. In earlier times, n-gram models of order n = 2, also called bigrams, were very common. Nowadays, trigrams (n = 3) or even larger values of n are used, since computational power has increased in recent years. Note that only bigrams will be used in the remainder of this article, since the extension from the bigram model to the trigram model and the general n-gram model is straightforward.

Using the n-gram model, the probability of a sentence becomes:

P(W) \approx \prod_{i=1}^{N} P(w_i | w_{i-n+1}, \dots, w_{i-1})

For example, using the bigram model, the probability of the sentence "Anne studies in Munich" can be calculated by

P(Anne studies in Munich) = P(Anne | <s>) \cdot P(studies | Anne) \cdot P(in | studies) \cdot P(Munich | in) \cdot P(</s> | Munich)

Note that the <s> token is used as a sentence start, such that the probability P(Anne | <s>) is defined. The </s> token symbolizes the end of a sentence, such that the probabilities of all possible word sequences sum to one.
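
The following is a minimal Python sketch of this computation, assuming the bigram probabilities are already known and stored in a dictionary (the function name and the numerical values are only placeholders for illustration):

    # bigram_probs maps (previous word, current word) -> P(current | previous)
    bigram_probs = {
        ("<s>", "Anne"): 2/3,
        ("Anne", "studies"): 1.0,
        ("studies", "in"): 2/3,
        ("in", "Munich"): 1/2,
        ("Munich", "</s>"): 1.0,
    }

    def sentence_probability(words, bigram_probs):
        """Multiply the bigram probabilities of a sentence framed by <s> and </s>."""
        tokens = ["<s>"] + words + ["</s>"]
        prob = 1.0
        for prev, curr in zip(tokens, tokens[1:]):
            prob *= bigram_probs.get((prev, curr), 0.0)  # unseen bigrams get probability 0
        return prob

    print(sentence_probability(["Anne", "studies", "in", "Munich"], bigram_probs))  # 0.2222...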

  

3 Probabilities of n-grams

 
First of all, a training set is needed to calculate the probabilities of the n-grams for a given set of word sequences. The training set usually consists of many millions of words. The probability of an n-gram can be estimated as follows:

P(w_i | w_{i-n+1}, \dots, w_{i-1}) = \frac{C(w_{i-n+1}, \dots, w_{i-1}, w_i)}{C(w_{i-n+1}, \dots, w_{i-1})}

where C(\cdot) is the number of occurrences of the word sequence in the training set. Using the above definition, the probability of a bigram is given by:

P(w_i | w_{i-1}) = \frac{C(w_{i-1}, w_i)}{C(w_{i-1})}
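
A minimal Python sketch of this estimation for the bigram case, assuming the training set is given as a list of tokenized sentences (the function and variable names are chosen only for illustration):

    from collections import Counter

    def estimate_bigram_probs(sentences):
        """Estimate P(w_i | w_{i-1}) = C(w_{i-1}, w_i) / C(w_{i-1}) from tokenized sentences."""
        unigram_counts = Counter()
        bigram_counts = Counter()
        for sentence in sentences:
            tokens = ["<s>"] + sentence + ["</s>"]
            unigram_counts.update(tokens[:-1])             # C(w_{i-1}); </s> never starts a bigram
            bigram_counts.update(zip(tokens, tokens[1:]))  # C(w_{i-1}, w_i)
        return {
            (prev, curr): count / unigram_counts[prev]
            for (prev, curr), count in bigram_counts.items()
        }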

 

4 A small example

 

Let's consider a small example: We know that either the sentence "Anne studies in Munich" or the sentence "Anton studies in Munich" was probably spoken by some speaker. Now it is of interest which of them was most probably spoken. The procedure is as follows. First, we determine the probability of each sentence, P(Anne studies in Munich) and P(Anton studies in Munich), using the bigram model. Then it is decided which sentence was spoken by the speaker by comparing both probabilities. The sentence with the greater probability was more probably spoken.
In this example the training set consists of the following three sentences: "Anne studies in Munich. Anton studies in Nuremberg. Anne studies electrical engineering." The training set is used to determine the probabilities of the required bigrams:

P(Anne | <s>) = \frac{C(<s>, Anne)}{C(<s>)} = \frac{2}{3}

P(Anton | <s>) = \frac{C(<s>, Anton)}{C(<s>)} = \frac{1}{3}

P(studies | Anne) = \frac{C(Anne, studies)}{C(Anne)} = \frac{2}{2} = 1

P(studies | Anton) = \frac{C(Anton, studies)}{C(Anton)} = \frac{1}{1} = 1

P(in | studies) = \frac{C(studies, in)}{C(studies)} = \frac{2}{3}

P(Munich | in) = \frac{C(in, Munich)}{C(in)} = \frac{1}{2}

P(</s> | Munich) = \frac{C(Munich, </s>)}{C(Munich)} = \frac{1}{1} = 1

These bigram probabilities are used to calculate the probability of each of the two sentences:

P(Anne studies in Munich) = \frac{2}{3} \cdot 1 \cdot \frac{2}{3} \cdot \frac{1}{2} \cdot 1 = \frac{2}{9} \approx 0.22

P(Anton studies in Munich) = \frac{1}{3} \cdot 1 \cdot \frac{2}{3} \cdot \frac{1}{2} \cdot 1 = \frac{1}{9} \approx 0.11

Consequently, the speech recognition software assumes that the spoken sentence was "Anne studies in Munich", since the probability P(Anne studies in Munich) is greater than P(Anton studies in Munich).
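
This comparison could be reproduced with the two sketches from above, reusing the hypothetical helpers estimate_bigram_probs and sentence_probability (again only an illustrative sketch):

    # Toy training set of this section, already tokenized
    training_set = [
        ["Anne", "studies", "in", "Munich"],
        ["Anton", "studies", "in", "Nuremberg"],
        ["Anne", "studies", "electrical", "engineering"],
    ]

    bigram_probs = estimate_bigram_probs(training_set)

    p_anne = sentence_probability(["Anne", "studies", "in", "Munich"], bigram_probs)
    p_anton = sentence_probability(["Anton", "studies", "in", "Munich"], bigram_probs)
    print(p_anne, p_anton)  # roughly 0.222 and 0.111, so "Anne studies in Munich" is chosen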

5 Problem of the n-gram Model

 
In this section, the example of the previous section is used to show the main problem of the n-gram model.
For example, it should be no problem to determine the probability P(engineering | studies) using the training set from above. Though it seems reasonable that this probability is unequal to zero, it is always estimated as zero, since C(studies, engineering) = 0 in the training set from above. This problem does not only exist in this small example. Even with very large training sets, it is not possible to determine the probability of every word combination, since many word combinations do not occur due to the sparsity of the training set, especially if trigrams are used instead of bigrams.

There are many approaches which try to compensate for this problem. Examples are the backoff approach, Katz smoothing, Kneser-Ney smoothing, Good-Turing smoothing, and Laplace smoothing.
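
As an illustration of the simplest of these techniques, the following is a minimal sketch of Laplace (add-one) smoothing for the bigram estimate, assuming a fixed vocabulary of size V; the function name and parameters are chosen only for this sketch:

    def laplace_bigram_prob(prev, curr, bigram_counts, unigram_counts, vocab_size):
        """Add-one smoothed estimate: P(curr | prev) = (C(prev, curr) + 1) / (C(prev) + V)."""
        numerator = bigram_counts.get((prev, curr), 0) + 1
        denominator = unigram_counts.get(prev, 0) + vocab_size
        return numerator / denominator

With this estimate, a bigram such as (studies, engineering) that never occurs in the training set receives a small non-zero probability instead of zero.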

 

 


 

 

 

