The Backoff Approach is an extension of the n-gram model. If an n-gram probability is zero because of the sparsity of the training set, the probability of the corresponding (n-1)-gram is used instead to calculate the probability of a sentence.

 

1 Motivation

 
The n-gram model is a very common approach in language modeling for determining the most probable word sequence among several candidate word sequences. The problem with this approach is that no training set can cover all possible n-gram combinations. Consequently, the probability of rarely used n-grams is zero, since they do not appear in the training set. However, their probability should be nonzero, because such n-grams can still appear in other texts. A naive approach to solving this problem is the Backoff Approach, which is the topic of this article.

 

2 Backoff Approach

 

The main idea of the Backoff Approach is the following: first, the probability of the n-gram is estimated from the training set. If this probability is zero, in other words the corresponding n-gram does not occur in the training set, a simpler model replaces the current one: the probability of the corresponding (n-1)-gram is computed instead. If that (n-1)-gram does not occur either, the (n-2)-gram model replaces the (n-1)-gram model, and so on. Using the Backoff Approach and the trigram model, the probability is given by:

\[
P(w_i \mid w_{i-2}, w_{i-1}) =
\begin{cases}
\dfrac{C(w_{i-2}\, w_{i-1}\, w_i)}{C(w_{i-2}\, w_{i-1})} & \text{if } C(w_{i-2}\, w_{i-1}\, w_i) > 0, \\[1ex]
\dfrac{C(w_{i-1}\, w_i)}{C(w_{i-1})} & \text{if } C(w_{i-2}\, w_{i-1}\, w_i) = 0 \text{ and } C(w_{i-1}\, w_i) > 0, \\[1ex]
\dfrac{C(w_i)}{N} & \text{otherwise,}
\end{cases}
\]

where C(·) is the number of occurrences of the corresponding word sequence and N the number of words in the training set.
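To make the three backoff levels concrete, the following Python sketch implements this estimator directly from raw counts; the count_ngrams helper, the whitespace tokenization, and the toy usage at the end are illustrative assumptions, not part of the formula itself.

```python
from collections import Counter

def count_ngrams(words, n):
    """Count all n-grams of order n in a list of words."""
    return Counter(zip(*(words[i:] for i in range(n))))

def backoff_trigram_prob(words, w1, w2, w3):
    """P(w3 | w1, w2) with backoff: the trigram estimate if the trigram was
    seen, otherwise the bigram estimate, otherwise the unigram estimate."""
    unigrams = count_ngrams(words, 1)
    bigrams = count_ngrams(words, 2)
    trigrams = count_ngrams(words, 3)
    N = sum(unigrams.values())  # number of words in the training set

    if trigrams[(w1, w2, w3)] > 0:
        return trigrams[(w1, w2, w3)] / bigrams[(w1, w2)]
    if bigrams[(w2, w3)] > 0:
        return bigrams[(w2, w3)] / unigrams[(w2,)]
    return unigrams[(w3,)] / N

# Toy usage on a whitespace-tokenized training text (sentence boundaries are
# ignored here for brevity):
corpus = ("Anne studies in Munich Anton studies in Nuremberg "
          "Anne studies electrical engineering").split()
print(backoff_trigram_prob(corpus, "Anne", "studies", "in"))            # seen trigram: 1/2
print(backoff_trigram_prob(corpus, "electrical", "engineering", "in"))  # unseen, backs off to 2/12
```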

 

3 A little example

 

Let's have a look at the example from the n-gram model again. Remember, the training set consists of the following three sentences: "Anne studies in Munich. Anton studies in Nuremberg. Anne studies electrical engineering." From this training set, the following bigram probabilities hold:

P(studies | Anne) = 2/2 = 1
P(studies | Anton) = 1/1 = 1
P(in | studies) = 2/3
P(electrical | studies) = 1/3
P(Munich | in) = 1/2
P(Nuremberg | in) = 1/2
P(engineering | electrical) = 1/1 = 1
The goal was to determine the probability of a test sentence containing the word sequence "engineering in". The result using the simple n-gram approach was zero, since the bigram "engineering in" does not appear in the training set and therefore P(in | engineering) = 0. Using the Backoff Approach, we get a nonzero result instead. For this, we have to determine the probability P(in | engineering) again. Since the word sequence "engineering in" does not occur in the training data, we compute P(in | engineering) according to the Backoff Approach by falling back to the unigram probability of "in":

P(in | engineering) = C(in) / N = 2 / 12 = 1/6
Consequently, the probability of the whole test sentence is nonzero as well, since every factor in its bigram decomposition is now greater than zero.
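The following Python sketch reproduces these numbers for the bigram case; the whitespace tokenization and the helper name backoff_bigram_prob are assumptions made for illustration.

```python
from collections import Counter

# Toy training set from the example above (three sentences).
sentences = [
    "Anne studies in Munich",
    "Anton studies in Nuremberg",
    "Anne studies electrical engineering",
]

# Unigram and bigram counts; bigrams do not cross sentence boundaries.
unigrams, bigrams = Counter(), Counter()
for sentence in sentences:
    words = sentence.split()
    unigrams.update(words)
    bigrams.update(zip(words, words[1:]))

N = sum(unigrams.values())  # 12 words in the training set

def backoff_bigram_prob(prev, word):
    """P(word | prev) with backoff: the bigram estimate if the bigram was
    seen, otherwise the unigram estimate C(word) / N."""
    if bigrams[(prev, word)] > 0:
        return bigrams[(prev, word)] / unigrams[prev]
    return unigrams[word] / N

print(backoff_bigram_prob("studies", "in"))      # seen bigram: 2/3
print(backoff_bigram_prob("engineering", "in"))  # unseen bigram, backoff: 2/12 = 1/6
```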

 

4 Problems

 

The main problem of the Backoff Approach lies in the fact that once probabilities of n-grams with different orders are mixed, as in the small example above, the model as a whole no longer describes a probability distribution: for a given history, the conditional probabilities of all possible next words can sum to more than one. The Katz Smoothing Approach, which is an extension of the Backoff Approach, solves this problem by discounting the higher-order probabilities and weighting the lower-order n-gram model such that the entire model is a probability distribution again.
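A quick numerical check on the toy corpus from the example above illustrates the problem; as before, the tokenization and the helper below are illustrative assumptions.

```python
from collections import Counter

sentences = [
    "Anne studies in Munich",
    "Anton studies in Nuremberg",
    "Anne studies electrical engineering",
]

unigrams, bigrams = Counter(), Counter()
for sentence in sentences:
    words = sentence.split()
    unigrams.update(words)
    bigrams.update(zip(words, words[1:]))
N = sum(unigrams.values())

def backoff_bigram_prob(prev, word):
    """Pure backoff without discounting, as described above."""
    if bigrams[(prev, word)] > 0:
        return bigrams[(prev, word)] / unigrams[prev]
    return unigrams[word] / N

# For a proper probability distribution, the conditional probabilities of all
# vocabulary words after the history "studies" would sum to 1; with pure
# backoff they sum to roughly 1.75.
print(sum(backoff_bigram_prob("studies", w) for w in unigrams))
```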

 


 

 

 

 

 

