Interpolated Kneser-Ney smoothing is one of the most widely used modern N-gram smoothing methods. Before introducing Kneser-Ney smoothing, it is helpful to look at a discounting method called absolute discounting, since the Kneser-Ney algorithm is largely inspired by and built on it.

1. Motivation

Good-Turing smoothing is often too simple for complicated, real-world data, which is why in practice a more advanced method is preferred: Kneser-Ney smoothing. Empirical studies show that Kneser-Ney smoothing is one of the best-performing smoothing methods in many applications. It calculates the backoff probability of a word based on how many different contexts the word has appeared in, rather than on its raw number of occurrences.

2. Introduction

2.1 Absolute Discounting

By analyzing an example of the estimates produced by Good-Turing smoothing (Table 1), it is interesting to see that the difference between the original count c and the re-estimated count c* is approximately 0.75 in each case, except for the lowest counts. It therefore makes sense to define a fixed discount d and carry out the smoothing by simply subtracting this discount from the original count c. This also saves the time needed to compute the full Good-Turing estimates. That is exactly the principle of absolute discounting.

Table 1: Good-Turing counts c and re-estimated counts c*.

Just like interpolation smoothing (Jelinek-Mercer), absolute discounting interpolates a higher-order model with a lower-order model. But instead of multiplying the higher-order probability by a weight $\lambda$, we subtract a fixed discount $d$ from each nonzero count:

$$P_{\text{AbsoluteDiscounting}}(w_i \mid w_{i-1}) = \frac{\max\bigl(C(w_{i-1}w_i) - d,\, 0\bigr)}{C(w_{i-1})} + \lambda(w_{i-1})\, P(w_i)$$
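To make the formula concrete, here is a minimal Python sketch of interpolated absolute discounting for bigrams; the toy corpus, the discount d = 0.75, and the function names are illustrative assumptions, not part of the cited formulation:

from collections import Counter

# Toy corpus; in practice these counts come from a large training corpus.
tokens = "the cat sat on the mat the cat ate the fish".split()

unigram_counts = Counter(tokens)
bigram_counts = Counter(zip(tokens, tokens[1:]))
total_tokens = len(tokens)

d = 0.75  # fixed absolute discount subtracted from every nonzero bigram count

def p_unigram(w):
    """Regular (maximum-likelihood) unigram probability P(w)."""
    return unigram_counts[w] / total_tokens

def p_absolute_discounting(w, prev):
    """Interpolated absolute discounting: discounted bigram term + lambda * unigram."""
    context_count = sum(c for (p, _), c in bigram_counts.items() if p == prev)
    if context_count == 0:
        return p_unigram(w)  # context never seen: fall back to the unigram entirely
    # Higher-order term: subtract the fixed discount d from the nonzero count.
    higher = max(bigram_counts[(prev, w)] - d, 0) / context_count
    # lambda(prev): the probability mass removed by discounting, given back to the unigram.
    seen_types = sum(1 for (p, _) in bigram_counts if p == prev)
    lam = (d / context_count) * seen_types
    return higher + lam * p_unigram(w)

print(p_absolute_discounting("cat", "the"))  # combines bigram and unigram evidence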

Here $P(w)$ is the regular unigram probability, but is the unigram alone really a good lower-order model? Consider an example: suppose "San Francisco" is common in the training data, but "Francisco" occurs only after "San". "Francisco" will then get a high unigram probability [3], so under absolute discounting it also receives a high backoff probability, even though it only ever follows "San". That is why it is better to take a word's bigram contexts into account as well.

2.2 Kneser-Ney Smoothing

We want a heuristic that more accurately estimates the number of times we might expect to see word w in a new, unseen context. The Kneser-Ney intuition is to base our estimate on the number of different contexts word w has appeared in. [2]

Let's define how likely w is to appear as a novel continuation, with $P_{\text{CONTINUATION}}(w)$:

Normalized by the total number of word bigram types (Jurafsky):

$$P_{\text{CONTINUATION}}(w) = \frac{\bigl|\{w_{i-1} : C(w_{i-1}w) > 0\}\bigr|}{\bigl|\{(w_{j-1}, w_j) : C(w_{j-1}w_j) > 0\}\bigr|}$$

Alternative metaphor: the number of word types seen to precede w, normalized by the number of word types preceding all words [1]:

$$P_{\text{CONTINUATION}}(w) = \frac{\bigl|\{w_{i-1} : C(w_{i-1}w) > 0\}\bigr|}{\sum_{w'} \bigl|\{w'_{i-1} : C(w'_{i-1}w') > 0\}\bigr|}$$
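Using the "San Francisco" example from above, a short Python sketch makes the continuation counts concrete (the toy sentence and the variable names are assumptions made for illustration; both formulations above give the same value, since each denominator equals the total number of bigram types):

from collections import Counter

tokens = "san francisco is far but san jose is near".split()
bigram_counts = Counter(zip(tokens, tokens[1:]))

# For every word w, collect the distinct words observed immediately before it.
preceding_types = {}
for (prev, w) in bigram_counts:
    preceding_types.setdefault(w, set()).add(prev)

# Total number of distinct word bigram types in the corpus.
total_bigram_types = len(bigram_counts)

def p_continuation(w):
    """P_CONTINUATION(w): fraction of bigram types that end in w."""
    return len(preceding_types.get(w, set())) / total_bigram_types

# "francisco" is only ever preceded by "san", so its continuation probability
# stays low even if its raw frequency were high in a larger corpus.
print(p_continuation("francisco"), p_continuation("is"))  # 0.125 vs 0.25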

$\lambda(w_{i-1})$ is a normalizing constant: the probability mass we've discounted [1]:

$$\lambda(w_{i-1}) = \frac{d}{C(w_{i-1})}\, \bigl|\{w : C(w_{i-1}w) > 0\}\bigr|$$
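Putting the pieces together, the discounted bigram estimate is interpolated with the continuation probability; this gives the interpolated Kneser-Ney bigram model described in [1]:

$$P_{\mathrm{KN}}(w_i \mid w_{i-1}) = \frac{\max\bigl(C(w_{i-1}w_i) - d,\, 0\bigr)}{C(w_{i-1})} + \lambda(w_{i-1})\, P_{\text{CONTINUATION}}(w_i)$$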

The same idea extends recursively to higher-order N-grams, where $c_{\mathrm{KN}}(\bullet)$ is the ordinary count for the highest-order N-gram and the continuation count for the lower orders; the continuation count is the number of unique single-word contexts for $\bullet$:

$$P_{\mathrm{KN}}(w_i \mid w_{i-n+1}^{i-1}) = \frac{\max\bigl(c_{\mathrm{KN}}(w_{i-n+1}^{i}) - d,\, 0\bigr)}{c_{\mathrm{KN}}(w_{i-n+1}^{i-1})} + \lambda(w_{i-n+1}^{i-1})\, P_{\mathrm{KN}}(w_i \mid w_{i-n+2}^{i-1})$$
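Finally, here is a minimal Python sketch that combines the discounted bigram term, the normalizing constant, and the continuation probability into the interpolated bigram version of Kneser-Ney; the toy corpus, the discount d = 0.75, and the function names are assumptions for the example, not a full recursive implementation:

from collections import Counter, defaultdict

tokens = "san francisco is sunny but san jose is sunnier than san francisco".split()

bigram_counts = Counter(zip(tokens, tokens[1:]))
total_bigram_types = len(bigram_counts)
d = 0.75  # fixed absolute discount

# Words seen after each context, and distinct contexts seen before each word.
followers = defaultdict(set)
preceders = defaultdict(set)
for (prev, w) in bigram_counts:
    followers[prev].add(w)
    preceders[w].add(prev)

def p_continuation(w):
    """Fraction of bigram types that end in w."""
    return len(preceders[w]) / total_bigram_types

def p_kneser_ney(w, prev):
    """Interpolated Kneser-Ney bigram probability P_KN(w | prev)."""
    context_count = sum(c for (p, _), c in bigram_counts.items() if p == prev)
    if context_count == 0:
        return p_continuation(w)  # unseen context: use the continuation model alone
    higher = max(bigram_counts[(prev, w)] - d, 0) / context_count
    lam = (d / context_count) * len(followers[prev])  # discounted probability mass
    return higher + lam * p_continuation(w)

# The unseen bigram (san, is) still receives probability mass via P_CONTINUATION,
# while (san, francisco) keeps most of its discounted bigram count.
print(p_kneser_ney("francisco", "san"), p_kneser_ney("is", "san"))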

References

[1] Jurafsky, D., & Martin, J. H. (2000). Speech and Language Processing. Pearson Education India.

[2] MacCartney, B. (2005). NLP Lunch Tutorial: Smoothing.

[3] Luo, W., Liu, Q., & Bai, S. (2009). A Review of the State-of-the-Art of Research on Large-Scale Corpora Oriented Language Modeling. Journal of Computer Research and Development, pp. 1704-1712.

