Description
This operation produces a smoothed, normalized language model from an input n-gram count FST. It smooths the model in one of six ways:
- witten_bell: smooths using Witten-Bell (Witten and Bell, 1991), with a hyperparameter k, as presented in Carpenter (2005).
- absolute: smooths based on Absolute Discounting (Ney, Essen and Kneser, 1994), using the bins and discount parameters.
- katz: smooths based on Katz Backoff (Katz, 1987), using the bins parameter.
- kneser_ney: smooths based on Kneser-Ney (Kneser and Ney, 1995), a variant of Absolute Discounting.
- presmoothed: normalizes at each state based on the n-gram count of the history.
- unsmoothed: normalizes the model but provides no smoothing.
See Chen and Goodman (1998) for a discussion of these smoothing methods.
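For intuition, the Witten-Bell method as presented in Carpenter (2005) can be sketched in its mixture form roughly as follows (a sketch only, not the exact expression used by the library):

  P(w | h) = \lambda(h) P_ML(w | h) + (1 - \lambda(h)) P(w | h')
  \lambda(h) = c(h) / (c(h) + k d(h))

Here c(h) is the count of the history h, d(h) is the number of distinct words observed following h, h' is the backoff history with the earliest word dropped, and k is the hyperparameter controlled by --witten_bell_k.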
All of the smoothing methods can be used to build either a mixture model (in which higher order n-gram distributions are interpolated with lower order n-gram distributions) or a backoff model (using the --backoff option, in which lower order n-gram distributions are used only if the higher order n-gram was unobserved in the corpus). Even though some of the methods are typically associated with one style of smoothing (e.g., Katz with backoff), in this library any of them can be used with either. Note that mixture models are converted to a backoff topology by pre-summing the mixtures and placing the mixed probability on the highest order transition.
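Schematically, using the generic textbook formulation rather than anything library-specific, the two forms differ as follows, where \hat{P} is the smoothed higher-order estimate, h' the backoff history, and \alpha(h) a normalizing backoff weight:

  mixture (interpolated):  P(w | h) = \lambda(h) \hat{P}(w | h) + (1 - \lambda(h)) P(w | h')   for all w
  backoff:                 P(w | h) = \hat{P}(w | h) if c(hw) > 0, otherwise \alpha(h) P(w | h')

Pre-summing the mixture evaluates the first expression once for each observed n-gram and stores the result on the highest order transition, which is what yields a backoff-shaped automaton in both cases.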
If the --bins option is left at its default (-1), then the number of bins for the discounting methods (katz, absolute, kneser_ney) is set to a method-appropriate default (5 for katz, 1 for absolute).
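For example, to override this default and build a Katz backoff model with three bins (the output filename here is purely illustrative):

$ ngrammake --method=katz --bins=3 --backoff earnest.cnts >earnest.katz3.mod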
The C++ classes are all derived from the base class NGramMake.
Usage
ngrammake [--options] [in.fst [out.fst]]
--method: type = string, one of: witten_bell (default) | absolute | katz | kneser_ney | presmoothed | unsmoothed
--backoff: type = bool, default = false
--bins: type = int64, default = -1
--witten_bell_k: type = double, default = 1.0
--discount_D: type = double, default = 1.0
NGramAbsolute ngram(StdMutableFst *countfst);
NGramKatz ngram(StdMutableFst *countfst);
NGramKneserNey ngram(StdMutableFst *countfst);
NGramUnsmoothed ngram(StdMutableFst *countfst);
NGramWittenBell ngram(StdMutableFst *countfst);
In addition to the simple C++ usage shown above, optional constructor arguments permit passing non-default values for the various parameters, analogous to the command-line flags.
Examples
To make a Kneser-Ney smoothed model from given counts:
$ ngrammake --method=kneser_ney earnest.cnts >earnest.kn.mod
StdMutableFst *counts = StdMutableFst::Read("in.fst", true);  // read the n-gram count FST
NGramKneserNey ngram(counts);                                  // Kneser-Ney smoothing over the counts
ngram.MakeNGramModel();                                        // smooth and normalize the model
ngram.GetFst().Write("out.mod");                               // write the resulting model FST
Caveats
The presmoothed method normalizes at each state based on the n-gram count of the history, which is only appropriate under specialized circumstances, such as when the counts have been derived from strings with backoff transitions indicated.
References
Carpenter, B., 2005. Scaling high-order character language models to gigabytes. In Proceedings of the ACL Workshop on Software, pages 86–99.
Chen, S., Goodman, J., 1998. An empirical study of smoothing techniques for language modeling. Technical report, TR-10-98, Harvard University.
Katz, S. M., 1987. Estimation of probabilities from sparse data for the language model component of a speech recognizer. IEEE Transactions on Acoustics, Speech, and Signal Processing 35 (3), 400–401.
Kneser, R., Ney, H., 1995. Improved backing-off for m-gram language modeling. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). pp. 181–184.
Ney, H., Essen, U., Kneser, R., 1994. On structuring probabilistic dependences in stochastic language modeling. Computer Speech and Language 8, 1–38.
Witten, I. H., Bell, T. C., 1991. The zero-frequency problem: Estimating the probabilities of novel events in adaptive text compression. IEEE Transactions on Information Theory 37 (4), 1085–1094.