NGramMake
Description

This operation produces a smoothed, normalized language model from an input n-gram count FST. It smooths the model in one of six ways, selected with the --method flag: Witten-Bell, absolute discounting, Katz, Kneser-Ney, presmoothed, or unsmoothed.

See Chen and Goodman (1998) for a discussion of these smoothing methods. All of the smoothing methods can be used to build either a mixture model (in which higher-order n-gram distributions are interpolated with lower-order n-gram distributions) or a backoff model (using the --backoff option, in which lower-order n-gram distributions are used only if the higher-order n-gram was unobserved in the corpus). Even though some of the methods are typically used primarily with either mixture or backoff smoothing (e.g., Katz with backoff), in this library they can be used with either. Note that mixture models are converted to a backoff topology by pre-summing the mixtures and placing the mixed probability on the highest-order transition.

If the --bins option is left as the default (-1), the number of bins for the discounting methods that use them is set to a method-specific default.

The C++ classes are all derived from the base class NGramMake.
Usage

NGramWittenBell ngram(StdMutableFst *countfst);

In addition to the simple C++ usage above, optional arguments permit passing non-default values for various parameters, similar to the command-line version.
Examples
To make a Kneser-Ney smoothed model from given counts:

$ ngrammake --method=kneser_ney earnest.cnts >earnest.kn.mod

The same, using the C++ library:

StdMutableFst *counts = StdMutableFst::Read("in.fst", true);
NGramKneserNey ngram(counts);
ngram.MakeNGramModel();
ngram.GetFst().Write("out.mod");
Caveats

The presmoothed method normalizes at each state based on the n-gram count of the history, which is only appropriate under specialized circumstances, such as when the counts have been derived from strings with backoff transitions indicated.
References

Carpenter, B., 2005. Scaling high-order character language models to gigabytes. In Proceedings of the ACL Workshop on Software, pp. 86–99.

Chen, S., Goodman, J., 1998. An empirical study of smoothing techniques for language modeling. Technical Report TR-10-98, Harvard University.

Katz, S. M., 1987. Estimation of probabilities from sparse data for the language model component of a speech recogniser. IEEE Transactions on Acoustics, Speech, and Signal Processing 35 (3), 400–401.

Kneser, R., Ney, H., 1995. Improved backing-off for m-gram language modeling. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 181–184.

Ney, H., Essen, U., Kneser, R., 1994. On structuring probabilistic dependences in stochastic language modeling. Computer Speech and Language 8, 1–38.

Witten, I. H., Bell, T. C., 1991. The zero-frequency problem: Estimating the probabilities of novel events in adaptive text compression. IEEE Transactions on Information Theory 37 (4), 1085–1094.
-- MichaelRiley - 09 Dec 2011