About: Factored language model

An entity of type yago:WikicatProbabilisticModels, within data space dbpedia.org, associated with source document(s).

The factored language model (FLM) is an extension of a conventional language model, introduced by Jeff Bilmes and Katrin Kirchhoff in 2003. In an FLM, each word is viewed as a vector of k factors: w_i = {f_i^1, f_i^2, …, f_i^k}. An FLM provides a probabilistic model P(f | f_1, …, f_N), where the prediction of a factor f is based on N parents {f_1, …, f_N}. For example, if w represents a word token and t represents a part-of-speech tag for English, the expression P(w_i | w_{i-2}, w_{i-1}, t_{i-1}) gives a model for predicting the current word token based on a traditional n-gram model as well as the part-of-speech tag of the previous word.
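A conditional model of this form can be estimated from factored data by counting. The sketch below is a minimal, hypothetical illustration (not the authors' implementation): it uses a toy corpus of (word, POS-tag) pairs and a maximum-likelihood estimate of P(w_i | w_{i-1}, t_{i-1}), i.e. the previous word and previous tag as the parents.

```python
from collections import Counter

# Toy corpus of (word, POS-tag) factor pairs; tags are illustrative only.
corpus = [
    ("the", "DT"), ("dog", "NN"), ("barks", "VB"),
    ("the", "DT"), ("cat", "NN"), ("sleeps", "VB"),
    ("the", "DT"), ("dog", "NN"), ("sleeps", "VB"),
]

# Count (parent context, word) triples and the parent contexts alone.
joint = Counter()    # (prev_word, prev_tag, word) -> count
context = Counter()  # (prev_word, prev_tag) -> count
for (pw, pt), (w, _) in zip(corpus, corpus[1:]):
    joint[(pw, pt, w)] += 1
    context[(pw, pt)] += 1

def p_word(w, prev_word, prev_tag):
    """MLE estimate of P(w_i | w_{i-1}, t_{i-1})."""
    c = context[(prev_word, prev_tag)]
    return joint[(prev_word, prev_tag, w)] / c if c else 0.0

print(p_word("sleeps", "dog", "NN"))  # "dog/NN" is followed by "sleeps" in 1 of 2 cases -> 0.5
```

In a full FLM, any subset of factors (word, tag, stem, …) at any position could serve as the parents; this sketch fixes one such choice for clarity.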

A major advantage of factored language models is that they allow users to specify linguistic knowledge, such as the relationship between word tokens and part-of-speech tags in English, or morphological information (stems, roots, etc.) in Arabic. As with n-gram models, smoothing techniques are necessary in parameter estimation; in particular, generalized back-off is used in training an FLM.
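The back-off idea can be sketched as follows: when the full parent context is too rare to estimate reliably, drop one parent and fall back to a coarser distribution. This is a simplified, hypothetical stub of generalized back-off; all counts and the fixed back-off weight `alpha` are illustrative assumptions (a real FLM redistributes discounted probability mass rather than using a constant weight, and may back off along several parent-dropping paths).

```python
from collections import Counter

# Hypothetical counts over factored contexts (assumptions, not real data).
trigram = Counter({("the", "DT", "dog"): 3})   # (prev_word, prev_tag, word)
full_context = Counter({("the", "DT"): 3})     # (prev_word, prev_tag)
bigram = Counter({("DT", "dog"): 5, ("DT", "cat"): 5})  # (prev_tag, word)
tag_context = Counter({("DT",): 10})           # (prev_tag,)

def p_backoff(w, prev_word, prev_tag, threshold=1, alpha=0.4):
    """Estimate P(w | prev_word, prev_tag); if the full context is too
    rare or the event is unseen, drop the word parent and back off to
    P(w | prev_tag), scaled by a fixed illustrative weight alpha."""
    c = full_context[(prev_word, prev_tag)]
    if c > threshold and trigram[(prev_word, prev_tag, w)] > 0:
        return trigram[(prev_word, prev_tag, w)] / c
    ct = tag_context[(prev_tag,)]
    return alpha * (bigram[(prev_tag, w)] / ct if ct else 0.0)

print(p_backoff("dog", "the", "DT"))  # full context observed: 3/3 = 1.0
print(p_backoff("cat", "the", "DT"))  # unseen triple, backs off: 0.4 * 5/10 = 0.2
```

The design point generalized back-off addresses is that, with multiple heterogeneous parents, there is no single natural order in which to drop them (unlike an n-gram model, which always drops the oldest word), so the back-off path itself becomes a modeling choice.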