In mathematics and information theory, Sanov's theorem gives a bound on the probability of observing an atypical sequence of samples from a given probability distribution. In the language of large deviations theory, Sanov's theorem identifies the rate function for large deviations of the empirical measure of a sequence of i.i.d. random variables. Let A be a set of probability distributions over an alphabet X, let q be an arbitrary distribution over X (where q may or may not be in A), and suppose we draw n i.i.d. samples from q. Then the probability that the empirical measure of the samples falls within A satisfies

q^n(A) ≤ (n + 1)^|X| · 2^(−n·D(p*‖q)), where

* q^n is the joint probability distribution on X^n, and
* p* = arg min_{p ∈ A} D(p‖q) is the information projection of q onto A.

Furthermore, if A is the closure of its interior,

lim_{n→∞} (1/n) log q^n(A) = −D(p*‖q).
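The bound can be checked numerically for a binary alphabet. The sketch below (Python; the Bernoulli parameter q = 0.3, the threshold t = 0.6, and the sample size n = 100 are illustrative choices, not from the source) takes the one-sided event A = {p : p(1) ≥ t}, computes the information projection p* of q onto A, the exact probability that the empirical measure lands in A, and compares the observed exponent with D(p*‖q):

```python
import math

def kl_bits(p, q):
    """KL divergence D(p||q) in bits between Bernoulli(p) and Bernoulli(q)."""
    d = 0.0
    for a, b in ((p, q), (1 - p, 1 - q)):
        if a > 0:
            d += a * math.log2(a / b)
    return d

q = 0.3  # true Bernoulli parameter (illustrative choice)
t = 0.6  # event A: empirical frequency of ones is >= t
n = 100  # sample size

# Information projection of q onto A = {p : p(1) >= t}: since t > q, the
# minimizer of D(p||q) over A sits on the boundary, so p* = Bernoulli(t).
D = kl_bits(t, q)

# Exact probability that the empirical measure lands in A
# (a binomial tail sum; t*n = 60 exactly here, hence round()).
P = sum(math.comb(n, k) * q**k * (1 - q)**(n - k)
        for k in range(round(t * n), n + 1))

rate = -math.log2(P) / n  # observed exponent, in bits per sample

# Sanov's upper bound, with |X| = 2 for the binary alphabet:
assert P <= (n + 1)**2 * 2**(-n * D)
print(f"D(p*||q) = {D:.4f} bits, observed rate = {rate:.4f} bits")
```

The observed exponent exceeds D(p*‖q) only by a polynomial correction of order log(n)/n, matching the (n + 1)^|X| prefactor in the bound.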
Attributes | Values
---|---
rdfs:label
| - Satz von Sanov (de)
- Théorème de Sanov (fr)
- Sanov's theorem (en)
- Teorema de Sanov (pt)
|
rdfs:comment
| - Sanov's theorem is a result from the mathematical field of probability theory. It is a central statement of large deviations theory and reveals a close connection to information theory. The theorem formalizes the intuition that the total probability of a rare event is dominated by the probability of its most plausible sub-event. It is named after the Russian mathematician Ivan Nikolaevich Sanov (1919–1968). (de)
- Sanov's theorem is a fundamental result in probability and statistics, proved in 1957. It establishes a large deviation principle for the empirical measure of a sequence of i.i.d. random variables, whose rate function is the Kullback–Leibler divergence. (fr)
- In information theory, Sanov's theorem gives a bound on the probability of observing an atypical sequence of samples from a given probability distribution. (pt)
- In mathematics and information theory, Sanov's theorem gives a bound on the probability of observing an atypical sequence of samples from a given probability distribution. In the language of large deviations theory, Sanov's theorem identifies the rate function for large deviations of the empirical measure of a sequence of i.i.d. random variables: the empirical measure of n i.i.d. samples from q falls within a set A of distributions with probability q^n(A) ≤ (n + 1)^|X| · 2^(−n·D(p*‖q)), where q^n is the joint probability distribution on X^n, and p* = arg min_{p ∈ A} D(p‖q) is the information projection of q onto A. Furthermore, if A is the closure of its interior, lim_{n→∞} (1/n) log q^n(A) = −D(p*‖q). (en)
|
has abstract
| - Sanov's theorem is a result from the mathematical field of probability theory. It is a central statement of large deviations theory and reveals a close connection to information theory. The theorem formalizes the intuition that the total probability of a rare event is dominated by the probability of its most plausible sub-event. It is named after the Russian mathematician Ivan Nikolaevich Sanov (1919–1968). (de)
- In mathematics and information theory, Sanov's theorem gives a bound on the probability of observing an atypical sequence of samples from a given probability distribution. In the language of large deviations theory, Sanov's theorem identifies the rate function for large deviations of the empirical measure of a sequence of i.i.d. random variables. Let A be a set of probability distributions over an alphabet X, and let q be an arbitrary distribution over X (where q may or may not be in A). Suppose we draw n i.i.d. samples from q, represented by the vector x^n = (x_1, x_2, …, x_n). Then, we have the following bound on the probability that the empirical measure of the samples falls within the set A: q^n(A) ≤ (n + 1)^|X| · 2^(−n·D(p*‖q)), where
* q^n is the joint probability distribution on X^n, and
* p* = arg min_{p ∈ A} D(p‖q) is the information projection of q onto A. In words, the probability of drawing an atypical distribution is bounded by a function of the KL divergence from the true distribution to the atypical one; in the case that we consider a set of possible atypical distributions, there is a dominant atypical distribution, given by the information projection. Furthermore, if A is the closure of its interior, lim_{n→∞} (1/n) log q^n(A) = −D(p*‖q). (en)
- Sanov's theorem is a fundamental result in probability and statistics, proved in 1957. It establishes a large deviation principle for the empirical measure of a sequence of i.i.d. random variables, whose rate function is the Kullback–Leibler divergence. (fr)
- In information theory, Sanov's theorem gives a bound on the probability of observing an atypical sequence of samples from a given probability distribution. (pt)
|