In statistics, Wilks' theorem offers an asymptotic distribution of the log-likelihood ratio statistic, which can be used to produce confidence intervals for maximum-likelihood estimates or as a test statistic for performing the likelihood-ratio test. Statistical tests (such as hypothesis tests) generally require knowledge of the probability distribution of the test statistic. This is often a problem for likelihood ratios, where the exact distribution can be very difficult to determine.
A convenient result by Samuel S. Wilks says that as the sample size n approaches ∞, the distribution of the test statistic −2 log(Λ) asymptotically approaches the chi-squared (χ²) distribution under the null hypothesis H₀. Here, Λ denotes the likelihood ratio, and the χ² distribution has degrees of freedom equal to the difference in dimensionality of Θ and Θ₀, where Θ is the full parameter space and Θ₀ is the subset of the parameter space associated with H₀. This result means that for large samples and a great variety of hypotheses, a practitioner can compute the likelihood ratio Λ for the data and compare −2 log(Λ) to the χ² value corresponding to a desired statistical significance as an approximate statistical test.

The theorem no longer applies when the true value of the parameter is on the boundary of the parameter space: Wilks' theorem assumes that the 'true' but unknown values of the estimated parameters lie within the interior of the supported parameter space. In practice, one will notice the problem if the estimate lies on that boundary. In that event, the likelihood ratio is still a sensible test statistic and even possesses some asymptotic optimality properties, but the significance (the p-value) cannot be reliably estimated using the chi-squared distribution with the number of degrees of freedom prescribed by Wilks. In some cases, the asymptotic null-hypothesis distribution of the statistic is a mixture of chi-squared distributions with different numbers of degrees of freedom.
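As a concrete illustration of the recipe above, the following minimal sketch applies Wilks' theorem to made-up Poisson count data; the two-group setup, sample sizes, and rates are assumptions for illustration, not from the text. The null hypothesis says both groups share one rate (dim Θ₀ = 1), the full model allows separate rates (dim Θ = 2), so −2 log(Λ) is compared against χ² with 1 degree of freedom.

```python
# A minimal sketch of a likelihood-ratio test justified by Wilks' theorem.
# Hypothetical setup: two groups of Poisson counts; H0 says both groups
# share a common rate (dim Theta_0 = 1), the alternative allows separate
# rates (dim Theta = 2), so df = 2 - 1 = 1.
import numpy as np
from scipy.stats import chi2, poisson

rng = np.random.default_rng(0)
x = rng.poisson(3.0, size=200)   # group 1 counts (made-up data)
y = rng.poisson(3.5, size=200)   # group 2 counts (made-up data)

def poisson_loglik(data, lam):
    """Log-likelihood of i.i.d. Poisson(lam) data."""
    return poisson.logpmf(data, lam).sum()

# For the Poisson model the sample means are the exact MLEs.
lam_pooled = np.concatenate([x, y]).mean()   # MLE under H0
lam_x, lam_y = x.mean(), y.mean()            # MLEs under the full model

ll_null = poisson_loglik(x, lam_pooled) + poisson_loglik(y, lam_pooled)
ll_full = poisson_loglik(x, lam_x) + poisson_loglik(y, lam_y)

# Wilks: -2 log(Lambda) is asymptotically chi-squared with df = 1 under H0.
stat = -2.0 * (ll_null - ll_full)
p_value = chi2.sf(stat, df=1)
print(f"-2 log(Lambda) = {stat:.3f}, p = {p_value:.4f}")
```

Here both maximizations have closed forms, so the statistic is computed directly; in general the null and full log-likelihoods are each maximized numerically before taking the difference.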
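The boundary failure can be seen in a small simulation. The example below is an assumption chosen for illustration, not from the text: we test H₀: μ = 0 for N(μ, 1) data with the parameter space constrained to μ ≥ 0, so the true value sits on the boundary. The constrained MLE is max(x̄, 0), the LR statistic is n·max(x̄, 0)², and its null distribution is the mixture ½χ²₀ + ½χ²₁ rather than χ²₁.

```python
# Simulation (illustrative assumption, not from the source text) of the
# boundary case: testing mu = 0 against mu >= 0 for N(mu, 1) data. The LR
# statistic n * max(xbar, 0)**2 follows the mixture 0.5*chi2_0 + 0.5*chi2_1
# under H0, so the plain chi2_1 reference distribution miscalibrates the test.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
n, reps = 100, 50_000
xbar = rng.normal(0.0, 1.0, size=(reps, n)).mean(axis=1)  # data under H0
stat = n * np.maximum(xbar, 0.0) ** 2                      # LR statistic

# Nominal 5% test using the chi2_1 cutoff rejects far too rarely ...
print("chi2_1 cutoff rejects:", (stat > chi2.ppf(0.95, df=1)).mean())
# ... while the mixture's 5% cutoff (the 0.90 quantile of chi2_1,
# since the mixture puts mass 1/2 at zero) is correctly calibrated.
print("mixture cutoff rejects:", (stat > chi2.ppf(0.90, df=1)).mean())
```

In this simulation the χ²₁ cutoff rejects at roughly 2.5% instead of the nominal 5%, while the mixture-based cutoff attains the nominal level, which is exactly the miscalibration the paragraph above describes.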