In computational statistics, the preconditioned Crank–Nicolson algorithm (pCN) is a Markov chain Monte Carlo (MCMC) method for obtaining random samples – sequences of random observations – from a target probability distribution for which direct sampling is difficult. The algorithm as named was highlighted in 2013 by Cotter, Roberts, Stuart and White, and its ergodicity properties were proved a year later by Hairer, Stuart and Vollmer. In the specific context of sampling diffusion bridges, the method was introduced in 2008.
The most significant feature of the pCN algorithm is its dimension robustness, which makes it well-suited for high-dimensional sampling problems. The pCN algorithm is well-defined, with non-degenerate acceptance probability, even for target distributions on infinite-dimensional Hilbert spaces. As a consequence, when pCN is implemented on a real-world computer in large but finite dimension N, i.e. on an N-dimensional subspace of the original Hilbert space, the convergence properties (such as ergodicity) of the algorithm are independent of N. This is in strong contrast to schemes such as Gaussian random walk Metropolis–Hastings and the Metropolis-adjusted Langevin algorithm, whose acceptance probability degenerates to zero as N tends to infinity.
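The pCN mechanism can be sketched in a few lines. Assuming a finite-dimensional setting with target density proportional to exp(−Φ(u)) times a Gaussian prior N(0, C), the proposal is v = √(1 − β²)·u + β·ξ with ξ ~ N(0, C), and the acceptance ratio involves only the potential Φ, not the prior. The function names, parameters, and test potential below are illustrative assumptions, not part of the source.

```python
import numpy as np

def pcn_sample(phi, C_sqrt, u0, beta=0.2, n_steps=5000, rng=None):
    """Sketch of a preconditioned Crank-Nicolson sampler.

    Target density is proportional to exp(-phi(u)) under a Gaussian
    prior N(0, C); C_sqrt is any square root of the covariance C.
    """
    rng = np.random.default_rng(rng)
    d = u0.shape[0]
    u = u0.copy()
    samples = np.empty((n_steps, d))
    n_accept = 0
    for k in range(n_steps):
        xi = C_sqrt @ rng.standard_normal(d)          # draw from the prior N(0, C)
        v = np.sqrt(1.0 - beta**2) * u + beta * xi    # pCN proposal
        # Accept with probability min(1, exp(phi(u) - phi(v))).
        # The ratio depends only on the potential phi, not on the
        # Gaussian prior -- the source of pCN's dimension robustness.
        if np.log(rng.random()) < phi(u) - phi(v):
            u = v
            n_accept += 1
        samples[k] = u
    return samples, n_accept / n_steps
```

As a sanity check, setting Φ ≡ 0 makes the target equal to the prior, in which case every proposal is accepted; with a non-trivial Φ, β trades off step size against acceptance rate.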