This HTML5 document contains 51 embedded RDF statements represented using HTML+Microdata notation.

The embedded RDF content will be recognized by any processor of HTML5 Microdata.

Namespace Prefixes

Prefix        IRI
dcterms       http://purl.org/dc/terms/
dbo           http://dbpedia.org/ontology/
foaf          http://xmlns.com/foaf/0.1/
n13           https://global.dbpedia.org/id/
dbt           http://dbpedia.org/resource/Template:
rdfs          http://www.w3.org/2000/01/rdf-schema#
freebase      http://rdf.freebase.com/ns/
rdf           http://www.w3.org/1999/02/22-rdf-syntax-ns#
owl           http://www.w3.org/2002/07/owl#
wikipedia-en  http://en.wikipedia.org/wiki/
dbc           http://dbpedia.org/resource/Category:
dbp           http://dbpedia.org/property/
prov          http://www.w3.org/ns/prov#
xsdh          http://www.w3.org/2001/XMLSchema#
wikidata      http://www.wikidata.org/entity/
dbr           http://dbpedia.org/resource/

Statements

Subject Item
dbr:Bayesian_linear_regression
dbo:wikiPageWikiLink
dbr:Bayesian_interpretation_of_kernel_regularization
Subject Item
dbr:Bayesian_interpretation_of_kernel_regularization
rdfs:label
Bayesian interpretation of kernel regularization
rdfs:comment
Within Bayesian statistics for machine learning, kernel methods arise from the assumption of an inner product space or similarity structure on inputs. For some such methods, such as support vector machines (SVMs), the original formulation and its regularization were not Bayesian in nature. It is helpful to understand them from a Bayesian perspective. Because the kernels are not necessarily positive semidefinite, the underlying structure may not be inner product spaces, but instead more general reproducing kernel Hilbert spaces. In Bayesian probability, kernel methods are a key component of Gaussian processes, where the kernel function is known as the covariance function. Kernel methods have traditionally been used in supervised learning problems where the input space is usually a space of vectors while the output space is a space of scalars.
dcterms:subject
dbc:Bayesian_statistics dbc:Machine_learning
dbo:wikiPageID
35867897
dbo:wikiPageRevisionID
1109518986
dbo:wikiPageWikiLink
dbr:Gaussian_process dbr:Kernel_methods dbc:Bayesian_statistics dbr:Gramian_matrix dbr:Regularization_(mathematics) dbr:Posterior_probability dbr:Multivariate_normal_distribution dbr:Symmetry_in_mathematics dbr:Likelihood_function dbr:Bayesian_statistics dbr:Supervised_learning dbr:Bayesian_probability dbr:Reproducing_kernel_Hilbert_space dbr:Support_vector_machine dbr:Bayesian_linear_regression dbr:Kernel_methods_for_vector_output dbr:Multi-task_learning dbr:Prior_probability dbr:Positive-definite_function dbr:Machine_learning dbr:Hilbert_space dbr:Estimator dbr:Regularized_least_squares dbr:Gaussian_processes dbr:Tikhonov_regularization dbc:Machine_learning
owl:sameAs
n13:4XAJr freebase:m.0jw_0qr wikidata:Q4874475
dbp:wikiPageUsesTemplate
dbt:EquationNote dbt:EquationRef dbt:NumBlk dbt:Technical dbt:Further dbt:Reflist
dbo:abstract
Within Bayesian statistics for machine learning, kernel methods arise from the assumption of an inner product space or similarity structure on inputs. For some such methods, such as support vector machines (SVMs), the original formulation and its regularization were not Bayesian in nature. It is helpful to understand them from a Bayesian perspective. Because the kernels are not necessarily positive semidefinite, the underlying structure may not be inner product spaces, but instead more general reproducing kernel Hilbert spaces. In Bayesian probability, kernel methods are a key component of Gaussian processes, where the kernel function is known as the covariance function. Kernel methods have traditionally been used in supervised learning problems where the input space is usually a space of vectors while the output space is a space of scalars. More recently, these methods have been extended to problems that deal with multiple outputs, such as in multi-task learning. A mathematical equivalence between the regularization and the Bayesian point of view is easily proved in cases where the reproducing kernel Hilbert space is finite-dimensional. The infinite-dimensional case raises subtle mathematical issues; here we consider the finite-dimensional case. We start with a brief review of the main ideas underlying kernel methods for scalar learning, and briefly introduce the concepts of regularization and Gaussian processes. We then show how both points of view arrive at essentially equivalent estimators, and show the connection that ties them together.
prov:wasDerivedFrom
wikipedia-en:Bayesian_interpretation_of_kernel_regularization?oldid=1109518986&ns=0
dbo:wikiPageLength
17543
foaf:isPrimaryTopicOf
wikipedia-en:Bayesian_interpretation_of_kernel_regularization
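The abstract above states that the regularization and Bayesian points of view coincide in the finite-dimensional case: the Tikhonov-regularized (kernel ridge regression) estimator equals the Gaussian-process posterior mean when the regularization weight matches the observation-noise variance. The following is a minimal numerical sketch of that equivalence; it is not part of the DBpedia data on this page, and the RBF kernel, the synthetic data, and the parameter names (rbf_kernel, noise_var, lam) are illustrative assumptions.

# Sketch: kernel ridge regression vs. Gaussian-process posterior mean.
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    """Squared-exponential (RBF) kernel matrix between row-vector sets A and B."""
    sq_dists = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * sq_dists / length_scale**2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(20, 1))              # training inputs
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=20)   # noisy scalar outputs
X_test = np.linspace(-3, 3, 5)[:, None]           # test inputs

noise_var = 0.1**2            # GP observation-noise variance sigma^2
lam = noise_var / len(X)      # regularization weight chosen so that lam * n = sigma^2

K = rbf_kernel(X, X)          # Gram matrix on training inputs
K_star = rbf_kernel(X_test, X)

# Regularized least squares / kernel ridge regression estimator:
#   f_hat(x) = k(x)^T (K + lam * n * I)^{-1} y
alpha_krr = np.linalg.solve(K + lam * len(X) * np.eye(len(X)), y)
f_krr = K_star @ alpha_krr

# Gaussian-process posterior mean with covariance function k and noise sigma^2:
#   m(x) = k(x)^T (K + sigma^2 * I)^{-1} y
alpha_gp = np.linalg.solve(K + noise_var * np.eye(len(X)), y)
f_gp = K_star @ alpha_gp

print(np.allclose(f_krr, f_gp))   # True: the two estimators coincide

With lam * n = sigma^2 the two linear systems are identical, so the predictions agree exactly; this is the finite-dimensional equivalence the abstract refers to.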
Subject Item
dbr:Bayesian_interpretation_of_regularization
dbo:wikiPageWikiLink
dbr:Bayesian_interpretation_of_kernel_regularization
dbo:wikiPageRedirects
dbr:Bayesian_interpretation_of_kernel_regularization
Subject Item
dbr:List_of_things_named_after_Thomas_Bayes
dbo:wikiPageWikiLink
dbr:Bayesian_interpretation_of_kernel_regularization
Subject Item
dbr:Outline_of_machine_learning
dbo:wikiPageWikiLink
dbr:Bayesian_interpretation_of_kernel_regularization
Subject Item
wikipedia-en:Bayesian_interpretation_of_kernel_regularization
foaf:primaryTopic
dbr:Bayesian_interpretation_of_kernel_regularization