This HTML5 document contains 42 embedded RDF statements represented using HTML+Microdata notation.

The embedded RDF content will be recognized by any processor of HTML5 Microdata.
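As a rough illustration (not part of the extracted data), the same triples can be consumed in Python without writing a Microdata processor by loading DBpedia's Turtle serialization of the resource; the /data/<name>.ttl URL pattern and the use of rdflib are assumptions of this sketch, not something stated on this page.

    # Minimal sketch: load the same statements from DBpedia's Turtle
    # serialization (assumed URL pattern) instead of parsing the HTML5
    # Microdata directly.
    from rdflib import Graph, URIRef

    g = Graph()
    g.parse("https://dbpedia.org/data/Matrix_regularization.ttl", format="turtle")

    subject = URIRef("http://dbpedia.org/resource/Matrix_regularization")
    for predicate, obj in g.predicate_objects(subject):
        print(predicate, obj)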

Namespace Prefixes

Prefix        IRI
dcterms       http://purl.org/dc/terms/
yago-res      http://yago-knowledge.org/resource/
dbo           http://dbpedia.org/ontology/
foaf          http://xmlns.com/foaf/0.1/
n7            https://global.dbpedia.org/id/
dbt           http://dbpedia.org/resource/Template:
rdfs          http://www.w3.org/2000/01/rdf-schema#
freebase      http://rdf.freebase.com/ns/
rdf           http://www.w3.org/1999/02/22-rdf-syntax-ns#
owl           http://www.w3.org/2002/07/owl#
wikipedia-en  http://en.wikipedia.org/wiki/
dbc           http://dbpedia.org/resource/Category:
dbp           http://dbpedia.org/property/
prov          http://www.w3.org/ns/prov#
xsdh          http://www.w3.org/2001/XMLSchema#
wikidata      http://www.wikidata.org/entity/
dbr           http://dbpedia.org/resource/
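For readers unfamiliar with prefixed-name (CURIE) notation, the small sketch below is illustrative only; its dictionary repeats a few of the prefixes above and shows how a name such as dbr:Matrix_regularization expands to a full IRI.

    # Illustrative prefix expansion; PREFIXES repeats a subset of the table above.
    PREFIXES = {
        "dbr": "http://dbpedia.org/resource/",
        "dbo": "http://dbpedia.org/ontology/",
        "dbc": "http://dbpedia.org/resource/Category:",
        "foaf": "http://xmlns.com/foaf/0.1/",
    }

    def expand(curie: str) -> str:
        """Split 'prefix:local' and substitute the namespace IRI."""
        prefix, _, local = curie.partition(":")
        return PREFIXES[prefix] + local

    print(expand("dbr:Matrix_regularization"))
    # http://dbpedia.org/resource/Matrix_regularization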

Statements

Subject Item: dbr:Ridge_regression
    dbo:wikiPageWikiLink  dbr:Matrix_regularization

Subject Item: dbr:Matrix_completion
    dbo:wikiPageWikiLink  dbr:Matrix_regularization
Subject Item: dbr:Matrix_regularization
    rdfs:label  Matrix regularization
    rdfs:comment
        In the field of statistical learning theory, matrix regularization generalizes notions of vector regularization to cases where the object to be learned is a matrix. The purpose of regularization is to enforce conditions, for example sparsity or smoothness, that can produce stable predictive functions. For example, in the more common vector framework, Tikhonov regularization optimizes over min_x ‖Ax − y‖² + λ‖x‖² to find a vector x that is a stable solution to the regression problem. When the system is described by a matrix rather than a vector, this problem can be written as min_X ‖AX − Y‖_F² + λ‖X‖_F².
    dcterms:subject
        dbc:Machine_learning  dbc:Matrices  dbc:Estimation_theory
    dbo:wikiPageID  44628821
    dbo:wikiPageRevisionID  1068280164
    dbo:wikiPageWikiLink
        dbr:Feature_selection  dbr:Matching_pursuit  dbr:Multiple_kernel_learning  dbr:Tikhonov_regularization  dbr:Lasso_(statistics)  dbr:Proximal_gradient_method  dbr:Statistical_learning_theory  dbc:Matrices  dbc:Machine_learning  dbr:Frobenius_inner_product  dbr:Multivariate_regression  dbr:Regularization_(mathematics)  dbr:Multi-task_learning  dbr:Schatten_norm  dbr:Matrix_completion  dbc:Estimation_theory  dbr:Laplacian_matrix  dbr:Reproducing_kernel_Hilbert_space  dbr:Regularization_by_spectral_filtering
    owl:sameAs
        yago-res:Matrix_regularization  n7:2MWRE  freebase:m.012n_11w  wikidata:Q25048644
    dbp:wikiPageUsesTemplate  dbt:Reflist
    dbo:abstract
        In the field of statistical learning theory, matrix regularization generalizes notions of vector regularization to cases where the object to be learned is a matrix. The purpose of regularization is to enforce conditions, for example sparsity or smoothness, that can produce stable predictive functions. For example, in the more common vector framework, Tikhonov regularization optimizes over min_x ‖Ax − y‖² + λ‖x‖² to find a vector x that is a stable solution to the regression problem. When the system is described by a matrix rather than a vector, this problem can be written as min_X ‖AX − Y‖_F² + λ‖X‖_F², where the vector norm enforcing a regularization penalty on x has been extended to a matrix norm on X. Matrix regularization has applications in matrix completion, multivariate regression, and multi-task learning. Ideas of feature and group selection can also be extended to matrices, and these can be generalized to the nonparametric case of multiple kernel learning.
    prov:wasDerivedFrom  wikipedia-en:Matrix_regularization?oldid=1068280164&ns=0
    dbo:wikiPageLength  14755
    foaf:isPrimaryTopicOf  wikipedia-en:Matrix_regularization
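The optimization problem stated in the dbo:abstract above has a standard closed-form solution. The following sketch is illustrative only (synthetic data, made-up variable names, not drawn from this page): it solves the matrix Tikhonov problem min_X ‖AX − Y‖_F² + λ‖X‖_F² numerically.

    # Sketch of matrix Tikhonov regularization: X = (A^T A + lam*I)^{-1} A^T Y
    # minimizes ||A X - Y||_F^2 + lam * ||X||_F^2. All data below is synthetic.
    import numpy as np

    def matrix_tikhonov(A, Y, lam):
        """Closed-form solution of the regularized least-squares problem."""
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ Y)

    rng = np.random.default_rng(0)
    A = rng.normal(size=(50, 10))       # design matrix
    X_true = rng.normal(size=(10, 3))   # matrix to be learned (3 tasks)
    Y = A @ X_true + 0.1 * rng.normal(size=(50, 3))

    X_hat = matrix_tikhonov(A, Y, lam=1.0)
    print(np.linalg.norm(X_hat - X_true))  # small recovery error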
Subject Item: dbr:Regularization
    dbo:wikiPageWikiLink  dbr:Matrix_regularization
    dbo:wikiPageDisambiguates  dbr:Matrix_regularization

Subject Item: dbr:Regularization_(mathematics)
    dbo:wikiPageWikiLink  dbr:Matrix_regularization

Subject Item: dbr:Outline_of_machine_learning
    dbo:wikiPageWikiLink  dbr:Matrix_regularization

Subject Item: wikipedia-en:Matrix_regularization
    foaf:primaryTopic  dbr:Matrix_regularization