dbo:abstract
|
- In the field of statistical learning theory, matrix regularization generalizes notions of vector regularization to cases where the object to be learned is a matrix. The purpose of regularization is to enforce conditions, for example sparsity or smoothness, that can produce stable predictive functions. For example, in the more common vector framework, Tikhonov regularization optimizes over $\min_{x} \|Ax - y\|^{2} + \lambda \|x\|^{2}$ to find a vector $x$ that is a stable solution to the regression problem. When the system is described by a matrix rather than a vector, this problem can be written as $\min_{X} \|AX - Y\|_{F}^{2} + \lambda \|X\|_{F}^{2}$, where the vector norm enforcing a regularization penalty on $x$ has been extended to a matrix norm, such as the Frobenius norm, on $X$. Matrix regularization has applications in matrix completion, multivariate regression, and multi-task learning. Ideas of feature and group selection can also be extended to matrices, and these can be generalized to the nonparametric case of multiple kernel learning. (en)
|
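The abstract states the matrix form of Tikhonov regularization; the following is a minimal sketch in Python of that objective and its closed-form solution, assuming the squared Frobenius norm as the matrix penalty (the names A, Y, lam, and matrix_tikhonov are illustrative, not from the source):

```python
import numpy as np

def matrix_tikhonov(A, Y, lam):
    """Solve min_X ||A X - Y||_F^2 + lam * ||X||_F^2 in closed form.

    The minimizer is X = (A^T A + lam * I)^{-1} A^T Y, i.e. ridge
    regression applied column-wise to the target matrix Y.
    """
    d = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ Y)

# Usage example on synthetic data (50 samples, 10 features, 3 tasks).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))                      # design matrix
X_true = rng.standard_normal((10, 3))                  # true coefficient matrix
Y = A @ X_true + 0.1 * rng.standard_normal((50, 3))    # noisy targets

X_hat = matrix_tikhonov(A, Y, lam=1.0)
print(np.linalg.norm(X_hat - X_true))                  # small recovery error
```

Because the Frobenius penalty decouples across columns, this reduces to an ordinary ridge regression per task; structured penalties such as the nuclear norm, which the article's multi-task setting motivates, would instead couple the columns.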
dbo:wikiPageLength
|
- 14755 (xsd:nonNegativeInteger)
|
rdfs:comment
|
- In the field of statistical learning theory, matrix regularization generalizes notions of vector regularization to cases where the object to be learned is a matrix. The purpose of regularization is to enforce conditions, for example sparsity or smoothness, that can produce stable predictive functions. For example, in the more common vector framework, Tikhonov regularization optimizes over $\min_{x} \|Ax - y\|^{2} + \lambda \|x\|^{2}$ to find a vector $x$ that is a stable solution to the regression problem. When the system is described by a matrix rather than a vector, this problem can be written as $\min_{X} \|AX - Y\|_{F}^{2} + \lambda \|X\|_{F}^{2}$. (en)
|
rdfs:label
|
- Matrix regularization (en)
|