dbo:abstract
|
- Proximal gradient (forward backward splitting) methods for learning is an area of research in optimization and statistical learning theory which studies algorithms for a general class of convex regularization problems where the regularization penalty may not be differentiable. One such example is $\ell_1$ regularization (also known as Lasso) of the form $\min_{w \in \mathbb{R}^d} \frac{1}{n}\sum_{i=1}^n \left(y_i - \langle w, x_i\rangle\right)^2 + \lambda \|w\|_1$, where $x_i \in \mathbb{R}^d$ and $y_i \in \mathbb{R}$. Proximal gradient methods offer a general framework for solving regularization problems from statistical learning theory with penalties that are tailored to a specific problem application. Such customized penalties can help to induce certain structure in problem solutions, such as sparsity (in the case of lasso) or group structure (in the case of group lasso). (en)
|
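To make the forward-backward splitting concrete, here is a minimal Python sketch of ISTA (iterative soft-thresholding), the basic proximal gradient method, applied to the lasso objective from the abstract. The step-size rule, iteration count, and synthetic data below are illustrative assumptions, not part of the source.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1: shrinks each entry toward zero."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(X, y, lam, step=None, n_iter=500):
    """Minimize (1/n)||y - Xw||^2 + lam*||w||_1 by forward-backward splitting."""
    n, d = X.shape
    if step is None:
        # Step 1/L, where L = (2/n)||X||_2^2 is the Lipschitz constant
        # of the gradient of the smooth squared-error term.
        L = 2.0 * np.linalg.norm(X, 2) ** 2 / n
        step = 1.0 / L
    w = np.zeros(d)
    for _ in range(n_iter):
        # Forward (gradient) step on the differentiable loss.
        grad = (2.0 / n) * X.T @ (X @ w - y)
        # Backward (proximal) step on the non-differentiable l1 penalty.
        w = soft_threshold(w - step * grad, step * lam)
    return w

# Tiny hypothetical example: a sparse ground truth recovered from noisy data.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.1 * rng.standard_normal(100)
print(np.round(ista(X, y, lam=0.1), 2))  # most entries driven exactly to zero
```

Because the proximal step applies soft-thresholding exactly, small coefficients become exactly zero rather than merely small, which is how the method induces the sparsity the abstract mentions.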
dbo:wikiPageLength
|
- 20337 (xsd:nonNegativeInteger)
|
rdfs:comment
|
- Proximal gradient (forward backward splitting) methods for learning is an area of research in optimization and statistical learning theory which studies algorithms for a general class of convex regularization problems where the regularization penalty may not be differentiable. One such example is $\ell_1$ regularization (also known as Lasso) of the form $\min_{w \in \mathbb{R}^d} \frac{1}{n}\sum_{i=1}^n \left(y_i - \langle w, x_i\rangle\right)^2 + \lambda \|w\|_1$. (en)
|
rdfs:label
|
- Proximal gradient methods for learning (en)
|