An Entity of Type: Thing, from Named Graph: http://dbpedia.org, within Data Space: dbpedia.org

Ridge regression is a method of estimating the coefficients of multiple-regression models in scenarios where the independent variables are highly correlated. It has been used in many fields including econometrics, chemistry, and engineering. Also known as Tikhonov regularization, named for Andrey Tikhonov, it is a method of regularization of ill-posed problems. It is particularly useful to mitigate the problem of multicollinearity in linear regression, which commonly occurs in models with large numbers of parameters. In general, the method provides improved efficiency in parameter estimation problems in exchange for a tolerable amount of bias (see bias–variance tradeoff).
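The estimator behind this description is the Tikhonov-regularized normal equation, beta_hat = (XᵀX + λI)⁻¹Xᵀy. The following minimal Python sketch is not part of the DBpedia record: the function name ridge_coefficients, the penalty value lam=1.0, and the simulated data are illustrative assumptions. It shows how the added λI term keeps the system solvable when predictors are nearly collinear.

    import numpy as np

    def ridge_coefficients(X, y, lam):
        """Closed-form ridge (Tikhonov) estimate: solve (X'X + lam*I) beta = X'y."""
        n_features = X.shape[1]
        # The lam * I term keeps X'X + lam*I well conditioned even when the
        # columns of X are highly correlated (multicollinearity).
        return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

    # Illustrative data with two nearly collinear predictors (assumed values).
    rng = np.random.default_rng(0)
    x1 = rng.normal(size=100)
    X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=100)])
    y = X @ np.array([1.0, 2.0]) + rng.normal(size=100)
    print(ridge_coefficients(X, y, lam=1.0))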

Property Value
dbo:abstract
  • Ridge regression is a method of estimating the coefficients of multiple-regression models in scenarios where the independent variables are highly correlated. It has been used in many fields including econometrics, chemistry, and engineering. Also known as Tikhonov regularization, named for Andrey Tikhonov, it is a method of regularization of ill-posed problems. It is particularly useful to mitigate the problem of multicollinearity in linear regression, which commonly occurs in models with large numbers of parameters. In general, the method provides improved efficiency in parameter estimation problems in exchange for a tolerable amount of bias (see bias–variance tradeoff). The theory was first introduced by Hoerl and Kennard in 1970 in their Technometrics papers “RIDGE regressions: biased estimation of nonorthogonal problems” and “RIDGE regressions: applications in nonorthogonal problems”. This was the result of ten years of research into the field of ridge analysis. Ridge regression was developed as a possible solution to the imprecision of least squares estimators when linear regression models have some multicollinear (highly correlated) independent variables, by creating a ridge regression estimator (RR). This provides a more precise ridge parameter estimate, as its variance and mean squared error are often smaller than those of the least squares estimators previously derived. (en) A brief numerical sketch of this variance reduction follows this list.
  • Ridge regression (リッジ回帰) is a method of estimating the coefficients of a multiple-regression model when the independent variables are strongly correlated. It is used in fields such as econometrics, chemistry, and engineering. The theory was first introduced by Hoerl and Kennard in 1970 in the Technometrics papers “RIDGE regressions: biased estimation of nonorthogonal problems” and “RIDGE regressions: applications in nonorthogonal problems”. This was the result of ten years of research in the field of ridge analysis. Ridge regression was developed to resolve the imprecision of least squares estimators when a linear regression model has multicollinearity (strongly correlated independent variables). The ridge regression estimator is more precise than the least squares estimator. (ja)
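As a hedged illustration of the variance claim in the abstract above, the following small Monte Carlo sketch compares the spread of ordinary least squares and ridge coefficient estimates on a nearly collinear design. The sample size, the penalty lam=1.0, and the data-generating coefficients are assumptions for illustration, not values from the source.

    import numpy as np

    rng = np.random.default_rng(1)
    true_beta = np.array([1.0, 2.0])
    lam = 1.0  # assumed penalty strength for illustration
    ols_estimates, ridge_estimates = [], []

    for _ in range(500):
        x1 = rng.normal(size=50)
        # Two almost identical predictors make the design multicollinear.
        X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=50)])
        y = X @ true_beta + rng.normal(size=50)
        ols_estimates.append(np.linalg.lstsq(X, y, rcond=None)[0])
        ridge_estimates.append(np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y))

    # The ridge estimates are biased toward zero but far less variable here,
    # which is the bias-variance tradeoff mentioned in the abstract.
    print("OLS   coefficient std:", np.std(ols_estimates, axis=0))
    print("Ridge coefficient std:", np.std(ridge_estimates, axis=0))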
dbo:wikiPageExternalLink
dbo:wikiPageID
  • 954328 (xsd:integer)
dbo:wikiPageLength
  • 26689 (xsd:nonNegativeInteger)
dbo:wikiPageRevisionID
  • 1120671107 (xsd:integer)
dbo:wikiPageWikiLink
dbp:date
  • May 2020 (en)
  • November 2022 (en)
dbp:reason
  • what are the relative dimensions of A, b and x? Is A a square or non-square matrix? Are x and y of the same dimension? (en)
  • If multiplying a matrix by x is a filter, what in A is a frequency, and what values correspond to high or low frequencies? (en)
  • does this represent a system of linear equations? (en)
dbp:wikiPageUsesTemplate
dcterms:subject
rdf:type
rdfs:comment
  • Ridge regression (リッジ回帰) is a method of estimating the coefficients of a multiple-regression model when the independent variables are strongly correlated. It is used in fields such as econometrics, chemistry, and engineering. The theory was first introduced by Hoerl and Kennard in 1970 in the Technometrics papers “RIDGE regressions: biased estimation of nonorthogonal problems” and “RIDGE regressions: applications in nonorthogonal problems”. This was the result of ten years of research in the field of ridge analysis. Ridge regression was developed to resolve the imprecision of least squares estimators when a linear regression model has multicollinearity (strongly correlated independent variables). The ridge regression estimator is more precise than the least squares estimator. (ja)
  • Ridge regression is a method of estimating the coefficients of multiple-regression models in scenarios where the independent variables are highly correlated. It has been used in many fields including econometrics, chemistry, and engineering. Also known as Tikhonov regularization, named for Andrey Tikhonov, it is a method of regularization of ill-posed problems. It is particularly useful to mitigate the problem of multicollinearity in linear regression, which commonly occurs in models with large numbers of parameters. In general, the method provides improved efficiency in parameter estimation problems in exchange for a tolerable amount of bias (see bias–variance tradeoff). (en)
rdfs:label
  • リッジ回帰 (ja)
  • Ridge regression (en)
owl:sameAs
prov:wasDerivedFrom
foaf:isPrimaryTopicOf
is dbo:wikiPageDisambiguates of
is dbo:wikiPageRedirects of
is dbo:wikiPageWikiLink of
is owl:differentFrom of
is foaf:primaryTopic of
This content was extracted from Wikipedia and is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License