An Entity of Type: Thing, from Named Graph: http://dbpedia.org, within Data Space: dbpedia.org


Property Value
dbo:abstract
  • Expert Judgment (EJ) denotes a wide variety of techniques, ranging from a single undocumented opinion, through preference surveys, to formal elicitation with external validation of expert probability assessments. Several recent books treat the subject. In the nuclear safety area, Rasmussen formalized EJ by documenting all steps in the expert elicitation process for scientific review. This made visible the wide spread in expert assessments and raised questions regarding the validation and synthesis of expert judgments. The nuclear safety community later adopted expert judgment techniques underpinned by external validation. Empirical validation is the hallmark of science, and forms the centerpiece of the classical model of probabilistic forecasting. A European network coordinates workshops on expert judgment. Application areas include nuclear safety, investment banking, volcanology, public health, ecology, engineering, climate change and aeronautics/aerospace. Surveys of applications through 2006, as well as exhortatory overviews, are available in the literature. A recent large-scale implementation has been carried out by the World Health Organization, and a long-running application is maintained at the Montserrat Volcano Observatory.

The classical model scores expert performance in terms of statistical accuracy (sometimes called calibration) and informativeness. These terms should not be confused with "accuracy and precision": accuracy "is a description of systematic errors" while precision "is a description of random errors". In the classical model, statistical accuracy is measured as the p-value, the probability with which one would falsely reject the hypothesis that an expert's probability assessments are statistically accurate. A low value (near zero) means it is very unlikely that the discrepancy between an expert's probability statements and the observed outcomes would arise by chance. Informativeness is measured as Shannon relative information (or Kullback–Leibler divergence) with respect to an analyst-supplied background measure. Shannon relative information is used because it is scale invariant, tail insensitive, slow (it varies less across experts than statistical accuracy, which therefore dominates the combined score), and familiar. Parenthetically, measures with physical dimensions, such as the standard deviation or the width of prediction intervals, raise serious problems, as a change of units (meters to kilometers) would affect some variables but not others. The product of statistical accuracy and informativeness for each expert is that expert's combined score. With an optimal choice of a statistical accuracy threshold, beneath which experts are unweighted, the combined score is a long-run "strictly proper scoring rule": an expert achieves his or her long-run maximal expected score by, and only by, stating his or her true beliefs.

The classical model derives Performance Weighted (PW) combinations. These are compared with Equally Weighted (EW) combinations, more recently with Harmonically Weighted (HW) combinations, and with individual expert assessments. While some mathematicians and decision analysts regard combining expert judgments as a mathematical problem, the classical model regards expert combination as more akin to an engineering problem: a bicycle obeys Newton's laws but does not follow from them; it is designed to optimize performance under constraints. Similarly, expert judgment combination is viewed as a tool for enabling rational consensus by optimizing performance measures under mathematical and decision-theoretic constraints. The theory of rational consensus has been summarized elsewhere.
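As a rough illustration of these two scores (a minimal sketch, not any official implementation), the code below computes statistical accuracy and informativeness for one expert under common simplifying assumptions: a (5%, 50%, 95%) quantile elicitation format, a uniform background measure on an analyst-chosen intrinsic range, and piecewise-uniform interpolation of the expert's distributions. All variable names and data are invented.

```python
import numpy as np
from scipy.stats import chi2

# Expected hit rates of the four inter-quantile bins
# for a (5%, 50%, 95%) elicitation format.
P_INTER = np.array([0.05, 0.45, 0.45, 0.05])

def statistical_accuracy(quantiles, realizations):
    """Calibration score: the p-value with which one would falsely reject
    the hypothesis that the expert's inter-quantile intervals capture the
    realizations at the stated rates."""
    q = np.asarray(quantiles, dtype=float)       # shape (N, 3): 5%, 50%, 95%
    x = np.asarray(realizations, dtype=float)    # shape (N,)
    bins = (x[:, None] > q).sum(axis=1)          # bin index 0..3 per variable
    s = np.bincount(bins, minlength=4) / len(x)  # empirical hit rates
    nz = s > 0
    kl = np.sum(s[nz] * np.log(s[nz] / P_INTER[nz]))  # KL(s || p)
    # 2*N*KL(s || p) is asymptotically chi-squared with 3 degrees of freedom.
    return 1.0 - chi2.cdf(2 * len(x) * kl, df=3)

def informativeness(quantiles, lo, hi):
    """Mean Shannon relative information of the expert's piecewise-uniform
    distributions with respect to a uniform background measure on [lo, hi].
    Assumes lo < q5 < q50 < q95 < hi for every variable."""
    q = np.asarray(quantiles, dtype=float)
    n = len(q)
    edges = np.hstack([np.full((n, 1), lo), q, np.full((n, 1), hi)])
    widths = np.diff(edges, axis=1)              # shape (n, 4)
    rel_info = (P_INTER * np.log(P_INTER * (hi - lo) / widths)).sum(axis=1)
    return rel_info.mean()

# Made-up example: 8 calibration variables on an intrinsic range [0, 10].
rng = np.random.default_rng(0)
q = np.sort(rng.uniform(1, 9, size=(8, 3)), axis=1)
x = rng.uniform(0, 10, size=8)
combined = statistical_accuracy(q, x) * informativeness(q, lo=0, hi=10)
```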
Real expert judgment studies differ in many ways from research or academic exercises. Experts are typically recruited through a traceable peer-nomination process based on their knowledge of, and engagement with, the subject of the study; they may receive remuneration. In all cases the experts' reasoning is documented, and their names and affiliations are part of the reporting. However, to encourage candid judgments, individuals' responses are not exchanged within the group, and the association of names with assessments is not reported in the open literature but is preserved to enable peer review by the problem owner. Elicitations typically last several hours; the elicitation protocol is formalized and is part of the public reporting. Elicitation styles differ among practitioners and include face-to-face interviews, with or without plenary briefing and training, and "supervised plenary" sessions. Remote elicitation is rarely used, but some recent studies employ online face-to-face tools. (en)
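Continuing the same hypothetical setup, the combined scores described above could be turned into normalized performance weights as sketched below. The cutoff value, expert names and scores are all invented; in the classical model the accuracy cutoff is itself chosen to optimize the resulting combination.

```python
# Hypothetical expert pool: expert -> (statistical accuracy, informativeness).
scores = {
    "expert_A": (0.62, 1.8),
    "expert_B": (0.04, 2.9),  # statistically inaccurate despite high information
    "expert_C": (0.31, 1.1),
}

def performance_weights(scores, alpha):
    """Normalized performance weights: combined score (accuracy x information)
    for experts at or above the accuracy cutoff alpha, zero below it."""
    raw = {e: (c * i if c >= alpha else 0.0) for e, (c, i) in scores.items()}
    total = sum(raw.values())
    return {e: w / total for e, w in raw.items()} if total else raw

pw = performance_weights(scores, alpha=0.05)  # expert_B is unweighted
ew = {e: 1 / len(scores) for e in scores}     # equal-weight benchmark
```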
dbo:thumbnail
dbo:wikiPageExternalLink
dbo:wikiPageID
  • 49784777 (xsd:integer)
dbo:wikiPageLength
  • 24203 (xsd:nonNegativeInteger)
dbo:wikiPageRevisionID
  • 1114129874 (xsd:integer)
dbo:wikiPageWikiLink
dbp:align
  • right (en)
dbp:captionAlign
  • center (en)
dbp:footer
  • Figure 5: Out-of-sample p-values and combined scores of PW and EW, aggregated over same-sized training sets, as a percentage of all calibration variables, and aggregated over the 33 post-2006 studies (en)
  • Figure 4: Comparison of statistical accuracy and combined scores of 33 post-2006 expert judgment studies (en)
dbp:footerAlign
  • left (en)
dbp:image
  • ComparisonEJStudiesCombinedScores.png (en)
  • ComparisonEJStudiesStatisticalAccuracy.png (en)
  • OutOfSampleCombinedScoresPercCalibrationVars.png (en)
  • OutOfSamplePValuesPercCalibrationVars.png (en)
dbp:width
  • 230 (xsd:integer)
  • 243 (xsd:integer)
  • 260 (xsd:integer)
dbp:wikiPageUsesTemplate
dcterms:subject
rdfs:comment
rdfs:label
  • Structured expert judgment: the classical model (en)
owl:sameAs
prov:wasDerivedFrom
foaf:depiction
foaf:isPrimaryTopicOf
is foaf:primaryTopic of
This content was extracted from Wikipedia and is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License