An Entity of Type: Thing, from Named Graph: http://dbpedia.org, within Data Space: dbpedia.org

In the field of artificial intelligence (AI) design, AI capability control proposals, also referred to more restrictively as AI confinement, aim to increase our ability to monitor and control the behavior of AI systems, including proposed artificial general intelligences (AGIs), in order to reduce the danger they might pose if misaligned. However, capability control becomes less effective as agents become more intelligent and their ability to exploit flaws in human control systems increases, potentially resulting in an existential risk from AGI. Therefore, the Oxford philosopher Nick Bostrom and others recommend capability control methods only as a supplement to alignment methods.

Property Value
dbo:abstract
  • In the field of artificial intelligence (AI) design, AI capability control proposals, also referred to more restrictively as AI confinement, aim to increase our ability to monitor and control the behavior of AI systems, including proposed artificial general intelligences (AGIs), in order to reduce the danger they might pose if misaligned. However, capability control becomes less effective as agents become more intelligent and their ability to exploit flaws in human control systems increases, potentially resulting in an existential risk from AGI. Therefore, the Oxford philosopher Nick Bostrom and others recommend capability control methods only as a supplement to alignment methods. (en)
  • An AI box, sometimes called an Oracle AI, is a hypothetical isolated computer hardware system in which a possibly dangerous artificial intelligence, or AI, is kept restricted in a "virtual prison" and is not allowed to manipulate events in the external world. Such a box would be restricted to minimalist communication channels. Unfortunately, even if the box is well designed, a sufficiently intelligent AI may be able to persuade or deceive its human keepers into releasing it, or otherwise be able to "hack" its way out of the box. (es)
dbo:wikiPageID
  • 31641770 (xsd:integer)
dbo:wikiPageLength
  • 23837 (xsd:nonNegativeInteger)
dbo:wikiPageRevisionID
  • 1123952307 (xsd:integer)
dbp:id
  • oAHIa651Wa0 (en)
dbp:title
  • "Presentation titled 'Thinking inside the box: using and controlling an Oracle AI'" (en)
rdfs:comment
  • In the field of artificial intelligence (AI) design, AI capability control proposals, also referred to more restrictively as AI confinement, aim to increase our ability to monitor and control the behavior of AI systems, including proposed artificial general intelligences (AGIs), in order to reduce the danger they might pose if misaligned. However, capability control becomes less effective as agents become more intelligent and their ability to exploit flaws in human control systems increases, potentially resulting in an existential risk from AGI. Therefore, the Oxford philosopher Nick Bostrom and others recommend capability control methods only as a supplement to alignment methods. (en)
  • An AI box, sometimes called an Oracle AI, is a hypothetical isolated computer hardware system in which a possibly dangerous artificial intelligence, or AI, is kept restricted in a "virtual prison" and is not allowed to manipulate events in the external world. Such a box would be restricted to minimalist communication channels. Unfortunately, even if the box is well designed, a sufficiently intelligent AI may be able to persuade or deceive its human keepers into releasing it, or otherwise be able to "hack" its way out of the box. (es)
rdfs:label
  • AI capability control (en)
  • AI box (es)
This content was extracted from Wikipedia and is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License