In artificial intelligence (AI) and philosophy, the AI control problem is the issue of how to build a superintelligent agent that will aid its creators, and avoid inadvertently building a superintelligence that will harm its creators. Its study is motivated by the notion that the human race will have to solve the control problem before any superintelligence is created, as a poorly designed superintelligence might rationally decide to seize control over its environment and refuse to permit its creators to modify it after launch. In addition, some scholars argue that solutions to the control problem, alongside other advances in AI safety engineering, might also find applications in existing non-superintelligent AI.