This HTML5 document contains 334 embedded RDF statements represented using HTML+Microdata notation.

The embedded RDF content will be recognized by any HTML5 Microdata processor.
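As an illustration, the embedded items can be pulled out with any generic Microdata extractor. The sketch below uses Python with the third-party requests and extruct packages (an assumption of this example, not something the page itself requires), pointed at the DBpedia page this document renders.

# Minimal sketch: extract the embedded Microdata items from this page.
# Assumes the third-party `requests` and `extruct` packages are installed.
import requests
import extruct

url = "http://dbpedia.org/page/Existential_risk_from_artificial_general_intelligence"
html = requests.get(url).text

# extruct returns a dict keyed by syntax; each Microdata item carries
# its itemtype plus a dict of itemprop name/value pairs.
data = extruct.extract(html, base_url=url, syntaxes=["microdata"])
for item in data["microdata"]:
    print(item.get("type"), list(item.get("properties", {}).keys()))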

Namespace Prefixes

Prefix        IRI
n8            http://dbpedia.org/resource/Wikipedia:
dcterms       http://purl.org/dc/terms/
yago-res      http://yago-knowledge.org/resource/
dbo           http://dbpedia.org/ontology/
n20           http://dbpedia.org/resource/File:
foaf          http://xmlns.com/foaf/0.1/
n34           https://global.dbpedia.org/id/
n26           http://dbpedia.org/resource/The_New_52:
dbt           http://dbpedia.org/resource/Template:
rdfs          http://www.w3.org/2000/01/rdf-schema#
dbpedia-sv    http://sv.dbpedia.org/resource/
n33           http://dbpedia.org/resource/Artificial_Intelligence:
freebase      http://rdf.freebase.com/ns/
dbpedia-cs    http://cs.dbpedia.org/resource/
n29           http://
n21           https://web.archive.org/web/20151030202356/http:/www.bloomberg.com/news/articles/2015-07-01/
n15           http://commons.wikimedia.org/wiki/Special:FilePath/
n22           http://dbpedia.org/resource/2001:
rdf           http://www.w3.org/1999/02/22-rdf-syntax-ns#
dbpedia-ar    http://ar.dbpedia.org/resource/
owl           http://www.w3.org/2002/07/owl#
n30           http://dbpedia.org/resource/Alien:
n25           https://www.bloomberg.com/news/articles/2015-07-01/
wikipedia-en  http://en.wikipedia.org/wiki/
dbc           http://dbpedia.org/resource/Category:
dbp           http://dbpedia.org/property/
prov          http://www.w3.org/ns/prov#
xsdh          http://www.w3.org/2001/XMLSchema#
dbpedia-id    http://id.dbpedia.org/resource/
n9            http://dbpedia.org/resource/Superintelligence:
wikidata      http://www.wikidata.org/entity/
gold          http://purl.org/linguistics/gold/
dbr           http://dbpedia.org/resource/
dbpedia-ro    http://ro.dbpedia.org/resource/
n19           http://dbpedia.org/resource/The_Precipice:
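Each prefix abbreviates the IRI shown next to it, so a qualified name such as dbr:Elon_Musk in the statements below expands to http://dbpedia.org/resource/Elon_Musk. A minimal sketch of that expansion, assuming Python with the third-party rdflib package:

# Minimal sketch: expand the prefixed names used below into full IRIs
# with rdflib's Namespace helper.
from rdflib import Namespace

dbr = Namespace("http://dbpedia.org/resource/")
dbo = Namespace("http://dbpedia.org/ontology/")

# dbr:Elon_Musk -> http://dbpedia.org/resource/Elon_Musk
print(dbr["Elon_Musk"])
# dbo:wikiPageWikiLink -> http://dbpedia.org/ontology/wikiPageWikiLink
print(dbo.wikiPageWikiLink)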

Statements

Subject Item
dbr:Sam_Harris
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Enlightenment_Now
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Open_Letter_on_Artificial_Intelligence
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:List_of_dates_predicted_for_apocalyptic_events
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:List_of_global_issues
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Lethal_autonomous_weapon
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Life_3.0
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Timeline_of_artificial_intelligence
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:What_We_Owe_the_Future
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Elon_Musk
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Frank_Wilczek
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Future_of_Life_Institute
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Glossary_of_artificial_intelligence
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Computing_Machinery_and_Intelligence
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
n26:_Futures_End
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Friendly_artificial_intelligence
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Center_for_Applied_Rationality
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Center_for_Human-Compatible_Artificial_Intelligence
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Timeline_of_computing_2020–present
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:To_Be_a_Machine
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:AI_alignment
rdfs:seeAlso
dbr:Existential_risk_from_artificial_general_intelligence
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
n30:_Covenant
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Existential_risk_from_advanced_artificial_intelligence
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
dbo:wikiPageRedirects
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Existential_risk_from_artificial_general_intelligence
rdf:type
dbo:Company owl:Thing
rdfs:label
(ar) الخطر الوجودي من الذكاء الاصطناعي العام
(en) Existential risk from artificial general intelligence
(sv) Existentiell risk orsakad av artificiell generell intelligens
(id) Krisis eksistensial dari kecerdasan buatan
(cs) Existenční rizika vývoje umělé inteligence
rdfs:comment
(cs) Existenční rizika plynoucí z vývoje umělé inteligence je hypotetickou hrozbou předpovídající, že dramatický pokrok ve vývoji umělé inteligence (AI) by mohl jednoho dne skončit vyhynutím lidské rasy (nebo jinou globální katastrofou). Lidská rasa v současnosti dominuje nad ostatními druhy, jelikož lidský mozek má některé rozhodující schopnosti, které mozky zvířat postrádají. Pokud však AI předčí lidstvo v běžné inteligenci a stane se „super inteligencí“, mohla by se stát velmi mocnou a těžko kontrolovatelnou, osud lidstva by tak tedy mohl potenciálně záviset na jednání budoucí strojové super inteligence.
(en) Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could result in human extinction or some other unrecoverable global catastrophe. It is argued that the human species currently dominates other species because the human brain has some distinctive capabilities that other animals lack. If AI surpasses humanity in general intelligence and becomes "superintelligent", then it could become difficult or impossible for humans to control. Just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence.
(id) Krisis eksistensial dari kecerdasan buatan adalah kekhawatiran bahwa kemajuan kecerdasan buatan (AI) suatu hari dapat mengakibatkan kepunahan manusia atau . Kekhawatiran ini didasarkan pada argumen bahwa manusia saat ini mendominasi spesies-spesies lainnya karena otak manusia memiliki kemampuan khusus yang tidak dimiliki oleh hewan-hewan lain. Jika kecerdasan buatan berhasil melampaui manusia dan menjadi sangat-sangat cerdas, maka kecerdasan buatan ini akan menjadi sangat kuat dan sulit untuk dikendalikan. Nasib gorila pegunungan saat ini bergantung kepada itikad baik dari manusia, dan mungkin suatu hari nanti nasib manusia juga akan bergantung pada kecerdasan buatan.
(ar) الخطر الوجودي من الذكاء الاصطناعي العام يرجع إلى الفرضية القائلة أن التقدم الكبير في الذكاء الاصطناعي العام قد يؤدي إلى الانقراض البشري أو إلى كارثة عالمية غير قابلة للاسترداد. و الحجة الداعمة لهذه الفرضية هي أن البشر مهيمنون على باقي المخلوقات لامتيازهم بدماغ ذو قدرات مميزة تفتقر إليها أدمغة المخلوقات الأخرى (كالحيوانات مثلًا) ، و عليه إذا تفوق الذكاء الاصطناعي العام على الأدمغة البشرية و أصبحت بدورها فائقة الذكاء فإنها ستكون قوية و يُصعب التحكم بها، و يتوقف مصير البشرية على تصرفات هذه الأجهزة.
rdfs:seeAlso
dbr:AI_alignment dbr:Regulation_of_algorithms
foaf:homepage
n29:Bloomberg.com
foaf:depiction
n15:Bill_Gates_June_2015.jpg
dcterms:subject
dbc:Doomsday_scenarios dbc:Future_problems dbc:Human_extinction dbc:Existential_risk_from_artificial_general_intelligence dbc:Technology_hazards
dbo:wikiPageID
46583121
dbo:wikiPageRevisionID
1123781842
dbo:wikiPageWikiLink
dbr:SurveyMonkey dbr:Bill_Gates dbr:What_Happened_(Clinton_book) dbr:Robot_ethics dbr:Eric_Horvitz dbr:Tay_(bot) dbr:Mountain_gorilla dbr:Sun_microsystems dbr:Richard_Posner dbr:Marvin_Minsky dbr:Stanley_Kubrick dbr:John_Rawls dbc:Doomsday_scenarios dbr:Artificial_intelligence dbr:Stephen_Hawking dbr:Paperclip_maximizer dbr:Vicarious_(company) dbr:Conference_on_Neural_Information_Processing_Systems dbr:Dario_Floreano dbr:Artificial_general_intelligence dbc:Future_problems dbr:Peter_Norvig dbr:Common_good n8:WEIGHT dbr:Wired_(magazine) dbr:International_Space_Station dbr:Intelligent_agent dbr:The_New_York_Times dbr:Brian_Christian n9:_Paths,_Dangers,_Strategies dbr:Is-ought_distinction dbr:CERN dbr:Moore's_Law dbr:Andrew_Ng dbr:Nick_Bostrom dbr:Regulation_of_algorithms dbr:Artificial_intelligence_arms_race dbr:Regulation_of_artificial_intelligence dbr:System_accident dbr:Open_Philanthropy_Project dbr:DARPA dbr:Human_Genome_Project dbr:Steve_Omohundro dbr:Jaron_Lanier dbr:Suffering_risks dbr:DeepMind dbr:Superintelligence dbr:Technological_determinism dbr:Robert_D._Atkinson dbr:Lethal_autonomous_weapon dbr:The_Economist dbr:Cybercrime dbr:Human_extinction dbr:Uncertainty dbr:Go_(game) dbr:Three_Laws_of_Robotics dbr:OpenAI dbr:Martha_Nussbaum dbr:Machine_Intelligence_Research_Institute dbr:Life_3.0 dbr:Autonomy dbr:Anthropomorphism dbr:Nature_(journal) dbr:Francesca_Rossi dbr:Amazon_Mechanical_Turk dbr:HAL_9000 dbr:Computer_scientist dbr:Brian_Krzanich dbr:BRAIN_Initiative dbr:Isaac_Asimov dbr:Barack_Obama dbr:Baidu dbr:Global_catastrophic_risk dbr:Nanotechnology dbr:Murray_Shanahan dbr:Human_Compatible dbr:Open_Letter_on_Artificial_Intelligence n20:Bill_Gates_June_2015.jpg dbr:Scenario_planning dbr:Hillary_Clinton dbr:Pre-emptive_nuclear_strike dbr:Max_More dbr:Terminator_(franchise) dbr:Slate_(magazine) dbr:Politicization_of_science dbr:The_Wall_Street_Journal dbr:Dick_Cheney dbr:I._J._Good dbr:Roman_Yampolskiy dbr:China_Brain_Project dbc:Human_extinction dbr:Steven_Pinker dbr:Limits_of_computation dbr:Max_Tegmark dbr:Mark_Zuckerberg dbr:Psychopathy n22:_A_Space_Odyssey dbr:Social_engineering_(security) dbr:AlphaZero dbr:Instrumental_convergence dbr:Alan_Turing dbr:USA_Today dbr:Joi_Ito dbr:Artificial_philosophy dbr:Technological_utopianism dbr:Collaboration dbr:Friendly_artificial_intelligence dbr:Weaponization_of_artificial_intelligence dbr:The_Atlantic_(magazine) dbr:Frank_Wilczek dbr:AI_box dbr:Yann_LeCun dbr:AI_control_problem dbr:Human_brain dbr:Smithsonian_(magazine) dbr:IBM dbr:The_Washington_Post dbr:Bart_Selman dbr:Center_for_Human-Compatible_AI dbr:Future_of_Humanity_Institute dbc:Existential_risk_from_artificial_general_intelligence dbr:Future_of_Life_Institute dbr:Why_The_Future_Doesn't_Need_Us dbr:YouGov dbr:Effective_altruism dbr:Our_Final_Invention dbr:British_Science_Association dbr:Physics_of_the_Future dbr:Rodney_Brooks dbr:Association_for_the_Advancement_of_Artificial_Intelligence dbr:The_Alignment_Problem dbr:Bill_Joy dbr:Tesla,_Inc. 
dbr:Gray_goo dbc:Technology_hazards dbr:Communications_of_the_ACM dbr:Geoffrey_Hinton dbr:Cockroach dbr:Thomas_G._Dietterich dbr:Robin_Hanson dbr:Astroturfing dbr:Steganography dbr:Elon_Musk dbr:Convergent_evolution dbr:Nuclear_warfare dbr:Loss_function dbr:Utility dbr:Intelligence_explosion dbr:Erewhon dbr:Optimization_problem n19:_Existential_Risk_and_the_Future_of_Humanity dbr:Technological_singularity dbr:Human_Brain_Project dbr:Stuart_J._Russell dbr:International_Conference_on_Machine_Learning dbr:Competition dbr:Edward_Feigenbaum dbr:Samuel_Butler_(novelist) dbr:Computational_complexity dbr:Martin_Ford_(author) dbr:Human_species dbr:Technological_supremacy dbr:Gordon_Bell dbr:Military-civil_fusion n33:_A_Modern_Approach dbr:Hanson_Robotics dbr:Herbert_A._Simon dbr:AI_takeover dbr:AI_takeovers_in_popular_culture dbr:Information_Technology_and_Innovation_Foundation dbr:Darwin_among_the_Machines dbr:Charles_T._Rubin dbr:Gordon_Moore dbr:Instrumentalism dbr:Unintended_consequences dbr:Peter_Thiel dbr:Eliezer_Yudkowsky dbr:Google_DeepMind dbr:National_Public_Radio dbr:Centre_for_the_Study_of_Existential_Risk dbr:Michio_Kaku dbr:Human_enhancement dbr:Michael_Chorost
dbo:wikiPageExternalLink
n21:musk-backed-group-probes-risks-behind-artificial-intelligence n25:musk-backed-group-probes-risks-behind-artificial-intelligence
owl:sameAs
freebase:m.0134_90x dbpedia-cs:Existenční_rizika_vývoje_umělé_inteligence dbpedia-id:Krisis_eksistensial_dari_kecerdasan_buatan dbpedia-sv:Existentiell_risk_orsakad_av_artificiell_generell_intelligens dbpedia-ar:الخطر_الوجودي_من_الذكاء_الاصطناعي_العام dbpedia-ro:Risc_existențial_cauzat_de_inteligența_artificială_puternică wikidata:Q21715237 yago-res:Existential_risk_from_artificial_general_intelligence n34:23whd
dbp:wikiPageUsesTemplate
dbt:Meaning%3F dbt:Doomsday dbt:Blockquote dbt:Endash dbt:Further dbt:Efn dbt:Effective_altruism dbt:See_also dbt:Existential_risk_from_artificial_intelligence dbt:Cquote dbt:Use_dmy_dates dbt:Notelist dbt:Citation_needed dbt:Nbsp dbt:Div_col dbt:Div_col_end dbt:Short_description dbt:Main dbt:Sfn dbt:Excerpt dbt:Artificial_intelligence dbt:Cite_news dbt:Reflist
dbo:thumbnail
n15:Bill_Gates_June_2015.jpg?width=300
dbo:abstract
(cs) Existenční rizika plynoucí z vývoje umělé inteligence je hypotetickou hrozbou předpovídající, že dramatický pokrok ve vývoji umělé inteligence (AI) by mohl jednoho dne skončit vyhynutím lidské rasy (nebo jinou globální katastrofou). Lidská rasa v současnosti dominuje nad ostatními druhy, jelikož lidský mozek má některé rozhodující schopnosti, které mozky zvířat postrádají. Pokud však AI předčí lidstvo v běžné inteligenci a stane se „super inteligencí“, mohla by se stát velmi mocnou a těžko kontrolovatelnou, osud lidstva by tak tedy mohl potenciálně záviset na jednání budoucí strojové super inteligence. Vážnost různých rizikových scénářů je široce diskutována a závisí na bezpočtu nevyřešených otázek ohledně budoucího vývoje počítačové vědy. Dvěma hlavními zdroji obav je, že náhlá a nečekaná „exploze inteligence“ může překvapit nepřipravené lidstvo a že kontrola super inteligentního stroje (či dokonce pokus vštěpovat mu lidské hodnoty) může být mnohem větší problém než se naivně předpokládá.
(ar) الخطر الوجودي من الذكاء الاصطناعي العام يرجع إلى الفرضية القائلة أن التقدم الكبير في الذكاء الاصطناعي العام قد يؤدي إلى الانقراض البشري أو إلى كارثة عالمية غير قابلة للاسترداد. و الحجة الداعمة لهذه الفرضية هي أن البشر مهيمنون على باقي المخلوقات لامتيازهم بدماغ ذو قدرات مميزة تفتقر إليها أدمغة المخلوقات الأخرى (كالحيوانات مثلًا) ، و عليه إذا تفوق الذكاء الاصطناعي العام على الأدمغة البشرية و أصبحت بدورها فائقة الذكاء فإنها ستكون قوية و يُصعب التحكم بها، و يتوقف مصير البشرية على تصرفات هذه الأجهزة. بدأت المخاوف من أجهزة الذكاء الاصطناعي عام 2010م، و قد تحدث عن هذه المخاوف عدة شخصيات مهمة مثل ستيفن هوكينج و بيل غيتس و أيلون موسك، و أصبحت النقاشات حول خطورة هذه الأجهزة واسعة، بسيناريوهات مختلفة. و أحد تلك المخاوف هو القلق من انفجار المعلومات الاستخبارية بشكل مفاجئ يسبق البشر؛ ففي أحد السيناريوهات استطاع برنامج حاسوبي من مضاهاة صانعه، فكان قادرًا على إعادة كتابة خوارزمياته و مضاعفة سرعته و قدراته خلال ستة أشهر من زمن المعالجة المتوازية، و عليه من المتوقع أن بستغرق برنامج الجيل الثاني ثلاثة أشهر لأداء عمل مشابه، و في قد يستغرق مضاعفة قدراته وقتًا أطول إذا كان يواجه فترة خمول أو أسرع إذا خضع إلى ثورة الذكاء الاصطناعي و الذي يسْهل تحوير أفكار الجيل السابق بشكل خاص إلى الجيل التالي. في هذا السيناريو يمر النظام لعدد كبير من الأجيال التي تتطور في فترة زمنية قصيرة، تبدأ بأداء أقل من المستوى البشري و تصل إلى أداء يفوق المستوى البشري في جميع المجالات. و أحد المخاوف من أجهزة الذكاء الاصطناعي هي أن التحكم بهذه الآلات أو حتى برمجتها بقيم مشابهة لقيم الإنسان أمر صعب للغاية، فبعض الباحثين في هذا المجال يعتقدون أن أجهزة الذكاء الاصطناعي قد تحاول إيقاف إغلاقها، في المقابل يقول المشككون مثل Yann LeCun أن هذه الآلات لن تكون لديها الرغبة في الحفاظ على نفسها.
(id) Krisis eksistensial dari kecerdasan buatan adalah kekhawatiran bahwa kemajuan kecerdasan buatan (AI) suatu hari dapat mengakibatkan kepunahan manusia atau . Kekhawatiran ini didasarkan pada argumen bahwa manusia saat ini mendominasi spesies-spesies lainnya karena otak manusia memiliki kemampuan khusus yang tidak dimiliki oleh hewan-hewan lain. Jika kecerdasan buatan berhasil melampaui manusia dan menjadi sangat-sangat cerdas, maka kecerdasan buatan ini akan menjadi sangat kuat dan sulit untuk dikendalikan. Nasib gorila pegunungan saat ini bergantung kepada itikad baik dari manusia, dan mungkin suatu hari nanti nasib manusia juga akan bergantung pada kecerdasan buatan. Besarnya risiko kecerdasan buatan saat ini masih diperdebatkan dan terdapat beberapa skenario mengenai masa depan ilmu komputer. Sebelumnya kekhawatiran ini hanya masuk ke dalam ranah fiksi ilmiah, tetapi kemudian mulai dipertimbangkan secara serius pada tahun 2010-an dan dipopulerkan oleh tokoh-tokoh besar seperti Stephen Hawking, Bill Gates, dan Elon Musk. Salah satu kekhawatiran utama adalah perkembangan kecerdasan buatan yang begitu cepat, mendadak dan tidak terduga, sehingga manusia pun tidak siap untuk menghadapinya. Kekhawatiran lainnya berasal dari kemungkinan bahwa mesin yang sangat-sangat cerdas sangat sulit untuk dikendalikan. Beberapa peneliti kecerdasan buatan berkeyakinan bahwa kecerdasan buatan secara alami akan melawan upaya untuk mematikannya, dan pemrograman kecerdasan buatan dengan etika dan moral manusia yang rumit mungkin merupakan hal teknis yang sulit untuk dilakukan.
(en) Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could result in human extinction or some other unrecoverable global catastrophe. It is argued that the human species currently dominates other species because the human brain has some distinctive capabilities that other animals lack. If AI surpasses humanity in general intelligence and becomes "superintelligent", then it could become difficult or impossible for humans to control. Just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence. The chance of this type of scenario is widely debated, and hinges in part on differing scenarios for future progress in computer science. Once the exclusive domain of science fiction, concerns about superintelligence started to become mainstream in the 2010s, and were popularized by public figures such as Stephen Hawking, Bill Gates, and Elon Musk. One source of concern is that controlling a superintelligent machine, or instilling it with human-compatible values, may be a harder problem than naïvely supposed. Many researchers believe that a superintelligence would naturally resist attempts to shut it off or change its goals—a principle called instrumental convergence—and that preprogramming a superintelligence with a full set of human values will prove to be an extremely difficult technical task. In contrast, skeptics such as computer scientist Yann LeCun argue that superintelligent machines will have no desire for self-preservation. A second source of concern is that a sudden and unexpected "intelligence explosion" might take an unprepared human race by surprise. To illustrate, if the first generation of a computer program able to broadly match the effectiveness of an AI researcher is able to rewrite its algorithms and double its speed or capabilities in six months, then the second-generation program is expected to take three calendar months to perform a similar chunk of work. In this scenario the time for each generation continues to shrink, and the system undergoes an unprecedentedly large number of generations of improvement in a short time interval, jumping from subhuman performance in many areas to superhuman performance in all relevant areas. Empirically, examples like AlphaZero in the domain of Go show that AI systems can sometimes progress from narrow human-level ability to narrow superhuman ability extremely rapidly.
gold:hypernym
dbr:Risk
prov:wasDerivedFrom
wikipedia-en:Existential_risk_from_artificial_general_intelligence?oldid=1123781842&ns=0
dbo:wikiPageLength
118681
foaf:isPrimaryTopicOf
wikipedia-en:Existential_risk_from_artificial_general_intelligence
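The English abstract above illustrates the "intelligence explosion" argument with explicit arithmetic: a first generation that rewrites itself in six months, a second that needs only three, and so on. The toy calculation below shows why the total elapsed time converges while the number of generations does not; only the six-month figure comes from the abstract, the rest is illustrative.

# Illustrative sketch of the doubling scenario described in the abstract:
# generation 1 takes 6 months, generation 2 takes 3, each generation
# halving the time needed by the next.
t = 6.0          # months for the first generation (figure from the abstract)
elapsed = 0.0
for gen in range(1, 11):
    elapsed += t
    print(f"generation {gen:2d} finishes after {elapsed:.3f} months")
    t /= 2
# The elapsed time is a geometric series converging toward 12 months,
# so arbitrarily many improvement generations fit in a fixed interval.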
Subject Item
dbr:Global_catastrophic_risk
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
n19:_Existential_Risk_and_the_Future_of_Humanity
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:AI_risk
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
dbo:wikiPageRedirects
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Jaan_Tallinn
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:TechnoCalyps
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Singularitarianism
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:AI_aftermath_scenarios
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:AI_capability_control
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:AI_takeover
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:AI_takeovers_in_popular_culture
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:A_Human_Algorithm
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:John_C._Lilly
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Effective_Altruism_Global
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Effective_altruism
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Regulation_of_algorithms
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Artificial_intelligence_arms_race
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Fermi_paradox
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Human_Compatible
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Human_extinction
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Terminator_scenario
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
dbo:wikiPageRedirects
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:IHuman_(film)
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Instrumental_convergence
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Neuralink
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:OpenAI
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Open_Philanthropy_(organization)
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Orthogonality_thesis
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
dbo:wikiPageRedirects
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Longtermism
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Murray_Shanahan
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Mind_uploading
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Ethics_of_artificial_intelligence
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Suffering_risks
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Evidence-based_policy
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Workplace_impact_of_artificial_intelligence
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Outline_of_artificial_intelligence
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Existential_risk_of_artificial_general_intelligence
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
dbo:wikiPageRedirects
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Universal_Paperclips
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Tomáš_Mikolov
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Agi_risk
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
dbo:wikiPageRedirects
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Technological_supremacy
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Existential_Risk_from_Artificial_General_Intelligence
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
dbo:wikiPageRedirects
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Existential_risk_from_advanced_AI
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
dbo:wikiPageRedirects
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Existential_risk_from_agi
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
dbo:wikiPageRedirects
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Existential_risk_from_ai
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
dbo:wikiPageRedirects
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Existential_risk_of_AI
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
dbo:wikiPageRedirects
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Existential_risk_of_artificial_intelligence
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
dbo:wikiPageRedirects
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:Existential_risks_from_artificial_general_intelligence
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
dbo:wikiPageRedirects
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
dbr:X-risk_from_AI
dbo:wikiPageWikiLink
dbr:Existential_risk_from_artificial_general_intelligence
dbo:wikiPageRedirects
dbr:Existential_risk_from_artificial_general_intelligence
Subject Item
wikipedia-en:Existential_risk_from_artificial_general_intelligence
foaf:primaryTopic
dbr:Existential_risk_from_artificial_general_intelligence
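The incoming dbo:wikiPageWikiLink statements listed above can also be retrieved live from the public DBpedia SPARQL endpoint. A minimal sketch, assuming Python with the third-party SPARQLWrapper package:

# Minimal sketch: fetch the pages that link to this resource, i.e. the
# bulk of the triples in this document, from the DBpedia endpoint.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX dbr: <http://dbpedia.org/resource/>
    SELECT ?page WHERE {
      ?page dbo:wikiPageWikiLink
            dbr:Existential_risk_from_artificial_general_intelligence .
    }
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["page"]["value"])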