This HTML5 document contains 119 embedded RDF statements represented using HTML+Microdata notation.

The embedded RDF content will be recognized by any HTML5 Microdata processor.
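As a sketch of how the embedded statements could be read out programmatically, the Python snippet below uses the requests and extruct libraries (both assumptions, not something this page prescribes; any HTML5 Microdata processor would work equally well), and the page URL it fetches is a placeholder.

# Sketch: pull the HTML+Microdata statements out of a page like this one.
# Assumes the requests and extruct packages; PAGE_URL is a hypothetical location.
import requests
import extruct

PAGE_URL = "https://dbpedia.org/data/Partially_observable_Markov_decision_process"  # assumed

html = requests.get(PAGE_URL, headers={"Accept": "text/html"}).text

# extruct returns a dict keyed by syntax name; only Microdata items are requested here.
data = extruct.extract(html, base_url=PAGE_URL, syntaxes=["microdata"])

for item in data["microdata"]:
    print(item.get("type"), "->", sorted(item.get("properties", {})))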

Namespace Prefixes

Prefix        IRI
dcterms       http://purl.org/dc/terms/
yago-res      http://yago-knowledge.org/resource/
dbo           http://dbpedia.org/ontology/
foaf          http://xmlns.com/foaf/0.1/
n11           http://www.cassandra.org/pomdp/
n25           http://bitbucket.org/bami/
n16           https://global.dbpedia.org/id/
n20           https://longhorizon.org/trey/zmdp/
yago          http://dbpedia.org/class/yago/
dbt           http://dbpedia.org/resource/Template:
rdfs          http://www.w3.org/2000/01/rdf-schema#
n24           https://cran.r-project.org/web/packages/pomdp/
n6            http://bigbird.comp.nus.edu.sg/pmwiki/farm/appl/
freebase      http://rdf.freebase.com/ns/
n22           https://github.com/JuliaPOMDP/
rdf           http://www.w3.org/1999/02/22-rdf-syntax-ns#
owl           http://www.w3.org/2002/07/owl#
dbpedia-zh    http://zh.dbpedia.org/resource/
dbpedia-fr    http://fr.dbpedia.org/resource/
wikipedia-en  http://en.wikipedia.org/wiki/
n7            http://dbpedia.org/resource/List_of_acronyms:
dbc           http://dbpedia.org/resource/Category:
dbp           http://dbpedia.org/property/
prov          http://www.w3.org/ns/prov#
xsdh          http://www.w3.org/2001/XMLSchema#
gold          http://purl.org/linguistics/gold/
wikidata      http://www.wikidata.org/entity/
dbr           http://dbpedia.org/resource/
dbpedia-ja    http://ja.dbpedia.org/resource/
n12           https://www.cs.kent.ac.uk/people/staff/mg483/code/IsoFreeBB/
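The prefixes above abbreviate the full IRIs that appear in the statements below. A minimal sketch of how the expansion works, using Python's rdflib (an assumption; any RDF library exposes the same mechanics):

# Sketch: expanding prefixed names such as dbr:... and dbo:... into full IRIs.
from rdflib import Graph, Namespace

DBR = Namespace("http://dbpedia.org/resource/")
DBO = Namespace("http://dbpedia.org/ontology/")

g = Graph()
g.bind("dbr", DBR)  # register the prefixes so serializations stay compact
g.bind("dbo", DBO)

subject = DBR["Partially_observable_Markov_decision_process"]
predicate = DBO["wikiPageWikiLink"]

print(subject)    # http://dbpedia.org/resource/Partially_observable_Markov_decision_process
print(predicate)  # http://dbpedia.org/ontology/wikiPageWikiLink

# And the reverse direction: a full IRI rendered back as a prefixed name.
print(g.namespace_manager.qname(subject))  # dbr:Partially_observable_Markov_decision_process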

Statements

Subject Item
n7:_P
dbo:wikiPageWikiLink
dbr:Partially_observable_Markov_decision_process
Subject Item
dbr:List_of_computer_scientists
dbo:wikiPageWikiLink
dbr:Partially_observable_Markov_decision_process
Subject Item
dbr:Monte_Carlo_POMDP
dbo:wikiPageWikiLink
dbr:Partially_observable_Markov_decision_process
Subject Item
dbr:One-pass_algorithm
dbo:wikiPageWikiLink
dbr:Partially_observable_Markov_decision_process
Subject Item
dbr:Partially_observable_Markov_decision_process
rdf:type
yago:Abstraction100002137 yago:YagoPermanentlyLocatedEntity yago:Procedure101023820 yago:Event100029378 yago:Model105890249 yago:Act100030358 yago:WikicatMarkovProcesses yago:Content105809192 yago:WikicatStochasticProcesses yago:Cognition100023271 yago:StochasticProcess113561896 yago:PsychologicalFeature100023100 yago:Idea105833840 yago:Activity100407535 yago:Hypothesis105888929 yago:Concept105835747
rdfs:label
Partially observable Markov decision process (en)
部分観測マルコフ決定過程 (ja)
部分可觀察馬可夫決策過程 (zh)
Processus de décision markovien partiellement observable (fr)
rdfs:comment
(en) A partially observable Markov decision process (POMDP) is a generalization of a Markov decision process (MDP). A POMDP models an agent decision process in which it is assumed that the system dynamics are determined by an MDP, but the agent cannot directly observe the underlying state. Instead, it must maintain a sensor model (the probability distribution of different observations given the underlying state) and the underlying MDP. Unlike the policy function in an MDP, which maps the underlying states to the actions, a POMDP's policy is a mapping from the history of observations (or belief states) to the actions.

(fr, translated) In decision theory and probability theory, a partially observable Markov decision process (POMDP) is a generalization of a Markov decision process (MDP). As in an MDP, the effect of actions is uncertain, but unlike in an MDP the agent has only partial information about the current state. POMDPs are a particular kind of hidden Markov model (HMM) in which probabilistic actions are available. The following table shows the place of POMDPs within the family of decision processes:

(zh, translated) A partially observable Markov decision process (POMDP) is a generalization of a Markov decision process. A POMDP models an agent's decision process under the assumption that the system dynamics are determined by an MDP, but the agent cannot directly observe the current state. Instead, it must infer the distribution over states from the model together with its global and local observations. Because the POMDP framework is general enough to model a variety of real-world sequential processes, it has been applied to robot navigation problems, machine maintenance, and planning under uncertainty. The framework was first established in the operations research community and was subsequently developed further by the artificial intelligence and automated planning communities.

(ja, translated) A partially observable Markov decision process (POMDP) is a generalization of a Markov decision process (MDP) and provides a framework for modeling decision processes in which the state cannot be observed directly. POMDPs are expressive enough to model any real-world sequential decision process and have been applied to robot navigation, machine maintenance, and planning under uncertainty. POMDPs originated in operations research and were later taken up by the artificial intelligence and automated planning communities.
dcterms:subject
dbc:Stochastic_control dbc:Dynamic_programming dbc:Markov_processes
dbo:wikiPageID
3063552
dbo:wikiPageRevisionID
1104376990
dbo:wikiPageWikiLink
dbr:Parity_game dbr:Principle_component_analysis dbc:Stochastic_control dbr:Büchi_automaton dbr:Computational_complexity_theory dbr:Operations_research dbr:Bellman_equation dbc:Dynamic_programming dbr:Markov_decision_process dbr:Imperfect_information dbr:Julia_(programming_language) dbr:EXPTIME dbr:Undecidable_problem dbr:Michael_L._Littman dbc:Markov_processes dbr:Artificial_intelligence dbr:Karl_Johan_Åström dbr:Automated_planning dbr:Leslie_P._Kaelbling
dbo:wikiPageExternalLink
n6:index.php%3Fn=Main.HomePage n11:index.shtml n12: n20: n22:POMDPs.jl n24:index.html n25:pypomdp
owl:sameAs
dbpedia-zh:部分可觀察馬可夫決策過程 n16:i7Nr freebase:m.08p0k3 yago-res:Partially_observable_Markov_decision_process wikidata:Q176814 dbpedia-fr:Processus_de_décision_markovien_partiellement_observable dbpedia-ja:部分観測マルコフ決定過程
dbp:wikiPageUsesTemplate
dbt:Reflist dbt:Short_description
dbo:abstract
(en) A partially observable Markov decision process (POMDP) is a generalization of a Markov decision process (MDP). A POMDP models an agent decision process in which it is assumed that the system dynamics are determined by an MDP, but the agent cannot directly observe the underlying state. Instead, it must maintain a sensor model (the probability distribution of different observations given the underlying state) and the underlying MDP. Unlike the policy function in an MDP, which maps the underlying states to the actions, a POMDP's policy is a mapping from the history of observations (or belief states) to the actions. The POMDP framework is general enough to model a variety of real-world sequential decision processes. Applications include robot navigation problems, machine maintenance, and planning under uncertainty in general. The general framework of Markov decision processes with imperfect information was described by Karl Johan Åström in 1965 for the case of a discrete state space, and it was further studied in the operations research community, where the acronym POMDP was coined. It was later adapted for problems in artificial intelligence and automated planning by Leslie P. Kaelbling and Michael L. Littman. An exact solution to a POMDP yields the optimal action for each possible belief over the world states. The optimal action maximizes the expected reward (or minimizes the cost) of the agent over a possibly infinite horizon. The sequence of optimal actions is known as the optimal policy of the agent for interacting with its environment.

(zh, translated) A partially observable Markov decision process (POMDP) is a generalization of a Markov decision process. A POMDP models an agent's decision process under the assumption that the system dynamics are determined by an MDP, but the agent cannot directly observe the current state. Instead, it must infer the distribution over states from the model together with its global and local observations. Because the POMDP framework is general enough to model a variety of real-world sequential processes, it has been applied to robot navigation problems, machine maintenance, and planning under uncertainty. The framework was first established in the operations research community and was subsequently developed further by the artificial intelligence and automated planning communities.

(fr, translated) In decision theory and probability theory, a partially observable Markov decision process (POMDP) is a generalization of a Markov decision process (MDP). As in an MDP, the effect of actions is uncertain, but unlike in an MDP the agent has only partial information about the current state. POMDPs are a particular kind of hidden Markov model (HMM) in which probabilistic actions are available. The following table shows the place of POMDPs within the family of decision processes: Models in this family are used, among other applications, in artificial intelligence for the control of complex systems such as intelligent agents.

(ja, translated) A partially observable Markov decision process (POMDP) is a generalization of a Markov decision process (MDP) and provides a framework for modeling decision processes in which the state cannot be observed directly. POMDPs are expressive enough to model any real-world sequential decision process and have been applied to robot navigation, machine maintenance, and planning under uncertainty. POMDPs originated in operations research and were later taken up by the artificial intelligence and automated planning communities.
gold:hypernym
dbr:Generalization
prov:wasDerivedFrom
wikipedia-en:Partially_observable_Markov_decision_process?oldid=1104376990&ns=0
dbo:wikiPageLength
19353
foaf:isPrimaryTopicOf
wikipedia-en:Partially_observable_Markov_decision_process
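The abstract above describes the POMDP agent as maintaining a belief, a probability distribution over the hidden states, which it updates from the sensor model after each action and observation; the policy then maps beliefs rather than states to actions. A minimal numerical sketch of that belief update follows; the two-state matrices are illustrative placeholders, not data from this page.

# Sketch of the POMDP belief update described in the abstract:
#   b'(s') is proportional to O(o | s', a) * sum_s T(s' | s, a) * b(s)
# All numbers below are made up for illustration.
import numpy as np

T = np.array([[0.9, 0.1],   # T[s, s'] = P(s' | s, a) for one fixed action a
              [0.2, 0.8]])
O = np.array([[0.7, 0.3],   # O[s', o] = P(o | s', a) for the same action
              [0.1, 0.9]])

def belief_update(b, obs):
    """Bayes-filter update of the belief vector b after observing obs."""
    predicted = b @ T                     # sum_s T(s' | s, a) * b(s)
    unnormalized = predicted * O[:, obs]  # weight by the sensor model
    return unnormalized / unnormalized.sum()

b = np.array([0.5, 0.5])     # start maximally uncertain
for obs in [1, 1, 0]:        # a hypothetical observation sequence
    b = belief_update(b, obs)
    print(b)                 # the belief sharpens toward the more likely hidden state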
Subject Item
dbr:List_of_SRI_International_people
dbo:wikiPageWikiLink
dbr:Partially_observable_Markov_decision_process
Subject Item
dbr:Decentralized_partially_observable_Markov_decision_process
dbo:wikiPageWikiLink
dbr:Partially_observable_Markov_decision_process
Subject Item
dbr:List_of_numerical_analysis_topics
dbo:wikiPageWikiLink
dbr:Partially_observable_Markov_decision_process
Subject Item
dbr:Glossary_of_artificial_intelligence
dbo:wikiPageWikiLink
dbr:Partially_observable_Markov_decision_process
Subject Item
dbr:Shlomo_Zilberstein
dbo:wikiPageWikiLink
dbr:Partially_observable_Markov_decision_process
dbp:knownFor
dbr:Partially_observable_Markov_decision_process
dbo:knownFor
dbr:Partially_observable_Markov_decision_process
Subject Item
dbr:Steve_Young_(software_engineer)
dbo:wikiPageWikiLink
dbr:Partially_observable_Markov_decision_process
Subject Item
dbr:Leslie_P._Kaelbling
dbo:wikiPageWikiLink
dbr:Partially_observable_Markov_decision_process
dbp:knownFor
dbr:Partially_observable_Markov_decision_process
dbo:knownFor
dbr:Partially_observable_Markov_decision_process
Subject Item
dbr:Predictive_state_representation
dbo:wikiPageWikiLink
dbr:Partially_observable_Markov_decision_process
Subject Item
dbr:Sven_Koenig_(computer_scientist)
dbo:wikiPageWikiLink
dbr:Partially_observable_Markov_decision_process
Subject Item
dbr:Markov_model
dbo:wikiPageWikiLink
dbr:Partially_observable_Markov_decision_process
Subject Item
dbr:AI_alignment
dbo:wikiPageWikiLink
dbr:Partially_observable_Markov_decision_process
Subject Item
dbr:Preference_elicitation
dbo:wikiPageWikiLink
dbr:Partially_observable_Markov_decision_process
Subject Item
dbr:Reinforcement_learning
dbo:wikiPageWikiLink
dbr:Partially_observable_Markov_decision_process
Subject Item
dbr:Karl_Johan_Åström
dbo:wikiPageWikiLink
dbr:Partially_observable_Markov_decision_process
Subject Item
dbr:Automated_planning_and_scheduling
dbo:wikiPageWikiLink
dbr:Partially_observable_Markov_decision_process
Subject Item
dbr:Free_energy_principle
dbo:wikiPageWikiLink
dbr:Partially_observable_Markov_decision_process
Subject Item
dbr:Michael_L._Littman
dbo:wikiPageWikiLink
dbr:Partially_observable_Markov_decision_process
Subject Item
dbr:Catalog_of_articles_in_probability_theory
dbo:wikiPageWikiLink
dbr:Partially_observable_Markov_decision_process
Subject Item
dbr:Markov_chain
dbo:wikiPageWikiLink
dbr:Partially_observable_Markov_decision_process
Subject Item
dbr:Markov_decision_process
dbo:wikiPageWikiLink
dbr:Partially_observable_Markov_decision_process
Subject Item
dbr:List_of_things_named_after_Andrey_Markov
dbo:wikiPageWikiLink
dbr:Partially_observable_Markov_decision_process
Subject Item
dbr:List_of_undecidable_problems
dbo:wikiPageWikiLink
dbr:Partially_observable_Markov_decision_process
Subject Item
dbr:Partially_observable_system
dbo:wikiPageWikiLink
dbr:Partially_observable_Markov_decision_process
Subject Item
dbr:Planning_Domain_Definition_Language
dbo:wikiPageWikiLink
dbr:Partially_observable_Markov_decision_process
Subject Item
dbr:Partially_observable_markov_decision_process
dbo:wikiPageWikiLink
dbr:Partially_observable_Markov_decision_process
dbo:wikiPageRedirects
dbr:Partially_observable_Markov_decision_process
Subject Item
dbr:POMDP
dbo:wikiPageWikiLink
dbr:Partially_observable_Markov_decision_process
dbo:wikiPageRedirects
dbr:Partially_observable_Markov_decision_process
Subject Item
dbr:Partially-observable_Markov_decision_process
dbo:wikiPageWikiLink
dbr:Partially_observable_Markov_decision_process
dbo:wikiPageRedirects
dbr:Partially_observable_Markov_decision_process
Subject Item
dbr:Partially-observed_MDPs
dbo:wikiPageWikiLink
dbr:Partially_observable_Markov_decision_process
dbo:wikiPageRedirects
dbr:Partially_observable_Markov_decision_process
Subject Item
dbr:Partially-observed_Markov_decision_process
dbo:wikiPageWikiLink
dbr:Partially_observable_Markov_decision_process
dbo:wikiPageRedirects
dbr:Partially_observable_Markov_decision_process
Subject Item
dbr:Partially_Observed_Markov_Decision_Processes
dbo:wikiPageWikiLink
dbr:Partially_observable_Markov_decision_process
dbo:wikiPageRedirects
dbr:Partially_observable_Markov_decision_process
Subject Item
dbr:Partially_observable_Markov_decision_problem
dbo:wikiPageWikiLink
dbr:Partially_observable_Markov_decision_process
dbo:wikiPageRedirects
dbr:Partially_observable_Markov_decision_process
Subject Item
wikipedia-en:Partially_observable_Markov_decision_process
foaf:primaryTopic
dbr:Partially_observable_Markov_decision_process
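Most of the statements above are incoming dbo:wikiPageWikiLink triples pointing at dbr:Partially_observable_Markov_decision_process. As a sketch, the same triples could be retrieved from the public DBpedia SPARQL endpoint with the SPARQLWrapper Python package (an assumption; reading this page's Microdata directly would give the same result):

# Sketch: list the pages that link to the POMDP resource, as in the statements above.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX dbr: <http://dbpedia.org/resource/>

    SELECT ?page WHERE {
      ?page dbo:wikiPageWikiLink dbr:Partially_observable_Markov_decision_process .
    }
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["page"]["value"])  # e.g. http://dbpedia.org/resource/Markov_decision_process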