This HTML5 document contains 80 embedded RDF statements represented using HTML+Microdata notation.

The embedded RDF content will be recognized by any processor of HTML5 Microdata.

Namespace Prefixes

Prefix         IRI
dct            http://purl.org/dc/terms/
dbo            http://dbpedia.org/ontology/
foaf           http://xmlns.com/foaf/0.1/
dbt            http://dbpedia.org/resource/Template:
rdfs           http://www.w3.org/2000/01/rdf-schema#
n14            https://web.archive.org/web/20210922093841/https:/nlp.seas.harvard.edu/2018/04/03/
n5             http://commons.wikimedia.org/wiki/Special:FilePath/
rdf            http://www.w3.org/1999/02/22-rdf-syntax-ns#
owl            http://www.w3.org/2002/07/owl#
wikipedia-en   http://en.wikipedia.org/wiki/
dbp            http://dbpedia.org/property/
prov           http://www.w3.org/ns/prov#
dbc            http://dbpedia.org/resource/Category:
xsdh           http://www.w3.org/2001/XMLSchema#
dbr            http://dbpedia.org/resource/
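The prefix table above is applied mechanically: a prefixed name such as dbo:knownFor expands to a full IRI by substituting the prefix's IRI for the prefix. A minimal sketch in Python (the PREFIXES dictionary copies the table above; the expand_curie helper is illustrative, not part of any DBpedia tooling):

```python
# Prefix-to-IRI map, copied from the namespace table above.
PREFIXES = {
    "dct": "http://purl.org/dc/terms/",
    "dbo": "http://dbpedia.org/ontology/",
    "foaf": "http://xmlns.com/foaf/0.1/",
    "dbt": "http://dbpedia.org/resource/Template:",
    "rdfs": "http://www.w3.org/2000/01/rdf-schema#",
    "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
    "owl": "http://www.w3.org/2002/07/owl#",
    "wikipedia-en": "http://en.wikipedia.org/wiki/",
    "dbp": "http://dbpedia.org/property/",
    "prov": "http://www.w3.org/ns/prov#",
    "dbc": "http://dbpedia.org/resource/Category:",
    "xsdh": "http://www.w3.org/2001/XMLSchema#",
    "dbr": "http://dbpedia.org/resource/",
}

def expand_curie(curie: str) -> str:
    """Expand a prefixed name (CURIE) to a full IRI using the table above."""
    prefix, _, local = curie.partition(":")
    if prefix not in PREFIXES:
        raise KeyError(f"unknown prefix: {prefix}")
    return PREFIXES[prefix] + local

print(expand_curie("dbo:knownFor"))
# -> http://dbpedia.org/ontology/knownFor
```

Note that the local part is appended verbatim, so names containing further colons (e.g. dbt:Template names) still expand correctly because only the first colon separates prefix from local name.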

Statements

Subject Item
dbr:Ashish_Vaswani
dbo:knownFor
dbr:Transformer_(deep_learning_architecture)
dbp:knownFor
dbr:Transformer_(deep_learning_architecture)
Subject Item
dbr:Transformer_(deep_learning_architecture)
rdf:type
owl:Thing
rdfs:seeAlso
dbr:Large_language_model dbr:Timeline_of_machine_learning
rdfs:label
Transformer (deep learning architecture)
foaf:depiction
n5:Transformer,_attention_block_diagram.png
n5:DeepSeek_KV_cache_comparison_between_MHA,_GQA,_MQA,_MLA.svg
n5:Transformer,_one_encoder-decoder_block.png
n5:Transformer,_one_decoder_block.png
n5:DeepSeek_MoE_and_MLA_(DeepSeek-V2).svg
n5:Transformer_encoder,_with_norm-first_and_norm-last.png
n5:Transformer_architecture_-_FFN_module.png
n5:Multi-Token_Prediction_(DeepSeek)_01.svg
n5:Transformer_architecture_-_Attention_Head_module.png
n5:Transformer_decoder,_with_norm-first_and_norm-last.png
n5:Multiheaded_attention,_block_diagram.png
n5:Transformer_architecture_-_Multiheaded_Attention_module.png
n5:Transformer,_full_architecture.png
n5:Positional_encoding.png
n5:Transformer,_schematic_object_hierarchy,_for_implementation_in_object-oriented_programming.png
n5:Transformer,_one_encoder_block.png
n5:Transformer,_stacked_multilayers.png
n5:Transformer,_stacked_layers_and_sublayers.png
prov:wasDerivedFrom
wikipedia-en:Transformer_(deep_learning_architecture)?oldid=1296343905&ns=0
dbo:description
machine-learning model architecture first developed by Google Brain (en)
machine learning model (originally in ja, zh-Hant, eu, es, da, uk, vi, ca, ckb, hy, ko)
machine learning architecture (originally in fr, eo)
machine learning model from Google Brain (originally in ga, sv, de, pt)
deep learning model (originally in he, it, pl)
a type of deep-learning model using the attention mechanism (originally in cs)
deep learning model using the attention mechanism (originally in zh-Hans)
natural language processing (originally in fa)
architecture of a machine learning model originally developed by Google Brain (originally in ro)
dbo:thumbnail
n5:Transformer,_full_architecture.png?width=300
dct:subject
dbc:Google_software dbc:2017_in_artificial_intelligence dbc:Neural_network_architectures
foaf:isPrimaryTopicOf
wikipedia-en:Transformer_(deep_learning_architecture)
dbp:wikiPageUsesTemplate
dbt:Artificial_intelligence_navbox
dbt:Requested_move_notice
dbt:Anchor
dbt:Refbegin
dbt:Cite_web
dbt:Cite_arXiv
dbt:Pg
dbt:Main
dbt:Annotated_link
dbt:Google_AI
dbt:Short_description
dbt:Reflist
dbt:Further
dbt:Refend
dbt:Webarchive
dbt:TOC_limit
dbt:NoteTag
dbt:See_also
dbt:Machine_learning
dbt:Nnbsp
dbp:date
2021-09-22
dbp:url
n14:attention.html
Subject Item
wikipedia-en:Transformer_(deep_learning_architecture)
foaf:primaryTopic
dbr:Transformer_(deep_learning_architecture)
Subject Item
dbr:Generative_pre-trained_transformer
rdfs:seeAlso
dbr:Transformer_(deep_learning_architecture)
Subject Item
dbr:Attention_(machine_learning)
rdfs:seeAlso
dbr:Transformer_(deep_learning_architecture)
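Each "Subject Item" block above groups (subject, predicate, object) triples by subject. A minimal sketch of how such triples can be held and pattern-matched in plain Python, using a handful of the statements above in prefixed-name form (the tuple representation and the match helper are illustrative, not a DBpedia API):

```python
# A few of the statements above as (subject, predicate, object) triples.
TRIPLES = [
    ("dbr:Ashish_Vaswani", "dbo:knownFor",
     "dbr:Transformer_(deep_learning_architecture)"),
    ("dbr:Transformer_(deep_learning_architecture)", "rdf:type", "owl:Thing"),
    ("dbr:Transformer_(deep_learning_architecture)", "rdfs:seeAlso",
     "dbr:Large_language_model"),
    ("wikipedia-en:Transformer_(deep_learning_architecture)", "foaf:primaryTopic",
     "dbr:Transformer_(deep_learning_architecture)"),
    ("dbr:Generative_pre-trained_transformer", "rdfs:seeAlso",
     "dbr:Transformer_(deep_learning_architecture)"),
    ("dbr:Attention_(machine_learning)", "rdfs:seeAlso",
     "dbr:Transformer_(deep_learning_architecture)"),
]

def match(triples, s=None, p=None, o=None):
    """Return triples matching the given pattern; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Every subject that points at the Transformer resource via rdfs:seeAlso:
for subj, _, _ in match(TRIPLES, p="rdfs:seeAlso",
                        o="dbr:Transformer_(deep_learning_architecture)"):
    print(subj)
```

The same wildcard-pattern idea underlies SPARQL basic graph patterns, where unbound variables play the role of the None wildcards here.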