This HTML5 document contains 96 embedded RDF statements represented using HTML+Microdata notation.

The embedded RDF content will be recognized by any HTML5 Microdata processor.

Namespace Prefixes

Prefix        IRI
dbpedia-de    http://de.dbpedia.org/resource/
dcterms       http://purl.org/dc/terms/
dbo           http://dbpedia.org/ontology/
foaf          http://xmlns.com/foaf/0.1/
n13           https://global.dbpedia.org/id/
dbt           http://dbpedia.org/resource/Template:
rdfs          http://www.w3.org/2000/01/rdf-schema#
dbpedia-cs    http://cs.dbpedia.org/resource/
n11           http://dbpedia.org/resource/ISO/
rdf           http://www.w3.org/1999/02/22-rdf-syntax-ns#
owl           http://www.w3.org/2002/07/owl#
wikipedia-en  http://en.wikipedia.org/wiki/
dbp           http://dbpedia.org/property/
dbc           http://dbpedia.org/resource/Category:
prov          http://www.w3.org/ns/prov#
xsdh          http://www.w3.org/2001/XMLSchema#
wikidata      http://www.wikidata.org/entity/
dbr           http://dbpedia.org/resource/

Statements

Subject Item
dbr:Bfloat16_floating-point_format
rdf:type
owl:Thing
rdfs:label
Bfloat16 floating-point format (en), Bfloat16 (de), Bfloat16 (cs)
rdfs:comment
(de) bfloat16 (brain floating point with 16 bits) is the name of a floating-point format in computer systems. It is a binary data format with one bit for the sign, 8 bits for the exponent and 7 bits for the mantissa. It is therefore a version of the IEEE 754 single data type truncated in the mantissa. bfloat16 is used in particular in machine-learning systems such as TPUs, as well as in certain Intel Xeon processors and Intel FPGAs. (cs) In computer science, bfloat16 (brain floating point) is the name of a particular way of representing numbers in a computer using a floating radix point. It is a format based on the binary system, in which one bit expresses the sign, the next 8 bits express the exponent, and the last 7 bits express the mantissa. It is essentially a variant of the 32-bit single data type defined by the IEEE 754 standard. It was introduced mainly to support machine learning. (en) The bfloat16 (Brain Floating Point) floating-point format is a computer number format occupying 16 bits in computer memory; it represents a wide dynamic range of numeric values by using a floating radix point. This format is a truncated (16-bit) version of the 32-bit IEEE 754 single-precision floating-point format (binary32) with the intent of accelerating machine learning and near-sensor computing. It preserves the approximate dynamic range of 32-bit floating-point numbers by retaining 8 exponent bits, but supports only an 8-bit precision rather than the 24-bit significand of the binary32 format. More so than single-precision 32-bit floating-point numbers, bfloat16 numbers are unsuitable for integer calculations, but this is not their intended use. Bfloat16 is used to reduce the storage requirements and increase the calculation speed of machine learning algorithms.
dcterms:subject
dbc:Floating_point_types dbc:Binary_arithmetic
dbo:wikiPageID
57499027
dbo:wikiPageRevisionID
1124029922
dbo:wikiPageWikiLink
dbr:Nervana_Systems dbr:Dynamic_range dbr:Xeon dbr:−0 dbr:Single_precision dbr:ARM_architecture dbr:Single-precision_floating-point_format dbr:Hardware_acceleration dbr:Machine_learning dbr:Subnormal_number dbr:Type_conversion n11:IEC_10967 dbr:16-bit dbr:Floating_point dbr:Intelligent_sensor dbc:Floating_point_types dbr:0_(number) dbr:Google_Brain dbr:Offset_binary dbr:Mixed-precision_arithmetic dbr:TensorFlow dbr:Significand dbr:Primitive_data_type dbr:FPGA dbr:IEEE_754 dbr:OpenCL dbr:Sign_bit dbr:Precision_(arithmetic) dbr:Infinity dbr:CUDA dbr:Minifloat dbr:Exponent_bias dbr:Hexadecimal dbr:AMD dbc:Binary_arithmetic dbr:AI_accelerator dbr:AVX-512 dbr:Binary_number dbr:Computer_memory dbr:Half-precision_floating-point_format dbr:Exponent dbr:Computer_number_format dbr:Tensor_processing_unit dbr:NaN
owl:sameAs
dbpedia-cs:Bfloat16 dbpedia-de:Bfloat16 n13:6VPZU wikidata:Q54083815
dbp:wikiPageUsesTemplate
dbt:Data_types dbt:Lowercase_title dbt:Legend dbt:Reflist dbt:Short_description dbt:Floating-point dbt:Confuse
dbo:abstract
(en) The bfloat16 (Brain Floating Point) floating-point format is a computer number format occupying 16 bits in computer memory; it represents a wide dynamic range of numeric values by using a floating radix point. This format is a truncated (16-bit) version of the 32-bit IEEE 754 single-precision floating-point format (binary32) with the intent of accelerating machine learning and near-sensor computing. It preserves the approximate dynamic range of 32-bit floating-point numbers by retaining 8 exponent bits, but supports only an 8-bit precision rather than the 24-bit significand of the binary32 format. More so than single-precision 32-bit floating-point numbers, bfloat16 numbers are unsuitable for integer calculations, but this is not their intended use. Bfloat16 is used to reduce the storage requirements and increase the calculation speed of machine learning algorithms. The bfloat16 format was developed by Google Brain, an artificial intelligence research group at Google. The bfloat16 format is utilized in Intel AI processors such as the Nervana NNP-L1000, in Xeon processors (via the AVX-512 BF16 extensions) and Intel FPGAs, and in Google Cloud TPUs and TensorFlow. ARMv8.6-A, AMD ROCm, and CUDA also support the bfloat16 format. On these platforms, bfloat16 may also be used in mixed-precision arithmetic, where bfloat16 numbers may be operated on and expanded to wider data types. (de) bfloat16 (brain floating point with 16 bits) is the name of a floating-point format in computer systems. It is a binary data format with one bit for the sign, 8 bits for the exponent and 7 bits for the mantissa. It is therefore a version of the IEEE 754 single data type truncated in the mantissa. bfloat16 is used in particular in machine-learning systems such as TPUs, as well as in certain Intel Xeon processors and Intel FPGAs. (cs) In computer science, bfloat16 (brain floating point) is the name of a particular way of representing numbers in a computer using a floating radix point. It is a format based on the binary system, in which one bit expresses the sign, the next 8 bits express the exponent, and the last 7 bits express the mantissa. It is essentially a variant of the 32-bit single data type defined by the IEEE 754 standard. It was introduced mainly to support machine learning. (Illustrative code sketches of this truncation, and of the rounding conversion hardware typically uses, follow the statements for this subject below.)
prov:wasDerivedFrom
wikipedia-en:Bfloat16_floating-point_format?oldid=1124029922&ns=0
dbo:wikiPageLength
29531
foaf:isPrimaryTopicOf
wikipedia-en:Bfloat16_floating-point_format
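
The abstract above describes bfloat16 as the top 16 bits of an IEEE 754 binary32 value: 1 sign bit, 8 exponent bits, and 7 stored mantissa bits. As an illustration only, and not part of the DBpedia data, the following Python sketch converts a float32 to a bfloat16 bit pattern by plain truncation and decodes the resulting fields:

import struct

def float32_to_bfloat16_bits(x: float) -> int:
    # Pack as IEEE 754 binary32, then keep the top 16 bits:
    # 1 sign bit, 8 exponent bits, and the 7 highest mantissa bits.
    (bits32,) = struct.unpack("<I", struct.pack("<f", x))
    return bits32 >> 16

def bfloat16_bits_to_float32(bits16: int) -> float:
    # Widening back to binary32 is exact: zero-pad the low 16 bits.
    (x,) = struct.unpack("<f", struct.pack("<I", bits16 << 16))
    return x

def fields(bits16: int) -> str:
    sign = bits16 >> 15
    exponent = (bits16 >> 7) & 0xFF  # 8 bits, bias 127, same as binary32
    mantissa = bits16 & 0x7F         # 7 stored bits (+1 implicit for normals)
    return f"sign={sign} exponent={exponent} mantissa=0x{mantissa:02x}"

for x in (1.0, 3.14159265, 1e38):
    b = float32_to_bfloat16_bits(x)
    print(f"{x} -> 0x{b:04x} ({fields(b)}) -> {bfloat16_bits_to_float32(b)}")

Running this shows the dynamic range surviving (1e38 round-trips to a nearby value, where an IEEE 754 half-precision float would overflow to infinity) while 3.14159265 comes back as 3.140625, since 16 of binary32's 23 stored mantissa bits are dropped.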
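Hardware and library conversions usually round to nearest even rather than truncate. A common bit trick for that is sketched below; this is an illustrative version under stated assumptions, not any particular library's code, and it omits the NaN special case that production code needs:

import struct

def float32_to_bfloat16_bits_rne(x: float) -> int:
    (bits32,) = struct.unpack("<I", struct.pack("<f", x))
    # A bias of 0x7FFF rounds the dropped low half up when it exceeds half a
    # unit in the last place; the extra lsb-dependent 1 breaks exact ties
    # toward an even result (round to nearest, ties to even).
    rounding_bias = 0x7FFF + ((bits32 >> 16) & 1)
    # Caveat: real implementations must special-case NaN inputs, because this
    # addition can carry a NaN's mantissa into the exponent field and turn
    # the result into an infinity.
    return ((bits32 + rounding_bias) >> 16) & 0xFFFF

Under this rounding, 0x40490FDB (the binary32 encoding of pi) still maps to 0x4049, because its dropped low half 0x0FDB is below the halfway point 0x8000.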
Subject Item
dbr:DL_Boost
dbo:wikiPageWikiLink
dbr:Bfloat16_floating-point_format
Subject Item
dbr:Power10
dbo:wikiPageWikiLink
dbr:Bfloat16_floating-point_format
Subject Item
dbr:Minifloat
dbo:wikiPageWikiLink
dbr:Bfloat16_floating-point_format
Subject Item
dbr:AVX-512
dbo:wikiPageWikiLink
dbr:Bfloat16_floating-point_format
Subject Item
dbr:Advanced_Vector_Extensions
dbo:wikiPageWikiLink
dbr:Bfloat16_floating-point_format
Subject Item
dbr:Ampere_(microarchitecture)
dbo:wikiPageWikiLink
dbr:Bfloat16_floating-point_format
Subject Item
dbr:Floating-point_arithmetic
dbo:wikiPageWikiLink
dbr:Bfloat16_floating-point_format
Subject Item
dbr:List_of_Intel_CPU_microarchitectures
dbo:wikiPageWikiLink
dbr:Bfloat16_floating-point_format
Subject Item
dbr:Half-precision_floating-point_format
dbo:wikiPageWikiLink
dbr:Bfloat16_floating-point_format
Subject Item
dbr:AArch64
dbo:wikiPageWikiLink
dbr:Bfloat16_floating-point_format
Subject Item
dbr:AI_accelerator
dbo:wikiPageWikiLink
dbr:Bfloat16_floating-point_format
Subject Item
dbr:Advanced_Matrix_Extensions
dbo:wikiPageWikiLink
dbr:Bfloat16_floating-point_format
Subject Item
dbr:Mixed-precision_arithmetic
dbo:wikiPageWikiLink
dbr:Bfloat16_floating-point_format
Subject Item
dbr:CPUID
dbo:wikiPageWikiLink
dbr:Bfloat16_floating-point_format
Subject Item
dbr:IEEE_754
dbo:wikiPageWikiLink
dbr:Bfloat16_floating-point_format
Subject Item
dbr:BF16
dbo:wikiPageWikiLink
dbr:Bfloat16_floating-point_format
dbo:wikiPageRedirects
dbr:Bfloat16_floating-point_format
Subject Item
dbr:Bf16
dbo:wikiPageWikiLink
dbr:Bfloat16_floating-point_format
dbo:wikiPageRedirects
dbr:Bfloat16_floating-point_format
Subject Item
dbr:Bfloat16
dbo:wikiPageWikiLink
dbr:Bfloat16_floating-point_format
dbo:wikiPageRedirects
dbr:Bfloat16_floating-point_format
Subject Item
dbr:Brain_floating-point_format
dbo:wikiPageWikiLink
dbr:Bfloat16_floating-point_format
dbo:wikiPageRedirects
dbr:Bfloat16_floating-point_format
Subject Item
wikipedia-en:Bfloat16_floating-point_format
foaf:primaryTopic
dbr:Bfloat16_floating-point_format