About: Thread block (CUDA programming)

An Entity of Type: owl:Thing, within Data Space: dbpedia.org, associated with source document(s)
http://dbpedia.org/describe/?url=http%3A%2F%2Fdbpedia.org%2Fresource%2FThread_block_%28CUDA_programming%29

A thread block is a programming abstraction that represents a group of threads that can be executed serially or in parallel. For better process and data mapping, threads are grouped into thread blocks. The number of threads in a thread block was formerly limited by the architecture to a total of 512 threads per block, but as of March 2010, with compute capability 2.x and higher, blocks may contain up to 1024 threads. The threads in the same thread block run on the same streaming multiprocessor. Threads in the same block can communicate with each other via shared memory, barrier synchronization, or other synchronization primitives such as atomic operations.
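
As a rough illustration of these ideas (not taken from the article), the following CUDA sketch launches a kernel across several thread blocks; threads within each block cooperate through shared memory and a __syncthreads() barrier. The kernel name, tile size, and launch configuration are assumptions chosen for the example.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Illustrative kernel (assumption, not from the article): each block loads a
    // 256-element tile into shared memory, synchronizes, and writes it back reversed.
    // Threads cooperate only with other threads of the same block.
    __global__ void reverseTiles(const int *in, int *out, int n)
    {
        __shared__ int tile[256];                        // shared memory, one copy per block
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
        if (i < n) tile[threadIdx.x] = in[i];
        __syncthreads();                                 // barrier: wait for the whole block
        if (i < n) out[i] = tile[blockDim.x - 1 - threadIdx.x];  // read a partner thread's value
    }

    int main()
    {
        const int n = 1024;
        int *in, *out;
        cudaMallocManaged(&in, n * sizeof(int));
        cudaMallocManaged(&out, n * sizeof(int));
        for (int i = 0; i < n; ++i) in[i] = i;

        // Grid of 4 thread blocks, 256 threads each (well under the 1024-thread limit).
        reverseTiles<<<4, 256>>>(in, out, n);
        cudaDeviceSynchronize();

        printf("out[0] = %d\n", out[0]);                 // prints 255: last element of the first tile
        cudaFree(in);
        cudaFree(out);
        return 0;
    }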

Attributes        Values
rdfs:label
  • Thread block (CUDA programming) (en)
  • 线程块 (Thread block) (zh)
rdfs:comment
  • A thread block is a programming abstraction in CUDA that represents a group of threads that can be executed serially or in parallel. The number of threads in a thread block was formerly limited by the architecture to at most 512 threads per block, but since March 2010, with compute capability 2.x and higher, a block may contain up to 1024 threads. Threads in the same thread block run on the same streaming multiprocessor and can communicate with one another via shared memory and barrier synchronization. Multiple thread blocks are combined into a grid; all thread blocks in the same grid contain the same number of threads. (zh)
  • A thread block is a programming abstraction that represents a group of threads that can be executed serially or in parallel. For better process and data mapping, threads are grouped into thread blocks. The number of threads in a thread block was formerly limited by the architecture to a total of 512 threads per block, but as of March 2010, with compute capability 2.x and higher, blocks may contain up to 1024 threads. The threads in the same thread block run on the same streaming multiprocessor. Threads in the same block can communicate with each other via shared memory, barrier synchronization, or other synchronization primitives such as atomic operations. (en)
foaf:depiction
  • http://commons.wikimedia.org/wiki/Special:FilePath/Block-thread.svg
  • http://commons.wikimedia.org/wiki/Special:FilePath/Software-Perspective_for_thread_block.jpg
  • http://commons.wikimedia.org/wiki/Special:FilePath/Streaming-Multiprocessor.jpg
  • http://commons.wikimedia.org/wiki/Special:FilePath/Warp-Scheduler-Gpu.jpg
dcterms:subject
Wikipage page ID
Wikipage revision ID
Link from a Wikipage to another Wikipage
sameAs
dbp:wikiPageUsesTemplate
thumbnail
date
  • December 2016 (en)
reason
  • The article name and lead lack correct context: it is unclear exactly which architecture the thread block concept applies to, and CUDA is mentioned only in passing. (en)
has abstract
  • A thread block is a programming abstraction that represents a group of threads that can be executed serially or in parallel. For better process and data mapping, threads are grouped into thread blocks. The number of threads in a thread block was formerly limited by the architecture to a total of 512 threads per block, but as of March 2010, with compute capability 2.x and higher, blocks may contain up to 1024 threads. The threads in the same thread block run on the same streaming multiprocessor. Threads in the same block can communicate with each other via shared memory, barrier synchronization, or other synchronization primitives such as atomic operations. Multiple blocks are combined to form a grid. All the blocks in the same grid contain the same number of threads. The number of threads in a block is limited, but grids can be used for computations that require a large number of thread blocks to operate in parallel and to use all available multiprocessors. CUDA is a parallel computing platform and programming model that higher-level languages can use to exploit parallelism. In CUDA, the kernel is executed with the aid of threads. A thread is an abstract entity that represents the execution of the kernel, and a kernel is a function compiled to run on a special device such as a GPU. Multithreaded applications use many such threads running at the same time to organize parallel computation. Every thread has an index, which is used for calculating memory address locations and also for taking control decisions (see the index sketch after this section). (en)
  • A thread block is a programming abstraction in CUDA that represents a group of threads that can be executed serially or in parallel. The number of threads in a thread block was formerly limited by the architecture to at most 512 threads per block, but since March 2010, with compute capability 2.x and higher, a block may contain up to 1024 threads. Threads in the same thread block run on the same streaming multiprocessor and can communicate with one another via shared memory and barrier synchronization. Multiple thread blocks are combined into a grid; all thread blocks in the same grid contain the same number of threads. (zh)
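
As the abstract notes, every thread derives an index from its block and thread coordinates and uses it both for memory addressing and for control decisions. A minimal sketch of that pattern follows; the element-wise kernel scaleArray and its launch helper are illustrative assumptions, not part of the article.

    #include <cuda_runtime.h>

    // Hypothetical element-wise kernel: one thread handles one array element.
    __global__ void scaleArray(float *data, float factor, int n)
    {
        // Per-thread global index built from the block index, block size, and thread index.
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)                  // control decision: threads past the end do nothing
            data[i] *= factor;
    }

    // Launch helper: size the grid so that enough equally sized blocks cover all n elements.
    void launchScale(float *d_data, float factor, int n)
    {
        int threadsPerBlock = 256;                                        // within the 1024-thread limit
        int blocksPerGrid = (n + threadsPerBlock - 1) / threadsPerBlock;  // round up
        scaleArray<<<blocksPerGrid, threadsPerBlock>>>(d_data, factor, n);
    }

Every block in this grid has the same number of threads, matching the statement in the abstract; threads whose index falls past the end of the array simply do nothing.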
prov:wasDerivedFrom
page length (characters) of wiki page
foaf:isPrimaryTopicOf
is Link from a Wikipage to another Wikipage of
is Wikipage redirect of
is foaf:primaryTopic of