About: Adaptive coding

An Entity of Type: yago:Rule105846932, within Data Space: dbpedia.org, associated with source document(s)
http://dbpedia.org/describe/?url=http%3A%2F%2Fdbpedia.org%2Fresource%2FAdaptive_coding

Adaptive coding refers to variants of entropy encoding methods of lossless data compression. They are particularly suited to streaming data, as they adapt to localized changes in the characteristics of the data, and don't require a first pass over the data to calculate a probability model. The cost paid for these advantages is that the encoder and decoder must be more complex to keep their states synchronized, and more computational power is needed to keep adapting the encoder/decoder state.
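
As a rough illustration of the mechanism described above, here is a minimal Python sketch of an order-0 adaptive frequency model (the names AdaptiveModel and ideal_code_length are illustrative, not from any particular library). Encoder and decoder would each hold an identical instance, start it from the same uniform counts, and apply the same update after every symbol, so no probability model ever needs to be transmitted; the summed -log2(p) values approximate the output size of an ideal entropy coder, such as an arithmetic coder, driven by that model.

    import math
    from collections import Counter

    class AdaptiveModel:
        """Order-0 adaptive frequency model (illustrative sketch, not a full coder)."""

        def __init__(self, alphabet_size=256):
            # "Blank slate": every symbol starts with count 1, so nothing has zero probability.
            self.counts = Counter({s: 1 for s in range(alphabet_size)})
            self.total = alphabet_size

        def probability(self, symbol):
            return self.counts[symbol] / self.total

        def update(self, symbol):
            # The shared rule both encoder and decoder apply after each symbol,
            # keeping their states synchronized without transmitting a model.
            self.counts[symbol] += 1
            self.total += 1

    def ideal_code_length(data):
        # Bits an ideal entropy coder driven by the adaptive model would emit.
        model = AdaptiveModel()
        bits = 0.0
        for symbol in data:
            bits += -math.log2(model.probability(symbol))  # code under the current model...
            model.update(symbol)                           # ...then adapt, exactly as the decoder will
        return bits

    if __name__ == "__main__":
        data = b"abracadabra" * 100
        print(f"raw: {len(data) * 8} bits, adaptively coded (ideal): {ideal_code_length(data):.0f} bits")

A real adaptive coder feeds these per-symbol probabilities to an arithmetic or range coder; the decoder, holding an identical model, applies the same update after decoding each symbol, which is the synchronization and computation cost mentioned above.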

Attributes    Values
rdf:type
rdfs:label
  • Adaptive coding (en)
rdfs:comment
  • Adaptive coding refers to variants of entropy encoding methods of lossless data compression. They are particularly suited to streaming data, as they adapt to localized changes in the characteristics of the data, and don't require a first pass over the data to calculate a probability model. The cost paid for these advantages is that the encoder and decoder must be more complex to keep their states synchronized, and more computational power is needed to keep adapting the encoder/decoder state. (en)
foaf:depiction
  • http://commons.wikimedia.org/wiki/Special:FilePath/Damaged_N00153918.jpg
dcterms:subject
Wikipage page ID
Wikipage revision ID
Link from a Wikipage to another Wikipage
sameAs
dbp:wikiPageUsesTemplate
thumbnail
bot
  • yes (en)
date
  • December 2019 (en)
  • June 2009 (en)
reason
  • How many of the images were actually damaged, now that the mission has ended? (en)
has abstract
  • Adaptive coding refers to variants of entropy encoding methods of lossless data compression. They are particularly suited to streaming data, as they adapt to localized changes in the characteristics of the data, and don't require a first pass over the data to calculate a probability model. The cost paid for these advantages is that the encoder and decoder must be more complex to keep their states synchronized, and more computational power is needed to keep adapting the encoder/decoder state. Almost all data compression methods involve the use of a model, a prediction of the composition of the data. When the data matches the prediction made by the model, the encoder can usually transmit the content of the data at a lower information cost, by making reference to the model. This general statement is somewhat misleading, as general data compression algorithms would include the popular LZW and LZ77 algorithms, which are hardly comparable to the compression techniques typically called adaptive. Run-length encoding, and typical JPEG compression with run-length encoding and predefined Huffman codes, do not transmit a model. Many other methods adapt their model to the current file and need to transmit it in addition to the encoded data, because both the encoder and the decoder need to use the model. In adaptive coding, the encoder and decoder are instead equipped with a predefined meta-model describing how they will alter their models in response to the actual content of the data, and otherwise start with a blank slate, meaning that no initial model needs to be transmitted. As the data is transmitted, both encoder and decoder adapt their models, so that unless the character of the data changes radically, the model becomes better adapted to the data it is handling and compresses it more efficiently, approaching the efficiency of static coding (a rough comparison is sketched below, after this listing). (en)
prov:wasDerivedFrom
page length (characters) of wiki page
foaf:isPrimaryTopicOf
is Link from a Wikipage to another Wikipage of
is foaf:primaryTopic of
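
To illustrate the abstract's closing claim, that an adaptive model starting from a blank slate approaches the efficiency of static coding without ever transmitting a model, the rough Python sketch below compares the ideal bit cost of a static (two-pass) order-0 model with that of an adaptive one on the same data (function names are illustrative assumptions, not an established API).

    import math
    from collections import Counter

    def static_cost_bits(data):
        # Two-pass (static) cost: exact frequencies from a first pass over the data.
        # A real static coder would also have to transmit this frequency table.
        counts = Counter(data)
        total = len(data)
        return sum(-math.log2(counts[s] / total) for s in data)

    def adaptive_cost_bits(data, alphabet_size=256):
        # One-pass adaptive cost: start from uniform counts, update after each symbol.
        # No model is transmitted; encoder and decoder adapt in lockstep.
        counts = Counter({s: 1 for s in range(alphabet_size)})
        total = alphabet_size
        bits = 0.0
        for symbol in data:
            bits += -math.log2(counts[symbol] / total)
            counts[symbol] += 1
            total += 1
        return bits

    if __name__ == "__main__":
        data = b"the quick brown fox jumps over the lazy dog " * 200
        print(f"static  (model not counted): {static_cost_bits(data):.0f} bits")
        print(f"adaptive (no model needed):  {adaptive_cost_bits(data):.0f} bits")

On a long input the adaptive figure converges toward the static one, because the mispredictions made early on, while the counts are still nearly uniform, are amortized over the rest of the stream.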