This HTML5 document contains 55 embedded RDF statements represented using HTML+Microdata notation.

The embedded RDF content will be recognized by any processor of HTML5 Microdata.
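
As an illustration (not part of the statements below), here is a minimal Python sketch of such processing. It assumes the third-party `requests` and `extruct` packages and the public page URL https://dbpedia.org/page/Catastrophic_interference; extruct is just one example of an HTML5 Microdata processor.

```python
# Minimal sketch: fetch this page and extract its embedded Microdata items.
# Assumes the `requests` and `extruct` packages are installed.
import requests
import extruct

url = "https://dbpedia.org/page/Catastrophic_interference"
html = requests.get(url, timeout=30).text

# Extract only the Microdata syntax; other embedded syntaxes are ignored.
# The result follows the standard microdata-to-JSON mapping: a list of
# items, each with a "type" and a "properties" dictionary.
data = extruct.extract(html, base_url=url, syntaxes=["microdata"])

for item in data["microdata"]:
    print(item.get("type"), "-", len(item.get("properties", {})), "properties")
```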

Namespace Prefixes

Prefix        IRI
dcterms       http://purl.org/dc/terms/
yago-res      http://yago-knowledge.org/resource/
dbo           http://dbpedia.org/ontology/
n19           http://dbpedia.org/resource/File:
foaf          http://xmlns.com/foaf/0.1/
n20           https://global.dbpedia.org/id/
yago          http://dbpedia.org/class/yago/
dbt           http://dbpedia.org/resource/Template:
rdfs          http://www.w3.org/2000/01/rdf-schema#
freebase      http://rdf.freebase.com/ns/
n17           http://commons.wikimedia.org/wiki/Special:FilePath/
rdf           http://www.w3.org/1999/02/22-rdf-syntax-ns#
owl           http://www.w3.org/2002/07/owl#
wikipedia-en  http://en.wikipedia.org/wiki/
dbc           http://dbpedia.org/resource/Category:
dbp           http://dbpedia.org/property/
prov          http://www.w3.org/ns/prov#
xsdh          http://www.w3.org/2001/XMLSchema#
wikidata      http://www.wikidata.org/entity/
gold          http://purl.org/linguistics/gold/
dbr           http://dbpedia.org/resource/

Statements

Subject Item
dbr:Catastrophic_interference
rdf:type
yago:Abstraction100002137 yago:Statement106722453 yago:ComputerArchitecture106725249 yago:Description106724763 yago:Specification106725067 yago:NeuralNetwork106725467 yago:WikicatArtificialNeuralNetworks yago:Communication100033020 dbo:Organisation yago:Message106598915
rdfs:label
Catastrophic interference
rdfs:comment
Catastrophic interference, also known as catastrophic forgetting, is the tendency of an artificial neural network to abruptly and drastically forget previously learned information upon learning new information. Neural networks are an important part of the network approach and connectionist approach to cognitive science. With these networks, human capabilities such as memory and learning can be modeled using computer simulations.
foaf:depiction
n17:Pseudorecurrentnetwork.jpg
dcterms:subject
dbc:Machine_learning dbc:Artificial_intelligence dbc:Artificial_neural_networks
dbo:wikiPageID
39182554
dbo:wikiPageRevisionID
1122770121
dbo:wikiPageWikiLink
dbc:Artificial_intelligence dbr:Machine_learning dbr:Neocortex dbr:Generative_model dbc:Artificial_neural_networks dbr:Artificial_neural_network dbr:Long_term_memory dbr:Short_term_memory dbr:Backpropagation dbr:Neural_network dbc:Machine_learning dbr:Lookup_table dbr:Feedforward_neural_network dbr:Hippocampus dbr:Latent_Learning dbr:Transfer_learning n19:Pseudorecurrentnetwork.jpg dbr:Cognitive_science dbr:Human_memory dbr:Connectionism dbr:Orthogonality dbr:One-hot
owl:sameAs
freebase:m.0tkdvqb wikidata:Q16251345 yago-res:Catastrophic_interference n20:bwZT
dbp:wikiPageUsesTemplate
dbt:Machine_learning dbt:Explain dbt:Citation_needed dbt:Reflist dbt:According_to_whom
dbo:thumbnail
n17:Pseudorecurrentnetwork.jpg?width=300
dbo:abstract
Catastrophic interference, also known as catastrophic forgetting, is the tendency of an artificial neural network to abruptly and drastically forget previously learned information upon learning new information. Neural networks are an important part of the network approach and connectionist approach to cognitive science. With these networks, human capabilities such as memory and learning can be modeled using computer simulations. Catastrophic interference is an important issue to consider when creating connectionist models of memory. It was originally brought to the attention of the scientific community by research from McCloskey and Cohen (1989) and Ratcliff (1990). It is a radical manifestation of the 'sensitivity-stability' dilemma or the 'stability-plasticity' dilemma. Specifically, these problems refer to the challenge of making an artificial neural network that is sensitive to, but not disrupted by, new information. Lookup tables and connectionist networks lie on opposite sides of the stability-plasticity spectrum. The former remains completely stable in the presence of new information but lacks the ability to generalize, i.e., to infer general principles from new inputs. On the other hand, connectionist networks like the standard backpropagation network can generalize to unseen inputs, but they are very sensitive to new information. Backpropagation models can be analogized to human memory insofar as they mirror the human ability to generalize, but these networks often exhibit less stability than human memory. Notably, these backpropagation networks are susceptible to catastrophic interference. This is an issue when modeling human memory, because unlike these networks, humans typically do not show catastrophic forgetting.
gold:hypernym
dbr:Tendency
prov:wasDerivedFrom
wikipedia-en:Catastrophic_interference?oldid=1122770121&ns=0
dbo:wikiPageLength
30233
foaf:isPrimaryTopicOf
wikipedia-en:Catastrophic_interference
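
To complement the dbo:abstract above, here is a minimal Python sketch (illustrative only, not derived from the data on this page) of the sequential-training setup it describes: a small backpropagation network is trained on task A, then only on task B, and its accuracy on task A collapses. The task definitions, network size, and hyperparameters are assumptions made for the example.

```python
# Illustrative sketch of catastrophic interference, using NumPy only.
# Task A and task B are toy tasks with opposite decision rules, chosen so
# that sequential training on B overwrites what was learned on A.
import numpy as np

rng = np.random.default_rng(0)

def make_task(n, w):
    """Binary labels from a linear rule: y = 1 if x . w > 0."""
    X = rng.normal(size=(n, 8))
    y = (X @ w > 0).astype(float)
    return X, y

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, W1, W2, epochs=500, lr=0.5):
    """Full-batch backpropagation for a one-hidden-layer sigmoid network."""
    for _ in range(epochs):
        h = sigmoid(X @ W1)                  # hidden activations
        p = sigmoid(h @ W2)                  # output probabilities
        d2 = (p - y[:, None]) / len(X)       # logistic-loss output gradient
        d1 = (d2 @ W2.T) * h * (1.0 - h)     # gradient at the hidden layer
        W2 -= lr * (h.T @ d2)
        W1 -= lr * (X.T @ d1)
    return W1, W2

def accuracy(X, y, W1, W2):
    p = sigmoid(sigmoid(X @ W1) @ W2)[:, 0]
    return float(((p > 0.5) == y).mean())

w_a = rng.normal(size=8)
Xa, ya = make_task(500, w_a)     # task A
Xb, yb = make_task(500, -w_a)    # task B: the reversed rule

W1 = rng.normal(scale=0.5, size=(8, 16))
W2 = rng.normal(scale=0.5, size=(16, 1))

W1, W2 = train(Xa, ya, W1, W2)
print("task A accuracy after training on A:", accuracy(Xa, ya, W1, W2))

W1, W2 = train(Xb, yb, W1, W2)   # further training on task B only
print("task A accuracy after training on B:", accuracy(Xa, ya, W1, W2))
print("task B accuracy:", accuracy(Xb, yb, W1, W2))
```

Because task B uses the reversed rule, the network's accuracy on task A is expected to fall far below chance after the second training phase, which is the abrupt, drastic forgetting the abstract describes.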