. . . . . . . "Leabra"@en . . . . . . . . . . . . . . "7049330"^^ . . "7003"^^ . "Leabra stands for local, error-driven and associative, biologically realistic algorithm. It is a model of learning which is a balance between Hebbian and error-driven learning with other network-derived characteristics. This model is used to mathematically predict outcomes based on inputs and previous learning influences. This model is heavily influenced by and contributes to neural network designs and models. This algorithm is the default algorithm in emergent (successor of PDP++) when making a new project, and is extensively used in various simulations."@en . . . . . . "Leabra stands for local, error-driven and associative, biologically realistic algorithm. It is a model of learning which is a balance between Hebbian and error-driven learning with other network-derived characteristics. This model is used to mathematically predict outcomes based on inputs and previous learning influences. This model is heavily influenced by and contributes to neural network designs and models. This algorithm is the default algorithm in emergent (successor of PDP++) when making a new project, and is extensively used in various simulations. Hebbian learning is performed using conditional principal components analysis (CPCA) algorithm with correction factor for sparse expected activity levels. Error-driven learning is performed using GeneRec, which is a generalization of the , and approximates Almeida\u2013Pineda recurrent backpropagation. The symmetric, midpoint version of GeneRec is used, which is equivalent to the contrastive Hebbian learning algorithm (CHL). See O'Reilly (1996; Neural Computation) for more details. The activation function is a point-neuron approximation with both discrete spiking and continuous rate-code output. Layer or unit-group level inhibition can be computed directly using a k-winners-take-all (KWTA) function, producing sparse distributed representations. The net input is computed as an average, not a sum, over connections, based on normalized, sigmoidally transformed weight values, which are subject to scaling on a connection-group level to alter relative contributions. Automatic scaling is performed to compensate for differences in expected activity level in the different projections. Documentation about this algorithm can be found in the book \"Computational Explorations in Cognitive Neuroscience: Understanding the Mind by Simulating the Brain\" published by MIT press. and in the Emergent Documentation"@en . . . . . . . . . . . . . . . "1102519174"^^ . . . . . . . . . . . . .