Computational Neuroscience Cheat Sheet
The core ideas of Computational Neuroscience distilled into a single, scannable reference — perfect for review or quick lookup.
Quick Reference
Hodgkin-Huxley Model
A mathematical model that describes how action potentials in neurons are initiated and propagated, using a set of nonlinear ordinary differential equations representing ionic conductances across the cell membrane.
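The core of the model is the membrane current-balance equation (written here with the standard variable names; maximal conductances and reversal potentials are the usual fitted constants):

```latex
C_m \frac{dV}{dt} = -\bar{g}_{\mathrm{Na}}\, m^3 h\,(V - E_{\mathrm{Na}})
                    - \bar{g}_{\mathrm{K}}\, n^4\,(V - E_{\mathrm{K}})
                    - \bar{g}_{L}\,(V - E_{L}) + I_{\mathrm{ext}}
```

Each gating variable x ∈ {m, h, n} follows its own first-order kinetics, dx/dt = α_x(V)(1 − x) − β_x(V)x, which is what makes the system nonlinear: the conductances depend on voltage, and the voltage depends on the conductances.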
Neural Coding
The study of how neurons represent and transmit information through patterns of electrical activity, including rate coding (average firing frequency) and temporal coding (precise spike timing).
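The two coding schemes can be contrasted on the same spike train. A minimal sketch (the spike times and trial duration are illustrative):

```python
import numpy as np

# Hypothetical spike train: spike times in seconds over a 2 s trial
spike_times = np.array([0.05, 0.12, 0.31, 0.48, 0.77, 1.02, 1.35, 1.80])
trial_duration = 2.0

# Rate code: only the average firing frequency matters
rate = len(spike_times) / trial_duration   # spikes per second (Hz)

# Temporal code: the precise inter-spike intervals also carry information
isis = np.diff(spike_times)

print(rate)   # 4.0 Hz
```

A rate decoder would treat any spike train with 8 spikes in 2 s as identical; a temporal decoder also distinguishes them by their interval structure.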
Synaptic Plasticity and Hebbian Learning
The ability of synapses to strengthen or weaken over time in response to activity. Hebbian learning, summarized as 'neurons that fire together wire together,' describes how correlated pre- and post-synaptic activity strengthens the connection between them.
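The simplest form of Hebb's rule is a weight change proportional to the product of pre- and post-synaptic activity, Δw = η·x_pre·y_post. A minimal sketch (learning rate and activities are illustrative):

```python
import numpy as np

eta = 0.1                             # learning rate (illustrative value)
x_pre = np.array([1.0, 0.0, 1.0])     # presynaptic activities
y_post = 1.0                          # postsynaptic activity
w = np.zeros(3)

# Hebb's rule: only synapses whose presynaptic neuron was active
# alongside the active postsynaptic neuron get strengthened
w += eta * x_pre * y_post

print(w)   # [0.1 0.  0.1]
```

Note the silent middle input gains nothing: "fire together, wire together" is literally a correlation rule.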
Attractor Networks
Recurrent neural network models in which stable patterns of activity (attractors) represent stored memories or decision states. The network dynamics cause neural activity to converge toward these stable states.
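A Hopfield network is the classic example: store a binary pattern with a Hebbian outer-product rule, corrupt it, and watch the dynamics pull the state back into the stored attractor. A minimal sketch (the pattern and update count are illustrative):

```python
import numpy as np

# Store one binary (+1/-1) pattern in a Hopfield-style network
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
n = len(pattern)

W = np.outer(pattern, pattern) / n   # Hebbian outer-product weights
np.fill_diagonal(W, 0)               # no self-connections

state = pattern.copy()
state[0] *= -1                       # flip one bit: a noisy cue

for _ in range(5):                   # synchronous updates
    state = np.sign(W @ state)

print(np.array_equal(state, pattern))   # True: the cue fell into the attractor
```

With a single stored pattern the corrupted state is recovered in one update; capacity limits (roughly 0.14·n random patterns) only bite when many attractors compete.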
Population Coding
The representation of information by the joint activity of a group of neurons rather than by any single neuron. The stimulus is decoded from the combined firing pattern of the population.
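Population vector decoding illustrates the idea: each neuron "votes" for its preferred direction, weighted by its firing rate. A minimal sketch with cosine-tuned neurons (the tuning model and preferred directions are illustrative):

```python
import numpy as np

# 8 neurons with evenly spaced preferred directions, cosine tuning
preferred = np.deg2rad(np.arange(0, 360, 45))
stimulus = np.deg2rad(60)

# Rectified cosine tuning: rate peaks at each neuron's preferred direction
rates = np.maximum(0, np.cos(stimulus - preferred))

# Population vector: sum preferred directions weighted by firing rate
x = np.sum(rates * np.cos(preferred))
y = np.sum(rates * np.sin(preferred))
decoded = np.rad2deg(np.arctan2(y, x))

print(round(decoded, 1))   # ≈ 60.0, the true stimulus direction
```

No single neuron fires at exactly 60°, yet the population's combined activity pins the stimulus down precisely.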
Bayesian Brain Hypothesis
The theory that the brain performs approximate Bayesian inference, combining prior expectations with incoming sensory evidence to form probabilistic estimates of the state of the world.
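For Gaussian beliefs the combination has a closed form: the posterior mean is a precision-weighted (inverse-variance-weighted) average of prior and evidence. A minimal sketch (all numbers are illustrative):

```python
# Gaussian prior (expectation before sensing) and Gaussian likelihood
prior_mean, prior_var = 0.0, 4.0   # vague prior
obs_mean, obs_var = 2.0, 1.0       # more reliable sensory measurement

# Precision weighting: the more reliable cue dominates the posterior
post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
post_mean = post_var * (prior_mean / prior_var + obs_mean / obs_var)

print(post_mean, post_var)   # 1.6 0.8
```

The posterior mean (1.6) sits closer to the sensory evidence (2.0) than to the prior (0.0) because the evidence has four times the precision, matching the reliability-weighting seen in cue-combination experiments.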
Predictive Coding
A theoretical framework proposing that the brain continuously generates predictions about incoming sensory input, and that neural processing primarily involves computing and propagating prediction errors between hierarchical levels.
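The core loop can be reduced to a single level: the prediction is adjusted to cancel the bottom-up prediction error. A minimal sketch (input value and update rate are illustrative):

```python
sensory_input = 5.0
estimate = 0.0     # top-down prediction
lr = 0.1           # update rate on the prediction error

for _ in range(100):
    error = sensory_input - estimate   # bottom-up prediction error
    estimate += lr * error             # revise the prediction to cancel it

print(round(estimate, 3))   # the estimate converges toward 5.0
```

At convergence the prediction error is near zero, which is the framework's claim about a well-adapted hierarchy: higher levels "explain away" the input, and only surprises propagate upward.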
Integrate-and-Fire Neuron Model
A simplified neuron model in which incoming synaptic inputs are summed (integrated) over time, and when the membrane potential reaches a threshold, the neuron emits a spike and resets. It captures essential spiking dynamics while remaining computationally efficient.
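The leaky variant can be simulated in a few lines with Euler integration of τ·dV/dt = −(V − V_rest) + R·I, spiking and resetting at threshold. A minimal sketch (all parameters are illustrative, not fitted to any cell):

```python
# Leaky integrate-and-fire: units are ms, mV, megaohm, nA
tau, v_rest, v_reset, v_thresh, R = 20.0, -65.0, -70.0, -50.0, 10.0
dt, T, I = 0.1, 100.0, 2.0   # time step, total time, constant input current

v = v_rest
spikes = []
for step in range(int(T / dt)):
    v += dt / tau * (-(v - v_rest) + R * I)   # leaky integration
    if v >= v_thresh:
        spikes.append(step * dt)              # record spike time (ms)
        v = v_reset                           # reset after the spike

print(len(spikes))   # a handful of regular spikes in 100 ms
```

Compare this ~10-line loop to the four coupled nonlinear ODEs of Hodgkin-Huxley: the subthreshold dynamics are linear and the spike itself is replaced by a threshold-and-reset event, which is exactly the efficiency trade-off the model makes.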
Spike-Timing-Dependent Plasticity (STDP)
A biological learning rule in which the direction and magnitude of synaptic change depend on the precise timing relationship between pre- and post-synaptic spikes. If the presynaptic spike precedes the postsynaptic spike, the synapse strengthens; if the order is reversed, it weakens.
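The standard model of the STDP window is a pair of exponentials in Δt = t_post − t_pre: potentiation for Δt > 0, depression for Δt < 0. A minimal sketch (A_plus, A_minus, and tau are illustrative constants):

```python
import numpy as np

A_plus, A_minus, tau = 0.01, 0.012, 20.0   # amplitudes and time constant (ms)

def stdp(dt_ms):
    """Weight change for a pre/post spike pair separated by dt_ms."""
    if dt_ms > 0:
        return A_plus * np.exp(-dt_ms / tau)    # pre before post: potentiation
    return -A_minus * np.exp(dt_ms / tau)       # post before pre: depression

print(stdp(10.0))    # positive: synapse strengthens
print(stdp(-10.0))   # negative: synapse weakens
```

The exponentials capture the other key experimental fact: pairs separated by tens of milliseconds barely change the synapse at all, so only tightly timed spikes drive learning.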
Reinforcement Learning in the Brain
The computational framework linking dopaminergic neuron activity to reward prediction errors, mirroring the temporal difference (TD) learning algorithm from machine learning. Dopamine signals reflect the difference between received and expected reward.
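The TD error is δ = r + γ·V(s′) − V(s), and repeated updates transfer the value prediction from the reward to the cue that predicts it, mirroring how dopamine responses shift from reward delivery to the predictive cue. A minimal sketch with two states (all parameters are illustrative):

```python
gamma, alpha = 0.9, 0.1              # discount factor, learning rate
V = {"cue": 0.0, "reward_state": 0.0}

# Repeated trials: the cue is always followed by a rewarded state (r = 1)
for _ in range(200):
    # Reward arrives in the terminal state (no successor, so gamma * 0)
    delta = 1.0 + gamma * 0.0 - V["reward_state"]
    V["reward_state"] += alpha * delta
    # No reward at the cue, but it predicts the valuable next state
    delta = 0.0 + gamma * V["reward_state"] - V["cue"]
    V["cue"] += alpha * delta

print(round(V["reward_state"], 2), round(V["cue"], 2))
```

After training, δ at reward delivery is near zero (the reward was fully predicted) while the cue carries the value, which is the observed pattern in dopaminergic firing.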