A recurrent network model trained to transcribe temporally scaled spoken digits into handwritten digits suggests that the brain flexibly encodes time-varying stimuli as neural trajectories that can be traversed at different speeds.
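As a hedged illustration of the trajectory idea (not the paper's actual network; all names and parameters here are invented), the sketch below replays a stored 2-D "neural trajectory" with a speed gain: changing the speed stretches the traversal in time while the path through state space stays the same.

```python
import numpy as np

# Invented toy: a stored 2-D trajectory replayed at different speeds.
t = np.linspace(0.0, 1.0, 200)
trajectory = np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)], axis=1)

def replay(speed, n_steps=200):
    # phase advances at rate `speed`; clip so we stay on the stored path
    phase = np.clip(np.arange(n_steps) * speed / n_steps, 0.0, 1.0)
    idx = np.round(phase * (len(t) - 1)).astype(int)
    return trajectory[idx]

slow, fast = replay(0.5), replay(1.0)
# every other state of the slow traversal coincides with the fast one:
# same path, different timing
print(np.array_equal(slow[::2], fast[:100]))
```

The point of the toy is only that a single stored trajectory supports many traversal speeds, which is the computational claim being summarized.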
The interplay of recurrent excitation and short-term plasticity enables nonlinear transient amplification, an ideal mechanism for selective amplification, pattern completion, and pattern separation in recurrent neural networks.
A biologically plausible learning rule allows recurrent neural networks to learn nontrivial tasks using only sparse, delayed rewards, and the trained networks exhibit complex dynamics similar to those observed in animal frontal cortices.
A two-part neural network model of reward-based training provides a unified framework for studying diverse computations that can be compared to electrophysiological recordings from behaving animals.
Recurrent neural networks trained to navigate and infer latent states exhibit remapping patterns strikingly similar to those observed in navigational brain areas, inspiring new analyses of published data and suggesting that spontaneous remapping may support context-dependent navigation.
A recurrent network using a simple, biologically plausible learning rule can learn the successor representation, suggesting that long-horizon predictions are readily accessible computations in neural circuits.
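The successor representation itself is easy to state concretely. The hedged sketch below (a standard temporal-difference formulation on an invented 5-state ring, not the paper's circuit model) learns a matrix M whose entry M[s, s'] estimates the expected discounted future occupancy of state s' starting from s.

```python
import numpy as np

# Invented toy environment: a deterministic ring of 5 states.
n_states, gamma, alpha = 5, 0.9, 0.1   # states, discount, learning rate
M = np.zeros((n_states, n_states))     # successor representation matrix

s = 0
for _ in range(20000):
    s_next = (s + 1) % n_states        # walk around the ring
    onehot = np.eye(n_states)[s]
    # TD update: M[s] <- M[s] + alpha * (1_s + gamma * M[s'] - M[s])
    M[s] += alpha * (onehot + gamma * M[s_next] - M[s])
    s = s_next

# For this deterministic walk the exact SR is (I - gamma * P)^{-1},
# where P is the one-step transition matrix.
P = np.roll(np.eye(n_states), 1, axis=1)
M_true = np.linalg.inv(np.eye(n_states) - gamma * P)
print(np.max(np.abs(M - M_true)))      # small after training
```

The local, incremental form of the update is what makes the claim of biological accessibility plausible: each change to M uses only the current and next state.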
The large range of timescales empirically observed in neural circuits can be naturally explained when neural assemblies of heterogeneous size are recurrently coupled, enabling these circuits to efficiently process complex time-varying input signals.
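A minimal sketch of how coupling strength sets a timescale, under the simplifying assumptions that each assembly is a single linear rate unit and that self-coupling w stands in for assembly size (both assumptions of this toy, not the paper's model): for dx/dt = (-x + w*x)/tau, the effective decay timescale is tau / (1 - w), so heterogeneous couplings yield a range of timescales.

```python
import numpy as np

tau, dt, n_steps = 10.0, 0.1, 2000     # intrinsic timescale, step, duration

def decay_timescale(w):
    """Simulate dx/dt = (-x + w*x)/tau from x=1; return time to fall below 1/e."""
    x, xs = 1.0, []
    for _ in range(n_steps):
        x += dt * (-x + w * x) / tau
        xs.append(x)
    return dt * int(np.argmax(np.array(xs) < np.exp(-1)))

for w in (0.2, 0.5, 0.9):
    print(f"w={w}: measured {decay_timescale(w):.1f}, predicted {tau / (1 - w):.1f}")
```

The measured decay times closely track the prediction tau / (1 - w), showing how a fixed intrinsic timescale can be stretched by recurrent coupling alone.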