Research

I study how coordinated interactions among neurons give rise to complex population dynamics and computation in biological and artificial neural networks. Using tools from dynamical systems theory, such as Lyapunov spectra, covariant Lyapunov vectors, and dynamical mean-field theory, I connect single-neuron biophysics to network-level behavior and learning.

Areas of interest

Learning Dynamics. Theory for how neural activity, synapses, and representations evolve during learning; links between dynamical stability and temporal credit assignment; the geometry of error signals and task complexity.
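One concrete version of the stability / credit-assignment link can be sketched in a few lines: in backpropagation through time, the error signal is multiplied by transposed Jacobians along the trajectory, so over long horizons it grows or decays at a rate set by the network's largest Lyapunov exponent. The toy rate network and all parameters below are my own illustrative assumptions, not a model from this work.

```python
# Sketch (illustrative assumptions): growth rate of a BPTT-style error signal
# in a random tanh network x_{t+1} = tanh(W x_t). Multiplying by J^T at each
# step has the same asymptotic growth rate as the forward tangent dynamics,
# i.e. the largest Lyapunov exponent.
import numpy as np

rng = np.random.default_rng(1)
N, g, T = 50, 2.0, 300
W = rng.normal(0.0, g / np.sqrt(N), size=(N, N))  # random coupling, gain g

x = rng.normal(size=N)
delta = rng.normal(size=N)            # stand-in for a backpropagated error
log_growth = 0.0
for _ in range(T):
    J = (1.0 - np.tanh(W @ x) ** 2)[:, None] * W  # Jacobian of x -> tanh(W x)
    x = np.tanh(W @ x)
    delta = J.T @ delta               # BPTT applies J^T at every step
    log_growth += np.log(np.linalg.norm(delta))
    delta /= np.linalg.norm(delta)    # renormalize to avoid overflow
rate = log_growth / T
print(rate)  # > 0 here: error signals explode for this gain
```

For a gain above the transition to chaos (g > 1), the rate is positive and long-range credit assignment degrades; taming this is one way stability enters learning theory.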

Chaos and Stability. Full Lyapunov spectra in RNNs; sparse chaos in spiking circuits; controllability and entropy rates; harnessing chaos for computation when it helps and suppressing it when it hurts.
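The full Lyapunov spectrum of a network can be computed with the standard QR (Benettin) method: evolve an orthonormal tangent basis with the Jacobian and accumulate the log of the diagonal of R at each re-orthonormalization. The minimal sketch below uses a generic discrete-time random tanh network as a stand-in system; the model and parameters are assumptions for illustration.

```python
# Sketch (illustrative assumptions): full Lyapunov spectrum of
# x_{t+1} = tanh(W x_t) via QR re-orthonormalization of tangent vectors.
import numpy as np

def lyapunov_spectrum(N=50, g=2.0, T=2000, T_warmup=200, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
    x = rng.normal(size=N)
    Q = np.eye(N)                 # orthonormal tangent basis
    lams = np.zeros(N)
    for t in range(T_warmup + T):
        J = (1.0 - np.tanh(W @ x) ** 2)[:, None] * W  # Jacobian at x
        x = np.tanh(W @ x)
        Q, R = np.linalg.qr(J @ Q)                    # re-orthonormalize
        if t >= T_warmup:
            lams += np.log(np.abs(np.diag(R)))        # local stretching rates
    return lams / T   # approximately ordered, largest first

spec = lyapunov_spectrum()
print(spec[0])  # positive largest exponent: chaos for this gain
```

The sum of positive exponents bounds the entropy rate, which is one route from the spectrum to the controllability and entropy questions above.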

Spiking Networks and Efficient Algorithms. Event-based simulation and training of large spiking networks with SparseProp; million-neuron exact simulations on CPU; scaling laws; relevance for neuromorphic hardware.
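The core idea behind event-based schemes of this kind is to jump from spike to spike instead of stepping a clock: each neuron's next threshold crossing is computed in closed form and kept in a priority queue. The sketch below is my own minimal illustration of that idea for a pulse-coupled LIF network, not the SparseProp algorithm itself; connectivity, coupling, and all parameters are assumptions.

```python
# Sketch (illustrative assumptions): event-driven simulation of an inhibitory
# pulse-coupled LIF network, dV/dt = (I_ext - V)/tau, using a heap of
# predicted spike times and lazy membrane updates.
import heapq
import numpy as np

def simulate(N=100, K=10, J=-0.5, I_ext=1.2, V_th=1.0, tau=1.0, T=50.0, seed=0):
    rng = np.random.default_rng(seed)
    targets = [rng.choice(N, size=K, replace=False) for _ in range(N)]
    V = rng.uniform(0.0, V_th, size=N)
    t_last = np.zeros(N)              # time of each neuron's last update

    def next_spike_time(i, t_now):
        # exact threshold-crossing time for the leaky integrator
        return t_now + tau * np.log((I_ext - V[i]) / (I_ext - V_th))

    heap = [(next_spike_time(i, 0.0), i) for i in range(N)]
    heapq.heapify(heap)
    spikes = []
    while heap and heap[0][0] < T:
        t, i = heapq.heappop(heap)
        # lazily evolve neuron i to time t
        V[i] = I_ext + (V[i] - I_ext) * np.exp(-(t - t_last[i]) / tau)
        t_last[i] = t
        if V[i] < V_th - 1e-9:        # stale event: inhibited since scheduling
            heapq.heappush(heap, (next_spike_time(i, t), i))
            continue
        spikes.append((t, i))
        V[i] = 0.0                    # reset after spike
        heapq.heappush(heap, (next_spike_time(i, t), i))
        for j in targets[i]:          # deliver inhibitory pulses, reschedule
            V[j] = I_ext + (V[j] - I_ext) * np.exp(-(t - t_last[j]) / tau)
            V[j] += J / np.sqrt(K)
            t_last[j] = t
            heapq.heappush(heap, (next_spike_time(j, t), j))
    return spikes

spikes = simulate()
```

Because work is done only when spikes occur, cost scales with the number of events rather than with simulated time steps, which is what makes very large exact simulations tractable.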

Neural Manifolds and Replay. Low-dimensional structure in hippocampal population activity; manifold consistency across trials and days; sequential events during inter-trial intervals.
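A standard first pass at low-dimensional structure in population activity is PCA on the trial-by-neuron activity matrix. The sketch below is purely illustrative (synthetic data, not the analyses in this work): activity generated from a one-dimensional ring variable occupies an essentially two-dimensional linear subspace of the high-dimensional neural space.

```python
# Sketch (synthetic, illustrative): PCA on simulated population activity that
# lies near a ring manifold embedded in N-dimensional neural space.
import numpy as np

rng = np.random.default_rng(0)
N, T = 100, 2000
theta = rng.uniform(0, 2 * np.pi, size=T)      # latent position on the ring
tuning = rng.uniform(0, 2 * np.pi, size=N)     # preferred phase per neuron
X = np.cos(theta[:, None] - tuning[None, :])   # population activity (T, N)
X += 0.05 * rng.normal(size=(T, N))            # observation noise

Xc = X - X.mean(axis=0)                        # center before PCA
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
var = s ** 2 / np.sum(s ** 2)                  # variance explained per PC
print(var[:3])  # nearly all variance in the first two components
```

A ring embedded linearly is planar, so two components suffice here; in real data the interesting questions are how this structure persists across trials and days and how replay events move along it.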