Rainer Engelken

Bridging dynamical systems and neuroscience to reveal principles of learning in neural circuits.


Welcome! We are a computational neuroscience lab working at the interface of dynamical systems, neuroscience, and machine learning. Our mission is to uncover the theoretical principles that govern how neural circuits learn, process information, and compute.


PhD Recruitment: Fall 2026

I am recruiting 1–3 fully funded PhD students to join my lab at UIUC.

Application paths and deadlines:

  • ECE (Electrical & Computer Engineering):
    Priority deadline December 15, 2025 · Final deadline January 15, 2026
  • CS (Siebel School of Computing and Data Science):
    Deadline December 15, 2025

I particularly encourage applications from students with strong backgrounds in physics, mathematics, computer science, or engineering who enjoy thinking deeply with equations and code.

Details on how to apply and what I'm looking for → Join the Lab


Our research bridges from the biophysics of single neurons to the collective behavior of large-scale circuits. Using and developing tools from statistical physics and dynamical systems theory, we investigate how neural circuits learn, process information, and maintain stability.

Research Focus Areas

  • The Dynamics of Learning: How do neural representations and synaptic weights evolve during learning, and how does network stability enable credit assignment over long time scales?
  • The Role of Chaos and Stability in Computation: Characterizing the rich dynamics of recurrent networks (RNNs), from sparse chaos in spiking circuits to the full Lyapunov spectra of rate networks. We seek to understand when chaos is beneficial for computation and when it must be tamed for stable learning.
  • Building Brain-Inspired, Efficient Algorithms: Developing highly scalable, event-based simulation and training algorithms (like SparseProp) to enable the study of large, brain-inspired systems.
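The Lyapunov-spectrum analysis mentioned above can be illustrated with the standard Benettin QR-reorthonormalization method applied to a toy rate network. This is a minimal sketch under arbitrary illustrative parameters (`N`, `g`, `dt`, `steps` are all assumptions), not the lab's published code:

```python
# Minimal sketch: QR-based Lyapunov spectrum of a discretized rate network
#   x_{t+1} = x_t + dt * (-x_t + J @ tanh(x_t))
# All parameters below are illustrative choices, not settings from the lab's papers.
import numpy as np

rng = np.random.default_rng(0)
N, g, dt, steps = 100, 2.0, 0.1, 2000

J = g * rng.standard_normal((N, N)) / np.sqrt(N)  # random coupling matrix
x = rng.standard_normal(N)                        # network state
Q = np.linalg.qr(rng.standard_normal((N, N)))[0]  # orthonormal perturbation basis
lyap_sums = np.zeros(N)

for _ in range(steps):
    phi = np.tanh(x)
    x = x + dt * (-x + J @ phi)
    # Jacobian of the update map: (1 - dt) I + dt * J * diag(tanh'(x))
    D = (1 - dt) * np.eye(N) + dt * J * (1 - phi**2)
    # Evolve the perturbation basis and reorthonormalize via QR;
    # log |R_ii| accumulates the local expansion rates.
    Q, R = np.linalg.qr(D @ Q)
    lyap_sums += np.log(np.abs(np.diag(R)))

lyapunov_spectrum = lyap_sums / (steps * dt)  # exponents in units of 1/time
```

For `g > 1` such rate networks are typically chaotic, so the leading exponent tends to be positive while the mean of the spectrum stays negative (the dynamics are dissipative); longer runs and a transient burn-in would be needed for well-converged exponents.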

Selected Publications

  1. Sparse chaos in cortical circuits
    Rainer Engelken, Michael Monteforte, and Fred Wolf
    2024
  2. Gradient Flossing: Improving Gradient Descent through Dynamic Control of Jacobians
    Rainer Engelken
    In Advances in Neural Information Processing Systems, 2023
    NeurIPS 2023, Poster
  3. SparseProp: Efficient Event-Based Simulation and Training of Sparse Recurrent Spiking Neural Networks
    Rainer Engelken
    In Advances in Neural Information Processing Systems, 2023
    NeurIPS 2023, Poster
  4. Lyapunov spectra of chaotic recurrent neural networks
    Rainer Engelken, Fred Wolf, and L. F. Abbott
    Physical Review Research, 2023