Deep Asymmetric Recurrent Networks in JAX
darnax is a research library for building and experimenting with asymmetric recurrent neural networks and their learning dynamics. Inspired by recent work on local plasticity and representational manifolds, it offers a lightweight, composable toolkit for studying distributed, gradient-free learning in deep recurrent models.
- Composable modules built on Equinox, with support for sparse and structured connectivity.
- Orchestrators for sequential or parallel recurrent dynamics.
- Local update rules implementing gradient-free plasticity mechanisms.
- Optax integration for optimization, even without explicit gradients.
- Pure JAX pytrees: everything is transparent and functional.
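As a rough illustration of the ideas above (asymmetric recurrent dynamics, a local gradient-free update, everything a plain pytree), here is a toy sketch in pure JAX. This is a hypothetical example, not darnax's actual API: the function names, the dict-based layer, and the Hebbian-style rule are illustrative assumptions; the library itself builds on Equinox modules, orchestrators, and its own plasticity rules.

```python
import jax
import jax.numpy as jnp

# Hypothetical sketch, NOT darnax's API: a recurrent layer whose forward and
# backward couplings are independent (asymmetric), stored as a plain pytree.


def init_layer(key, dim):
    """Asymmetric layer: W and W_back are drawn independently, so W != W_back.T."""
    k1, k2 = jax.random.split(key)
    return {
        "W": jax.random.normal(k1, (dim, dim)) / jnp.sqrt(dim),
        "W_back": jax.random.normal(k2, (dim, dim)) / jnp.sqrt(dim),
    }


def step(layer, state):
    """One step of the recurrent dynamics with binary (sign) units."""
    return jnp.sign(layer["W"] @ state)


def local_update(layer, pre, post, lr=0.01):
    """Gradient-free, Hebbian-style update using only local pre/post activity."""
    return {**layer, "W": layer["W"] + lr * jnp.outer(post, pre)}


layer = init_layer(jax.random.PRNGKey(0), 8)
state = jnp.sign(jax.random.normal(jax.random.PRNGKey(1), (8,)))
new_state = step(layer, state)          # run the dynamics one step
layer = local_update(layer, state, new_state)  # plasticity, no gradients
```

Because the layer is a plain pytree, it composes directly with `jax.jit`, `jax.vmap`, and Optax-style update pipelines, which is the design point the feature list emphasizes.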
Install directly from GitHub:

pip install git+https://github.com/Willinki/darnax.git

📖 Full documentation and tutorials are available at: 👉 dbadalotti.com/darnax
This project is a work in progress — contributions, issues, and discussions are welcome!
If you use darnax in your research, please cite the following work:
Davide Badalotti, Carlo Baldassi, Marc Mézard, Mattia Scardecchia, Riccardo Zecchina. Dynamical Learning in Deep Asymmetric Recurrent Neural Networks. arXiv:2509.05041 (2025).
@article{badalotti2025darnax,
  title={Dynamical Learning in Deep Asymmetric Recurrent Neural Networks},
  author={Badalotti, Davide and Baldassi, Carlo and M\'ezard, Marc and Scardecchia, Mattia and Zecchina, Riccardo},
  journal={arXiv preprint arXiv:2509.05041},
  year={2025}
}