This repository is the official implementation of the ICML 2025 paper *Flow Matching for Few-Trial Neural Adaptation with Stable Latent Dynamics*.
We introduce Flow-Based Distribution Alignment (FDA), a framework that leverages flow matching to align neural activity across sessions using only a few trials; a generic sketch of the flow-matching objective follows the directory layout below. The codebase is organized as follows:
```
FDA/
├── align/        # Alignment loss functions (MMD and MLA)
├── condition/    # Conditional feature extractors
├── config/       # Model, pre-training, and alignment configurations
├── experiment/   # Pre-training and fine-tuning functions
├── flow/
│   ├── models/       # Flow-based backbone of FDA
│   └── transport/    # Diffusion and ODE-based sampling for flow matching
├── layers/       # Layers for conditional feature extractors
├── ldns/         # Simulated neural data generation (Lorenz attractor)
├── preprocess/   # Dataloader generation
├── utils/        # Utility functions (attention masks, spike preprocessing)
└── xds/          # Standard preprocessing for CO-C, CO-M, and RT-M datasets
```
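For orientation, the sketch below shows a generic conditional flow-matching objective in its common linear-interpolant form. This is an illustration of the technique, not the exact loss implemented in `flow/`; `velocity_net`, the tensor shapes, and the conditioning input are all illustrative assumptions.

```python
import torch

def flow_matching_loss(velocity_net, x1, cond):
    """Generic conditional flow-matching loss with a linear interpolant.

    x1:   batch of data samples, e.g. latent neural activity, shape [B, D]
    cond: conditioning features for the batch, shape [B, C]
    velocity_net(x_t, t, cond) -> predicted velocity, shape [B, D]
    """
    x0 = torch.randn_like(x1)                         # noise endpoint of the path
    t = torch.rand(x1.shape[0], 1, device=x1.device)  # random time in [0, 1]
    x_t = (1.0 - t) * x0 + t * x1                     # point on the straight-line path
    target_velocity = x1 - x0                         # constant velocity of that path
    pred = velocity_net(x_t, t, cond)
    return ((pred - target_velocity) ** 2).mean()
```

With straight-line interpolants, the regression target `x1 - x0` is constant along each path, which keeps the objective simple to optimize.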
The CO-C, CO-M, and RT-M datasets used here are available on Dryad. They correspond to the following files:

- `Chewie_CO_2016.7z` (CO-C)
- `Mihili_CO_2014.7z` (CO-M)
- `Mihili_RT_2013_2014.7z` (RT-M)

After downloading, extract the files and place them in the `dataset/` folder.
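If you prefer to script this step, the following minimal Python sketch extracts the archives with the third-party `py7zr` package; the package choice is an assumption (any 7-Zip-compatible tool works), and the archive paths assume the downloaded files sit in the working directory.

```python
import pathlib
import py7zr  # third-party: pip install py7zr (an assumption; any 7-Zip tool also works)

archives = ["Chewie_CO_2016.7z", "Mihili_CO_2014.7z", "Mihili_RT_2013_2014.7z"]
dataset_dir = pathlib.Path("dataset")
dataset_dir.mkdir(exist_ok=True)

for name in archives:
    # extract each downloaded archive into the dataset/ folder
    with py7zr.SevenZipFile(name, mode="r") as archive:
        archive.extractall(path=dataset_dir)
```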
We recommend setting up the environment with conda or miniconda. Clone the repository and install all required dependencies with:

```bash
git clone https://github.com/wangpuli/FDA.git
cd FDA
conda env create -f environment.yml
```
## Simulated Neural Data
The simulated neural data is generated from the Lorenz attractor, following the implementation in LDNS. Run FDA on the simulated neural data with:

```bash
python test_FDA_on_simulated_neural_data.py
```
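For a rough picture of how such data can be produced, the sketch below integrates the Lorenz system with forward-Euler steps and draws Poisson spike counts from a random linear readout of the latent state. The parameters, readout, and neuron count are illustrative; this is not the exact LDNS pipeline.

```python
import numpy as np

def lorenz_trajectory(n_steps=2000, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system with simple forward-Euler steps."""
    xyz = np.empty((n_steps, 3))
    x, y, z = 1.0, 1.0, 1.0
    for i in range(n_steps):
        dx, dy, dz = sigma * (y - x), x * (rho - z) - y, x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xyz[i] = (x, y, z)
    return xyz

rng = np.random.default_rng(0)
latents = lorenz_trajectory()                           # [T, 3] latent states
latents = (latents - latents.mean(0)) / latents.std(0)  # standardize each dimension
readout = rng.normal(size=(3, 30)) * 0.5                # illustrative 30-neuron readout
rates = np.exp(latents @ readout)                       # positive firing rates
spikes = rng.poisson(rates)                             # Poisson spike counts [T, 30]
```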
## Spiking Recordings
The experimental scripts for pre-training and fine-tuning on the RT-M dataset can be run as follows. In this example, fine-tuning is performed with a target ratio of $r = 0.02$ (see the sketch after the commands for what this ratio means in practice).

Pre-training:

```bash
python test_FDA_on_spikes_pretrain.py
```

Fine-tuning:

```bash
python test_FDA_on_spikes_ft.py
```
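To make the target ratio concrete, the sketch below keeps 2% of a target session's trials, which is the few-trial budget used for fine-tuning in this example. The trial container and the seed are illustrative assumptions, not the repository's actual data-loading code.

```python
import numpy as np

def subsample_trials(trials, ratio=0.02, seed=0):
    """Keep a small fraction of target-session trials (few-trial adaptation).

    trials: a list of trials from the target session (illustrative container).
    ratio:  fraction kept, e.g. r = 0.02 keeps 2% of the trials.
    """
    rng = np.random.default_rng(seed)
    n_keep = max(1, int(round(len(trials) * ratio)))
    keep_idx = rng.choice(len(trials), size=n_keep, replace=False)
    return [trials[i] for i in keep_idx]

# e.g. a 200-trial target session yields 4 trials for fine-tuning
few_trials = subsample_trials(list(range(200)), ratio=0.02)
```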
We ran `test_FDA_on_spikes_pretrain.py` and `test_FDA_on_spikes_ft.py` on an NVIDIA GTX 1080 Ti (11 GB) and obtained the R² values (%) summarized below for the source and target sessions of the RT-M dataset:
| Day | 0 | 1 | 38 | 39 | 40 | 52 | 53 | 67 | 69 | 77 | 79 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| R² (%) | 87.05 | 71.84 | 67.73 | 56.05 | 50.79 | 46.20 | 53.46 | 48.05 | 25.85 | 14.64 | 40.21 |
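R² here is the coefficient of determination of the decoded behavior, averaged over output dimensions and reported as a percentage. A minimal sketch of computing such a score with scikit-learn, using illustrative array names:

```python
import numpy as np
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# y_true / y_pred: ground-truth and decoded kinematics, shape [time, dims] (illustrative)
y_true = rng.standard_normal((500, 2))
y_pred = y_true + 0.3 * rng.standard_normal((500, 2))

# average R^2 across behavioral dimensions, reported as a percentage
r2_percent = 100.0 * r2_score(y_true, y_pred, multioutput="uniform_average")
print(f"R2 = {r2_percent:.2f}%")
```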
The resulting pre-trained and fine-tuned FDA models on RT-M are publicly available on Hugging Face, under the `pre_train` and `ft` folders, respectively.
If you find this work useful, please cite:

```bibtex
@inproceedings{wang2025FDA,
  title     = {Flow Matching for Few-Trial Neural Adaptation with Stable Latent Dynamics},
  author    = {Wang, Puli and Qi, Yu and Wang, Yueming and Pan, Gang},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning (ICML)},
  year      = {2025}
}
```

We would like to thank the contributors of the SiT, LDNS, and iTransformer repositories for making their research openly available.