Releases: Routhleck/canns
v0.10.0
What's New
🧠 Brain-Inspired Learning & RNN Analysis - Major Update
- Five new biologically-inspired learning algorithms: Hebbian, Oja, Sanger, BCM, and STDP
- JIT-compiled trainers with 1.6-2x performance improvements
- Fixed point finder for RNN dynamical systems analysis
- Comprehensive examples and tutorials for neuromorphic computing
📖 Complete Documentation Overhaul
- New bilingual Quick Starts series (build → task → analyze → train workflow)
- Automated translation pipeline for Chinese ↔ English documentation
- Jupyter notebook-based tutorials with interactive examples
- Restructured API documentation with slow_points and brain_inspired modules
🎨 Enhanced Jupyter Integration
- Automatic HTML rendering for matplotlib animations in notebooks
- Autoplay support for interactive visualizations
- Improved animation display across all analyzer modules
Major Features / Key Changes
🧠 Brain-Inspired Learning Rules (PR #55)
Adds five classic neural learning algorithms with JIT compilation support:
- OjaTrainer: Normalized Hebbian learning for PCA extraction (rate-based)
- SangerTrainer: Generalized Hebbian Algorithm (GHA) for multiple orthogonal principal components
- BCMTrainer: Sliding threshold plasticity for receptive field development (rate-based)
- STDPTrainer: Spike-Timing-Dependent Plasticity for temporal learning (spike-based)
- HopfieldAnalyzer: Energy-based diagnostics for Hopfield networks
Performance: JIT compilation provides 1.6-2x speedup over uncompiled versions
```python
from canns.trainer import OjaTrainer
from canns.models.brain_inspired import LinearLayer

# Extract first principal component with Oja's rule
model = LinearLayer(n_in=100, n_out=1)
trainer = OjaTrainer(model, lr=0.01, compiled=True)  # JIT-compiled

for epoch in range(50):
    trainer.train_epoch(data)
```

New Brain-Inspired Models:
- `LinearLayer`: Rate-based neurons for Hebbian learning
- `SpikingLayer`: LIF (Leaky Integrate-and-Fire) neurons for STDP
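For the spike-based path, a minimal sketch that mirrors the Oja example above; the `STDPTrainer` constructor arguments, the `SpikingLayer` sizes, and the spike-train data format here are assumptions rather than the confirmed API:

```python
from canns.trainer import STDPTrainer
from canns.models.brain_inspired import SpikingLayer

# Hypothetical usage mirroring the OjaTrainer pattern above; argument names are illustrative.
model = SpikingLayer(n_in=100, n_out=10)     # LIF neurons
trainer = STDPTrainer(model, compiled=True)  # JIT-compiled, as with the rate-based trainers

for epoch in range(50):
    trainer.train_epoch(spike_trains)        # spike_trains: user-supplied spiking data
```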
Examples Added (examples/brain_inspired/):
- `oja_pca_extraction.py`: PCA with Oja's rule
- `oja_vs_sanger_comparison.py`: Comparing single- vs. multi-PC extraction
- `bcm_receptive_fields.py`: Receptive field development with BCM
- `stdp_temporal_learning.py`: Spike-timing plasticity
- `hopfield_energy_diagnostics.py`: Energy landscape analysis
🔬 Fixed Point Finder for RNN Analysis (PR #42)
Implements gradient-based fixed point optimization for analyzing RNN dynamics:
- FixedPointFinder: Joint optimization with Jacobian analysis
- Stability Analysis: Eigenvalue decomposition for fixed point classification
- Visualization Tools: 2D/3D PCA plots with trajectories
- Checkpoint System: Model save/load using BrainState msgpack
```python
from canns.analyzer.slow_points import FixedPointFinder

# Find and analyze fixed points
finder = FixedPointFinder(rnn_model, n_inits=512)
fps = finder.find_fixed_points(initial_states, inputs)

# Stability analysis via Jacobian eigenvalues
fps.decompose_jacobians()

# Visualize in state space
from canns.analyzer.slow_points import plot_fixed_points_2d
plot_fixed_points_2d(fps, trajectories, config=PlotConfig())
```

Examples Added (examples/slow_points_analysis/):
- `flipflop_fixed_points.py`: Flip-flop RNN with fixed point analysis
- `sinewave_fixed_points.py`: Sine wave generator analysis
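For reference, the stability classification above reduces to an eigenvalue check on each fixed point's Jacobian: for a discrete-time update h(t+1) = F(h(t)), a fixed point is stable when every eigenvalue of J = ∂F/∂h lies inside the unit circle. A plain-NumPy sketch of that step (illustrative, not the library's internal code):

```python
import numpy as np

def classify_fixed_point(jacobian):
    """Classify a fixed point from its Jacobian eigenvalues (discrete-time convention)."""
    eigvals = np.linalg.eigvals(jacobian)
    n_unstable = int(np.sum(np.abs(eigvals) > 1.0))
    spectral_radius = float(np.max(np.abs(eigvals)))
    kind = "stable" if n_unstable == 0 else f"saddle/unstable ({n_unstable} unstable modes)"
    return kind, spectral_radius
```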
📖 Documentation Restructure & Bilingual Support (PR #56, #60)
Complete overhaul of documentation structure with bilingual Quick Starts:
New Quick Starts Series (English & Chinese):
- Installation (`00_installation.rst`): Environment setup
- Build Model (`01_build_model.ipynb`): CANN1D/2D construction
- Generate Tasks (`02_generate_tasks.ipynb`): Tracking and navigation
- Analyze Model (`03_analyze_model.ipynb`): Visualization and metrics
- Analyze Data (`04_analyze_data.ipynb`): Experimental data fitting
- Train Brain-Inspired (`05_train_brain_inspired.ipynb`): Learning algorithms
Infrastructure:
- Automated translation script (`scripts/translate_docs.py`) using Claude 4.5 Haiku
- Organized docs into `0_why_canns`, `1_quick_starts`, `2_core_concepts`, `3_full_detail_tutorials`
- Added Design Philosophy notebooks explaining CANN theory
API Documentation Updates:
- New modules: `slow_points`, `brain_inspired`, `data`, `utils`
- Updated autoapi references for all new analyzers and trainers
🎨 Jupyter Animation Auto-Rendering (PR #59, #61)
Automatic HTML display for matplotlib animations in Jupyter notebooks:
- Auto-Detection: Detects Jupyter environment and renders as interactive HTML/JS
- Autoplay Support: Animations start playing automatically on load
- No Double Display: Fixed duplicate rendering bug (interactive + static)
- Extended Coverage: Applied to all animation functions across analyzer modules
```python
# In Jupyter notebooks, animations now auto-render as HTML
analyzer.animate_dynamics(cann, config=PlotConfig.for_animation())
# → Displays an interactive HTML animation with autoplay; no code changes needed
```

Functions Updated:
- `energy_landscape_1d_animation()`, `energy_landscape_2d_animation()`
- `create_theta_sweep_place_cell_animation()`, `create_theta_sweep_grid_cell_animation()`
- Experimental data animations in `cann1d.py` and `cann2d.py`
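For context, the generic mechanism behind this kind of auto-rendering is matplotlib's HTML/JS export plus IPython's display hook. The sketch below is a standalone illustration of that idea, not the library's `jupyter_utils` implementation:

```python
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
from IPython.display import HTML, display

fig, ax = plt.subplots()
ax.set(xlim=(0, 30), ylim=(0, 900))
line, = ax.plot([], [])

def update(frame):
    xs = list(range(frame + 1))
    line.set_data(xs, [x ** 2 for x in xs])
    return (line,)

anim = FuncAnimation(fig, update, frames=30, blit=False)
display(HTML(anim.to_jshtml(default_mode="loop")))  # interactive HTML/JS player rendered inline
plt.close(fig)  # prevents the extra static figure (the "double display" issue)
```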
🔧 Module Reorganization (PR #54)
Consolidated data and utility modules for cleaner API:
- New `data/` module:
  - `data.datasets`: Dataset utilities (moved from `_datasets`)
  - `data.loaders`: Experimental data loaders (moved from `analyzer.experimental_data`)
- New `utils/` module:
  - `utils.benchmark`: Performance benchmarking (moved from `misc`)
- Removed: `misc/` module (empty after reorganization)
```python
# New import paths
from canns.data import load_experimental_dataset
from canns.utils import benchmark_model
```

🗺️ CohoMap 1.0 Visualization (PR #58)
Added CohoMap visualization option for circular coordinate decoding:
```python
from canns.analyzer.experimental_data import decode_circular_coordinates

coords = decode_circular_coordinates(data, method='cohomap')  # New option
```

New Components Added
Trainers (src/canns/trainer/):
- `oja.py` - Oja's normalized Hebbian learning
- `sanger.py` - Sanger's GHA for multiple PCs
- `bcm.py` - BCM sliding threshold plasticity
- `stdp.py` - Spike-timing-dependent plasticity
- `utils.py` - Shared utilities (running averages, spike buffers)
Models (src/canns/models/brain_inspired/):
- `linear.py` - Linear layer for rate-based learning
- `spiking.py` - LIF spiking neurons for STDP
Analyzers (src/canns/analyzer/):
- `brain_inspired/hopfield.py` - Hopfield network energy diagnostics
- `slow_points/finder.py` - Fixed point optimization
- `slow_points/fixed_points.py` - FixedPoints data container
- `slow_points/visualization.py` - Fixed point plotting
- `slow_points/checkpoint.py` - Model save/load utilities
- `plotting/jupyter_utils.py` - Jupyter animation auto-rendering
Data Modules (src/canns/data/):
- `datasets.py` - Dataset utilities
- `loaders.py` - Experimental data loaders
Utilities (src/canns/utils/):
- `benchmark.py` - Performance benchmarking
Examples:
- 6 brain-inspired learning examples with comprehensive README
- 2 slow points analysis examples (FlipFlop, SineWave)
Technical Improvements
Performance
- JIT Compilation: BCMTrainer (1.6x), OjaTrainer (2x), STDPTrainer speedups
- Scan-Based Training: Used `brainstate.transform.scan` for stateful loops
- Gradient Clipping: Global norm clipping in RNN training
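For reference, the global-norm clipping recipe looks roughly like the generic JAX sketch below (illustrative; the actual trainers build on `brainstate` transforms rather than this exact code):

```python
import jax
import jax.numpy as jnp

def clip_by_global_norm(grads, max_norm=1.0):
    """Scale a pytree of gradients so its global L2 norm does not exceed max_norm."""
    leaves = jax.tree_util.tree_leaves(grads)
    global_norm = jnp.sqrt(sum(jnp.sum(g ** 2) for g in leaves))
    scale = jnp.minimum(1.0, max_norm / (global_norm + 1e-6))
    return jax.tree_util.tree_map(lambda g: g * scale, grads)
```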
Code Quality
- Type Annotations: Added to FixedPointFinder and trainer utilities
- NaN Detection: Warning in Jacobian decomposition for numerical issues
- Improved Sampling: Avoids duplicates when n_inits <= available samples
- Better Filtering: q-value threshold for low-quality fixed points
Documentation
- Security: Removed command-line API key parameters; API keys are now read from environment variables
- Timeout: Added 30s timeout to translation API requests
- Incremental Translation: Skip existing files for resume support
New Dependencies
None - all additions leverage existing JAX/BrainState infrastructure.
Breaking Changes
Module Reorganization (PR #54)
- `canns._datasets` → `canns.data.datasets`
- `canns.misc.benchmark` → `canns.utils.benchmark`
- `canns.analyzer.experimental_data._datasets_utils` → `canns.data.loaders`
Migration:
```python
# Old imports
from canns._datasets import load_dataset
from canns.misc.benchmark import benchmark_model

# New imports
from canns.data.datasets import load_dataset
from canns.utils.benchmark import benchmark_model
```

Note: the `misc/` module has been removed entirely.
Technical Notes
- BrainState Version: Uses `brainstate.transform` instead of the deprecated `brainstate.compile`
- Fixed Point Tolerance: Default tolerance is 1e-3 for uniqueness detection
- STDP Traces: Exponential traces for LTP/LTD with configurable time constants
- Animation Format: HTML5 video with embedded JavaScript for autoplay
- Documentation Status: All tutorials marked as "under development" - validation ongoing
Use Cases
- Neuroscience Researchers: Analyze RNN dynamics with fixed point finder, study biological learning rules
- Machine Learning Engineers: Use brain-inspired trainers for interpretable feature learning
- Students & Educators: Interactive Jupyter tutorials for hands-on learning
- Computational Modelers: Fit experimental data with CANN models, visualize energy landscapes
Full Changelog: v0.9.3...v0.10.0
v0.9.3
Release v0.9.3
Release Date: 2025-10-29
Summary
This release consolidates the spatial/navigation stack on canns_lib, removes the legacy ratinabox dependency, and further speeds up the experimental analysis workflow.
Major Features
- Unified Spatial Backend: Open/closed-loop navigation tasks now rely exclusively on `canns_lib.spatial` agents and environments, simplifying configuration and packaging (#52).
- Integrated Ripser Wrapper: Experimental analysis modules use `canns_lib.ripser`, keeping persistent-homology tooling within the same distributable (#52).
Improvements
- Analyzer Performance: Vectorised hot loops in the 1D/2D experimental analysis modules and streamlined Hebbian utilities, reducing preprocessing overhead before UMAP/TDA steps (#51).
- Headless Examples: `theta_sweep_grid_cell_network.py` now saves trajectory analyses by default, improving reproducibility in batch and CI environments (#52).
Bug Fixes
- None.
Breaking Changes
- Navigation/analyzer code paths no longer support `ratinabox` backends. Consumers must install `canns-lib>=0.6.2`, whose `Agent.update(...)` implementation is fully compatible (#52).
Deprecations
- None.
Documentation
- Updated English and Chinese navigation/analyzer guides to reference the `canns_lib` spatial backend and Ripser bindings (#52).
Dependencies
- Added: `canns-lib>=0.6.2` – supplies spatial agents, environments, and Ripser bindings (#52).
- Removed: `ratinabox` – superseded by the consolidated `canns_lib` implementation (#52).
Internal Changes
v0.9.2
Release v0.9.2
Release Date: 2025-10-26
Summary
This release introduces anti-Hebbian learning capabilities for pattern decorrelation and selective forgetting, along with performance optimizations using JAX vectorization.
Major Features
- AntiHebbianTrainer: Implemented the `AntiHebbianTrainer` class for pattern decorrelation and selective unlearning, enabling "neurons that fire together, wire apart" dynamics
- Pattern Unlearning Example: Added comprehensive example `hopfield_hebbian_vs_antihebbian.py` demonstrating selective memory forgetting in Hopfield networks with visual comparison metrics
- 1D Continuous Patterns: New example `hopfield_train_1d.py` showing Hebbian learning with continuous-valued 1D patterns using tanh activation
Improvements
- Performance: Optimized Hebbian learning with JAX `vmap` for vectorized outer-product computation, significantly faster on GPU/TPU
- Code Quality: Reduced code duplication by ~60 lines through a shared `_compute_weight_update` helper method
- API: Unified weight update logic between `HebbianTrainer` and `AntiHebbianTrainer` with a configurable sign parameter
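Conceptually, the shared update is a sign-flipped outer-product accumulation vectorized with `vmap`; a minimal JAX sketch of that idea (not the trainers' actual code):

```python
import jax
import jax.numpy as jnp

def weight_update(patterns, sign=+1.0):
    """sign=+1.0 -> Hebbian strengthening; sign=-1.0 -> anti-Hebbian decorrelation."""
    outer_products = jax.vmap(jnp.outer)(patterns, patterns)   # (num_patterns, N, N)
    dW = sign * outer_products.mean(axis=0)
    return dW.at[jnp.diag_indices_from(dW)].set(0.0)           # no self-connections
```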
Bug Fixes
- Fixed: Zero-diagonal enforcement now uses `.at[].set()` to avoid floating-point errors
- Fixed: Vectorized the pixel corruption function for better performance
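The JAX idiom the first fix refers to, for reference (generic snippet, not the library's exact code):

```python
import jax.numpy as jnp

W = jnp.ones((4, 4))                          # example weight matrix
W = W.at[jnp.diag_indices_from(W)].set(0.0)   # exact zeros on the diagonal, no subtraction round-off
```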
Breaking Changes
None - all changes are backward compatible.
Documentation
- Added: Comprehensive docstrings for `AntiHebbianTrainer` with usage examples and applications
- Added: Example demonstrating anti-Hebbian unlearning on real images (camera, astronaut, horse, coffee)
- Added: Example showing continuous Hopfield dynamics with 1D patterns
- Updated: Code review improvements addressing Sourcery AI feedback
Dependencies
No dependency changes in this release.
Internal Changes
- Refactored: Extracted common Hebbian logic into reusable helper methods
- Testing: Validated anti-Hebbian unlearning effectiveness with correlation metrics
- Code Quality: Applied vectorization best practices with JAX primitives
- CI/CD: All examples tested and verified working
Full Changelog: v0.9.1...v0.9.2
v0.9.1
Release v0.9.1
Release Date: 2025-10-23
Summary
This release enhances the hierarchical path integration models with comprehensive parameter configuration support, adds reusable spatial analysis utilities to the analyzer module, and includes citation metadata for academic use.
Major Features
- Configurable Hierarchical Models: Added an extensive parameter configuration system to `HierarchicalNetwork` and `HierarchicalPathIntegrationModel`, enabling fine-grained control over module spacing, BandCell, GridCell, GaussRecUnits, and NonRecUnits parameters (#48)
- Spatial Analysis Module: New `canns.analyzer.spatial` module with Numba-optimized spatial firing field computation and Gaussian smoothing utilities for high-performance neural activity analysis (#49)
- Spatial Visualization: New `plot_firing_field_heatmap()` function in `canns.analyzer.plotting.spatial` for publication-quality spatial heatmap visualization with full PlotConfig integration (#49)
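The core computation the spatial module provides is an occupancy-normalized firing-rate map. A minimal NumPy sketch of that idea follows; the analyzer's version is Numba-compiled with `@njit(parallel=True)` and adds Gaussian smoothing, so the function and argument names here are illustrative only:

```python
import numpy as np

def firing_field(positions, spike_counts, n_bins=40, extent=(0.0, 1.0, 0.0, 1.0)):
    """Occupancy-normalized spatial firing field from position samples and spike counts."""
    x_edges = np.linspace(extent[0], extent[1], n_bins + 1)
    y_edges = np.linspace(extent[2], extent[3], n_bins + 1)
    occupancy, _, _ = np.histogram2d(positions[:, 0], positions[:, 1], bins=[x_edges, y_edges])
    spikes, _, _ = np.histogram2d(positions[:, 0], positions[:, 1], bins=[x_edges, y_edges],
                                  weights=spike_counts)
    with np.errstate(invalid="ignore", divide="ignore"):
        rate_map = spikes / occupancy        # spikes per visit; scale by 1/dt for Hz
    return rate_map
```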
Improvements
- Parameter System: Hierarchical parameter flow from top-level network down to individual components with full backward compatibility
- Performance: Maintained Numba JIT optimization with `@njit(parallel=True)` for spatial firing field computation
- Code Organization: Extracted spatial analysis and visualization utilities from example code into reusable analyzer modules for better modularity
- Visualization Pipeline: Enhanced hierarchical path integration example with selective heatmap saving, progress tracking with tqdm, and configurable export controls
Documentation
- Added: CITATION.cff file for academic citation with DOI information
- Added: DOI badge to README files (English and Chinese)
- Updated: Citation information in README files with proper academic attribution
- Added: Comprehensive docstrings and usage examples for new spatial analysis functions
Breaking Changes
None - all changes are fully backward compatible.
Internal Changes
- Updated module exports in `canns.analyzer.__init__.py` and `canns.analyzer.plotting.__init__.py`
- Refactored example code to use analyzer utilities instead of inline implementations
- Updated dependency lock file (uv.lock)
Full Changelog
v0.9.0
Summary
This major release adds place cell network capabilities with theta sweep visualization and fundamentally refactors the navigation task architecture to support complex environments through geodesic distance computation.
🎯 Major Features
Place Cell Network & Theta Sweep Animation
- `PlaceCellNetwork` Model: Graph-based continuous-attractor place cell network using geodesic distances
  - Connectivity based on shortest paths through the environment (not Euclidean distance)
  - Supports arbitrary shapes: T-maze, complex polygons, holes, walls
  - Continuous attractor dynamics with spike-frequency adaptation
- Place Cell Animation: New `create_theta_sweep_place_cell_animation()`
  - Two-panel animation: environment trajectory + population activity heatmap
  - Grid-based activity overlay on environment
  - Example: `examples/cann/theta_sweep_place_cell_network.py`
Navigation Architecture Refactoring
- `BaseNavigationTask`: Unified base class for all navigation tasks
  - Centralizes environment setup, grid computation, and visualization
  - Shared by `OpenLoopNavigationTask` and `ClosedLoopNavigationTask`
  - Reduced code duplication by ~800 lines
- Geodesic Distance Computation: Shortest-path distances in complex environments
  - `MovementCostGrid`: Grid-based representation of traversable/blocked cells
  - `GeodesicDistanceResult`: Pairwise distance matrix using Dijkstra's algorithm
  - Efficient O(1) position-to-grid-index mapping
- T-Maze Variants: New task classes
  - `TMazeOpenLoopNavigationTask` / `TMazeClosedLoopNavigationTask`
  - `TMazeRecessOpenLoopNavigationTask` / `TMazeRecessClosedLoopNavigationTask`
  - Recesses at junctions for studying spatial decision-making
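The geodesic machinery amounts to shortest-path search over the traversable cells of a movement-cost grid. A small self-contained sketch using SciPy's Dijkstra (conceptual only, not the `MovementCostGrid`/`GeodesicDistanceResult` implementation):

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

# 1 = blocked, 0 = traversable; this toy grid leaves a T-shaped corridor open
grid = np.zeros((5, 7), dtype=int)
grid[1:, :] = 1    # block everything below the top corridor...
grid[1:, 3] = 0    # ...except a vertical stem, giving a T shape

n_rows, n_cols = grid.shape
adj = lil_matrix((grid.size, grid.size))

def idx(r, c):
    return r * n_cols + c

for r in range(n_rows):
    for c in range(n_cols):
        if grid[r, c]:
            continue
        for dr, dc in ((1, 0), (0, 1)):        # 4-connected neighbours, each edge added once
            rr, cc = r + dr, c + dc
            if rr < n_rows and cc < n_cols and not grid[rr, cc]:
                adj[idx(r, c), idx(rr, cc)] = 1.0

# Pairwise geodesic distances between all traversable cells (inf for unreachable pairs)
geodesic = dijkstra(adj.tocsr(), directed=False)
```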
🔧 Improvements
- Enhanced Visualizations: Movement cost overlays, grid-based activity display, better environment rendering
- Animation System: Improved imageio backend, progress bars, better title layout
- Code Organization: Renamed `create_theta_sweep_animation()` → `create_theta_sweep_grid_cell_animation()` for clarity
- Documentation: Comprehensive API docs, updated task guides, Mermaid diagrams
🐛 Bug Fixes
- Fixed grid resolution validation
- Fixed trajectory analysis edge cases with zero angular velocity
- Fixed closed-loop navigation initialization
- Fixed import paths after module rename
💥 Breaking Changes
None - All changes are backward compatible.
📦 New API
```python
# Navigation base classes
from canns.task.navigation_base import (
    BaseNavigationTask, MovementCostGrid, GeodesicDistanceResult, INT32_MAX
)

# T-maze tasks
from canns.task.open_loop_navigation import TMazeRecessOpenLoopNavigationTask
from canns.task.closed_loop_navigation import TMazeRecessClosedLoopNavigationTask

# Place cell model
from canns.models.basic.theta_sweep_model import PlaceCellNetwork
from canns.analyzer.theta_sweep import create_theta_sweep_place_cell_animation
```

📝 Example Usage
Place Cell Network with Geodesic Distances
```python
from canns.task.open_loop_navigation import TMazeRecessOpenLoopNavigationTask
from canns.models.basic.theta_sweep_model import PlaceCellNetwork
from canns.analyzer.theta_sweep import create_theta_sweep_place_cell_animation

# Create T-maze and compute geodesic distances
task = TMazeRecessOpenLoopNavigationTask(
    duration=3.0, w=0.84, l_s=3.64, l_arm=2.36,
    recess_width=0.2, recess_depth=0.2
)
task.get_data()
task.set_grid_resolution(0.05, 0.05)
geodesic_result = task.compute_geodesic_distance_matrix()

# Create place cell network
pc_net = PlaceCellNetwork(geodesic_result, tau=3.0, tau_v=150.0)
pc_net.init_state()

# Run simulation and visualize
create_theta_sweep_place_cell_animation(
    position_data=position,
    pc_activity_data=activity,
    pc_network=pc_net,
    navigation_task=task,
    save_path="place_cells.gif"
)
```

📊 Stats
- 3,374 insertions, 199 deletions
- 40 files changed
- 5 PRs merged since v0.8.3
🙏 References
This release implements models from:
- Ji, Z., Chu, T., Wu, S., & Burgess, N. (2025). Theta sequences in grid cell populations
- Chu, T., Ji, Z., et al. (2024). Theta sequences track learned routes in hippocampal place cells
Full Changelog: v0.8.3...v0.9.0
Release v0.8.3: Documentation Enhancement & Community Support
What's New
📖 Comprehensive Documentation Overhaul
- Complete docstring coverage for core models, tasks, pipeline, and trainer modules
- New bilingual guide documentation (English & Chinese) with autoapi integration
- Enhanced README with community support badges and improved project visibility
🌟 Community Engagement
- Added "Buy Me a Coffee" support badge for project sustainability
- Updated badges showcasing PyPI stats, Python version support, and project health
Major Features / Key Changes
📖 Core Module Documentation Enhancement (PR #40)
- BasicModel & BasicModelGroup: Expanded from 1-line to comprehensive docstrings with usage examples and cross-references
- DirectionCellNetwork: Documented all 11 `__init__` parameters and 6 methods with biological context
- GridCellNetwork: Documented all 15 `__init__` parameters and 9 methods, including hexagonal lattice and twisted torus topology
- Pipeline module: Enhanced all 10 private methods with detailed validation, setup, and simulation documentation
- Trainer module: Enhanced 5 private methods with state resolution logic and JAX compilation details
- Task module: Added detailed `get_data()` docstrings explaining timing windows and noise models
Documentation Coverage Improvements:
- Base classes: 0% → 100% ✅
- DirectionCellNetwork: ~10% → 100% ✅
- GridCellNetwork: ~15% → 100% ✅
- Pipeline module: ~40% → 100% ✅
- Trainer module: ~50% → 90% ✅
- Total: 500+ lines, 30+ methods, 40+ parameters, 5 usage examples
📚 Bilingual Guide Documentation (PR #37, #38)
- 8 new guide files in both Chinese and English covering:
- Models (CANN networks and demonstrations)
- Pipeline (Theta sweep examples)
- Tasks (Navigation and population coding)
- Trainer (Hebbian memory and learning)
- Analyzer (Experimental data analysis)
- Workflows (Combination and customization)
- Architecture (Quick start and index)
Key Features:
- ✅ All class/function references link to autoapi documentation
- ✅ All example paths converted to GitHub URLs for easy navigation
- ✅ Consistent structure between Chinese and English versions
- ✅ Integrated into main documentation TOC
```bash
# Build and view documentation
make docs
open docs/_build/html/zh/guide/index.html  # Chinese guide
open docs/_build/html/en/guide/index.html  # English guide
```

🌟 README & Community Enhancements (PR #39, #41)
- New badges:
- PyPI version and monthly downloads
- Python version support (3.10-3.13)
- Project status, maintenance, and license
- DeepWiki documentation link
- Community support: "Buy Me a Coffee" badge for project sustainability
- Improved visibility: Updated shields showcasing project health and adoption
Technical Improvements
Documentation System
- Autoapi integration: Automated API reference generation from docstrings
- Cross-referencing: Seamless navigation between guides and API docs
- Bilingual support: Parallel English and Chinese documentation structure
- GitHub integration: Direct links to example source code
Code Quality
- Enhanced maintainability: Comprehensive docstrings aid debugging and onboarding
- Reproducibility: Parameter descriptions ensure consistent research results
- API clarity: Clear descriptions reduce learning curve for new users
Files Added/Modified
New Documentation Files:
- `docs/zh/guide/*.rst` - 8 Chinese guide files
- `docs/en/guide/*.rst` - 8 English guide files
Enhanced Modules:
- `src/canns/models/basic/_base.py` - Base class docstrings
- `src/canns/models/basic/theta_sweep_model.py` - Theta sweep models
- `src/canns/task/tracking.py` - Task module methods
- `src/canns/pipeline/theta_sweep.py` - Pipeline methods
- `src/canns/trainer/hebbian.py` - Trainer methods
Updated Files:
- `README.md` / `README_zh.md` - New badges and community support
- `docs/zh/index.rst` / `docs/en/index.rst` - Guide section integration
- `Makefile` - Added `docs` build target
- `CLAUDE.md` - Documentation build instructions
Use Cases
- New Users: Jump-start with bilingual guides and working examples
- Researchers: Reproduce results with comprehensive parameter documentation
- Developers: Navigate codebase efficiently with autoapi cross-references
- Contributors: Understand architecture through enhanced base class documentation
- Educators: Leverage bilingual resources for teaching computational neuroscience
Full Changelog: v0.8.2...v0.8.3
v0.8.2
What's New
🔧 Plotting API Restoration & Documentation Polish
- Restored missing plotting docstrings after module reorganization
- Enhanced visual documentation with updated branding and examples
- Improved dependency management and compatibility
Key Changes
📖 Documentation & Visual Improvements
- Updated CANN2D encoding GIF - Animation now matches current pipeline defaults
- Enhanced README visual gallery - Restored exploratory examples for better first impressions
- Project branding refresh - Updated logo for clearer visual identity
- Dependency alignment - Updated to require `canns-ripser>=0.4.4` to prevent version conflicts
🎨 Plotting Module Fixes
- Restored plotting docstrings - All public plotting functions now have proper documentation
- Enhanced backward compatibility - Legacy plotting function signatures fully preserved
- Fixed configuration handling - Progress bar toggles, color scaling, and axis limits work as expected
```python
# All these functions now have restored docstrings and full compatibility
analyzer.energy_landscape_1d(cann, figsize=(8, 6))
analyzer.raster_plot(spikes, show_progress=True)
analyzer.tuning_curve(responses, color_scale='viridis')
```

Technical Notes
- Plotting module reorganization benefits maintained while restoring full API compatibility
- All plotting tests passing with comprehensive validation
- Zero breaking changes for existing user code
Files Modified
- `src/canns/analyzer/plotting/` - Restored docstrings and API compatibility
- `docs/_static/CANN2D_encoding.gif` - Updated visualization
- `README.md` & `README_zh.md` - Visual gallery and branding updates
- `pyproject.toml` - Dependency version requirements
Full Changelog: v0.8.1...v0.8.2
Release v0.8.1: Documentation Improvements, Trainer Base Class, and Module Reorganization
What's New
📚 Documentation & Code Organization Improvements
- Enhanced documentation with external HTML links
- New abstract base classes for trainers and pipelines
- Reorganized plotting module into specialized submodules
Key Changes
📖 Better Documentation (PR #35)
- README now links to the online docs at https://routhleck.com/canns/
- New design philosophy and quick start notebooks
- Added visual assets (GIFs, animations) to documentation
- Fixed `hybird` → `hybrid` directory naming
🎓 Trainer Base Class (PR #34)
- New abstract `Trainer` base class for consistent training interfaces
- `HebbianTrainer` now inherits from the base class
- Makes it easier to create custom trainers
```python
from canns.trainer import Trainer

class MyTrainer(Trainer):
    def train(self, patterns, **kwargs):
        # Your training logic
        pass
```

🎨 Plotting Module Reorganization (PR #33)
- Split the large `visualize.py` into focused submodules:
  - `plotting.config` - Configuration management
  - `plotting.energy` - Energy landscapes
  - `plotting.spikes` - Spike analysis
  - `plotting.tuning` - Tuning curves
- Old imports still work for backward compatibility
🔧 Pipeline Base Class (PR #32)
- New abstract `Pipeline` base class for consistent pipeline interfaces
- Makes it easier to create custom analysis pipelines
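By analogy with the `Trainer` example above, a custom pipeline sketch might look like the following; the import path and the abstract method name (`run`, chosen to mirror `ThetaSweepPipeline.run()`) are assumptions rather than the confirmed interface:

```python
from canns.pipeline import Pipeline

class MyPipeline(Pipeline):
    def run(self, output_dir="results", **kwargs):
        # Your analysis workflow: load data, run models, save figures
        ...
```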
Technical Notes
- All changes maintain backward compatibility
- Enhanced type hints and testing coverage
- Cleaner import structure while preserving old paths
Files Modified
- `src/canns/trainer/_base.py` - New Trainer base class
- `src/canns/pipeline/_base.py` - New Pipeline base class
- `src/canns/analyzer/plotting/` - Modular plotting structure
- `docs/` - Documentation refresh and external links
- `README.md` & `README_zh.md` - Updated documentation links
Full Changelog: v0.8.0...v0.8.1
Release v0.8.0: Theta Sweep Models, Pipeline, and Enhanced Spatial Navigation
What's New
🌊 Theta Sweep Models and Advanced Animation
- New theta sweep model system with DirectionCellNetwork and GridCellNetwork
- High-performance animation generation with multiple rendering backends
- Complete spatial navigation task system with theta modulation
- Enhanced brain-inspired Hopfield networks with threshold terms
🔬 High-Level Pipeline for Experimental Scientists
- Introduce `ThetaSweepPipeline` for analyzing experimental trajectory data
- Memory-optimized animation generation with multiprocessing fixes
- External trajectory data import with comprehensive feature calculation
Major Features
🌊 Theta Sweep Model System (PR #29)
- `DirectionCellNetwork`: Head direction cell population with adaptation and noise
- `GridCellNetwork`: Grid cell network with spatial periodic firing
- `calculate_theta_modulation()`: Unified theta rhythm calculation across networks
- Advanced animation: Multi-backend rendering (matplotlib/imageio) with parallel processing
```python
from canns.models.basic.theta_sweep_model import DirectionCellNetwork, GridCellNetwork

# Direction cells with theta modulation
dc_net = DirectionCellNetwork(num=100, adaptation_strength=15)
gc_net = GridCellNetwork(num_gc_x=100, mapping_ratio=5)

# Theta sweep animation with optimized rendering
from canns.analyzer.theta_sweep import create_theta_sweep_animation
animation = create_theta_sweep_animation(
    position_data, direction_data, dc_activity, gc_activity,
    gc_network=gc_net, render_backend="imageio", render_workers=4
)
```

🧠 Enhanced Hopfield Networks (PR #28)
- Threshold term in energy: More accurate energy calculation
- Compiled prediction by default: Better performance and consistency
- MNIST example: New `hopfield_train_mnist.py` for digit recall
- JAX-friendly updates: Improved compatibility with compiled prediction
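For reference, the standard Hopfield energy with a threshold (bias) term is E(s) = -1/2 · sᵀ W s + θᵀ s. A one-function NumPy version of that textbook formula (illustrative, not necessarily the library's exact implementation):

```python
import numpy as np

def hopfield_energy(s, W, theta):
    """E(s) = -0.5 * s^T W s + theta^T s  (threshold term included)."""
    return -0.5 * s @ W @ s + theta @ s
```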
🗺️ Advanced Spatial Navigation (PR #29)
- Theta sweep integration: Built-in theta modulation calculation
- Trajectory analysis: Comprehensive visualization and analysis methods
- Enhanced task system: Better state management and progress tracking
```python
from canns.task.spatial_navigation import SpatialNavigationTask

snt = SpatialNavigationTask(duration=10.0, width=2.0, height=2.0)
snt.get_data()
snt.calculate_theta_sweep_data()  # New theta sweep calculation
snt.show_trajectory_analysis()    # Enhanced visualization
```

🚀 ThetaSweepPipeline (PR #31)
- Plug-and-play interface: Complete neural analysis from trajectory data
- Memory optimization: Fixed 6+ GB memory usage in multiprocessing
- Batch processing: Analyze multiple experimental sessions
- Full customization: Configure all network parameters
📊 External Data Import (PR #30)
- `import_data()` method: Import experimental position coordinates
- Feature calculation: Velocity, speed, movement direction, head direction
- Enhanced visualization: Time-colored trajectories with smoothing options
- Robust validation: Comprehensive input checking
New Components Added
Theta Sweep Models (src/canns/models/basic/theta_sweep_model.py)
- DirectionCellNetwork with circular connectivity and adaptation
- GridCellNetwork with spatial grid patterns and phase relationships
- Unified theta modulation calculation functions
Animation System (src/canns/analyzer/theta_sweep.py)
- Multi-backend rendering (matplotlib for interactive, imageio for files)
- Parallel frame generation with configurable workers
- Auto-detection of optimal rendering backend
- Sophisticated trajectory visualization with path effects
Enhanced Examples
- `examples/cann/theta_sweep_grid_cell_network.py`: Complete theta sweep workflow
- `examples/brain_inspired/hopfield_train_mnist.py`: MNIST digit recall
- `examples/pipeline/theta_sweep_from_external_data.py`: Simple pipeline usage
- `examples/pipeline/advanced_theta_sweep_pipeline.py`: Advanced customization
Technical Improvements
Performance Optimizations
- Memory efficient animation: Extract minimal data instead of copying full objects
- Compiled prediction: Default compiled mode for Hopfield networks
- Parallel rendering: Multi-core frame generation for large animations
- JAX compatibility: Better handling of JAX arrays in multiprocessing
Enhanced Spatial Navigation
- Theta sweep data calculation: Integrated theta modulation in navigation tasks
- Trajectory smoothing: Handle noisy experimental data gracefully
- Time-colored visualization: Temporal progression with viridis colormap
- Analysis methods: Built-in trajectory analysis and visualization
Brain-Inspired Model Improvements
- Threshold energy term: More accurate Hopfield network energy calculation
- Better state management: Improved initialization and reset procedures
- Enhanced progress tracking: Unified progress reporting across components
- Memory efficiency: Optimized trainer and prediction pipelines
New Dependencies
- seaborn (≥0.13.2): Statistical data visualization for enhanced plots
Code Examples
Complete Theta Sweep Workflow
```python
# Generate trajectory and run theta sweep models
snt = SpatialNavigationTask(duration=10.0, width=2.0, height=2.0)
snt.get_data()
snt.calculate_theta_sweep_data()

# Create and run neural networks
dc_net = DirectionCellNetwork(num=100)
gc_net = GridCellNetwork(num_gc_x=100, mapping_ratio=5)

# Generate animation with optimized rendering
create_theta_sweep_animation(
    position_data=snt.data.position,
    direction_data=snt.data.hd_angle,
    dc_activity_data=dc_results,
    gc_activity_data=gc_results,
    gc_network=gc_net,
    render_backend="imageio",
    output_dpi=150
)
```

High-Level Pipeline Usage
```python
from canns.pipeline import ThetaSweepPipeline

# Simple analysis
pipeline = ThetaSweepPipeline(trajectory_data=positions, times=times)
results = pipeline.run(output_dir="results")

# Advanced customization
pipeline = ThetaSweepPipeline(
    trajectory_data=positions,
    direction_cell_params={"num": 200, "adaptation_strength": 25},
    grid_cell_params={"num_gc_x": 150, "mapping_ratio": 4},
    theta_params={"theta_strength_hd": 1.8}
)
```

External Data Import
```python
# Import experimental trajectory data
task = SpatialNavigationTask(duration=10.0, width=2.0, height=2.0)
task.import_data(
    position_data=positions,
    times=times,
    head_direction=directions
)

# Enhanced visualization with smoothing
task.show_trajectory_analysis(smooth_window=50)
```

Breaking Changes
None - all additions are backward compatible.
Files Added/Modified
- `src/canns/models/basic/theta_sweep_model.py`: New theta sweep models
- `src/canns/analyzer/theta_sweep.py`: Advanced animation system
- `src/canns/pipeline/theta_sweep.py`: High-level pipeline implementation
- `src/canns/task/spatial_navigation.py`: Enhanced with import and analysis
- `src/canns/models/brain_inspired/hopfield.py`: Threshold term and compiled prediction
- `src/canns/trainer/hebbian.py`: Performance and consistency improvements
- `examples/`: New theta sweep and pipeline examples
Use Cases
- Experimental neuroscientists: Analyze recorded animal movement with neural models
- Computational researchers: Test spatial navigation theories with realistic data
- Students: Learn grid cell and place cell concepts with interactive examples
- Model validation: Compare theoretical predictions with experimental results
Full Changelog: v0.7.1...v0.8.0
Release v0.7.1: Unified Trainer, Generic Hebbian, and tqdm Progress
What's New
🎛️ Unified Trainer, Generic Hebbian, and Simpler Progress
- Centralize all training and prediction via `HebbianTrainer`
- Generic Hebbian learning that operates on the model's `weight_attr` (default `W`)
- Direct `tqdm` progress bars in train/predict/predict_batch (no reporter abstraction)
- Optional dynamic resize to match pattern dimensionality
Key Changes
✨ Trainer Unification
- Prediction moves to `HebbianTrainer.predict(...)` and `predict_batch(...)`
- Removes reliance on `model.predict(...)` in examples/tests (now trainer-led)
- Iteration-level progress available for uncompiled prediction
🧠 Generic Hebbian Learning
- Dataset-centered Hebbian rule with options: `subtract_mean`, `zero_diagonal`, `normalize_by_patterns`
- Configurable `weight_attr` (default `W`)
- Progress during training shows mean-estimation and outer-product accumulation
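A plain-NumPy sketch of what these options mean for the dataset-centered rule (conceptual; the trainer itself writes the result to the configured `weight_attr`):

```python
import numpy as np

def hebbian_weights(patterns, subtract_mean=True, zero_diagonal=True, normalize_by_patterns=True):
    X = np.asarray(patterns, dtype=float)   # (num_patterns, num_neurons)
    if subtract_mean:
        X = X - X.mean(axis=0)              # dataset-centered Hebbian rule
    W = X.T @ X                             # accumulate outer products
    if normalize_by_patterns:
        W = W / X.shape[0]
    if zero_diagonal:
        np.fill_diagonal(W, 0.0)            # no self-connections
    return W
```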
📈 Progress Simplification (tqdm)
- Single implementation path using `tqdm` (no Silent/Auto/Reporter types)
- Sample-level bars for batch prediction; optional iteration bars for convergence
📐 Dynamic Resizing (Optional)
- New `AmariHopfieldNetwork.resize(num_neurons, preserve_submatrix=True)` to adjust `W` and `s`
- Trainer auto-aligns model dimension with incoming patterns (configurable preservation)
Breaking Changes
- Removed: model-level `predict` from `AmariHopfieldNetwork`
- Removed: model-specific `apply_hebbian_learning` in favor of the trainer's generic path
- Removed: progress reporter module (`src/canns/trainer/progress.py`) and its public API
Updated Components
- `src/canns/trainer/hebbian.py`: unified train/predict, tqdm progress, optional resize, batch prediction
- `src/canns/models/brain_inspired/hopfield.py`: removes predict/hebbian method; adds `resize`
- `src/canns/models/brain_inspired/_base.py`: clarifies `weight_attr`, `predict_state_attr`, `energy`; optional `resize` docs
- `examples/brain_inspired/discrete_hopfield_train.py`: uses scikit-image (camera/astronaut/horse/coffee) with the unified Trainer API
- `AGENTS.md`: contributor guidelines and API policy (trainer-led, tqdm progress)
Code Examples
Trainer-Led Training and Prediction
```python
from canns.models.brain_inspired import AmariHopfieldNetwork
from canns.trainer import HebbianTrainer

# Build model; dimension auto-aligned by trainer on first train/predict
model = AmariHopfieldNetwork(num_neurons=128, activation="sign")
model.init_state()
trainer = HebbianTrainer(
    model,
    compiled_prediction=True,  # fast default
    # subtract_mean=True, zero_diagonal=True, normalize_by_patterns=True,
)

# Train on binary {-1,+1} patterns
trainer.train(patterns)

# Predict a batch with sample-level progress
results = trainer.predict_batch(test_patterns, compiled=True, show_sample_progress=True)

# Show iteration-level convergence (uncompiled)
res = trainer.predict(test_patterns[0], compiled=False, show_progress=True)
```

Dynamic Resize
```python
# If the next dataset has a different dimensionality, the trainer adjusts automatically
results = trainer.predict_batch(new_size_patterns, compiled=True)

# Or adjust explicitly
model.resize(32768, preserve_submatrix=True)
```

Performance Notes
- Compiled prediction leverages `brainstate.compile.while_loop` for fast inference
- Simple, consistent progress UX with `tqdm` across train and predict paths
Full Changelog: v0.7.0...v0.7.1