- Cambridge, UK
- javierantoran.github.io
Stars
Force-field-enhanced Neural Networks optimized library
Tile primitives for speedy kernels
Shows some minimal examples of how to call JAX (HLO/AOT-compiled) from C++.
Tools for building equivariant polynomials on reductive Lie groups.
A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code.
cuEquivariance is a math library that is a collective of low-level primitives and tensor ops to accelerate widely-used models, like DiffDock, MACE, Allegro and NEQUIP, based on equivariant neural n…
Everything you want to know about Google Cloud TPU
jharrymoore / openmm-ml
Forked from openmm/openmm-ml. High-level API for using machine learning models in OpenMM simulations.
jharrymoore / openmmtools
Forked from dominicrufa/openmmtools. A batteries-included toolkit for the GPU-accelerated OpenMM molecular simulation engine.
JAX implementation of the paper "Sampling-based inference for large linear models, with application to linearised Laplace"
PyTorch code for "Improving Self-Supervised Learning by Characterizing Idealized Representations"
Fast and Easy Infinite Neural Networks in Python
Benchmarks for the Synbols project. Synbols is a ServiceNow Research project that was started at Element AI.
Code for the paper "Bayesian Neural Network Priors Revisited"
A Python-embedded modeling language for convex optimization problems.
Code for the Neural Processes website and PyTorch replications of 4 papers on NPs.
PyTorch-SSO: Scalable Second-Order methods in PyTorch
Code for "Depth Uncertainty in Neural Networks" (https://arxiv.org/abs/2006.08437)
Fault-tolerant, highly scalable GPU orchestration, and a machine learning framework designed for training models with billions to trillions of parameters
Model interpretability and understanding for PyTorch