flashinfer-ai / flashinfer
FlashInfer: Kernel Library for LLM Serving
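Below is a minimal usage sketch of FlashInfer's Python decode-attention entry point. It assumes the `flashinfer` package and a CUDA GPU, and uses the single-request `flashinfer.single_decode_with_kv_cache` call; the shapes and dtypes here are illustrative assumptions, not an excerpt from this page.

```python
# Minimal sketch: single-request decode attention with FlashInfer's Python API.
# Assumes the `flashinfer` package and a CUDA-capable GPU; shapes follow the
# NHD layout (kv_len, num_kv_heads, head_dim) as an assumption for illustration.
import torch
import flashinfer

num_qo_heads, num_kv_heads, head_dim = 32, 8, 128
kv_len = 4096

# One new query token attending over the cached KV of a single request.
q = torch.randn(num_qo_heads, head_dim, dtype=torch.float16, device="cuda")
k = torch.randn(kv_len, num_kv_heads, head_dim, dtype=torch.float16, device="cuda")
v = torch.randn(kv_len, num_kv_heads, head_dim, dtype=torch.float16, device="cuda")

# Fused decode-attention kernel; grouped-query attention is handled internally.
out = flashinfer.single_decode_with_kv_cache(q, k, v)
print(out.shape)  # (num_qo_heads, head_dim)
```

Batched serving with a paged KV cache goes through separate wrapper classes in the same package; the single-request call above is simply the smallest end-to-end example.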
[ICLR 2025, ICML 2025, NeurIPS 2025 Spotlight] Quantized attention that achieves a 2-5x speedup over FlashAttention without losing end-to-end metrics across language, image, and video models.
NCCL Tests
How to optimize various algorithms in CUDA.
Causal depthwise conv1d in CUDA, with a PyTorch interface
CUDA-accelerated rasterization of Gaussian splatting
Instant neural graphics primitives: lightning-fast NeRF and more
CUDA Kernel Benchmarking Library
[ARCHIVED] Cooperative primitives for CUDA C++. See https://github.com/NVIDIA/cccl
RCCL Performance Benchmark Tests
DeepEP: an efficient expert-parallel communication library
DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling
Fast CUDA matrix multiplication from scratch
[ICML 2025] SpargeAttention: a training-free sparse attention method that accelerates inference for any model.
GPU-accelerated decision optimization
Tile primitives for speedy kernels
cuVS - a library for vector search and clustering on the GPU
LLM training in simple, raw C/CUDA