Stars
MineContext is your proactive, context-aware AI partner (Context-Engineering + ChatGPT Pulse)
Venus Collective Communication Library, supported by SII and Infrawaves.
Intelligent automation and multi-agent orchestration for Claude Code
Lightweight coding agent that runs in your terminal
ByteCheckpoint: A Unified Checkpointing Library for LFMs
A high-performance distributed file system designed to address the challenges of AI training and inference workloads.
DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling
DeepEP: an efficient expert-parallel communication library
Production-tested AI infrastructure tools for efficient AGI development and community-driven innovation
verl: Volcano Engine Reinforcement Learning for LLMs
An open-source implementation of Regional Adaptive Sampling (RAS), a novel diffusion model sampling strategy that introduces regional variability in sampling steps
FlashInfer: Kernel Library for LLM Serving
DeepBattler - Your BEST LLM Battlegrounds Coach/Friend!
Jittor implementation of DiffPoseTalk (SIGGRAPH 2024)
Use LLMs to track and extract websites, RSS feeds, and social media
Convert any PDF into a podcast episode!
A collection of LLM papers, blogs, and projects, with a focus on OpenAI o1 🍓 and reasoning techniques.
📰 Must-read papers and blogs on LLM-based Long Context Modeling 🔥
MiniSora: A community that aims to explore the implementation path and future development direction of Sora.
Implementation of MagViT2 Tokenizer in Pytorch
Official PyTorch Implementation of "Scalable Diffusion Models with Transformers"
High-speed Large Language Model Serving for Local Deployment
T3Bench: Benchmarking Current Progress in Text-to-3D Generation
[ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding
Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads