Stars
Codespaces, but open source, client-only, and unopinionated: works with any IDE and lets you use any cloud, Kubernetes, or just localhost Docker.
Open deep learning compiler stack for CPUs, GPUs, and specialized accelerators
An open-source, next-generation "runc" that empowers rootless containers to run workloads such as systemd, Docker, and Kubernetes, just like VMs.
Home for "How To Scale Your Model", a short blog-style textbook about scaling LLMs on TPUs
LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalability, and high-speed performance.
A machine learning compiler for GPUs, CPUs, and ML accelerators
sccache is a ccache-like tool: used as a compiler wrapper, it avoids recompilation when possible. It can also cache to remote storage, including various…
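The core idea behind a ccache/sccache-style wrapper can be sketched as content-addressed caching: hash the compilation inputs, and on a hit return the stored artifact instead of invoking the compiler. This toy sketch is illustrative only — the in-memory dict stands in for sccache's local disk or remote storage backends, and `cached_compile` is a hypothetical name, not sccache's API.

```python
import hashlib

cache = {}  # stand-in for sccache's local or remote artifact store


def cached_compile(source: str, flags: tuple) -> str:
    """Toy compiler wrapper: key on a hash of the inputs and reuse the
    stored artifact on a hit, skipping the (expensive) compile step."""
    key = hashlib.sha256(repr((source, flags)).encode()).hexdigest()
    if key in cache:
        return cache[key]  # cache hit: no compilation
    artifact = f"object({source!r}, {flags!r})"  # stand-in for running the compiler
    cache[key] = artifact
    return artifact
```

Any change to the source or the flags changes the hash, so stale artifacts are never reused — the same property that lets sccache share a cache safely across machines.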
contaiNERD CTL - Docker-compatible CLI for containerd, with support for Compose, Rootless, eStargz, OCIcrypt, IPFS, ...
A high-performance JavaScript runtime for Flutter applications, built with Rust and powered by QuickJS.
Train speculative decoding models effortlessly and port them smoothly to SGLang serving.
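The speculative-decoding loop that such models plug into can be sketched with toy stand-ins: a cheap draft model proposes several tokens, the expensive target model verifies them, and the step keeps the longest agreeing prefix plus one correction. Both "models" here are made-up deterministic rules for illustration, not anything from the repository.

```python
def draft_model(ctx):
    # hypothetical cheap draft model: next token = last token + 1 (mod 10)
    return (ctx[-1] + 1) % 10


def target_model(ctx):
    # hypothetical expensive target model: same rule, but caps tokens at 7
    return min((ctx[-1] + 1) % 10, 7)


def speculative_step(prefix, k=4):
    """One greedy speculative-decoding step: the draft proposes k tokens,
    the target verifies them and accepts the longest agreeing prefix,
    then emits its own token at the first disagreement."""
    proposal, ctx = [], list(prefix)
    for _ in range(k):
        t = draft_model(ctx)
        proposal.append(t)
        ctx.append(t)

    accepted, ctx = [], list(prefix)
    for t in proposal:
        expected = target_model(ctx)
        if t == expected:
            accepted.append(t)  # draft token verified; keep going
            ctx.append(t)
        else:
            accepted.append(expected)  # target's correction ends the step
            break
    return accepted
```

When draft and target agree, one target pass yields several tokens per step — that amortization is the whole point of training good draft models.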
Settings management using pydantic
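The pattern pydantic-settings provides — declare typed fields with defaults, then fill them from the environment with validation — can be sketched dependency-free. This is a stdlib illustration of the idea, not pydantic-settings' actual API; the `APP_` prefix and field names are arbitrary choices for the example.

```python
from dataclasses import dataclass


@dataclass
class Settings:
    """Toy stand-in for a BaseSettings subclass: typed fields with defaults."""
    host: str = "localhost"
    port: int = 8000


def load_settings(env: dict) -> Settings:
    """Fill Settings from environment-style variables, coercing each raw
    string to the field's declared type (the 'validation' step)."""
    kwargs = {}
    for name, field_type in [("host", str), ("port", int)]:
        raw = env.get(f"APP_{name.upper()}")
        if raw is not None:
            kwargs[name] = field_type(raw)
    return Settings(**kwargs)
```

Unset variables fall back to the declared defaults, so `load_settings({"APP_PORT": "9090"})` overrides only the port.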
Kimi K2 is the large language model series developed by Moonshot AI team
Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serving systems.
Technical report of Kimina-Prover Preview.
Official Repo for Open-Reasoner-Zero
Kimina Lean server (+ client SDK)
Mirage Persistent Kernel: Compiling LLMs into a MegaKernel
slime is an LLM post-training framework for RL Scaling.
Lightning-Fast RL for LLM Reasoning and Agents. Made Simple & Flexible.