Stars
Fastest LLM gateway (50x faster than LiteLLM) with adaptive load balancer, cluster mode, guardrails, support for 1000+ models & <100 µs overhead at 5k RPS.
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek, Qwen, Llama, Gemma, TTS 2x faster with 70% less VRAM.
A library that provides an embeddable, persistent key-value store for fast storage.
Universal Python binding for the LMDB 'Lightning' Database (see the usage sketch after this list)
A fast on-disk dictionary for Python / Python binding for RocksDB & SpeeDB
OCI registry client for managing content such as artifacts, images, and packages
Distributed LLM inference. Connect home devices into a powerful cluster to accelerate LLM inference. More devices means faster inference.
A secure local sandbox to run LLM-generated code using Apple containers
An Obsidian theme inspired by the beautifully-designed app, Things.
A high-throughput and memory-efficient inference and serving engine for LLMs
Chat with your Kubernetes cluster using AI tools and IDEs like Claude and Cursor!
The AI Browser Automation Framework
Scalene: a high-performance, high-precision CPU, GPU, and memory profiler for Python with AI-powered optimization proposals
Use Claude Code as the foundation for coding infrastructure, allowing you to decide how to interact with the model while enjoying updates from Anthropic.
The 100 line AI agent that solves GitHub issues or helps you in your command line. Radically simple, no huge configs, no giant monorepo—but scores >74% on SWE-bench verified!
Collection of publicly available libraries
A plugin to edit and view Excalidraw drawings in Obsidian
An open-source, extensible AI agent that goes beyond code suggestions: install, execute, edit, and test with any LLM
Replace 'hub' with 'ingest' in any GitHub URL to get a prompt-friendly extract of a codebase (see the snippet after this list)
🦎 A tool to build and deploy software on many servers 🦎
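
Several of the entries above (RocksDB, the LMDB binding, and the RocksDB/SpeeDB on-disk dictionary) are embeddable key-value stores. As a minimal sketch of what "embeddable, persistent key-value store" means in practice, here is transactional put/get with the py-lmdb package; the database path and map_size are illustrative, not taken from any of the listed projects:

```python
import lmdb

# Open (or create) an LMDB environment at an illustrative path.
env = lmdb.open("./example-db", map_size=10 * 1024 * 1024)

# Writes happen inside a write transaction.
with env.begin(write=True) as txn:
    txn.put(b"greeting", b"hello world")

# Reads use a (read-only) transaction.
with env.begin() as txn:
    print(txn.get(b"greeting"))  # b'hello world'

env.close()
```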
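The gitingest entry describes a plain URL rewrite ("hub" becomes "ingest"). A minimal sketch of that transformation in Python, using a hypothetical repository URL:

```python
def to_gitingest(github_url: str) -> str:
    # Swap 'hub' for 'ingest': github.com -> gitingest.com
    return github_url.replace("github.com", "gitingest.com", 1)

print(to_gitingest("https://github.com/example-user/example-repo"))
# -> https://gitingest.com/example-user/example-repo
```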