Stars
Easily train a good voice conversion (VC) model with 10 minutes or less of voice data!
💾 Self-hosted online file converter. Supports 1000+ formats ⚙️
LangChain for Go, the easiest way to write LLM-based programs in Go
Reliable model swapping for any local OpenAI/Anthropic-compatible server (llama.cpp, vLLM, etc.)
fastllm is a high-performance LLM inference library with no backend dependencies. It supports both tensor-parallel inference for dense models and hybrid-mode inference for MoE models; any GPU with 10 GB+ of VRAM can run full DeepSeek. A dual-socket 9004/9005 server plus a single GPU can serve the original full-precision DeepSeek model at 20 tps with single concurrency; the INT4-quantized model reaches 30 tps at single concurrency and 60+ tps under multiple concurrent requests.
Swap GPT for any LLM by changing a single line of code. Xinference lets you run open-source, speech, and multimodal models on cloud, on-prem, or your laptop — all through one unified, production-re…
Run frontier LLMs and VLMs with day-0 model support across GPU, NPU, and CPU, with comprehensive runtime coverage for PC (Python/C++), mobile (Android & iOS), and Linux/IoT (Arm64 & x86 Docker). Su…
A simple node that dynamically adjusts a workflow's reserved memory in real time, used to avoid falling back to shared memory.
Prodigy and Schedule-Free, together at last.
Quickly and securely sync the clipboard and transfer files and directories between devices.
🚀 An open-source API debugging and stress testing tool inspired by Postman and a simplified JMeter, optimized for developers with a clean UI and powerful features.
A pipeline parallel training script for diffusion models.
A GUI to quickly manage your WSL2 instances
Use Claude Code as the foundation for coding infrastructure, allowing you to decide how to interact with the model while enjoying updates from Anthropic.
🍥 A unified AI model management and distribution system: manage all your AI models in one app, expose multiple LLMs through a unified calling format, support OpenAI, Claude, Gemini, and other formats, and use it for personal setups or internal enterprise management and channel distribution.
Fair-code workflow automation platform with native AI capabilities. Combine visual building with custom code, self-host or cloud, 400+ integrations.
Multilingual Document Layout Parsing in a Single Vision-Language Model
Lemonade helps users discover and run local AI apps by serving optimized LLMs right from their own GPUs and NPUs. Join our discord: https://discord.gg/5xXzkMu8Zk