Stars
Collection of awesome LLM apps with AI Agents and RAG using OpenAI, Anthropic, Gemini, and open-source models.
MiniCPM4 & MiniCPM4.1: Ultra-Efficient LLMs on End Devices, achieving a 3x+ generation speedup on reasoning tasks
The easiest way to use Agentic RAG in any enterprise
RAGFlow is a leading open-source Retrieval-Augmented Generation (RAG) engine that fuses cutting-edge RAG with Agent capabilities to create a superior context layer for LLMs
Open-source AI terminal and SSH client for EC2, databases, and Kubernetes.
PromptX · Leading AI Agent Context Platform
FULL Augment Code, Claude Code, Cluely, CodeBuddy, Comet, Cursor, Devin AI, Junie, Kiro, Leap.new, Lovable, Manus Agent Tools, NotionAI, Orchids.app, Perplexity, Poke, Qoder, Replit, Same.dev, Trae…
Rules and Knowledge to work better with agents such as Claude Code or Cursor
Experimental toolkit for auto-generating codebase documentation using LLMs
Pocket Flow: Codebase to Tutorial
Pocket Flow: 100-line LLM framework. Let Agents build Agents!
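The "100-line framework" idea above boils down to chaining small nodes into a flow and running them over shared state. A toy sketch of that pattern (the class and function names here are illustrative, not Pocket Flow's actual API):

```python
# Minimal node-and-flow sketch in the spirit of a 100-line agent framework.
# Each node wraps one step; nodes are chained and executed in order.
class Node:
    def __init__(self, fn):
        self.fn = fn      # the step this node performs
        self.next = None  # successor node, if any

    def then(self, other):
        """Chain another node after this one; returns it for fluent chaining."""
        self.next = other
        return other


def run(start, state):
    """Walk the chain from `start`, threading `state` through each node."""
    node = start
    while node:
        state = node.fn(state)
        node = node.next
    return state
```

For example, `plan = Node(lambda s: s + ["plan"]); plan.then(Node(lambda s: s + ["act"]))` builds a two-step flow that `run(plan, [])` executes in order.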
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek-R1, Qwen3, Gemma 3, TTS 2x faster with 70% less VRAM.
Train a Language Model with GRPO to create a schedule from a list of events and priorities
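GRPO training of the kind described above needs a reward that ranks sampled schedules against each other. A minimal sketch of such a reward, assuming a hypothetical `Event` schema (the repo's actual event format and reward will differ):

```python
from dataclasses import dataclass


# Hypothetical event representation for illustration only.
@dataclass
class Event:
    name: str
    start: float   # start time, hour of day
    end: float     # end time, hour of day
    priority: int  # higher = more important


def schedule_reward(schedule: list[Event]) -> float:
    """Toy reward: priority-weighted scheduled time, zeroed for overlaps.

    GRPO compares rewards within a group of sampled completions, so only
    the relative ranking of schedules matters, not the absolute scale.
    """
    ordered = sorted(schedule, key=lambda e: e.start)
    for a, b in zip(ordered, ordered[1:]):
        if b.start < a.end:  # overlapping events invalidate the schedule
            return 0.0
    return sum((e.end - e.start) * e.priority for e in ordered)
```

A valid schedule like a 30-minute priority-3 standup plus a 2-hour priority-5 focus block scores 1.5 + 10.0 = 11.5, while any overlapping pair scores 0.0, giving the policy a clear gradient toward conflict-free, high-priority schedules.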
An Open-Source Framework for Prompt-Learning.
Context7 MCP Server -- Up-to-date code documentation for LLMs and AI code editors
Code AI platform with Code Search & Cody
A framework that helps you quickly build AI-native IDE products. Acts as an MCP client, supporting Model Context Protocol (MCP) tools via MCP servers.
High-accuracy, high-efficiency multi-task fine-tuning framework for Code LLMs. This work has been accepted by KDD 2024.
An intelligent assistant serving the entire software development lifecycle, powered by a multi-agent framework and working with DevOps toolkits, code & doc repo RAG, etc.
Model Context Protocol Servers
Fair-code workflow automation platform with native AI capabilities. Combine visual building with custom code, self-host or cloud, 400+ integrations.
No fortress, purely open ground. OpenManus is Coming.
A high-throughput and memory-efficient inference and serving engine for LLMs
A Flexible Framework for Experiencing Cutting-edge LLM Inference Optimizations