- San Francisco
- in/zihong-chen
Starred repositories
What are the principles we can use to build LLM-powered software that is actually good enough to put in the hands of production customers?
Open Source framework for voice and multimodal conversational AI
Qwen3-Coder is the code version of Qwen3, the large language model series developed by Qwen team, Alibaba Cloud.
Qwen Code is a coding agent that lives in the digital world.
The AI coding agent built for the terminal.
A powerful AI coding agent. Built for the terminal.
Agent2Agent (A2A) – awesome A2A agents, tools, servers & clients, all in one place.
Official implementation of X-Master, a general-purpose tool-augmented reasoning agent.
Universal memory layer for AI Agents; Announcing OpenMemory MCP - local and secure memory management.
The TypeScript AI agent framework. ⚡ Assistants, RAG, observability. Supports any LLM: GPT-4, Claude, Gemini, Llama.
RooCodeInc / Roo-Code
Forked from cline/cline. Roo Code gives you a whole dev team of AI agents in your code editor.
🦉 OWL: Optimized Workforce Learning for General Multi-Agent Assistance in Real-World Task Automation
A live-stream development of RL tuning for LLM agents
DeepEP: an efficient expert-parallel communication library
🧙‍♀️ Move Fast and Break Nothing. End-to-end typesafe APIs made easy.
This repo contains the dataset and code for the paper "SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering?"
Clean and simple starter repo using the T3 Stack along with Expo React Native
Claude Code is an agentic coding tool that lives in your terminal, understands your codebase, and helps you code faster by executing routine tasks, explaining complex code, and handling git workflows.
📱 A template for your next React Native project: Expo, PNPM, TypeScript, TailwindCSS, Husky, EAS, GitHub Actions, Env Vars, expo-router, react-query, react-hook-form.
Janus-Series: Unified Multimodal Understanding and Generation Models
MoBA: Mixture of Block Attention for Long-Context LLMs
Align Anything: Training All-modality Model with Feedback
RAGEN leverages reinforcement learning to train LLM reasoning agents in interactive, stochastic environments.
Minimal reproduction of DeepSeek R1-Zero
A fork to add multimodal model training to open-r1
A collection of awesome work about R1!