Stars
Multilingual Document Layout Parsing in a Single Vision-Language Model
verl: Volcano Engine Reinforcement Learning for LLMs
Model Context Protocol Servers
PDF craft converts PDF files into various other formats, with a focus on processing PDF files of scanned books.
Generative Agents: Interactive Simulacra of Human Behavior
TweetedAt tells the time of a tweet based on its tweet ID
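For Snowflake-era IDs (late 2010 onward), the creation time is encoded in the high bits of the 64-bit tweet ID, so it can be recovered without calling the API. A minimal sketch of that decoding, assuming a standard Snowflake-format ID (older, sequential IDs need an interpolation approach like the one TweetedAt uses):

```python
from datetime import datetime, timezone

# Twitter's Snowflake epoch offset in milliseconds (2010-11-04 01:42:54.657 UTC).
TWITTER_EPOCH_MS = 1288834974657

def snowflake_to_datetime(tweet_id: int) -> datetime:
    """Recover the creation time encoded in a Snowflake-era tweet ID.

    Dropping the low 22 bits (worker and sequence fields) leaves the
    milliseconds elapsed since Twitter's custom epoch.
    """
    ms_since_epoch = (tweet_id >> 22) + TWITTER_EPOCH_MS
    return datetime.fromtimestamp(ms_since_epoch / 1000, tz=timezone.utc)

# Example with a Snowflake-era ID (hypothetical value for illustration):
print(snowflake_to_datetime(1212092628029698048))  # a UTC datetime in late 2019
```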
The Claude China mirror site is now live! The platform offers a 1:1 replica of the official Claude experience with direct access from mainland China, giving users a more convenient and smoother experience.
The latest Claude Pro subscription tutorial: how to register a Claude account, how to subscribe to Claude Pro, how to buy a native standalone Claude Pro account, and how to top up an existing Claude account (includes a guide to using Claude Code from within China).
Official release of InternLM series (InternLM, InternLM2, InternLM2.5, InternLM3).
Langchain-Chatchat (formerly Langchain-ChatGLM): local knowledge-based RAG and Agent applications built on Langchain and LLMs such as ChatGLM, Qwen, and Llama.
CodeGeeX4-ALL-9B, a versatile model for all AI software development scenarios, including code completion, code interpreter, web search, function calling, repository-level Q&A and much more.
Text- and image-to-video generation: CogVideoX (2024) and CogVideo (ICLR 2023)
Calculate tokens/s & GPU memory requirements for any LLM. Supports llama.cpp/ggml/bnb/QLoRA quantization
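The memory side of such a calculator starts from simple arithmetic: weights alone need roughly parameter count times bytes per parameter, with KV cache, activations, and framework overhead added on top. A rough sketch of that lower bound (function name and the exact overhead terms are illustrative, not the tool's actual implementation):

```python
def estimate_weights_memory_gib(n_params_billion: float, bits_per_param: int) -> float:
    """Rough lower bound on GPU memory needed just to hold the weights.

    Real usage is higher: KV cache, activations, and runtime overhead
    come on top, which a full calculator accounts for separately.
    """
    bytes_total = n_params_billion * 1e9 * bits_per_param / 8
    return bytes_total / 1024**3

# A 7B model: ~13 GiB in fp16, ~3.3 GiB with 4-bit (QLoRA-style) quantization.
print(round(estimate_weights_memory_gib(7, 16), 1))  # ~13.0
print(round(estimate_weights_memory_gib(7, 4), 1))   # ~3.3
```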
Gorilla: Training and Evaluating LLMs for Function Calls (Tool Calls)
[AAAI'25] SPRING: Learning Scalable and Pluggable Virtual Tokens for Retrieval-Augmented Large Language Models
Implementation for the paper: BotDGT: Dynamicity-aware Social Network Bot Detection with Dynamic Graph Transformers.
中文LLaMA-2 & Alpaca-2大模型二期项目 + 64K超长上下文模型 (Chinese LLaMA-2 & Alpaca-2 LLMs with 64K long context models)
Llama Chinese community: continuously aggregating the latest Llama learning resources to build the best open-source Chinese Llama LLM ecosystem; fully open source and commercially usable.
FinGLM: dedicated to building an open, public-interest, long-lived financial LLM project, using open source to advance "AI + finance".
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
Examples and guides for using the GLM APIs
Retrieval and Retrieval-augmented LLMs
📄 A collection of resume templates suited for Chinese (LaTeX, HTML/JS, and so on), maintained by @hoochanlon
RUCAIBox / EulerFormer
Forked from Ethan-TZ/EulerFormer. [SIGIR 2024] This is the official PyTorch implementation for the paper: "EulerFormer: Sequential User Behavior Modeling with Complex Vector Attention".