ipex-llm

Repositories

Showing 1 of 1 repositories
  • ipex-llm (Public), forked from intel/ipex-llm

    Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, DeepSeek, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, vLLM, DeepSpeed, Axolotl, etc.

    Python · 66 stars · Apache-2.0 license · 1,441 forks · 0 open issues · 0 open pull requests · Updated Apr 27, 2025

People

This organization has no public members.
