Run LLM/VLM models natively in ComfyUI, powered by llama.cpp
[📃中文版]
- Initial release: added support for Qwen3-VL
To install, clone this repo into your ComfyUI custom nodes folder and install its dependencies:

```shell
cd ComfyUI/custom_nodes
git clone https://github.com/lihaoyun6/ComfyUI-llama-cpp.git
python -m pip install -r ComfyUI-llama-cpp/requirements.txt
```

- Install a prebuilt `llama-cpp-python` wheel from https://github.com/JamePeng/llama-cpp-python/releases, or build it from source for your system; a rough sketch of both options follows.
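The wheel filename below is only a placeholder (pick the release asset matching your OS, Python version, and compute backend), and the `CMAKE_ARGS` build flags follow the upstream llama-cpp-python docs, so JamePeng's fork may differ:

```shell
# Option A: install a prebuilt wheel downloaded from the releases page
# (placeholder filename -- substitute the asset that matches your system)
python -m pip install llama_cpp_python-0.3.16-cp311-cp311-win_amd64.whl

# Option B: build llama-cpp-python from source with GPU support
# (GGML_CUDA per upstream docs; use -DGGML_METAL=on on Apple Silicon)
CMAKE_ARGS="-DGGML_CUDA=on" python -m pip install llama-cpp-python --no-cache-dir
```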
- Place your model files in the `ComfyUI/models/LLM` folder. If you need a VLM to process image input, don't forget to also download the matching `mmproj` weights; an example layout is sketched below.
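Purely as an illustration, the folder might look like this for a Qwen3-VL model (the filenames are hypothetical; actual GGUF names depend on the quantization and the repo you download from):

```
ComfyUI/models/LLM/
├── Qwen3-VL-8B-Instruct-Q4_K_M.gguf        # main model weights (example name)
└── mmproj-Qwen3-VL-8B-Instruct-F16.gguf    # vision projector for image input (example name)
```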
Thanks to:
- llama-cpp-python @JamePeng
- ComfyUI-llama-cpp @kijai
- ComfyUI @comfyanonymous