ComfyUI-llama-cpp

Run LLM/VLM models natively in ComfyUI, powered by llama.cpp.
[📃 Chinese version]

Changelog

2025-11-03

  • Initial release; adds support for Qwen3-VL

Installation

Install the node:

cd ComfyUI/custom_nodes
git clone https://github.com/lihaoyun6/ComfyUI-llama-cpp.git
python -m pip install -r ComfyUI-llama-cpp/requirements.txt

Install llama.cpp
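
The node drives llama.cpp from ComfyUI's Python environment, so a llama.cpp runtime must be reachable there. The exact dependency is not spelled out in this snapshot; one common route (an assumption here, not necessarily this project's pinned requirement) is the llama-cpp-python bindings:

python -m pip install llama-cpp-python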

Download models:

  • Place your model files in the ComfyUI/models/LLM folder.

If you need a VLM to process image input, also download the model's matching mmproj weights; a sketch of the expected layout follows.
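
As an illustration only, with hypothetical filenames, a Qwen3-VL setup in the models folder might look like:

ComfyUI/models/LLM/
├── Qwen3-VL-4B-Instruct-Q4_K_M.gguf (hypothetical model file)
└── mmproj-Qwen3-VL-4B-Instruct-F16.gguf (hypothetical mmproj file)

The mmproj file carries the multimodal projector that llama.cpp pairs with the GGUF language model so image features can be mapped into the model's embedding space. The minimal Python sketch below shows that pairing through the llama-cpp-python bindings; it is not this node's actual code, the chat handler class is picked purely for illustration (the right handler depends on the model family), and the file paths are the hypothetical ones above:

import base64

from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

# Load the multimodal projector (the "mmproj" weights); without it the
# model cannot accept image input. Llava15ChatHandler is illustrative only.
chat_handler = Llava15ChatHandler(
    clip_model_path="ComfyUI/models/LLM/mmproj-Qwen3-VL-4B-Instruct-F16.gguf"
)

# Load the quantized GGUF language model and attach the projector.
llm = Llama(
    model_path="ComfyUI/models/LLM/Qwen3-VL-4B-Instruct-Q4_K_M.gguf",
    chat_handler=chat_handler,
    n_ctx=4096,  # leave room for image tokens plus the text prompt
)

# Images are passed as base64 data URIs in OpenAI-style chat messages.
with open("input.png", "rb") as f:
    image_uri = "data:image/png;base64," + base64.b64encode(f.read()).decode()

response = llm.create_chat_completion(
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": image_uri}},
            {"type": "text", "text": "Describe this image."},
        ],
    }]
)
print(response["choices"][0]["message"]["content"])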
