If the video cannot be displayed, please visit YouTube directly to watch it
- Install ComfyUI:commit_id=82c53085616252e483a78abceac3b7e23495e019;
conda create -n comfyui python=3.10
- Download the SDXL models realcartoonXL_v7.safetensors and sdxl-动漫二次元_2.0.safetensors into 'models/checkpoints'; sdxl_img2img_inpainting_api.json uses 'sdxl-动漫二次元_2.0.safetensors' by default;
- Download the SDXL ControlNet union (promax) model, rename it to 'xinsir_controlnet_union_sdxl_promax.safetensors', and place it in 'models/controlnet';
- Install the following custom nodes into 'custom_nodes' so that the workflow can run end to end; if a node ships a requirements.txt, install it as well (see the sketch after this list);
- ComfyUI-Advanced-ControlNet:commit_id=172543b7252db3f15d9bebfa763abb59769624e5
- comfyui-tooling-nodes:commit_id=50c3ffdf649bd55a0b985d775e79cfe62bae1379
- Comfyui_CXH_joy_caption:commit_id=894b66159ddc0cd146dc913d27ee6c82ace80491
- ComfyUI_essentials:commit_id=64e38fd0f3b2e925573684f4a43727be80dc7d5b
- comfyui-art-venture:commit_id=50abaace756b96f5f5dc2c9d72826ef371afd45e
- ComfyUI_Custom_Nodes_AlekPet:commit_id=7b3d6f190aeca261422bdfb74b5af37937e5bf68
- comfyui_controlnet_aux:commit_id=5a049bde9cc117dafc327cded156459289097ea1
- comfyui-mixlab-nodes:commit_id=868c6085a8dcb9bdb2dc6d171d471abfafcc2794
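To keep the node versions reproducible, a small helper along the lines below can pin each already-cloned node to the commit listed above and install its requirements.txt when present. This is a convenience sketch, not part of the repository; the `COMFYUI_DIR` path is an assumption.

```python
# Sketch only: pin already-cloned custom nodes to the commits listed above and
# install their requirements. Assumes the nodes were cloned into ComfyUI/custom_nodes.
import os
import subprocess
import sys

COMFYUI_DIR = "ComfyUI"  # adjust to your ComfyUI checkout
NODES = {
    "ComfyUI-Advanced-ControlNet": "172543b7252db3f15d9bebfa763abb59769624e5",
    "comfyui-tooling-nodes": "50c3ffdf649bd55a0b985d775e79cfe62bae1379",
    "Comfyui_CXH_joy_caption": "894b66159ddc0cd146dc913d27ee6c82ace80491",
    "ComfyUI_essentials": "64e38fd0f3b2e925573684f4a43727be80dc7d5b",
    "comfyui-art-venture": "50abaace756b96f5f5dc2c9d72826ef371afd45e",
    "ComfyUI_Custom_Nodes_AlekPet": "7b3d6f190aeca261422bdfb74b5af37937e5bf68",
    "comfyui_controlnet_aux": "5a049bde9cc117dafc327cded156459289097ea1",
    "comfyui-mixlab-nodes": "868c6085a8dcb9bdb2dc6d171d471abfafcc2794",
}

for name, commit in NODES.items():
    node_dir = os.path.join(COMFYUI_DIR, "custom_nodes", name)
    if not os.path.isdir(node_dir):
        print(f"[skip] {name} is not cloned yet")
        continue
    subprocess.run(["git", "-C", node_dir, "checkout", commit], check=True)
    req = os.path.join(node_dir, "requirements.txt")
    if os.path.isfile(req):
        subprocess.run([sys.executable, "-m", "pip", "install", "-r", req], check=True)
```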
- Start the ComfyUI service;
cd ComfyUI
python main.py --listen
- Record the IP and port of the ComfyUI service and fill them into the "comfyui_ip_port" field of assets/model_configuration.json (see the sketch below).
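A minimal sketch of this step, assuming the default ComfyUI port 8188 and that your ComfyUI build exposes the standard /system_stats endpoint: it writes the address into the "comfyui_ip_port" field and checks that the service answers.

```python
# Sketch: store the ComfyUI address in assets/model_configuration.json and
# verify that the service is reachable before running Textoon.
import json
import urllib.request

CONFIG_PATH = "assets/model_configuration.json"
COMFYUI_IP_PORT = "127.0.0.1:8188"  # replace with the address printed by `python main.py --listen`

with open(CONFIG_PATH, "r", encoding="utf-8") as f:
    config = json.load(f)
config["comfyui_ip_port"] = COMFYUI_IP_PORT
with open(CONFIG_PATH, "w", encoding="utf-8") as f:
    json.dump(config, f, ensure_ascii=False, indent=2)

# A 200 response means the workflow API can be reached from this machine.
with urllib.request.urlopen(f"http://{COMFYUI_IP_PORT}/system_stats", timeout=5) as resp:
    print("ComfyUI reachable, HTTP", resp.status)
```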
- Set up the Conda environment and install the dependencies;
conda create -n textoon python=3.11
conda activate textoon
pip install torch==2.5.0 torchvision==0.20.0 torchaudio==2.5.0 --index-url https://download.pytorch.org/whl/cu121
# if CPU Only
pip install torch==2.5.0 torchvision==0.20.0 torchaudio==2.5.0 --index-url https://download.pytorch.org/whl/cpu
pip install -r requirements.txt
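An optional sanity check after installation: confirm the installed torch build and whether CUDA is visible.

```python
# Optional sanity check for the environment created above.
import torch

print(torch.__version__)          # expect 2.5.0 (with a +cu121 suffix for the CUDA build)
print(torch.cuda.is_available())  # False is expected for the CPU-only install
```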
- Configure the translation service. If you can access Google, set 'translation_services' to 'google'. Otherwise, we recommend Alibaba Cloud Translation Service: activate the service to obtain an AccessKey/SecretKey (AK/SK) and fill them into the environment variables Translate_AK and Translate_SK;
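When using Alibaba Cloud Translation, the credentials are read from the environment; a small check like the one below (an illustration, not repository code) catches a missing key before the pipeline starts.

```python
# Sketch: fail early if the Alibaba Cloud translation credentials are missing.
import os

for var in ("Translate_AK", "Translate_SK"):
    if not os.environ.get(var):
        raise RuntimeError(
            f"{var} is not set; it is required when 'translation_services' is not 'google'."
        )
```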
- Download TextoonPromptParsing (based on Qwen2.5-1.5B-Instruct) and place it in the model folder "checkpoints";
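To confirm the checkpoint landed in the expected place, you can try loading it with transformers; the folder name "checkpoints/TextoonPromptParsing" is an assumption here, so adjust it to wherever you extracted the model.

```python
# Sketch: verify the prompt-parsing model (a Qwen2.5-1.5B-Instruct fine-tune) loads locally.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_DIR = "checkpoints/TextoonPromptParsing"  # assumed local folder name
tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR)
print(type(model).__name__)  # expect a Qwen2-family causal LM
```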
- Run main.py;
# The example prompt (in Chinese) describes: hair tied into two playful twin ponytails, neatly trimmed medium-length hair, long bangs slightly covering the forehead, lively blue eyes; a round-neck light purple short-sleeved top with ruffled cuffs, high-waisted white shorts, and white sneakers.
python main.py --text_prompt "她将长发扎成两条俏皮的双马尾,中发修剪得干净利落,长刘海微微遮住额头,露出一双灵动的蓝色眼睛。她穿着一件圆领的浅紫色短袖上衣,袖口处有荷叶边设计,下身搭配一条高腰的白色短裤,脚上穿着一双白色运动鞋"
- The generated Live2D model is saved in the output folder and can be previewed in the Live2D viewer;
- We also provide a Gradio page for easy use. If the Textoon service and the ComfyUI service are on the same machine, you can enter the ComfyUI checkpoints path to retrieve and switch between different base models;
python app/gradio_demo.py
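Retrieving base models from the checkpoints path presumably amounts to listing the checkpoint files found there; the snippet below illustrates that idea only and is not the repository's app/gradio_demo.py.

```python
# Illustration: list the SDXL checkpoints that the Gradio page could offer as base models
# when Textoon and ComfyUI share a machine.
import glob
import os

def list_base_models(comfyui_checkpoints_path: str) -> list[str]:
    """Return the *.safetensors filenames found under the ComfyUI checkpoints path."""
    pattern = os.path.join(comfyui_checkpoints_path, "*.safetensors")
    return sorted(os.path.basename(p) for p in glob.glob(pattern))

print(list_base_models("/path/to/ComfyUI/models/checkpoints"))
```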
- Install Node.js and npm;
- Move the generated Live2D model to live2d-chatbot-demo/public/assets/;
- Modify the model path;
const cubism4Model_gen = "assets/20250417-200114_model/female_01Arkit_6.model3.json";
fetch('assets/20250417-200114_model/config.json')
- Start Live2D web rendering;
cd live2d-chatbot-demo
sh scripts/run_live2d.sh
- Use MediaPipe to drive Live2D models in real time (see the sketch below).
python scripts/mediapipe_live2d.py
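The sketch below shows the general idea behind real-time driving, assuming MediaPipe's (legacy) Face Mesh solution API: track facial landmarks from the webcam and map a few of them to Live2D parameters. How the values are forwarded to the web viewer is left out; scripts/mediapipe_live2d.py is the reference implementation.

```python
# Sketch: webcam face tracking with MediaPipe Face Mesh, mapped to example Live2D parameters.
# Stop with Ctrl+C. Landmark indices and scaling factors are ad hoc illustrations.
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(refine_landmarks=True)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        lm = results.multi_face_landmarks[0].landmark
        # Mouth openness from the inner-lip landmarks (13/14) -> e.g. ParamMouthOpenY;
        # head yaw from the nose tip's horizontal offset (landmark 1) -> e.g. ParamAngleX.
        mouth_open = abs(lm[13].y - lm[14].y) * 10.0
        head_yaw = (lm[1].x - 0.5) * 60.0
        print(f"mouth_open={mouth_open:.2f} head_yaw={head_yaw:.2f}")

cap.release()
```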
Many thanks to the open-source projects referenced above for their great work. If you find Textoon useful, please consider citing:
@article{he2025textoon,
  title={Textoon: Generating Vivid 2D Cartoon Characters from Text Descriptions},
  author={Chao He and Jianqiang Ren and Yuan Dong and Jianjing Xiang and Xiejie Shen and Weihao Yuan and Liefeng Bo},
  journal={arXiv preprint arXiv:2501.10020},
  year={2025}
}