- Shanghai Jiao Tong University
- Shanghai (Wuhan during winter and summer vacations)
- fangtiancheng.github.io
Starred repositories
Wan: Open and Advanced Large-Scale Video Generative Models
TurboDiffusion: 100–200× Acceleration for Video Diffusion Models
HunyuanImage-3.0: A Powerful Native Multimodal Model for Image Generation
Qwen-Image is a powerful image generation foundation model capable of complex text rendering and precise image editing.
Official inference repo for FLUX.2 models
Native and Compact Structured Latents for 3D Generation
Official Torch/CUDA Implementation of Faithful Contouring
Burrows-Wheeler Aligner for short-read alignment (see minimap2 for long-read alignment)
😎 A curated list of CVPR 2025 Oral papers (96 in total)
Qwen3-Omni is a natively end-to-end, omni-modal LLM developed by the Qwen team at Alibaba Cloud, capable of understanding text, audio, images, and video, as well as generating speech in real time.
ReconViaGen: Towards Accurate Multi-view 3D Object Reconstruction via Generation
Voyager is an interactive RGBD video generation model conditioned on camera input, and supports real-time 3D reconstruction.
Reference PyTorch implementation and models for DINOv3
High-Resolution 3D Assets Generation with Large Scale Hunyuan3D Diffusion Models.
Official implementation for "Explicitly Guided Information Interaction Network for Cross-modal Point Cloud Completion" (ECCV 2024)
C++ implementation for computing occupancy grids and signed distance functions (SDFs) from watertight meshes.
HoloPart: Generative 3D Part Amodal Segmentation
A project page template for academic papers. Demo at https://eliahuhorwitz.github.io/Academic-project-page-template/
assistant tools for attention visualization in deep learning
BertViz: Visualize Attention in Transformer Models
[AAAI2025] FedCFA: Alleviating Simpson’s Paradox in Model Aggregation with Counterfactual Federated Learning
[ICCV 2025 Highlight] OminiControl: Minimal and Universal Control for Diffusion Transformer
[NeurIPS 2025] Direct3D‑S2: Gigascale 3D Generation Made Easy with Spatial Sparse Attention