*(Teaser figure: input image, aligned reconstruction, and animation with various poses and expressions.)*
This is the official PyTorch implementation of DECA.
DECA reconstructs a 3D head model with detailed facial geometry from a single input image. The resulting 3D head model can be easily animated. Please refer to the arXiv paper for more details.
The main features:
- Reconstruction: produces head pose, shape, detailed face geometry, and lighting information from a single image.
- Animation: animate the face with realistic wrinkle deformations.
- Robustness: tested on facial images in unconstrained conditions; robust to a wide range of poses, illuminations, and occlusions.
- Accuracy: state-of-the-art 3D face shape reconstruction on the NoW Challenge benchmark.
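Once the repo is set up (see below), the reconstruction can also be driven from Python rather than through the demo scripts. The following is a minimal sketch; the `decalib.deca.DECA` class with its `encode`/`decode` methods, `deca_cfg`, the `datasets.TestData` helper, and the codedict keys are taken from the demo scripts and may differ across versions, so treat it as illustrative rather than the canonical API.

```python
# A minimal sketch based on the demo scripts; names (decalib.deca.DECA,
# deca_cfg, datasets.TestData, codedict keys) are assumptions that may
# differ across versions of this repo.
import torch
from decalib.deca import DECA
from decalib.utils.config import cfg as deca_cfg
from decalib.datasets import datasets

device = 'cuda' if torch.cuda.is_available() else 'cpu'
deca = DECA(config=deca_cfg, device=device)

# Detect, crop, and load a single test image.
testdata = datasets.TestData('TestSamples/examples', iscrop=True)
image = testdata[0]['image'].to(device)[None, ...]  # (1, 3, 224, 224)

with torch.no_grad():
    # encode: image -> FLAME codes ('shape', 'exp', 'pose', 'cam', 'light', ...)
    codedict = deca.encode(image)
    # decode: codes -> coarse/detailed meshes and renderings
    opdict, visdict = deca.decode(codedict)
```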
Clone the repo:
```bash
git clone https://github.com/YadiraF/DECA
cd DECA
```

Requirements:
- Python 3.7 (numpy, skimage, scipy, opencv)
- PyTorch >= 1.6 (pytorch3d)
- face-alignment (optional, for detecting faces)

You can run
```bash
pip install -r requirements.txt
```
or use a virtual environment by running
```bash
bash install_pip.sh
```
Then follow the instructions to install pytorch3d.
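A quick sanity check, not part of the repo, that the core dependencies import and that the PyTorch version meets the requirement above:

```python
# Quick environment check (not part of the repo): verify the core
# dependencies import and that PyTorch is at least version 1.6.
import torch
import pytorch3d
import numpy, scipy, skimage, cv2

print('torch:', torch.__version__, '| CUDA available:', torch.cuda.is_available())
print('pytorch3d:', pytorch3d.__version__)
major, minor = (int(v) for v in torch.__version__.split('+')[0].split('.')[:2])
assert (major, minor) >= (1, 6), 'DECA expects PyTorch >= 1.6'
```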
Usage:
1. Prepare data (a quick check of these downloads is sketched after this list)
   a. download the FLAME model, choose FLAME 2020 and unzip it, then copy 'generic_model.pkl' into ./data
   b. download the DECA trained model and put it in ./data
   c. (optional) follow the instructions for the Albedo model to get 'FLAME_albedo_from_BFM.npz' and put it into ./data
2. Run demos
   a. reconstruction
      ```bash
      python demos/demo_reconstruct.py -i TestSamples/examples --saveDepth True --saveObj True
      ```
      to visualize the predicted 2D landmarks, 3D landmarks (red means non-visible points), coarse geometry, detailed geometry, and depth.
      You can also generate an obj file (which can be opened with MeshLab) that includes the texture extracted from the input image. Please run
      ```bash
      python demos/demo_reconstruct.py --help
      ```
      for more details.
   b. expression transfer (a conceptual sketch follows after this list)
      ```bash
      python demos/demo_transfer.py
      ```
   c. for the teaser gif (reposing and animation)
      ```bash
      python demos/demo_teaser.py
      ```
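Before running the demos, you can confirm that the downloads from step 1 are in place. A minimal check, assuming the trained model checkpoint is saved as `deca_model.tar` (the filename is an assumption, adjust it to match your download):

```python
# A small helper (not part of the repo) to verify the files from step 1
# landed in ./data; 'deca_model.tar' is the assumed checkpoint filename.
import os

required = ['data/generic_model.pkl', 'data/deca_model.tar']
optional = ['data/FLAME_albedo_from_BFM.npz']

for path in required:
    assert os.path.isfile(path), f'missing required file: {path}'
for path in optional:
    if not os.path.isfile(path):
        print(f'note: optional file not found: {path}')
print('data directory looks complete')
```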
More demos and training code coming soon.
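Conceptually, the expression-transfer demo keeps one subject's identity and swaps in another subject's expression. A minimal sketch under the same API assumptions as the reconstruction sketch above, using the FLAME convention that the 6D pose vector stacks global rotation (first 3 entries) and jaw rotation (last 3):

```python
# A conceptual sketch of expression transfer; `deca`, `identity_image`, and
# `expression_image` are assumed to be set up as in the reconstruction
# sketch above, and the codedict keys follow the FLAME parameterization.
import torch

with torch.no_grad():
    id_code = deca.encode(identity_image)     # subject whose shape is kept
    exp_code = deca.encode(expression_image)  # subject whose expression is borrowed

    # Swap expression coefficients and jaw pose (last 3 entries of the pose vector).
    id_code['exp'] = exp_code['exp']
    id_code['pose'][:, 3:] = exp_code['pose'][:, 3:]

    opdict, visdict = deca.decode(id_code)    # identity mesh with transferred expression
```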
DECA (ours) achieves 9% lower mean shape reconstruction error on the NoW Challenge dataset compared to the previous state-of-the-art method.
The cumulative error curves compare our approach with other recent methods (RingNet and Deng et al. have nearly identical performance, so their curves overlap). We use point-to-surface distance as the error metric, following the NoW Challenge.
For more details on the evaluation, please check the arXiv paper.
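The point-to-surface metric measures, for every vertex of the rigidly aligned ground-truth scan, the distance to the nearest point on the predicted mesh surface. A minimal sketch with trimesh (the file paths are placeholders, and this is not the official NoW evaluation code):

```python
# A minimal sketch of the point-to-surface error using trimesh; assumes the
# scan and the prediction are already rigidly aligned. Paths are placeholders.
import numpy as np
import trimesh

pred_mesh = trimesh.load('predicted_head.obj', process=False)
scan = trimesh.load('ground_truth_scan.ply', process=False)

# Distance from each scan vertex to its closest point on the predicted surface.
_, distances, _ = trimesh.proximity.closest_point(pred_mesh, scan.vertices)
print(f'mean: {np.mean(distances):.4f}  median: {np.median(distances):.4f}')
```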
If you find our work useful to your research, please consider citing:
```bibtex
@inproceedings{deca2020,
  title = {Learning an Animatable Detailed {3D} Face Model from In-The-Wild Images},
  author = {Feng, Yao and Feng, Haiwen and Black, Michael J. and Bolkart, Timo},
  booktitle = {arXiv},
  month = {Dec},
  year = {2020}
}
```
This code and model are available for non-commercial scientific research purposes as defined in the LICENSE file. By downloading and using the code and model you agree to the terms in the LICENSE.