This is the official implementation and data release for FaceMap, presented at SIGGRAPH Asia 2024.
- Fast Forward: https://youtu.be/0WopIgwuR4o
- Paper Video: https://youtu.be/019GLJOxvTE
- Paper Manuscript: pdf
- Presentation: stay tuned for the SIGGRAPH Asia 2024 conference.
Contributors: Zhongshi Jiang, Kishore Venkateshan, Giljoo Nam, Meixu Chen, Romain Bachy, Jean-Charles Bazin, Alexandre Chapiro (Reality Labs, Meta)
Abstract: Humans are uniquely sensitive to faces. Recognizing fine detail in faces plays an important role in social cognition and identity, and it is key to human interaction. In this work, we present the first quantitative study of the relative importance of face regions to human observers. We created a dataset of 960 unique models featuring localized geometry and texture distortions relevant to visual computing applications. We then conducted an extensive subjective study examining the perceptual saliency of facial regions through the lens of distortion visibility. Our study comprises over 18,000 comparisons and indicates non-trivial preferences across distortion types and facial areas. Our results provide relevant insights for algorithm design, and we demonstrate our data's value in model compression applications.
11/01/2024: Initial commit and public release
As part of our open-source commitment, we release the data, the analysis scripts, and the distortion-generation scripts.
Our released data includes:
- Pre-generated distorted face models used in the main user study
- User response data from the main study
- Rendered images and videos used in the follow-up applications
- User response data from the follow-up validation studies (random landmarks, Gaussian Avatar evaluation, and re-mesh application evaluation)
data
- main_study # data and responses from the main study
  - MainStudyGeoms.zip # pre-generated distorted face models used in the study
  - head*_Experiment_History.csv # recorded user study responses
  - processed/bs100 # scaled JOD results
- applications # GaussAvatar and Remesh validation applications from Section 6
  - application_survey_data.zip # rendered images and videos used for the validation user study
  - *_study.csv # recorded responses from the validation user study
- randomized_landmarks # Suppl. Section B
  - head*_Experiment_History.csv # recorded user study responses
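For a quick first look at the released response CSVs, here is a minimal pandas sketch. It only assumes the file paths from the listing above and prints the column names rather than assuming them.

```python
# Minimal sketch for inspecting the released response CSVs.
# Paths follow the data listing above; column names are printed, not assumed.
from glob import glob

import pandas as pd

for path in sorted(glob("data/main_study/head*_Experiment_History.csv")):
    df = pd.read_csv(path)
    print(path, df.shape)     # file name and its (rows, columns)
    print(list(df.columns))   # recorded fields in this CSV
    print(df.head(3))         # peek at the first few responses
```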
As described in the paper, please follow the semi-manual process with Wrap4D
(Video Instruction) to transfer the distortions onto your own template.
The relevant data can be found in software/main_distortion_generation/wrap_transfer
analysis # scripts to produce the analysis and plots
- scaling # MATLAB code for comparison scaling (ASAP, pwcmp)
- visualize # notebooks and scripts that produce the visualizations and figures
git clone --recurse-submodules [email protected]:facebookresearch/facemap.git
We collected pairwise comparison data in our user study and then used pwcmp to scale the comparisons into JOD (just-objectionable-difference) scores; a simplified illustration of this scaling step follows the commands below.
# Populate the content inside pwcmp
git submodule update --init --recursive
% open Matlab in the analysis/scaling folder
cd analysis/scaling/
all_maps('../../data/main_study/head', 100, ["final"]) % compute the final JOD scores with 100 bootstrap samples; use fewer samples (e.g., <10) for a quick test of the code
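If you want intuition for what the scaling step does without running MATLAB, here is a simplified, pure-Python Thurstone Case V sketch. It only illustrates the idea of turning pairwise win counts into a common perceptual scale; it is not the Bayesian method implemented in pwcmp that produces the released JOD scores.

```python
# Simplified Thurstone Case V scaling of a pairwise comparison count matrix.
# Illustration only: pwcmp uses a Bayesian formulation for the released JOD
# scores, but the core idea (win counts -> z-scores -> common scale) is similar.
from statistics import NormalDist

import numpy as np

def thurstone_case_v(counts):
    """counts[i, j] = number of times condition i was preferred over condition j."""
    totals = counts + counts.T
    p = np.where(totals > 0, counts / np.maximum(totals, 1), 0.5)
    p = np.clip(p, 0.01, 0.99)                 # avoid infinite z-scores at 0 or 1
    z = np.vectorize(NormalDist().inv_cdf)(p)  # probit transform of preference rates
    np.fill_diagonal(z, 0.0)
    scores = z.mean(axis=1)
    return scores - scores.min()               # anchor the least-preferred condition at 0

# Toy example with 3 conditions; condition 0 wins most comparisons.
counts = np.array([[0., 8., 9.],
                   [2., 0., 6.],
                   [1., 4., 0.]])
print(thurstone_case_v(counts))
```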
- Install Python dependencies
conda create -c conda-forge -n facemapenv python=3.9 pandas meshplot jupyterlab -y
conda activate facemapenv
pip install -r requirements.txt
# Launch JupyterLab.
jupyter lab
- Then open the following notebook:
analysis/visualize/visualizing_csv.ipynb
You should see a list of visualizations as follows.
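If you also want to look at the distorted meshes themselves inside the notebook environment, a hypothetical meshplot sketch is below. It assumes MainStudyGeoms.zip has been extracted and contains triangulated OBJ files; the path and loader are placeholders, so adjust them to the actual contents of the release. The released notebook remains the authoritative reference for the visualizations.

```python
# Hypothetical sketch: view one distorted head mesh with meshplot inside JupyterLab.
# Assumes MainStudyGeoms.zip was extracted and contains triangulated OBJ files.
import numpy as np
import meshplot as mp

def load_obj(path):
    """Minimal OBJ reader: vertex positions and triangular faces only."""
    verts, faces = [], []
    with open(path) as fh:
        for line in fh:
            parts = line.split()
            if parts and parts[0] == "v":
                verts.append([float(x) for x in parts[1:4]])
            elif parts and parts[0] == "f":
                # keep only the vertex index (before any '/'); OBJ indices are 1-based
                faces.append([int(p.split("/")[0]) - 1 for p in parts[1:4]])
    return np.asarray(verts), np.asarray(faces)

v, f = load_obj("data/main_study/MainStudyGeoms/example_head.obj")  # placeholder path
mp.plot(v, f)  # interactive viewer rendered in the notebook
```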
Reproduce Figures 5 and 6 using the pre-scaled data (tested with MATLAB R2023a)
% open Matlab in the repository root path
addpath('analysis/visualize/')
plot_facemap_aggregated
The following plots will be reproduced.
software
- main_distortion_generation # scripts to generate the distorted head geometry/texture for the user study
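As a rough illustration of what a localized geometry distortion can look like in code, here is a toy stand-in. It is not the released generation pipeline; every name and parameter below is hypothetical.

```python
# Toy stand-in for a localized geometry distortion: random vertex displacement
# weighted by a soft Gaussian region mask around a chosen point on the face.
# Not the released generation pipeline; names and parameters are hypothetical.
import numpy as np

def localized_noise(verts, center, region_radius=0.02, amplitude=0.001, seed=0):
    """Displace vertices with noise that falls off with distance from `center`."""
    rng = np.random.default_rng(seed)
    sq_dist = np.sum((verts - center) ** 2, axis=1)
    weights = np.exp(-sq_dist / (2.0 * region_radius ** 2))  # soft region mask
    noise = rng.normal(scale=amplitude, size=verts.shape)
    return verts + weights[:, None] * noise

# Example with random points standing in for a head mesh's vertices.
verts = np.random.default_rng(1).uniform(-0.1, 0.1, size=(1000, 3))
distorted = localized_noise(verts, center=np.array([0.0, 0.02, 0.08]))
```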
Please contact Zhongshi Jiang ([email protected]) for further details about the code release.
See CONTRIBUTING.md and CODE_OF_CONDUCT.md.
Our code and data are released under the Creative Commons Attribution-NonCommercial (CC BY-NC) license.
We are grateful to Laura Trutoiu, Lina Chan, and the Quantified Wearability Team at Meta for their help in identifying and sourcing the face scans used in this work, and to the subjects for allowing us to use their likeness. We thank John Doublestein for help with the face templates.