
FACSvatar

Yvo02 edited this page Dec 18, 2025 · 1 revision

How to use FACSvatar with SEE

This guide explains how to use FACSvatar inside SEE to drive facial animation using FACS (Facial Action Coding System) data. FACSvatar allows manual or automated streaming of FACS values (for example from CSV files or OpenFace) into SEE.

FACSvatar documentation reference: FACSvatar

Overview

FACSvatar in SEE consists of several modules located under: Tools/FACSvatar/.

These modules allow you to:

  • Stream FACS data via a bridge
  • Convert FACS Action Units (AUs) to blendshape values
  • Send data manually (CSV, custom scripts)
  • Use a modified version of OpenFace for live FACS extraction
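To illustrate the AU-to-blendshape step, here is a minimal sketch of such a conversion. The blendshape names, the mapping table, and the 0–1 scaling are assumptions for illustration only; FACSvatar's actual conversion logic lives in `Tools/FACSvatar/process_facstoblend`.

```python
# Illustrative AU-to-blendshape conversion (not FACSvatar's real mapping).
# AU names follow FACS (e.g. AU12 "lip corner puller"); the blendshape
# names on the right are hypothetical avatar-side targets.
AU_TO_BLENDSHAPE = {
    "AU06": "cheekSquint",
    "AU12": "mouthSmile",
    "AU45": "eyeBlink",
}

def aus_to_blendshapes(au_values):
    """Convert AU intensities (OpenFace reports roughly 0-5)
    into blendshape weights in the 0-1 range."""
    blendshapes = {}
    for au, intensity in au_values.items():
        target = AU_TO_BLENDSHAPE.get(au)
        if target is not None:
            blendshapes[target] = min(intensity / 5.0, 1.0)
    return blendshapes
```

For example, `aus_to_blendshapes({"AU12": 2.5})` yields `{"mouthSmile": 0.5}`; AUs without a mapping entry are simply dropped.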

To function correctly, two FACSvatar modules must always be running:

  • process_bridge
  • process_facstoblend

Python Environment Setup

FACSvatar requires a dedicated Python environment.

An environment.yml file is provided under Tools/FACSvatar; you can use it with Anaconda. It contains all Python packages required to run FACSvatar.
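With Anaconda (or Miniconda) installed, the environment can be created roughly as follows. The environment name `facsvatar` is an assumption; check the `name:` field in environment.yml for the actual name.

```shell
# Create and activate the FACSvatar Python environment
# (assumes Anaconda/Miniconda is on the PATH).
cd Tools/FACSvatar
conda env create -f environment.yml
conda activate facsvatar   # environment name assumed; see environment.yml
```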

Required Runtime Components

The following components are required at runtime:

  • SEE running
  • An active FACS input source
  • process_bridge/main.py
  • process_facstoblend/main.py

Facial Animation Conflicts

FACSvatar directly controls facial blendshapes. If FACSvatar is used, other systems that override facial animation should be disabled, for example:

  • SALSA Lip Sync
  • Other facial animation or blendshape controllers

Leaving these enabled may result in overridden, unstable, or reset facial animations.

Startup Workflow

1. Launch SEE as usual.

2. Prepare an Input Source:

  • Option A: Modified OpenFace (Live Capture)

    • Use the FACSvatar-modified OpenFace version
    • Enables real-time facial motion capture
  • Option B: Offline CSV Files

    • Use pre-recorded CSV files containing FACS AU values compatible with FACSvatar
    • An example can be found under Tools/FACSvatar/input_facsfromcsv
  • Option C: Custom Script

    • Implement a custom sender using ZeroMQ
    • Stream FACS AU values directly to FACSvatar
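For Option C, a custom sender might look like the sketch below. The multipart layout `[topic, timestamp, JSON payload]`, the payload keys, and the port `5570` are assumptions based on FACSvatar's ZeroMQ conventions; check the bridge module's configuration for the actual address and message format.

```python
import json
import time

def build_facs_message(frame, timestamp, au_values, topic=b"facs"):
    """Build a FACSvatar-style multipart message: [topic, timestamp, JSON payload].
    The payload keys ("frame", "timestamp", "au_r") are assumptions based on
    FACSvatar's CSV input format."""
    payload = {"frame": frame, "timestamp": timestamp, "au_r": au_values}
    return [topic, str(timestamp).encode(), json.dumps(payload).encode()]

def send_frames(frames, address="tcp://127.0.0.1:5570"):
    """Stream (frame, timestamp, au_values) tuples to the FACSvatar bridge.
    The address/port is hypothetical; consult process_bridge for the real one."""
    import zmq  # pyzmq; imported lazily so build_facs_message works without it
    ctx = zmq.Context()
    sock = ctx.socket(zmq.PUB)
    sock.bind(address)
    time.sleep(0.5)  # give subscribers a moment to connect before publishing
    for frame, ts, aus in frames:
        sock.send_multipart(build_facs_message(frame, ts, aus))
    sock.close()
    ctx.term()
```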

3. Start the FACSvatar Bridge

  • The bridge module handles communication between input sources and SEE.
    • Path: Tools/FACSvatar/process_bridge/.
    • Run: main.py.

4. Start FACS-to-Blendshape Conversion

  • This module converts FACS Action Units into avatar blendshape values.
    • Path: Tools/FACSvatar/process_facstoblend/
    • Run: main.py
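Steps 3 and 4 can be started from the command line roughly as follows, each module in its own terminal (this assumes the Python environment from Tools/FACSvatar/environment.yml is active):

```shell
# Terminal 1: the bridge between input sources and SEE
cd Tools/FACSvatar
python process_bridge/main.py

# Terminal 2: AU-to-blendshape conversion
cd Tools/FACSvatar
python process_facstoblend/main.py
```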

5. Send FACS Data

Once both required modules are running, FACS data can be streamed:

  • Live from modified OpenFace
  • From CSV files
  • From a custom ZeroMQ script
