Real-time, flexible and extensible face-swapping framework powered by Rixy Ai
This deepfake software is designed to be a productive tool for the AI-generated media industry. It can assist artists in animating custom characters, creating engaging content, and even using models for clothing design.
We are aware of the potential for unethical applications and are committed to preventative measures. We will continue to develop this project responsibly, adhering to the law and ethics.
- Ethical Use: Users are expected to use this software responsibly and legally. If using a real person's face, obtain their consent and clearly label any output as a deepfake when sharing online.
- Content Restrictions: The software includes built-in checks to prevent processing inappropriate media, such as nudity, graphic content, or sensitive material.
- Legal Compliance: We adhere to all relevant laws and ethical guidelines. If legally required, we may shut down the project or add watermarks to the output.
- User Responsibility: We are not responsible for end-user actions. Users must ensure their use of the software aligns with ethical standards and legal requirements.
By using this software, you agree to these terms and commit to using it in a manner that respects the rights and dignity of others.
- Select a face
- Select which camera to use
- Press live!
- Mouth Mask: retain your original mouth for accurate movement
- Many Faces: use different faces on multiple subjects simultaneously
- Watch movies with any face in real time
- Run live shows and performances
- Create your most viral meme yet
- Surprise people on Omegle
Please be aware that the installation requires technical skills and is not for beginners. Consider downloading the prebuilt version.
This is more likely to work on your computer but will be slower as it utilizes the CPU.
1. Set up Your Platform
- Python (3.10 recommended)
- pip
- git
- ffmpeg (on Windows, you can install it via PowerShell: iex (irm ffmpeg.tc.ht))
- Visual Studio 2022 Runtimes (Windows)
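Before cloning, it can help to confirm the prerequisites are on your PATH and that the interpreter matches the recommended version. A minimal sketch (the helpers are illustrative, not part of the project; the version bound reflects the "3.10 recommended" note above):

```python
# Sanity-check the local toolchain before installing (illustrative only).
import shutil
import sys

def has_tools(names):
    """Return the subset of required command-line tools missing from PATH."""
    return [n for n in names if shutil.which(n) is None]

def is_recommended(version):
    """True if (major, minor) matches the Python version this project recommends."""
    return version[:2] == (3, 10)

print(has_tools(["git", "ffmpeg"]))      # [] when both tools are installed
print(is_recommended(sys.version_info))  # True only under Python 3.10
```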
2. Clone the Repository
git clone https://github.com/hacksider/Synth.git
cd Synth
3. Download the Models
Place these files in the "models" folder.
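Since a missing or misplaced model file is a common cause of loading errors later, a quick check like this can save time. The helper and the file name below are placeholders, not actual model names from the project:

```python
# Report which expected model files are missing from the "models" folder
# (illustrative; substitute the real file names from the download links above).
from pathlib import Path

def missing_models(root, required):
    """Return the names in `required` that are absent from root/models."""
    models_dir = Path(root) / "models"
    return [name for name in required if not (models_dir / name).is_file()]

print(missing_models(".", ["example_model.onnx"]))
```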
4. Install Dependencies
We highly recommend using a venv to avoid issues.
For Windows:
python -m venv venv
venv\Scripts\activate
pip install -r requirements.txt
For Linux:
# Ensure you use the installed Python 3.10
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
For macOS:
Apple Silicon (M1/M2/M3) requires specific setup:
# Install Python 3.10 (specific version is important)
brew install [email protected]
# Install tkinter package (required for the GUI)
brew install [email protected]
# Create and activate virtual environment with Python 3.10
python3.10 -m venv venv
source venv/bin/activate
# Install dependencies
pip install -r requirements.txt
If something goes wrong and you need to reinstall the virtual environment:
# Remove the existing virtual environment
rm -rf venv
# Recreate the virtual environment (use python3.10 on macOS)
python -m venv venv
source venv/bin/activate
# Install the dependencies again
pip install -r requirements.txt
Run: If you don't have a GPU, you can run Synth with python run.py. Note that the initial execution will download models (~300MB).
CUDA Execution Provider (Nvidia)
- Install CUDA Toolkit 11.8.0
- Install dependencies:
pip uninstall onnxruntime onnxruntime-gpu
pip install onnxruntime-gpu==1.16.3
- Usage:
python run.py --execution-provider cuda
CoreML Execution Provider (Apple Silicon)
Apple Silicon (M1/M2/M3) specific installation:
- Make sure you've completed the macOS setup above using Python 3.10.
- Install dependencies:
pip uninstall onnxruntime onnxruntime-silicon
pip install onnxruntime-silicon==1.13.1
- Usage (important: specify Python 3.10):
python3.10 run.py --execution-provider coreml
Important Notes for macOS:
- You must use Python 3.10, not newer versions like 3.11 or 3.13
- Always run with the python3.10 command, not just python, if you have multiple Python versions installed
- If you get an error about _tkinter missing, reinstall the tkinter package: brew reinstall [email protected]
- If you get model loading errors, check that your models are in the correct folder
- If you encounter conflicts with other Python versions, consider uninstalling them:
# List all installed Python versions
brew list | grep python
# Uninstall conflicting versions if needed
brew uninstall --ignore-dependencies [email protected] [email protected]
# Keep only Python 3.10
brew cleanup
CoreML Execution Provider (Apple Legacy)
- Install dependencies:
pip uninstall onnxruntime onnxruntime-coreml
pip install onnxruntime-coreml==1.13.1
- Usage:
python run.py --execution-provider coreml
DirectML Execution Provider (Windows)
- Install dependencies:
pip uninstall onnxruntime onnxruntime-directml
pip install onnxruntime-directml==1.15.1
- Usage:
python run.py --execution-provider directml
OpenVINO™ Execution Provider (Intel)
- Install dependencies:
pip uninstall onnxruntime onnxruntime-openvino
pip install onnxruntime-openvino==1.15.0
- Usage:
python run.py --execution-provider openvino
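Each --execution-provider value corresponds to an ONNX Runtime provider identifier, and checking onnxruntime.get_available_providers() is a quick way to confirm you installed the right onnxruntime flavor. The helper below is illustrative and not part of the project; the identifiers are ONNX Runtime's own provider names:

```python
# Map the CLI-style short names above to ONNX Runtime provider identifiers
# (illustrative helper; the identifiers are ONNX Runtime's documented names).
SHORT_TO_ORT = {
    "cpu": "CPUExecutionProvider",
    "cuda": "CUDAExecutionProvider",
    "coreml": "CoreMLExecutionProvider",
    "directml": "DmlExecutionProvider",
    "openvino": "OpenVINOExecutionProvider",
}

def ort_provider(short_name):
    """Translate e.g. 'cuda' -> 'CUDAExecutionProvider'."""
    return SHORT_TO_ORT[short_name.lower()]

# With onnxruntime installed, you could then verify availability:
# import onnxruntime as ort
# ort_provider("cuda") in ort.get_available_providers()
print(ort_provider("cuda"))
```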
1. Image/Video Mode
- Execute python run.py.
- Choose a source face image and a target image/video.
- Click "Start".
- The output will be saved in a directory named after the target video.
2. Webcam Mode
- Execute python run.py.
- Select a source face image.
- Click "Live".
- Wait for the preview to appear (10-30 seconds).
- Use a screen capture tool like OBS to stream.
- To change the face, select a new source image.
options:
-h, --help show this help message and exit
-s SOURCE_PATH, --source SOURCE_PATH select a source image
-t TARGET_PATH, --target TARGET_PATH select a target image or video
-o OUTPUT_PATH, --output OUTPUT_PATH select output file or directory
--frame-processor FRAME_PROCESSOR [FRAME_PROCESSOR ...] frame processors (choices: face_swapper, face_enhancer, ...)
--keep-fps keep original fps
--keep-audio keep original audio
--keep-frames keep temporary frames
--many-faces process every face
--map-faces map source faces to target faces
--mouth-mask mask the mouth region
--video-encoder {libx264,libx265,libvpx-vp9} adjust output video encoder
--video-quality [0-51] adjust output video quality
--live-mirror mirror the live camera display, as in a front-facing camera
--live-resizable the live camera frame is resizable
--max-memory MAX_MEMORY maximum amount of RAM in GB
--execution-provider {cpu} [{cpu} ...] available execution provider (choices: cpu, ...)
--execution-threads EXECUTION_THREADS number of execution threads
-v, --version show program's version number and exit
Looking for a CLI mode? Passing the -s/--source argument makes the program run in CLI mode.
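A headless run might look like the following, combining the flags documented above (file names are placeholders). The sketch assembles and prints the command so you can review it before executing:

```shell
# Headless example (placeholder file names; flags are documented above).
SRC="face.jpg"; TGT="input.mp4"; OUT="output.mp4"
CMD="python run.py -s $SRC -t $TGT -o $OUT --keep-fps --keep-audio"
echo "$CMD"
# To execute instead of printing: eval "$CMD"
```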
- Deep-Live-Cam: project which provided a great face-swapping framework foundation to build from
- ffmpeg: for making video-related operations easy
- deepinsight: for their insightface project which provided a well-made library and models. Please be reminded that the use of the model is for non-commercial research purposes only.
- havok2-htwo: for sharing the code for webcam
- GosuDRM: for the open version of roop
- pereiraroland26: Multiple faces support
- vic4key: For supporting/contributing to this project
- kier007: for improving the user experience
- qitianai: for multi-lingual support
- Justin Malonson: for overseeing technology and innovation strategy
- Rixy-Ai: for overseeing AI architecture and development