- Once you've followed the setup instructions below, you can run the application
- Run the following from the repository's root: `python -m t3 'Rekorde im Tierreich.gme' workdir`
- This will translate `Rekorde im Tierreich.gme` and store the translated GME and intermediate files in the `workdir` directory
- Run `python -m t3 -h` to see all available options
- Alternatively run the application from a Docker container (see instructions below)
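To translate several GME files in one go, a small wrapper can build the command line shown above for each file. A minimal sketch, assuming a folder of `.gme` files as input; the helper name is illustrative and the commands are printed rather than executed:

```python
import shlex
from pathlib import Path

def build_commands(gme_dir: str, workdir: str):
    """Build one `python -m t3` invocation per GME file (hypothetical helper)."""
    commands = []
    for gme in sorted(Path(gme_dir).glob("*.gme")):
        # Quote the filename: Tiptoi GME names often contain spaces.
        commands.append(f"python -m t3 {shlex.quote(str(gme))} {shlex.quote(workdir)}")
    return commands

for cmd in build_commands("gme", "workdir"):
    print(cmd)
```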
- Clone this repo with submodules: `git clone --recurse-submodules [email protected]:jtomori/t3.git`
- Install system dependencies: `sudo apt install sox ffmpeg`
- Install Python dependencies: `pip install numpy typing_extensions`, then `pip install -r requirements.txt`
- Store SeamlessExpressive models in the `SeamlessExpressive` folder in the repository's root
- Compile `libtiptoi.c`: `gcc tip-toi-reveng/libtiptoi.c -o libtiptoi`
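The system tools installed above need to be on `PATH` at run time. A quick check can be sketched as follows; the helper name is an assumption, not part of the project:

```python
import shutil

def missing_tools(tools=("sox", "ffmpeg")):
    """Return the subset of required command-line tools not found on PATH."""
    return [t for t in tools if shutil.which(t) is None]

missing = missing_tools()
print("missing tools:", ", ".join(missing) if missing else "none")
```

If anything is reported missing, re-run the `apt install` step above.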
- GPU inference requires the NVIDIA Container Toolkit
- Build the image with `docker build -t t3 .`
- Run a container with `docker run --runtime=nvidia --gpus all --volume ./SeamlessExpressive:/app/SeamlessExpressive --volume ./gme:/app/gme --volume ./workdir:/app/workdir --rm --name t3 t3 gme/name_of_file.gme workdir`
- Make sure that the `gme`, `SeamlessExpressive`, and `workdir` directories are present in your current directory
- `workdir` will contain the translated GME file along with intermediate files and a CSV report
- Omit `--runtime=nvidia --gpus all` to run CPU inference
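The long `docker run` invocation above can also be assembled programmatically, dropping the GPU flags when CPU inference is wanted. A sketch under the assumption that the mounts and image name match the command above; the function itself is hypothetical:

```python
def docker_run_args(gme_file: str, workdir: str = "workdir", gpu: bool = True):
    """Assemble the docker run argument list from the command above."""
    args = ["docker", "run"]
    if gpu:
        # These two flags are omitted for CPU inference.
        args += ["--runtime=nvidia", "--gpus", "all"]
    for mount in ("SeamlessExpressive", "gme", "workdir"):
        args += ["--volume", f"./{mount}:/app/{mount}"]
    args += ["--rm", "--name", "t3", "t3", gme_file, workdir]
    return args

print(" ".join(docker_run_args("gme/name_of_file.gme")))
```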
- Make sure that `python tests.py` and `./checks.sh` pass
- Finished setup for running GPU (or CPU) inference from a Docker container
- Initial release