This repository was archived by the owner on Jun 19, 2025. It is now read-only.
Anyone know how to set per_process_gpu_memory_fraction? #3795
Open
Description
I want to set the following TensorFlow configuration option. I forked this repo, and I can see that GPU options are set in transcribe.py, which is fine. However, how do I compile from source if I modify these settings? I want to run two processes concurrently; I have enough VRAM if I cap each one at 8 GB.
# Assume that you have 12GB of GPU memory and want to allocate ~4GB:
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
I found this Discourse thread, https://discourse.mozilla.org/t/how-to-restrict-transcribe-py-from-consuming-whole-gpu-memory/75555/5, which gives the code I need, but how do I compile from source once I change the code?
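As a side note on choosing the fraction itself: since per_process_gpu_memory_fraction is a fraction of total GPU memory, the value for a fixed per-process budget is just budget divided by total VRAM. A minimal sketch (the helper name, the 16 GB total, and the 8 GB budget are assumptions for illustration, not from this issue):

```python
# Hypothetical helper: compute the value to pass to
# tf.GPUOptions(per_process_gpu_memory_fraction=...) so that each
# concurrent process gets a fixed VRAM budget.
def memory_fraction(budget_gb, total_gb):
    """Return the fraction of total GPU memory one process may use."""
    if not 0 < budget_gb <= total_gb:
        raise ValueError("budget must be positive and no larger than total VRAM")
    return budget_gb / total_gb

# Example: two processes on an assumed 16 GB GPU, 8 GB each.
fraction = memory_fraction(8, 16)
print(fraction)  # 0.5
```

With TF1-style sessions this would then plug into the snippet above, e.g. tf.GPUOptions(per_process_gpu_memory_fraction=memory_fraction(8, 16)); each of the two processes started with that config should stay within roughly half the card.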