Changes
"export to onnx" button has been replaced with the new "Convert to ONNX" setting in Advanced setting. The behavior of this setting has also changed. When this option is checked, uploaded or selected models (if not converted before) will be automatically converted to ONNX in the same slot. When unchecked, the original PyTorch model will be used. Note that this option does not replace the original model, but adds an ONNX variant to it. If uploaded model is already in ONNX format, it will be loaded as is.
The performance monitor now shows the model type, including the runtime in use (e.g., onnxRVC for ONNX RVC models and pyTorchRVCv2 for original RVC v2 models).
Improvements
WASAPI no longer requires the input and output devices to use matching sample rates; the audio is automatically resampled by the system audio mixer. However, matching sample rates is still recommended when possible.
Fixes
ASIO channel selection now correctly selects an input/output channel.
"Operation in progress" dialog will now appear when changing long-running options in Advanced settings.
Fixed a potential bug where an FP16 ONNX model could fail to load if it had been generated previously and then removed.
Model settings (Pitch, Index, etc.) no longer reset to their last saved values after changing the GPU.
Experimental
When running inference on the CPU, the contentvec embedder model is quantized to INT8 precision. This significantly reduces RAM usage and slightly reduces CPU usage. There is currently no option to opt out of this behavior.
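To illustrate why INT8 quantization saves memory, here is a minimal sketch of symmetric per-tensor weight quantization in NumPy. The shapes and the symmetric scheme are illustrative assumptions; the application itself relies on its runtime's built-in quantizer rather than code like this.

```python
# Hedged sketch: symmetric per-tensor INT8 quantization of FP32 weights.
import numpy as np

def quantize_int8(w: np.ndarray):
    """Map FP32 weights to INT8 so the largest magnitude lands at +/-127."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate FP32 weights from the INT8 representation."""
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)  # toy weight matrix
q, scale = quantize_int8(w)

# INT8 storage is 4x smaller than FP32, which is where the RAM savings come from.
print(w.nbytes // q.nbytes)  # → 4
```

Each weight is reconstructed to within half a quantization step, which is why the accuracy cost is usually small relative to the memory saved.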
Miscellaneous
Updated WebUI npm dependencies.
Known issues
WDM-KS and true ASIO devices produce crackling audio. For the lowest latency, the current workaround is to use WASAPI or FlexASIO.