
fix: update XTTS/Tortoise GPT code for HF transformers 4.52+ #414


Merged: 8 commits into dev from tf452 on Jun 12, 2025

Conversation

eginhard (Member) commented on Jun 3, 2025

  • adapt to HF changes that moved a lot of code from generate() into separate helper methods, so we don't need to duplicate that code in stream_generator.py anymore and can just call these methods (a minimal streaming sketch using the public API follows below)
  • remove some obsolete code in order to fall back to the official HF implementations, which have e.g. improved caching support
  • remove never-used branches, such as beam search, from xtts/stream_generator.py to shorten that code
  • increase the minimum transformers version to 4.52 (a runtime version-guard sketch follows this list)
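For context, here is a minimal sketch (not part of the PR, which only bumps the dependency constraint) of how the new minimum could be checked at runtime; the packaging import and the error message are assumptions:

```python
from packaging.version import Version

import transformers

# Hypothetical runtime guard mirroring the new minimum requirement (transformers>=4.52).
if Version(transformers.__version__) < Version("4.52.0"):
    raise RuntimeError(
        f"transformers>=4.52 is required, found {transformers.__version__}"
    )
```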

Fixes #400
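As an illustration of the general direction (relying on the upstream generation machinery instead of re-implementing it), below is a minimal, self-contained streaming example using transformers' public TextIteratorStreamer. It is not the PR's stream_generator.py code, and the gpt2 checkpoint is only a stand-in:

```python
from threading import Thread

from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer

# Stand-in model; XTTS/Tortoise use their own GPT-style decoders.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Streaming with the built-in HF helpers:", return_tensors="pt")
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True)

# generate() runs in a background thread and pushes decoded text into the streamer.
thread = Thread(
    target=model.generate,
    kwargs=dict(**inputs, streamer=streamer, max_new_tokens=30, do_sample=False),
)
thread.start()
for chunk in streamer:
    print(chunk, end="", flush=True)
thread.join()
```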

eginhard requested review from KarlHajal and removed the request for KarlHajal on June 3, 2025 19:07
eginhard marked this pull request as draft on June 3, 2025 22:58
eginhard changed the title from "fix(xtts): remove now obsolete code for transformer 4.52 compatibility" to "fix: update XTTS/Tortoise GPT code for HF transformers 4.50+" on Jun 4, 2025
eginhard requested a review from KarlHajal on June 5, 2025 08:10
eginhard marked this pull request as ready for review on June 5, 2025 08:10
eginhard changed the title from "fix: update XTTS/Tortoise GPT code for HF transformers 4.50+" to "fix: update XTTS/Tortoise GPT code for HF transformers 4.52+" on Jun 5, 2025
eginhard added 8 commits June 12, 2025 01:07
- adapt to HF changes that moved a lot of code from generate() into separate helper methods, so we don't need to duplicate that code in stream_generator.py anymore
- remove some obsolete code to fall back to official HF implementations that have e.g. improved caching support
- remove branches like beam search in stream_generator.py that are never used, to shorten that code
scaled_dot_product_attention is now available in all PyTorch versions we support (a minimal usage sketch follows after the commit list)
This reverts commit 75cc42a.

Obsolete because 1.5.0 has been yanked
It doesn't seem possible to reliably do this with NumPy>2 because of np._core.multiarray.scalar
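For reference on the scaled_dot_product_attention commit above, here is a minimal sketch of the fused attention call, part of PyTorch since 2.0; the tensor shapes are arbitrary examples:

```python
import torch
import torch.nn.functional as F

# Shapes are (batch, num_heads, seq_len, head_dim); values chosen only for illustration.
q = torch.randn(1, 8, 16, 64)
k = torch.randn(1, 8, 16, 64)
v = torch.randn(1, 8, 16, 64)

# Fused attention kernel available in every PyTorch release the project now supports.
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([1, 8, 16, 64])
```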
eginhard merged commit d59aca3 into dev on Jun 12, 2025
30 checks passed
eginhard deleted the tf452 branch on June 12, 2025 07:48
Linked issue closed by this pull request: XTTS compatibility with transformers>=4.52