I made this Index TTS2 portable 1-click install for Windows that supports Nvidia GTX 10XX and 16XX, RTX Quadro, and RTX 20XX, 30XX, 40XX, and 50XX GPUs. During installation it updates Index TTS2 and installs Torch 2.8.0+cu128, and it creates "Launch Index TTS2" and "Voices" desktop shortcuts. All Index TTS2 updates come directly from the original index-tts/index-tts repository.
Click here to jump to the Installation section.
IndexTTS2: A Breakthrough in Emotionally Expressive and Duration-Controlled Auto-Regressive Zero-Shot Text-to-Speech
Existing autoregressive large-scale text-to-speech (TTS) models have advantages in speech naturalness, but their token-by-token generation mechanism makes it difficult to precisely control the duration of synthesized speech. This becomes a significant limitation in applications requiring strict audio-visual synchronization, such as video dubbing.
This paper introduces IndexTTS2, which proposes a novel, general, and autoregressive model-friendly method for speech duration control.
The method supports two generation modes: one explicitly specifies the number of generated tokens to precisely control speech duration; the other freely generates speech in an autoregressive manner without specifying the number of tokens, while faithfully reproducing the prosodic features of the input prompt.
Furthermore, IndexTTS2 achieves disentanglement between emotional expression and speaker identity, enabling independent control over timbre and emotion. In the zero-shot setting, the model can accurately reconstruct the target timbre (from the timbre prompt) while perfectly reproducing the specified emotional tone (from the style prompt).
To enhance speech clarity in highly emotional expressions, we incorporate GPT latent representations and design a novel three-stage training paradigm to improve the stability of the generated speech. Additionally, to lower the barrier for emotional control, we designed a soft instruction mechanism based on text descriptions by fine-tuning Qwen3, effectively guiding the generation of speech with the desired emotional orientation.
Finally, experimental results on multiple datasets show that IndexTTS2 outperforms state-of-the-art zero-shot TTS models in terms of word error rate, speaker similarity, and emotional fidelity. Audio samples are available at: IndexTTS2 demo page.
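As a concrete illustration of the timbre and emotion disentanglement described above, here is a minimal zero-shot inference sketch using the Python API. The class path (indextts.infer_v2.IndexTTS2) and the spk_audio_prompt/emo_audio_prompt parameter names follow the upstream index-tts repository at the time of writing, and the audio paths are placeholders; verify the exact signatures against the official README.

```python
# Minimal sketch: zero-shot synthesis with separate timbre and emotion prompts.
# Class and parameter names are taken from the upstream index-tts repository;
# verify them against the official README before relying on them.
from indextts.infer_v2 import IndexTTS2

tts = IndexTTS2(cfg_path="checkpoints/config.yaml", model_dir="checkpoints")

tts.infer(
    spk_audio_prompt="voices/speaker.wav",    # timbre prompt: whose voice it is
    text="The quick brown fox jumps over the lazy dog.",
    output_path="output.wav",
    emo_audio_prompt="voices/sad_style.wav",  # style prompt: the target emotion
)
```

Omitting emo_audio_prompt should make the model take both timbre and emotion from the single speaker prompt, i.e. the plain zero-shot mode.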
Tip: Please contact the authors for more detailed information. For commercial usage and cooperation, please contact [email protected].
IndexTTS2: The Future of Voice, Now Generating
Click the image to watch the IndexTTS2 introduction video.
QQ Groups: 663272642 (No. 4), 1013410623 (No. 5)
Discord: https://discord.gg/uT32E7KDmy
Email: [email protected]
You are welcome to join our community for discussion!
Caution
Thank you for your support of the Bilibili IndexTTS project! Please note that the only official channel maintained by the core team is https://github.com/index-tts/index-tts. Any other websites or services are not official, and we cannot guarantee their security, accuracy, or timeliness. For the latest updates, please always refer to this official repository.
- 2025/09/08 🔥🔥🔥 We release IndexTTS-2 to the world! The first autoregressive TTS model with precise synthesis duration control, supporting both controllable and uncontrollable modes. (This functionality is not yet enabled in this release.)
  - The model achieves highly expressive emotional speech synthesis, with emotion-controllable capabilities enabled through multiple input modalities.
- 2025/05/14 🔥🔥 We release IndexTTS-1.5, significantly improving the model's stability and its performance in the English language.
- 2025/03/25 🔥 We release IndexTTS-1.0 with model weights and inference code.
- 2025/02/12 🔥 We submitted our paper to arXiv and released our demos and test sets.
Architectural overview of IndexTTS2, our state-of-the-art speech model:
The key contributions of IndexTTS2 are summarized as follows:
- We propose a duration adaptation scheme for autoregressive TTS models. IndexTTS2 is the first autoregressive zero-shot TTS model to combine precise duration control with natural duration generation, and the method generalizes to any large-scale autoregressive TTS model.
- The emotional and speaker-related features are decoupled from the prompts, and a feature fusion strategy is designed to maintain semantic fluency and pronunciation clarity during emotionally rich expressions. Furthermore, we developed an emotion-control tool that accepts natural language descriptions, for the benefit of users.
- To address the lack of highly expressive speech data, we propose an effective training strategy that significantly enhances the emotional expressiveness of zero-shot TTS to state-of-the-art (SOTA) level.
- We will publicly release the code and pre-trained weights to facilitate future research and practical applications.
| HuggingFace | ModelScope |
|---|---|
| IndexTTS-2 | IndexTTS-2 |
| IndexTTS-1.5 | IndexTTS-1.5 |
| IndexTTS | IndexTTS |
- Make sure you have Git installed, as it is needed to update Index TTS. If you don't have it, download the Git standalone installer and click on Git for Windows/x64 Setup: Git Standalone Installer Download. To install Git, double-click Git.exe and just keep clicking Next until it's installed; you don't need to change anything.
- Make sure your Nvidia graphics drivers are up to date. If they are not, or if you're not sure, please use the following link to download them: Nvidia Drivers.
- Make sure that you have NVIDIA's CUDA Toolkit version 12.8 (or newer) installed on your system.
- Once your Nvidia GPU drivers are up to date and Git is installed, download index_tts.exe from here: Index TTS2 Windows 1 Click Install, or from the Releases section at the top right of this page.
- After downloading, double-click index_tts.exe and pick where you would like to extract the files to.
- Then open the main Index TTS folder; you will see this in the root:
- Then double-click Install_Index_TTS.bat to start the installation. It will install everything and download the models via Hugging Face. If for some reason Hugging Face doesn't work in your country, I have included a Download_Models_Via_Modelscope.bat. After the installation finishes, slowly scroll back up to the top to make sure everything installed correctly; a quick programmatic check is sketched below, after this list.
- To launch Index TTS, use either Launch_Index_TTS.bat (normal VRAM) or Launch_Index_TTS_LOW_VRAM.bat (low VRAM) in the current folder, or the desktop shortcut; note that the shortcut launches the normal-VRAM version.
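If you want to confirm the installation programmatically (in addition to scrolling through the install log), the short Python check below verifies the Torch build and GPU access. Run it with the Python environment the installer created inside the portable folder; the exact interpreter location is not specified here, so use whichever python.exe the install set up.

```python
# Post-install sanity check: confirm the Torch build and CUDA access.
import torch

print(torch.__version__)          # expected: 2.8.0+cu128, per this installer
print(torch.cuda.is_available())  # should be True on a supported Nvidia GPU
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # prints your GPU model name
```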
Use the Download_Models_Via_Modelscope.bat to download from ModelScope; it will automatically download the models into the checkpoints folder.
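For reference, the batch file presumably automates something close to the ModelScope snippet below. This is a hypothetical sketch, not the script's verified contents: the model id IndexTeam/IndexTTS-2 and the local_dir argument are assumptions, so check the .bat file itself if you need the exact behavior.

```python
# Hypothetical sketch of what Download_Models_Via_Modelscope.bat may automate.
# The model id and target directory below are assumptions, not verified.
from modelscope import snapshot_download

snapshot_download("IndexTeam/IndexTTS-2", local_dir="checkpoints")
```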
Important
It can be very helpful to use the --fp16 argument for FP16 (half-precision) inference. It is faster and uses less VRAM, with a very small quality loss.
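In the Python API, the same option is exposed at model construction time. The use_fp16 keyword below follows the upstream index-tts repository but is an assumption as far as this installer is concerned; the launch scripts themselves would pass --fp16 to the webui.

```python
# Sketch: enabling FP16 (half-precision) inference via the Python API.
# The use_fp16 keyword is assumed from the upstream repository; the webui
# equivalent is the --fp16 command-line flag mentioned above.
from indextts.infer_v2 import IndexTTS2

tts = IndexTTS2(
    cfg_path="checkpoints/config.yaml",
    model_dir="checkpoints",
    use_fp16=True,  # faster and lighter on VRAM, with a very small quality loss
)
```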
For more detailed information, see README_INDEXTTS_1_5, or visit the IndexTTS1 repository at index-tts:v1.5.0.
IndexTTS2: [Paper]; [Demo]; [ModelScope]; [HuggingFace]
IndexTTS1: [Paper]; [Demo]; [ModelScope]; [HuggingFace]
We sincerely thank colleagues from different roles at Bilibili, whose combined efforts made the IndexTTS series possible.
- Wei Deng - Core author; initiated the IndexTTS project, led the development of the IndexTTS1 data pipeline, model architecture design and training, as well as iterative optimization of the IndexTTS series of models, focusing on fundamental capability building and performance optimization.
- Siyi Zhou - Core author; in IndexTTS2, led model architecture design and training pipeline optimization, focusing on key features such as multilingual and emotional synthesis.
- Jingchen Shu - Core author; worked on overall architecture design, cross-lingual modeling solutions, and training strategy optimization, driving model iteration.
- Xun Zhou - Core author; worked on cross-lingual data processing and experiments, explored multilingual training strategies, and contributed to audio quality improvement and stability evaluation.
- Jinchao Wang - Core author; worked on model development and deployment, building the inference framework and supporting system integration.
- Yiquan Zhou - Core author; contributed to model experiments and validation, and proposed and implemented text-based emotion control.
- Yi He - Core author; contributed to model experiments and validation.
- Lu Wang - Core author; worked on data processing and model evaluation, supporting model training and performance verification.
- Yining Wang - Supporting contributor; contributed to open-source code implementation and maintenance, supporting feature adaptation and community release.
- Yong Wu - Supporting contributor; worked on data processing and experimental support, ensuring data quality and efficiency for model training and iteration.
- Yaqin Huang - Supporting contributor; contributed to systematic model evaluation and effect tracking, providing feedback to support iterative improvements.
- Yunhan Xu - Supporting contributor; provided guidance in recording and data collection, while also offering feedback from a product and operations perspective to improve usability and practical application.
- Yuelang Sun - Supporting contributor; provided professional support in audio recording and data collection, ensuring high-quality data for model training and evaluation.
- Yihuang Liang - Supporting contributor; worked on systematic model evaluation and project promotion, helping IndexTTS expand its reach and engagement.
- Huyang Sun - Provided strong support for the IndexTTS project, ensuring strategic alignment and resource backing.
- Bin Xia - Contributed to the review, optimization, and follow-up of technical solutions, focusing on ensuring model effectiveness.
If you find our work helpful, please leave us a star and cite our paper.
IndexTTS2:
@article{zhou2025indextts2,
title={IndexTTS2: A Breakthrough in Emotionally Expressive and Duration-Controlled Auto-Regressive Zero-Shot Text-to-Speech},
author={Siyi Zhou and Yiquan Zhou and Yi He and Xun Zhou and Jinchao Wang and Wei Deng and Jingchen Shu},
journal={arXiv preprint arXiv:2506.21619},
year={2025}
}
IndexTTS:
@article{deng2025indextts,
title={IndexTTS: An Industrial-Level Controllable and Efficient Zero-Shot Text-To-Speech System},
author={Wei Deng and Siyi Zhou and Jingchen Shu and Jinchao Wang and Lu Wang},
journal={arXiv preprint arXiv:2502.05512},
year={2025},
doi={10.48550/arXiv.2502.05512},
url={https://arxiv.org/abs/2502.05512}
}

