T2S is the first domain-agnostic framework for text-to-time series generation.
TSFragment-600K is the first well-aligned, fragment-level text–time series multimodal dataset across 6 classical domains.
- April 2025: T2S accepted by IJCAI 2025!
- May 2025: TSFragment-600K is now available on Hugging Face.
- May 2025: Pretrained models T2S-LA-VAE and T2S-DiT released.
T2S is the first domain-agnostic model that enables text-to-time series generation. It allows users, both non-experts and professionals, to generate high-resolution, semantically aligned time series from natural language descriptions.
- Application Scenarios:
  - Inclusive Data Interaction: Non-experts can describe temporal behaviors and generate synthetic data, democratizing access to data-driven tools and encouraging broader participation in time series analysis.
  - Rapid Prototyping for Professionals: Experts can use simple textual descriptions to quickly simulate system temporal dynamics, supporting rapid prototyping and analysis of system evolution under different conditions.
  - Stress Testing: Simulate edge cases (e.g., "an extreme surge in demand") to evaluate system robustness, beyond what traditional diffusion models can do; such models struggle with extreme cases because they rely on stationary source data distributions.
- Key Components:
  - T2S-DiT: A diffusion-based transformer tailored for conditional generation from natural language.
  - LA-VAE: A pretrained Length-Adaptive Variational Autoencoder that supports generation of variable-length series.
  - Dataset (TSFragment-600K): A large-scale multi-modal dataset with 600K fragment-level text-time series pairs annotated with fine-grained morphological captions.
- The TSFragment-600K dataset is available on Hugging Face. You can follow the usage example below to load it:
from datasets import load_dataset
ds = load_dataset("WinfredGe/TSFragment-600K")
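For example, a minimal sketch to inspect the loaded dataset before building a pipeline (the split and column names are not documented here, so the snippet discovers them rather than assuming any):
print(ds)                      # available splits and their sizes
split = list(ds.keys())[0]     # pick the first available split
print(ds[split].features)      # column names and types of each text-time series pair
print(ds[split][0])            # one fragment-level sample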
- You can download all pre-processed datasets at three levels (including the TSFragment-600K dataset), then place them under the ./Data directory.
Note
We also open-source the dataset construction and evaluation pipeline under ./Dataset_Construction_Pipeline/.
- Dataset Structure:
Data
├── TSFragment-600K
│   ├── embedding_cleaned_airquality_24.csv
│   ├── embedding_cleaned_airquality_48.csv
│   ├── embedding_cleaned_airquality_96.csv
│   ├── embedding_cleaned_electricity_24.csv
│   ├── embedding_cleaned_electricity_48.csv
│   ├── embedding_cleaned_electricity_96.csv
│   │   ...
│   ├── embedding_cleaned_traffic_24.csv
│   ├── embedding_cleaned_traffic_48.csv
│   └── embedding_cleaned_traffic_96.csv
├── SUSHI
│   └── embedding_cleaned_SUSHI.csv
└── MMD
    ├── embedding_cleaned_Agriculture_24.csv
    ├── embedding_cleaned_Agriculture_48.csv
    ├── embedding_cleaned_Agriculture_96.csv
    ├── embedding_cleaned_Climate_24.csv
    ├── embedding_cleaned_Climate_48.csv
    ├── embedding_cleaned_Climate_96.csv
    │   ...
    ├── embedding_cleaned_SocialGood_24.csv
    ├── embedding_cleaned_SocialGood_48.csv
    └── embedding_cleaned_SocialGood_96.csv
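After downloading, each CSV can also be inspected directly. For example, a minimal pandas sketch (the column layout is not documented here, so the snippet only prints what it finds):
import pandas as pd

# One fragment-level file: 24-step air-quality fragments paired with text captions
df = pd.read_csv("./Data/TSFragment-600K/embedding_cleaned_airquality_24.csv")
print(df.columns.tolist())   # inspect the column layout
print(df.head())             # preview a few text-time series fragment pairs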
The code structure is as follows:
T2S-main
├── pretrained_lavae_unified.py
├── train.py
├── infer.py
├── evaluation.py
├── datafactory
│   ├── dataloader.py
│   └── dataset.py
├── model
│   ├── pretrained
│   │   ├── core.py
│   │   └── vqvae.py
│   ├── denoiser
│   │   ├── mlp.py
│   │   └── transformer.py
│   └── backbone
│       ├── DDPM.py
│       └── rectified_flow.py
└── evaluate
    ├── feature_based_measures.py
    ├── ts2vec.py
    └── utils.py
- Install Python 3.10 from Miniconda, and then install the required dependencies:
pip install -r requirements.txt
Note: T2S requires torch==2.3.1.
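For example, a minimal setup sketch (the environment name t2s is only an illustration):
conda create -n t2s python=3.10
conda activate t2s
pip install -r requirements.txt
# check that the installed torch version matches the required 2.3.1
python -c "import torch; print(torch.__version__)"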
- You can access all pre-processed datasets at three levels.
- You can also download only the TSFragment-600K data.
- You can access the pretrained LA-VAE from the T2S checkpoints in the folder ./results/saved_pretrained_models/.
- Run the following command to pretrain your own LA-VAE on different datasets. For example,
python pretrained_lavae_unified.py --dataset_name ETTh1 --save_path 'results/saved_pretrained_models/' --mix_train True
For more detailed customization, please refer to the argument description of each hyperparameter in pretrained_lavae_unified.py.
Note
LA-VAE uses mix_train to convert arbitrary-length data into a unified representation.
- We provide training and inference experiment pipelines in ./script.sh.
- [Example] Run the following commands to train and run inference on ETTh1.
python train.py --dataset_name 'ETTh1'
python infer.py --dataset_name 'ETTh1_24' --cfg_scale 9.0 --total_step 10
python infer.py --dataset_name 'ETTh1_48' --cfg_scale 9.0 --total_step 10
python infer.py --dataset_name 'ETTh1_96' --cfg_scale 9.0 --total_step 10
Note
You can tune hyperparameters such as cfg_scale and total_step to suit your needs.
Please refer to train.py and infer.py for more detailed descriptions of the customizable hyperparameter settings.
- You can evaluate the model directly using ./scripts_validation_only.sh.
- Set the corresponding hyperparameters of evaluation.py according to the configuration used in infer.py.
- [Example] Run the following command to evaluate on ETTh1.
python evaluation.py --dataset_name 'ETTh1_24' --cfg_scale 9.0 --total_step 10
Note
If you want to evaluate the MRR metric, please set --run_multi True in infer.py.
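For example, a possible MRR evaluation sequence (this sketch assumes --run_multi can be passed as a command-line flag; otherwise change its default inside infer.py):
python infer.py --dataset_name 'ETTh1_24' --cfg_scale 9.0 --total_step 10 --run_multi True
python evaluation.py --dataset_name 'ETTh1_24' --cfg_scale 9.0 --total_step 10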
- Install Python 3.10, and then install the dependencies in requirements.txt.
- Download the TSFragment-600K data and the T2S checkpoints to ./.
- Evaluate the model directly using ./scripts_validation_only.sh.
1. Position Paper: What Can Large Language Models Tell Us about Time Series Analysis, in ICML 2024.
Authors: Ming Jin, Yifan Zhang, Wei Chen, Kexin Zhang, Yuxuan Liang, Bin Yang, Jindong Wang, Shirui Pan, Qingsong Wen
@inproceedings{jin2024position,
title={Position Paper: What Can Large Language Models Tell Us about Time Series Analysis},
author={Ming Jin and Yifan Zhang and Wei Chen and Kexin Zhang and Yuxuan Liang and Bin Yang and Jindong Wang and Shirui Pan and Qingsong Wen},
booktitle={International Conference on Machine Learning (ICML 2024)},
year={2024}
}
2. A Survey on Diffusion Models for Time Series and Spatio-Temporal Data, in arXiv 2024. [GitHub Repo]
Authors: Yiyuan Yang, Ming Jin, Haomin Wen, Chaoli Zhang, Yuxuan Liang, Lintao Ma, Yi Wang, Chenghao Liu, Bin Yang, Zenglin Xu, Jiang Bian, Shirui Pan, Qingsong Wen
@article{yang2024survey,
title={A survey on diffusion models for time series and spatio-temporal data},
author={Yang, Yiyuan and Jin, Ming and Wen, Haomin and Zhang, Chaoli and Liang, Yuxuan and Ma, Lintao and Wang, Yi and Liu, Chenghao and Yang, Bin and Xu, Zenglin and others},
journal={arXiv preprint arXiv:2404.18886},
year={2024}
}
3. Foundation Models for Spatio-Temporal Data Science: A Tutorial and Survey, in arXiv 2025.
Authors: Yuxuan Liang, Haomin Wen, Yutong Xia, Ming Jin, Bin Yang, Flora Salim, Qingsong Wen, Shirui Pan, Gao Cong
@article{liang2025foundation,
title={Foundation Models for Spatio-Temporal Data Science: A Tutorial and Survey},
author={Liang, Yuxuan and Wen, Haomin and Xia, Yutong and Jin, Ming and Yang, Bin and Salim, Flora and Wen, Qingsong and Pan, Shirui and Cong, Gao},
journal={arXiv preprint arXiv:2503.13502},
year={2025}
}
Please let us know if you find a mistake or have any suggestions! If you find this resource helpful, please consider starring this repository and citing our research:
@inproceedings{ge2025t2s,
title={{T2S}: High-resolution Time Series Generation with Text-to-Series Diffusion Models},
author={Ge, Yunfeng and Li, Jiawei and Zhao, Yiji and Wen, Haomin and Li, Zhao and Qiu, Meikang and Li, Hongyan and Jin, Ming and Pan, Shirui},
booktitle={International Joint Conference on Artificial Intelligence (IJCAI)},
year={2025}
}
Our implementation adapts Time-Series-Library, TSGBench, TOTEM, and Meta's Scalable Diffusion Models with Transformers (DiT) as the code base, and we have extensively modified them for our purposes. We thank the authors for sharing their implementations and related resources.
This project is licensed under the Apache-2.0 License.