TSAIL group (@thu-ml)

Tsinghua Statistical Artificial Intelligence & Learning Group

Pinned

  1. TurboDiffusion Public

    TurboDiffusion: 100–200× Acceleration for Video Diffusion Models

    Python · 504 stars · 15 forks

  2. unidiffuser Public

    Code and models for the paper "One Transformer Fits All Distributions in Multi-Modal Diffusion"

    Python · 1.5k stars · 90 forks

  3. SageAttention Public

    [ICLR2025, ICML2025, NeurIPS2025 Spotlight] Quantized attention that achieves a 2–5× speedup over FlashAttention without degrading end-to-end metrics across language, image, and video models (see the usage sketch after this list).

    Cuda · 2.9k stars · 286 forks

  4. prolificdreamer Public

    ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation (NeurIPS 2023 Spotlight)

    Python · 1.6k stars · 47 forks

  5. ares Public

    A Python library for adversarial machine learning, focused on benchmarking adversarial robustness.

    Python · 519 stars · 93 forks

  6. tianshou Public

    An elegant PyTorch deep reinforcement learning library.

    Python · 9k stars · 1.2k forks
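
SageAttention usage sketch (referenced from pinned item 3 above). SageAttention is presented as a drop-in replacement for torch.nn.functional.scaled_dot_product_attention. The following is a minimal sketch based on the call shown in the SageAttention README; the tensor shapes, the tensor_layout argument, and the half-precision inputs are assumptions that may vary across releases.

    import torch
    from sageattention import sageattn

    # Attention inputs in (batch, heads, seq_len, head_dim) layout ("HND"),
    # in half precision on the GPU, as the quantized kernels expect.
    # The shapes here are illustrative, not required by the API.
    q = torch.randn(2, 8, 1024, 64, dtype=torch.float16, device="cuda")
    k = torch.randn(2, 8, 1024, 64, dtype=torch.float16, device="cuda")
    v = torch.randn(2, 8, 1024, 64, dtype=torch.float16, device="cuda")

    # Drop-in replacement for torch.nn.functional.scaled_dot_product_attention:
    # Q and K are quantized internally (e.g. to INT8) to accelerate the
    # attention matmuls while preserving end-to-end output quality.
    out = sageattn(q, k, v, tensor_layout="HND", is_causal=False)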

Repositories

Showing 10 of 84 repositories
  • TurboDiffusion Public

    TurboDiffusion: 100–200× Acceleration for Video Diffusion Models

    Python · 504 stars · Apache-2.0 · 14 forks · 11 issues · 0 PRs · Updated Dec 18, 2025
  • SpargeAttn Public

    [ICML2025] SpargeAttention: training-free sparse attention that accelerates inference for any model.

    Cuda · 834 stars · Apache-2.0 · 69 forks · 48 issues · 3 PRs · Updated Dec 17, 2025
  • SLA Public

    SLA: Beyond Sparsity in Diffusion Transformers via Fine-Tunable Sparse–Linear Attention

    Python · 169 stars · Apache-2.0 · 8 forks · 4 issues · 0 PRs · Updated Dec 16, 2025
  • Motus Public

    Official code for Motus.

    Python · 60 stars · Apache-2.0 · 1 fork · 2 issues · 0 PRs · Updated Dec 16, 2025
  • SageAttention Public

    [ICLR2025, ICML2025, NeurIPS2025 Spotlight] Quantized attention that achieves a 2–5× speedup over FlashAttention without degrading end-to-end metrics across language, image, and video models.

    Cuda · 2,859 stars · Apache-2.0 · 285 forks · 140 issues · 16 PRs · Updated Dec 11, 2025
  • DiT-Extrapolation Public

    Official implementation for "RIFLEx: A Free Lunch for Length Extrapolation in Video Diffusion Transformers" (ICML 2025) and "UltraViCo: Breaking Extrapolation Limits in Video Diffusion Transformers"

    Python · 767 stars · Apache-2.0 · 73 forks · 22 issues · 0 PRs · Updated Dec 4, 2025
  • ultraimage.github.io Public

    JavaScript · 0 stars · 0 forks · 0 issues · 0 PRs · Updated Dec 3, 2025
  • UltraViCo.github.io Public

    Project page for "UltraViCo"

    JavaScript · 0 stars · 0 forks · 0 issues · 0 PRs · Updated Dec 3, 2025
  • RDT2 Public

    Official code for RDT 2.

    Python · 605 stars · Apache-2.0 · 27 forks · 8 issues · 0 PRs · Updated Dec 3, 2025
  • tianshou Public

    An elegant PyTorch deep reinforcement learning library (see the training sketch after this list).

    Python · 8,998 stars · MIT · 1,198 forks · 130 issues (1 needs help) · 1 PR · Updated Dec 1, 2025
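
To make the tianshou entry concrete, here is a minimal DQN training sketch loosely following tianshou's README. Keyword and trainer names have shifted across tianshou releases (this sketch assumes the 1.x class-based OffpolicyTrainer and gymnasium environments), so treat it as an outline rather than version-exact code.

    import gymnasium as gym
    import torch
    import tianshou as ts
    from tianshou.utils.net.common import Net

    env = gym.make("CartPole-v1")
    train_envs = ts.env.DummyVectorEnv([lambda: gym.make("CartPole-v1") for _ in range(8)])
    test_envs = ts.env.DummyVectorEnv([lambda: gym.make("CartPole-v1") for _ in range(8)])

    # Simple MLP Q-network mapping observations to per-action values.
    net = Net(state_shape=env.observation_space.shape,
              action_shape=env.action_space.n,
              hidden_sizes=[128, 128])
    optim = torch.optim.Adam(net.parameters(), lr=1e-3)

    # NOTE: exact keyword names (e.g. action_space) differ between versions.
    policy = ts.policy.DQNPolicy(model=net, optim=optim,
                                 action_space=env.action_space,
                                 discount_factor=0.99,
                                 estimation_step=3,
                                 target_update_freq=320)

    # Collectors tie the policy to vectorized envs and a replay buffer.
    train_collector = ts.data.Collector(policy, train_envs,
                                        ts.data.VectorReplayBuffer(20000, 8),
                                        exploration_noise=True)
    test_collector = ts.data.Collector(policy, test_envs, exploration_noise=True)

    result = ts.trainer.OffpolicyTrainer(
        policy=policy,
        train_collector=train_collector,
        test_collector=test_collector,
        max_epoch=10, step_per_epoch=10000, step_per_collect=10,
        update_per_step=0.1, episode_per_test=100, batch_size=64,
        # Epsilon-greedy exploration while training, near-greedy at test time.
        train_fn=lambda epoch, env_step: policy.set_eps(0.1),
        test_fn=lambda epoch, env_step: policy.set_eps(0.05),
    ).run()
    print(result)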