Repositories list
77 repositories
- Quantized Attention achieves speedups of 2-5x and 3-11x over FlashAttention and xformers, respectively, without losing end-to-end metrics across language, image, and video models.
- A toolbox for benchmarking the trustworthiness of multimodal LLM agents across truthfulness, controllability, safety, and privacy dimensions through 34 interactive tasks
- A toolbox for benchmarking the trustworthiness of multimodal large language models (MultiTrust, NeurIPS 2024 Track Datasets and Benchmarks)
- SpargeAttention: A training-free sparse attention method that can accelerate inference for any model.
FrameBridge Public
UniCardio Public
RIFLEx Public
- Official implementation for "RIFLEx: A Free Lunch for Length Extrapolation in Video Diffusion Transformers" (ICML 2025)
cond-image-leakage Public
DiffusionBridge Public
i-DODE Public
STAIR Public
EffWRN-paddle Public
RIFLEx.github.io Public
oddefense Public
CCA Public
Adaptive-Sparse-Trainer Public
ReMoE Public
tianshou-docs-zh_CN Public
CRM Public
- [ECCV 2024] Single Image to 3D Textured Mesh in 10 seconds with Convolutional Reconstruction Model.
HiDe-PET Public
HiDe-Prompt Public