Releases
v2025.06
🎉 Major Updates
Add support for image-text-to-text models (e.g., Llama3.2-Vision and UI-TARS)
Add support for additional text-to-text models (DeepAlignment, LlamaGuard3, and HarmBench Classifier)
Add example attack against LLaDa, a large language diffusion model
Add DataMapper abstraction to enable easy adaptation of existing datasets to models (a sketch of the idea follows this list)
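The interface below is hypothetical (the DataMapper, to_conversation, and BehaviorMapper names are illustrative, not taken from this release); it is only a minimal sketch of the kind of dataset-to-model adaptation such an abstraction enables: one mapper per dataset schema, producing the chat-style records a model expects.

```python
# Hypothetical sketch only: DataMapper, to_conversation, and BehaviorMapper
# are illustrative names, not this project's API.
from dataclasses import dataclass
from typing import Protocol


class DataMapper(Protocol):
    """Turns a raw dataset record into the chat-style messages a model expects."""

    def to_conversation(self, record: dict) -> list[dict]:
        ...


@dataclass
class BehaviorMapper:
    """Adapts records shaped like {'behavior': ..., 'target': ...}."""

    system_prompt: str = "You are a helpful assistant."

    def to_conversation(self, record: dict) -> list[dict]:
        return [
            {"role": "system", "content": self.system_prompt},
            {"role": "user", "content": record["behavior"]},
            {"role": "assistant", "content": record["target"]},
        ]


# Swapping the mapper adapts the same dataset to a different model's
# conversation format without touching the dataset itself.
record = {"behavior": "Summarize this file.", "target": "Sure, here is a summary"}
print(BehaviorMapper().to_conversation(record))
```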
🎈 Minor Updates
Add good_token_ids support to the GCG optimizer (a sketch of the mechanism follows this list)
Save the best attack to disk at the last step and reduce the saved state for hard-token attacks
Output only continuation tokens, not the full prompt, during evaluation
Remove check for back-to-back tags in tokenizer
Enable command-line modification of the response via response.prefix= and response.suffix= (an override sketch also follows this list)
TaggedTokenizer now supports returning input_map when return_tensors=None
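The snippet below is not this project's GCG code; it is an illustration of what a good_token_ids option usually means in GCG-style optimizers: disallowed vocabulary entries are masked out before the per-position top-k candidate selection over the token gradient.

```python
# Illustrative only: shows the usual mechanism behind a good_token_ids option
# for GCG-style optimizers, not this library's implementation.
import torch


def topk_candidates(token_grad: torch.Tensor,
                    good_token_ids: torch.Tensor,
                    k: int = 256) -> torch.Tensor:
    """Pick the k most promising substitutions per position, restricted
    to an allowlist of token ids.

    token_grad: [suffix_len, vocab_size] gradient of the loss w.r.t. the
                one-hot token encoding (more negative = more promising).
    good_token_ids: 1-D tensor of allowed vocabulary ids.
    Returns: [suffix_len, k] candidate token ids.
    """
    masked = torch.full_like(token_grad, float("inf"))
    masked[:, good_token_ids] = token_grad[:, good_token_ids]
    # Keep the tokens with the most negative gradient, so take top-k of -masked;
    # disallowed ids stay at -inf and are never selected.
    return (-masked).topk(k, dim=-1).indices


# Toy usage with a random gradient and a small allowlist.
grad = torch.randn(20, 32000)                   # 20 suffix positions, 32k vocab
allow = torch.arange(1000, 2000)                # e.g. an ASCII-only id range
print(topk_candidates(grad, allow, k=8).shape)  # torch.Size([20, 8])
```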
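The response.prefix=/response.suffix= syntax reads like Hydra/OmegaConf dot-list overrides. Assuming an OmegaConf-backed config (the keys and default values here are illustrative, not the project's actual schema), such overrides resolve roughly like this:

```python
# Generic dot-list override sketch (assumes an OmegaConf/Hydra-style config);
# the config keys and default values are illustrative.
from omegaconf import OmegaConf

defaults = OmegaConf.create({"response": {"prefix": "", "suffix": ""}})
overrides = OmegaConf.from_dotlist(["response.prefix=Sure", "response.suffix=Thanks!"])
cfg = OmegaConf.merge(defaults, overrides)

print(cfg.response.prefix)  # Sure
print(cfg.response.suffix)  # Thanks!
```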
🚧 Bug Fixes
Fix tokenizer prefix-space detection (e.g., Llama2's tokenizer)
Allow early stopping with multi-sample datasets
All make commands now run in isolated virtual environments
max_new_tokens now generates exactly that many tokens at test time, regardless of eos_token (see the sketch below)
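One way to reproduce this fixed-length behaviour with Hugging Face transformers (gpt2 is only a stand-in model; this is not the project's evaluation code): setting min_new_tokens equal to max_new_tokens suppresses eos_token until the budget is spent, and slicing off the prompt ids keeps only the continuation, matching the evaluation change above.

```python
# Illustrative reproduction with Hugging Face transformers; gpt2 is a stand-in
# model, not the library's target.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("The capital of France is", return_tensors="pt")
n = 16

out = model.generate(
    **inputs,
    max_new_tokens=n,   # upper bound on generated tokens
    min_new_tokens=n,   # suppress eos_token until n tokens are produced
    do_sample=False,
)

# Keep only the continuation, not the echoed prompt.
continuation = out[0, inputs["input_ids"].shape[-1]:]
assert continuation.shape[0] == n
print(tok.decode(continuation))
```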