
Releases: huggingface/optimum-intel

v1.25.2: Patch release

13 Aug 09:35

Full Changelog: v1.25.1...v1.25.2

Compatible with transformers>=4.36,<=4.53

v1.25.1: Patch release

07 Aug 06:16

Full Changelog: v1.25.0...v1.25.1

Compatible with transformers>=4.36,<=4.53

v1.25.0: Text-to-Text generation models quantization

04 Aug 16:42

🚀 New Features & Enhancements
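The headline feature is quantization support for text-to-text (seq2seq) generation models in the OpenVINO backend. A minimal sketch of what this enables, assuming the quantization_config keyword is accepted by OVModelForSeq2SeqLM as it is by the other OVModel classes; the checkpoint and output directory are examples:

from optimum.intel import OVModelForSeq2SeqLM, OVWeightQuantizationConfig

# Export the PyTorch checkpoint to OpenVINO IR and apply 8-bit weight-only quantization
model = OVModelForSeq2SeqLM.from_pretrained(
    "google/flan-t5-small",
    export=True,
    quantization_config=OVWeightQuantizationConfig(bits=8),
)
model.save_pretrained("flan-t5-small-ov-int8")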

New Contributors

What's Changed

Compatible with transformers>=4.36,<=4.53

Full Changelog: v1.24.0...v1.25.0

v1.24.0: OVPipelineQuantizationConfig

01 Jul 12:47

🚀 New Features & Enhancements

Optimum 1.26 compatibility by @IlyasMoutawwakil in #1352

OpenVINO
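The release is named after OVPipelineQuantizationConfig, which lets each submodel of a pipeline get its own quantization configuration. A minimal sketch, assuming the quantization_configs keyword and the submodel names used below (the actual names depend on the pipeline; the checkpoint is an example):

from optimum.intel import (
    OVModelForVisualCausalLM,
    OVPipelineQuantizationConfig,
    OVWeightQuantizationConfig,
)

# Compress the language model to 4-bit weights and the text embeddings to 8-bit weights
quantization_config = OVPipelineQuantizationConfig(
    quantization_configs={
        "lm_model": OVWeightQuantizationConfig(bits=4),
        "text_embeddings_model": OVWeightQuantizationConfig(bits=8),
    }
)
model = OVModelForVisualCausalLM.from_pretrained(
    "llava-hf/llava-1.5-7b-hf",
    export=True,
    quantization_config=quantization_config,
)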

IPEX

🔧 Key Fixes & Optimizations

New Contributors

What's Changed

Compatible with transformers>=4.36,<=4.52

Full Changelog: v1.23.0...v1.24.0

v1.23.1: Patch release

13 Jun 11:30

v1.23.0: DeepSeek, Llama 4, LTX-Video

15 May 13:31

🚀 New Features & Enhancements

OpenVINO

IPEX

Transformers compatibility

🔧 Key Fixes & Optimizations

What's Changed


v1.22.0: Qwen2-VL, Granite, Sana, Sentence Transformers

06 Feb 23:49

OpenVINO

IPEX

from optimum.intel import IPEXSentenceTransformer

# model_id is a Hugging Face Hub id or local path of a sentence-transformers model
model = IPEXSentenceTransformer.from_pretrained(model_id)

from optimum.intel import IPEXModelForSeq2SeqLM

# model_id is a Hub id or local path of an encoder-decoder (seq2seq) model
model = IPEXModelForSeq2SeqLM.from_pretrained(model_id)

Compatible with transformers>=4.36,<=4.48

Full Changelog: v1.21.0...v1.22.0

v1.21.0: SD3, Flux, MiniCPM, NanoLlava, VLM Quantization, XPU, PagedAttention

06 Dec 12:53

What's Changed

OpenVINO

Diffusers

VLMs Modeling

NNCF

IPEX

  • Unified XPU/CPU modeling with custom PagedAttention cache for LLMs by @sywangyi in #1009
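A minimal sketch of the unified loading path (assumed usage; the checkpoint and generation settings are examples). The same code runs on CPU and, when an Intel GPU is available, on XPU:

import torch
from transformers import AutoTokenizer
from optimum.intel import IPEXModelForCausalLM

model_id = "gpt2"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = IPEXModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("Paged attention stores the KV cache in fixed-size blocks", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))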

INC

New Contributors

Compatible with transformers>=4.36,<=4.46

Full Changelog: v1.20.0...v1.21.0

v1.20.1: Patch release

30 Oct 14:08
  • Fix lora unscaling in diffusion pipelines by @eaidova in #937
  • Fix compatibility with diffusers < 0.25.0 by @eaidova in #952
  • Allow to use SDPA in clip models by @eaidova in #941
  • Updated OVPipelinePart to have separate ov_config by @e-ddykim in #957
  • Symbol use in optimum: fix misprint by @jane-intel in #948
  • Fix temporary directory saving by @eaidova in #959
  • Disable warning about tokenizers version for ov tokenizers >= 2024.5 by @eaidova in #962
  • Restore original model_index.json after save_pretrained call by @eaidova in #961
  • Add v4.46 transformers support by @echarlaix in #960

v1.20.0: multi-modal and OpenCLIP models support, transformers v4.45

10 Oct 17:01

OpenVINO

Multi-modal models support

Adding OVModelForVisualCausalLM by @eaidova in #883
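A minimal sketch of loading a vision-language model through the new class (assumed usage; the checkpoint is an example). Inputs are prepared with the model's processor and passed to generate as with the corresponding transformers classes:

from transformers import AutoProcessor
from optimum.intel import OVModelForVisualCausalLM

model_id = "llava-hf/llava-1.5-7b-hf"  # example checkpoint
processor = AutoProcessor.from_pretrained(model_id)
# export=True converts the PyTorch checkpoint to OpenVINO IR on the fly
model = OVModelForVisualCausalLM.from_pretrained(model_id, export=True)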

OpenCLIP models support

Adding OpenCLIP models support by @sbalandi in #857

from optimum.intel import OVModelCLIPVisual, OVModelCLIPText

visual_model = OVModelCLIPVisual.from_pretrained(model_name_or_path)
text_model = OVModelCLIPText.from_pretrained(model_name_or_path)

# processor and tokenizer are the image transform and tokenizer shipped with the
# corresponding OpenCLIP checkpoint
image = processor(image).unsqueeze(0)
text = tokenizer(["a diagram", "a dog", "a cat"])
image_features = visual_model(image).image_features
text_features = text_model(text).text_features

Diffusion pipeline

Adding OVDiffusionPipeline to simplify diffusers model loading by @IlyasMoutawwakil in #889

  model_id = "stabilityai/stable-diffusion-xl-base-1.0"
- pipeline = OVStableDiffusionXLPipeline.from_pretrained(model_id)
+ pipeline = OVDiffusionPipeline.from_pretrained(model_id)
  image = pipeline("sailing ship in storm by Leonardo da Vinci").images[0]

NNCF GPTQ support

GPTQ support by @nikita-savelyevv in #912
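A minimal sketch of 4-bit weight compression with the GPTQ algorithm enabled through NNCF. The gptq flag and the dataset name are assumptions; check OVWeightQuantizationConfig for the exact options:

from optimum.intel import OVModelForCausalLM, OVWeightQuantizationConfig

quantization_config = OVWeightQuantizationConfig(
    bits=4,
    dataset="wikitext2",  # calibration data used by the data-aware GPTQ algorithm
    gptq=True,            # assumed flag name enabling GPTQ-based weight compression
)
model = OVModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",  # example checkpoint
    export=True,
    quantization_config=quantization_config,
)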

Transformers v4.45

Transformers v4.45 support by @echarlaix in #902

Subfolder

Remove the restriction for the model's config to be in the model's subfolder by @tomaarsen in #933

New Contributors

Compatible with transformers>=4.36,<=4.45

Full Changelog: v1.19.0...v1.20.0