Issues: openvinotoolkit/openvino.genai
LLM Benchmark Tool Fails or Silent Crash on MTL (Intel(R) Core(TM) Ultra 7 165H), Windows 11 (#2348)
Opened Jun 13, 2025 by shailesh837
Implement support of GGUF LoRA (#2323)
Labels: good first issue (Good for newcomers)
Opened Jun 9, 2025 by rkazants
Implement C bindings for Text-to-speech API and sample (#2302)
Labels: good first issue (Good for newcomers)
Opened May 31, 2025 by rkazants
Support Sana-Sprint for text-to-image task (#2224)
Labels: question (Further information is requested)
Opened May 18, 2025 by circuluspibo
Support for Tensor Parallelism in openvino.genai.benchmark? (#2216)
Labels: question (Further information is requested)
Opened May 15, 2025 by Dazui-Wang
[Question] GenAI Pipeline without tokenizer (#2215)
Labels: category: tokenizers (Tokenizer class or submodule update), question (Further information is requested)
Opened May 15, 2025 by DongChanS
[BUG] heterogeneous_stable_diffusion.py can't seem to access NPU for big images (#2200)
Labels: category: NPU (NPU related topics)
Opened May 12, 2025 by helloyanis
Issue: Reshape error in VLMPipeline for LoRA-finetuned and quantized InternVL3-1B model (#2191)
Opened May 9, 2025 by Omycron83
Issue with running Flux.1.dev on iGPU (#2176)
Labels: category: image generation (Image generation pipelines)
Opened May 8, 2025 by stsxxx
How to enable Intel GPU for llm bench + PyTorch .compile()? (#2175)
Labels: PSE
Opened May 7, 2025 by raymondlo84
Performance Comparison Inquiry Between OpenVINO GenAI, xFastTransformer, and IPEX (#2109)
Opened Apr 24, 2025 by Dazui-Wang
Prefix caching documentation link points to google.com in the README.md (#2092)
Opened Apr 21, 2025 by ialbrecht
[NPU][Llama] NPU is slower than CPU & GPU when running LLM (#1882)
Labels: category: LLM (LLM pipeline, stateful and static), category: NPU (NPU related topics), PSE
Opened Mar 11, 2025 by yang-ahuan