OpenVINO Version
2025.4.1
Operating System
Other (Please specify in description)
Device used for inference
CPU
Framework
ONNX
Model used
campplus_sv_zh_en (3D-Speaker CAM++, ONNX export)
Issue description
OS: Ubuntu 24.04.3 LTS
Context
I am attempting inference using an OpenVINO model converted from the original campplus_sv_zh_en model, a PyTorch implementation of the CAMPPlus speaker-verification architecture that was exported to ONNX.
- CPU inference: fails with the error shown in the log output below.
- GPU inference: completes successfully.
What is the cause of this behavior?
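The traceback below types the AvgPool input as f32[?,128,1..], i.e. the batch and time dimensions are dynamic. A quick way to confirm which input dimensions are dynamic, sketched with the standard openvino Python API:

from openvino import Core

core = Core()
model = core.read_model("models/diar/onnx/3dspeaker_speech_campplus_sv_zh_en_16k-common_advanced.onnx")
# Print each input's partial shape; '?' or a range such as 1.. marks a dynamic dimension
for inp in model.inputs:
    print(inp.get_any_name(), inp.get_partial_shape())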
Step-by-step reproduction
# Python 3.10.17
import numpy as np
from openvino import Core

# Dummy input matching the expected (batch, frames, features) layout
dummy_input = np.random.randn(1, 148, 80).astype(np.float32)

core = Core()
model = core.read_model("models/diar/onnx/3dspeaker_speech_campplus_sv_zh_en_16k-common_advanced.onnx")
compiled_model = core.compile_model(model, "CPU")  # compiling for "GPU" instead works

output_layer = compiled_model.output(0)
result = compiled_model([dummy_input])[output_layer]  # RuntimeError raised here on CPU
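A minimal sketch of a possible workaround, assuming the model's single input can be pinned to the static shape [1, 148, 80] that is actually fed at runtime; whether a static shape is acceptable for this model is an assumption on my part:

import numpy as np
from openvino import Core

core = Core()
model = core.read_model("models/diar/onnx/3dspeaker_speech_campplus_sv_zh_en_16k-common_advanced.onnx")

# Assumption: fixing the dynamic dims to the runtime shape lets the CPU plugin
# validate AvgPool against a static length rather than a dynamic lower bound
model.reshape([1, 148, 80])

compiled_model = core.compile_model(model, "CPU")
output_layer = compiled_model.output(0)
result = compiled_model([np.random.randn(1, 148, 80).astype(np.float32)])[output_layer]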
Relevant log output
RuntimeError: Exception from src/inference/src/cpp/infer_request.cpp:223:
Exception from src/plugins/intel_cpu/src/node.cpp:725:
[CPU] AvgPool node with name '/xvector/block1/tdnnd1/cam_layer/AveragePool' Check 'cmp::le(kernel, dim.get_length())' failed at src/core/shape_inference/include/pooling_shape_inference_util.hpp:145:
While validating node 'opset1::AvgPool /xvector/block1/tdnnd1/cam_layer/AveragePool (opset1::Relu /xvector/block1/tdnnd1/nonlinear2/relu/Relu[0]:f32[?,128,1..]) -> (f32[?,128,1..])' with friendly_name '/xvector/block1/tdnnd1/cam_layer/AveragePool':
Kernel after dilation has size (dim: 100) larger than the data shape after padding (dim: 74) at axis 0.

Issue submission checklist
- I'm reporting an issue. It's not a question.
- I checked the problem with the documentation, FAQ, open issues, Stack Overflow, etc., and have not found a solution.
- There is reproducer code and related data files such as images, videos, models, etc.