Description
I've tried several approaches, but each one ends with an exception:
Global exception slow_conv2d_forward_mps: input(device='cpu') and weight(device='mps:0') must be on the same device
I set the providers to use only the CPU:
providers = ['CPUExecutionProvider']
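For context, the session is created roughly like this (the model path below is a placeholder, not my actual file):

```python
import onnxruntime

# Restrict ONNX Runtime to the CPU provider so CoreML/MPS is never selected.
# "model.onnx" is a hypothetical path standing in for the real model.
providers = ['CPUExecutionProvider']
session = onnxruntime.InferenceSession("model.onnx", providers=providers)
```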
If I instead use onnxruntime-silicon with the default providers:
providers = onnxruntime.get_available_providers()
I get a different error:
Non-zero status code returned while running CoreML_11777492068329204276_6 node. Name:'CoreMLExecutionProvider_CoreML_11777492068329204276_6_6' Status Message: Exception: /Users/runner/work/1/s/onnxruntime/core/providers/coreml/model/model.mm:66 InlinedVector<int64_t> onnxruntime::coreml::(anonymous namespace)::GetStaticOutputShape(gsl::span, gsl::span, const logging::Logger &) inferred_shape.size() == coreml_static_shape.size() was false. CoreML static output shape ({1,1,1,800,1}) and inferred shape ({3200,1}) have different ranks.
Any solutions for this?