
opencl: update upscale to support align corners #14488


Merged: 1 commit merged into ggml-org:master on Jul 2, 2025

Conversation

@lhez lhez (Collaborator) commented Jul 1, 2025

Commit 431b2c2 added GGML_SCALE_FLAG_ALIGN_CORNERS to the upscale op, which broke upscale in the OpenCL backend. This PR fixes that.

Make sure to read the contributing guidelines before submitting a PR
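For context, here is a minimal sketch (not the kernel from this PR) of how an align-corners flag typically changes the destination-to-source coordinate mapping in a bilinear upscale; the function and parameter names are illustrative, and the exact convention in ggml's OpenCL kernel may differ.

```c
// Sketch of the usual align-corners vs. half-pixel coordinate mapping for
// bilinear upscaling. Names are hypothetical; real kernels also clamp the
// result to [0, src_size - 1] before sampling.
#include <stdio.h>

static float map_coord(int dst_i, int dst_size, int src_size, int align_corners) {
    if (align_corners) {
        // corners map exactly onto corners: 0 -> 0, dst_size-1 -> src_size-1
        return dst_size > 1 ? (float)dst_i * (src_size - 1) / (dst_size - 1) : 0.0f;
    }
    // half-pixel-style mapping commonly used when the flag is not set
    float scale = (float)src_size / (float)dst_size;
    return ((float)dst_i + 0.5f) * scale - 0.5f;
}

int main(void) {
    // upscale a 4-element row to 8 elements and compare the sampled source coords
    for (int i = 0; i < 8; ++i) {
        printf("dst %d -> src %.3f (align corners) vs %.3f (default)\n",
               i, map_coord(i, 8, 4, 1), map_coord(i, 8, 4, 0));
    }
    return 0;
}
```

The key difference is the effective scale factor: with align corners it becomes (src_size - 1) / (dst_size - 1) with no half-pixel offset, so a backend that ignores the flag samples at shifted coordinates and produces wrong results.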

@github-actions github-actions bot added the ggml changes relating to the ggml tensor library for machine learning label Jul 1, 2025
@lhez lhez marked this pull request as ready for review July 2, 2025 03:02
@CISC CISC added the OpenCL Issues specific to the OpenCL backend label Jul 2, 2025
@CISC CISC merged commit 603e43d into ggml-org:master Jul 2, 2025
48 checks passed
gabe-l-hart added a commit to gabe-l-hart/llama.cpp that referenced this pull request Jul 2, 2025
* origin/master:
llama : initial Mamba-2 support (ggml-org#9126)
sync : ggml
ggml : add version function to get lib version (ggml/1286)
Set RPATH to "@loader_path" / "$ORIGIN" to ensure executables and dynamic libraries search for dependencies in their origin directory. (ggml-org#14309)
CUDA: add softmax broadcast (ggml-org#14475)
CUDA: broadcasting for FlashAttention mask (ggml-org#14500)
vulkan: support softmax/FA batch and broadcast (ggml-org#14449)
ggml : support bcast ggml_soft_max_ext, ggml_flash_attn_ext (ggml-org#14435)
opencl : fix possible buffer overflow in dump_tensor (ggml-org#14490)
simple-chat : fix context-exceeded condition (ggml-org#14494)
opencl : skip empty nodes on cgraph compute (ggml-org#14491)
opencl : update upscale to support align corners (ggml-org#14488)
ci : add OpenCL to labeler workflow (ggml-org#14496)
github : add OpenCL backend to issue templates (ggml-org#14492)
ggml : Callback before abort (ggml-org#14481)
ci : disable fast-math for Metal GHA CI (ggml-org#14478)
Minh141120 pushed a commit to menloresearch/llama.cpp that referenced this pull request Jul 5, 2025
qnixsynapse pushed a commit to menloresearch/llama.cpp that referenced this pull request Jul 6, 2025
qnixsynapse pushed a commit to menloresearch/llama.cpp that referenced this pull request Jul 6, 2025
Labels
ggml (changes relating to the ggml tensor library for machine learning), OpenCL (Issues specific to the OpenCL backend)
2 participants