
Add possibility to configure dynamic shared memory on CUDA backend. #767

Open

pavlo-hilei wants to merge 1 commit into development

Conversation

@pavlo-hilei commented Oct 7, 2024

Dynamic shared memory may be required in some applications, since CUDA allows more shared memory to be allocated per block when it is used.
This PR proposes adding the possibility to configure dynamic shared memory for native CUDA kernels via a 'sharedMemBytes' field of type int in the kernel properties.
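
For context, a native CUDA kernel consumes dynamic shared memory through an unsized extern __shared__ declaration; the actual byte count is whatever is supplied at launch (here, the configured 'sharedMemBytes'). A minimal sketch, with an illustrative kernel that is not part of this PR:

```cuda
// Reverses each block-sized chunk of 'data' in place using dynamically
// sized shared memory. The 'tile' array has no compile-time size; its
// byte size is the sharedMemBytes value supplied when the kernel is launched.
// For brevity, n is assumed to be a multiple of blockDim.x.
extern "C" __global__ void reverseBlock(float *data, int n) {
    extern __shared__ float tile[];   // sized at launch time

    int tid = threadIdx.x;
    int gid = blockIdx.x * blockDim.x + tid;

    if (gid < n) tile[tid] = data[gid];
    __syncthreads();
    if (gid < n) data[gid] = tile[blockDim.x - 1 - tid];
}
```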

An example of dynamic shared memory usage is also added.

A similar thing can be done for the HIP backend, but I couldn't get my hands on an AMD GPU right now.

It can be configured via the 'sharedMemBytes' field of type int in the kernel properties.
An example of dynamic shared memory usage is also added.
Note that the transpiler doesn't support this feature, so for now it is usable only with native CUDA kernels.
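
For reference, in the CUDA driver API the dynamic shared-memory size is the sharedMemBytes argument of cuLaunchKernel, which is presumably where the backend ends up forwarding the configured value. A plain driver-API sketch (function and variable names are illustrative, not from this PR):

```cuda
#include <cuda.h>

// Launches the reverseBlock kernel above via the CUDA driver API.
// The dynamic shared-memory size (sharedBytes) is passed as the
// sharedMemBytes argument of cuLaunchKernel. Error handling omitted.
void launchReverseBlock(CUfunction kernel, CUdeviceptr data, int n,
                        CUstream stream) {
    int blockDim = 256;
    int gridDim = (n + blockDim - 1) / blockDim;
    unsigned int sharedBytes =
        (unsigned int)(blockDim * sizeof(float));  // one float per thread

    void *args[] = { &data, &n };
    cuLaunchKernel(kernel,
                   gridDim, 1, 1,    // grid dimensions
                   blockDim, 1, 1,   // block dimensions
                   sharedBytes,      // dynamic shared memory per block
                   stream,
                   args, nullptr);
}
```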