Single precision mode is broken #234
denisalevi added a commit that referenced this issue on Aug 12, 2021:
Not sure if this makes sense? Are CUDA math functions without an `f` suffix overloaded for float (i.e., do they call the float version for float input)? This needs to be extended for the other function implementations as well. See #234

Started an implementation of appending
See my personal notes from 6.4.21 for details on what is broken.
This was referenced Feb 2, 2022
I think when I updated the spikegenerator template, I didn't pay attention to float/double data types. The test suite is currently failing and needs fixing.

In this context, also check whether single precision math functions are actually used correctly. Looking at this function implementation, shouldn't there be one function with an `f` suffix (like `sinf`)? Same for `fabs` in the `exprel` implementation.

Also check #148 and close it if it is done.