
Single precision mode is broken #234

Closed
denisalevi opened this issue Aug 11, 2021 · 3 comments

Comments

@denisalevi
Member

I think when I updated the spikegenerator template, I didn't pay attention to float/double data types. The test suite is currently failing and needs fixing.

In this context, also check whether single-precision math functions are actually used correctly. Looking at this function implementation, shouldn't there be a variant with an f suffix (like sinf)? The same applies to fabs in the exprel implementation.

Also check #148 and close it if it is done.
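For illustration, here is what the f-suffix fix could look like, as a minimal sketch that is not taken from the brian2cuda templates (the `_exprel` name, the expm1-based formulation, and the cutoff values are assumptions): provide an explicit float overload so single-precision code calls the f-suffixed CUDA math functions instead of silently promoting to double.

```cuda
#include <math.h>

// Sketch only: exprel(x) = (exp(x) - 1) / x with separate double and
// float overloads. The cutoff constants are placeholders.
__host__ __device__ inline double _exprel(double x)
{
    if (fabs(x) < 1e-16)    // double-precision fabs/expm1
        return 1.0;
    return expm1(x) / x;
}

__host__ __device__ inline float _exprel(float x)
{
    if (fabsf(x) < 1e-7f)   // fabsf/expm1f keep the computation in float
        return 1.0f;
    return expm1f(x) / x;
}
```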

denisalevi added a commit that referenced this issue Aug 12, 2021
Not sure if this makes sense? Are CUDA math functions without `f` suffix
overloaded for float (do they call the float version for float input)?

This needs to be extended for the other function implementations as
well.

See #234
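One way to answer this question is a throwaway device-side check (an illustrative sketch, not part of the repo, assuming nvcc's default math headers): the size of the result reveals whether a float argument selects a float overload or gets promoted to double.

```cuda
#include <cstdio>

// If sin(float) resolves to CUDA's float overload, the expression has
// type float (4 bytes); if the argument is promoted to double, it is
// 8 bytes. sinf is shown for comparison.
__global__ void check_overload()
{
    float x = 0.5f;
    printf("sizeof(sin(x))  = %d\n", (int)sizeof(sin(x)));
    printf("sizeof(sinf(x)) = %d\n", (int)sizeof(sinf(x)));
}

int main()
{
    check_overload<<<1, 1>>>();
    cudaDeviceSynchronize();
    return 0;
}
```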
@denisalevi
Member Author

denisalevi commented Aug 12, 2021

Started an implementation of appending f suffixes in commit
4232a6c in branch make-function-conversions-type-safe (not merged).
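One type-safe way to do this (a hypothetical sketch; the `_brian_sin` wrapper name is invented here and not taken from the branch) is to have the code generator always emit a single wrapper name and let C++ overload resolution pick the precision:

```cuda
// Hypothetical wrappers: generated code always calls _brian_sin(x), and
// overload resolution dispatches to the single- or double-precision
// CUDA math function depending on the argument type.
__host__ __device__ inline float  _brian_sin(float x)  { return sinf(x); }
__host__ __device__ inline double _brian_sin(double x) { return sin(x);  }
```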

@denisalevi
Member Author

See my personal notes from 6.4.21 for details on what is broken.

@denisalevi
Member Author

Closed in PR #265; remaining issues are collected in #264.
