Round inputs for dense unrolled RNN tests to make pytests more stable #1284


Open
JanFSchulte wants to merge 2 commits into main

Conversation

JanFSchulte (Contributor)

We are still seeing numerical instability in the dense unrolled RNN pytests. This PR aims to fix that by applying the rounding @jmitrevs added in #1215 to these tests as well. Since I haven't been able to reproduce the failure locally, I couldn't verify that this actually fixes it.
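
For context, here is a minimal sketch of the rounding trick from #1215 that this PR applies to these tests (shapes follow the test code below; this is an illustration, not the exact diff):

```python
import numpy as np

# Test inputs drawn from [-0.5, 0.5) are generally not exactly
# representable in the model's fixed-point input type, so the NumPy
# reference and the HLS model quantize them slightly differently and
# the comparison can flake. Snapping each value to the nearest
# multiple of 2**-16 removes that mismatch: both sides then see
# identical inputs.
X = np.random.rand(50, 12, 8) - 0.5
X = np.round(X * 2**16) * 2**-16

# Round-tripping through the grid is now a no-op.
assert np.array_equal(X, np.round(X * 2**16) * 2**-16)
```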

Type of change

  • Bug fix (non-breaking change that fixes an issue)

Tests

Checklist

  • I have read the guidelines for contributing.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have made corresponding changes to the documentation.
  • My changes generate no new warnings.
  • I have installed and run pre-commit on the files I edited or added.
  • I have added tests that prove my fix is effective or that my feature works.

JanFSchulte added the please test label (Trigger testing by creating local PR branch) on Apr 29, 2025
@@ -107,6 +107,7 @@ def test_resource_unrolled_rnn(rnn_layer, backend, io_type, static, reuse_factor
# Subtract 0.5 to include negative values
input_shape = (12, 8)
X = np.random.rand(50, *input_shape) - 0.5
X = np.round(X * 2**16) * 2**-16 # make it exact ap_fixed<32,16>
Contributor
This is actually ap_fixed<17, 0>, not ap_fixed<32, 16>. Though, since you mention the issue is more prevalent with dense unrolled: is there anything compromising bit-exactness between the implementations?
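
For what it's worth, a quick NumPy check of the value grid this rounding produces (a hypothetical snippet, not part of the PR):

```python
import numpy as np

X = np.round((np.random.rand(50, 12, 8) - 0.5) * 2**16) * 2**-16

# Every value is an integer multiple of 2**-16 in roughly [-0.5, 0.5],
# so a sign bit plus 16 fractional bits already covers the grid,
# which is much narrower than the ap_fixed<32,16> the code comment
# suggests.
scaled = X * 2**16
assert np.array_equal(scaled, np.round(scaled))
print(X.min(), X.max())  # close to -0.5 and 0.5
```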

JanFSchulte (Contributor, Author)
I think it might be RNNs in general, or maybe just the LSTM. Jovan made this change to get the tests to behave better in the PyTorch parser case. It seemed to work there, so I'd be in favor of merging this now to get more meaningful test results, and looking into why this is such an issue later.

Labels: please test (Trigger testing by creating local PR branch)
Projects: None yet
2 participants