Conversation

h-mayorquin (Contributor):

Same as #1524 but for Plexon 2.

Also, I am exposing the number of retries, since that is non-deterministic in plexon2 and it has been failing in some of our CI in neuroconv because the number is too small.


    self.pl2reader = PyPL2FileReader(pl2_dll_file_path=pl2_dll_file_path)

    reading_attempts = 10
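As a sketch of what exposing the retry count enables (the helper function and the fake opener below are illustrative, not neo's actual API; only the first-success-exits behavior mirrors the PR):

```python
def open_with_retries(open_fn, num_attempts=10):
    """Call ``open_fn`` until it returns a non-zero handle.

    Illustrative only: ``open_fn`` stands in for the PL2 DLL open call,
    which can fail non-deterministically; ``num_attempts`` plays the role
    of the exposed ``reading_attempts`` parameter.
    """
    handle = 0
    for _ in range(num_attempts):
        handle = open_fn()
        if handle != 0:  # valid handle: exit the loop early
            break
    return handle


# Hypothetical flaky opener that succeeds on the third call
calls = {"count": 0}

def flaky_open():
    calls["count"] += 1
    return 7 if calls["count"] >= 3 else 0

handle = open_with_retries(flaky_open, num_attempts=10)
```

The loop returns the first valid handle it sees; callers running in flakier environments can simply raise `num_attempts` without needing a code change.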
Contributor:

Is it possible that you have a good attempt in the middle and a bad attempt at the end of the range, which then messes things up so that you just have to guess when to end on a "good attempt"? I.e., is there a smarter way for us to check, like:

    for i in range(num_attempts):
        read
        if read is good:
            break

h-mayorquin (Contributor, author):

That's how it is right now. If the handle is good you exit the loop:

    if self.pl2reader._file_handle.value != 0:
        # File handle is valid, exit the loop early
        break

In other words, it exits on the first success.

Contributor:

Sorry, I missed that line. I didn't expand the diff.

h-mayorquin (Contributor, author):

It is a good suggestion though!

Contributor:

And so it silently fails? We have yet to see any stochastic failures on our test files so far. It's just so weird for it to fail. Has Plexon commented on this?

h-mayorquin (Contributor, author) commented Aug 29, 2024:

You did at the beginning before we made this patch:

#1495 (comment)

The re-attempt mechanism solves the problem almost all of the time, except when it does not. For some reason it has been failing in neuroconv's docstring tests.

The guy I am discussing this with is from Plexon.

Contributor:

I meant test failures since your PR. It seems like the 10 tries have always worked for us thus far. It seems weird that the flakiness is itself so flaky, but based on the discussion with Plexon there, they also seem to be a bit confused by this issue. So I'm fine with just letting the person try as many times as they want.

h-mayorquin (Contributor, author):

Yeah, it is kind of fishy that it always fails in our doctests. But if I discover that more attempts do indeed solve the problem and that different environments require different numbers of attempts, I don't want to create a new PR every time, so it is better to expose it.

It is unlikely that we will find a root-cause solution if the Plexon people are also surprised by it; it is probably a Wine/DLL-related problem.

zm711 (Contributor) left a review:

This looks pretty good to me. @samuelgarcia or @alejoe91 want to take a quick scan of this one?

signal_streams = np.array(signal_streams, dtype=_signal_stream_dtype)

self.stream_id_samples = {}
self.stream_index_to_stream_id = {}
Contributor:

Why this? stream_id = header["signal_streams"]["id"][stream_index] should work, no?

Contributor:

To be honest, I think Heberto and I like the slightly more explicit variable names in order to quickly know where to get things. It's tricky to remember the complicated header structure sometimes, so the separate variables just make the steps a little more explicit.

h-mayorquin (Contributor, author):

Yeah, I think that could work as well; it just did not occur to me. I don't have a strong preference here, Sam. As Zach mentions, I have a preference for explicit, but in this case this was the idea that came to me first. If you prefer, I can eliminate it.

Contributor:

OK. No worries. Just curious.
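For context, the two equivalent lookups under discussion could be sketched like this (the dtype and data below are stand-ins; neo's real `_signal_stream_dtype` has a fixed field layout that this only approximates):

```python
import numpy as np

# Stand-in for neo's signal-stream header dtype (illustrative fields only)
_signal_stream_dtype = [("name", "U64"), ("id", "U64")]

signal_streams = np.array([("WB", "0"), ("FP", "1")], dtype=_signal_stream_dtype)

# Option A: an explicit mapping built once, as done in this PR
stream_index_to_stream_id = {
    index: stream["id"] for index, stream in enumerate(signal_streams)
}

# Option B: index the header array directly, as suggested in the review
stream_index = 1
stream_id = signal_streams["id"][stream_index]
```

Both routes yield the same id; Option A trades a little memory for a named lookup, while Option B reads the header structure directly each time.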

@zm711 (Contributor) commented Sep 4, 2024:

I think we should merge #1546 first and then incorporate its bug fix into this PR.

@h-mayorquin (Contributor, author):

> I think we should merge #1546 first and then incorporate its bug fix into this PR.

Seems that there were no conflicts.

@zm711 (Contributor) commented Sep 4, 2024:

> Seems that there were no conflicts.

Cool. I didn't look too closely to see whether you had touched those specific lines or not, so I just wanted to check that first. We can wait for Sam to decide whether you should use the header instead of the dict you made, and then we can merge after he decides on that.

@samuelgarcia (Contributor):

This is OK for me. Can I merge?

zm711 (Contributor) left a review:

Good by me. Feel free to merge, Sam, or I will merge this afternoon :)

@zm711 merged commit 60b26d4 into NeuralEnsemble:master on Sep 5, 2024 (3 checks passed).
@h-mayorquin deleted the fixate_plexon2_streams branch on September 5, 2024.