
Fixed failing test in HEASARC module, adding local test data #2250

Merged
merged 16 commits into from
Mar 25, 2022

Conversation

volodymyrss
Contributor

The test fails with astropy==5.0, which correctly indicates that most of the module's functionality breaks.
There is some complexity in astropy with the version check in 5.0, but I noticed everything works fine with the latest astropy main branch.
Nevertheless, it might be good to just always specify the unit; this will work with all relevant astropy versions.

I also updated the test to account for a server-side change.

I wonder how to prevent this sort of problem from emerging in the future.
It would be good to have non-remote tests, I guess, since only non-remote ones are likely to keep passing.

I could just copy the entire astroquery/heasarc/tests/test_heasarc_remote_isdc.py to astroquery/heasarc/tests/test_heasarc_remote_isdc.py and add monkeypatches according to the docs. Is this the recommended way?
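A minimal sketch of the monkeypatch pattern described in the astroquery testing docs (the class and fixture names follow the documented convention, but the file name, the `get_mockreturn` signature, and the data layout here are illustrative assumptions, not the actual HEASARC test code):

```python
import os

import pytest


def data_path(filename, data_dir=None):
    # Resolve a canned-response file stored next to the test module
    # (the "data" subdirectory is the usual astroquery convention).
    if data_dir is None:
        data_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)), "data")
    return os.path.join(data_dir, filename)


class MockResponse:
    # Bare-bones stand-in for requests.Response; only what the tests touch.
    def __init__(self, content):
        self.content = content


def get_mockreturn(session, method, url, *args, **kwargs):
    # Serve a stored response instead of hitting the real service; the
    # exact signature expected here is illustrative.
    with open(data_path("query_result.dat"), "rb") as f:
        return MockResponse(f.read())


@pytest.fixture
def patch_get(monkeypatch):
    # Redirect all HTTP requests made through requests.Session to the mock.
    import requests
    monkeypatch.setattr(requests.Session, "request", get_mockreturn)
```

Any test that takes `patch_get` as an argument then runs entirely against the local data file.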

@codecov

codecov bot commented Dec 15, 2021

Codecov Report

Merging #2250 (d421fcb) into main (a03f301) will increase coverage by 0.45%.
The diff coverage is 81.81%.

@@            Coverage Diff             @@
##             main    #2250      +/-   ##
==========================================
+ Coverage   62.95%   63.41%   +0.45%     
==========================================
  Files         131      131              
  Lines       17038    17043       +5     
==========================================
+ Hits        10727    10808      +81     
+ Misses       6311     6235      -76     
Impacted Files                  Coverage Δ
astroquery/heasarc/core.py      73.79% <80.00%> (+54.64%) ⬆️
astroquery/utils/commons.py     78.39% <100.00%> (+0.10%) ⬆️


@volodymyrss volodymyrss changed the title Failing test in HEASARC module Fixed failing test in HEASARC module Dec 15, 2021
@keflavich
Contributor

Yes, I think copying over all the remote tests to a non-remote variant would be helpful - it would be great if you could do that!

The other thing we'd like to do is have remote tests that test that we get the right results but are robust against upstream changes. So in this case, we might test that the number of results is >=274 instead of ==274, but even that isn't guaranteed to stay consistent (sometimes entries are consolidated). Another option is to check that the data contain something useful - e.g., assert row_information in table['column'] - something like that.
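The looser checks suggested above might look like this as plain pytest-style assertions (a sketch; the `name` column and the "Crab" entry are illustrative, not taken from the actual test suite):

```python
def check_query_result(table):
    # Assert a floor rather than an exact count: archives mostly grow,
    # so >= survives new entries (though consolidation can still break it).
    assert len(table["name"]) >= 274
    # Spot-check that a known object is present instead of pinning the
    # full result; this is robust to reordering and added rows.
    assert "Crab" in table["name"]
```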

@volodymyrss
Contributor Author

Yes, I think copying over all the remote tests to a non-remote variant would be helpful - it would be great if you could do that!

It just seems like almost complete code repetition.
In fact, I would be using some other custom monkeypatch to collect the data to store anyway.
But ok, if there is no better way, I copy.

The other thing we'd like to do is have remote tests that test that we get the right results but are robust against upstream changes. So in this case, we might test that the number of results is >=274 instead of ==274, but even that isn't guaranteed to stay consistent (sometimes entries are consolidated).

Yeah, something like that, unfortunately. I could narrow-down the query a bit, but it's still not guaranteed, basically no way to be certain, unless the service promises this sort of consistency.

Another option is to check that the data contain something useful - e.g., assert row_information in table['column'] - something like that.

@keflavich
Contributor

True, there would be some code repetition, but there is real value in storing the output data needed for the tests locally, and that's more than a simple copy-paste.

It would really be great if we could set up an automated system to do that (write only remote tests, but cache the results and run the tests locally), but I fear such a system, while elegant, would have too many failure corner cases to be worth the effort.

@pep8speaks

pep8speaks commented Dec 16, 2021

Hello @volodymyrss! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found:

There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻

Comment last updated at 2022-03-25 07:13:36 UTC

@volodymyrss volodymyrss force-pushed the heasarc-unit-issue branch 5 times, most recently from 0e470a9 to 27ad069 Compare December 16, 2021 17:40
@volodymyrss
Contributor Author

True, there would be some code repetition, but there is real value in storing the output data needed for the tests locally, and that's more than a simple copy-paste.
It would really be great if we could set up an automated system to do that (write only remote tests, but cache the results and run the tests locally), but I fear such a system, while elegant, would have too many failure corner cases to be worth the effort.

As I understood it, it's still a complete repetition of the entire test code; I may be wrong.
I quite appreciate the value of storing output data in the repository, but I was not sure there is a way to achieve it efficiently without lots of manual work.

So I made an attempt to keep the options of harvesting data and of running tests both remotely and locally through parametrization (i.e. keeping a single test source, with no repetition).
There is a description, and the attempt itself, here: https://github.com/oda-hub/astroquery/blob/27ad069bc5dd9cb034a8bad531371d47c840bc7c/astroquery/heasarc/tests/parametrization.py#L9-L15

  • As long as the test data is there, it runs exactly like the standard astroquery approach (depending on the remote_data marker).
  • If the test data is not there and remote_data is allowed, the data will be generated; it must then be copied and committed manually.

Hopefully the implementation is not too convoluted; actually, it's hardly more than the advised astroquery monkeypatch.
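The save-or-replay idea can be sketched roughly as follows (the 8-character hashed file name mirrors the `8fe6c881.dat` style visible later in this PR, but `fileid_for` and `get_or_save` are hypothetical helper names, not the actual implementation):

```python
import hashlib
import os


def fileid_for(request_key):
    # Short, stable identifier derived from the request; used as the
    # name of the stored response file.
    return hashlib.md5(request_key.encode("utf-8")).hexdigest()[:8]


def get_or_save(request_key, data_dir, fetch, remote_allowed):
    """Replay a stored response if present; otherwise fetch and store it.

    `fetch` is any callable performing the real request and returning
    text; a newly stored file must then be committed manually.
    """
    path = os.path.join(data_dir, fileid_for(request_key) + ".dat")
    if os.path.exists(path):
        with open(path) as f:
            return f.read()
    if not remote_allowed:
        # Local-only run with no committed data: fail loudly, like the
        # FileNotFoundError seen in the review below.
        raise FileNotFoundError(path)
    content = fetch(request_key)
    with open(path, "w") as f:
        f.write(content)
    return content
```

With `remote_allowed` driven by the remote_data marker, the same test body either replays committed data or regenerates it.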

Please let me know what you think!
If that's too convoluted, I will extract it into an external tool.

I was just curious to make the attempt; I do not really claim it's completely generic, and I hope it's sufficiently noninvasive.
It works for all my cases.

Meanwhile, with this applied, the CI tests for this module ran for the first time, and failed... I will get to that.

@volodymyrss volodymyrss changed the title Fixed failing test in HEASARC module Fixed failing test in HEASARC module, adding local test data Dec 16, 2021
@keflavich
Contributor

This is brilliant - thanks! Yes, I think if you can make this work (looks like you have, but I haven't reviewed carefully yet), this is the best solution!

Could you make the 'parametrization' code general and move it into the utils/ subpackage so we can use it for all modules? I think that would be best done as a separate PR. Or we can just finish up this one and then move it to the general case later.

@volodymyrss
Contributor Author

This is brilliant - thanks! Yes, I think if you can make this work (looks like you have, but I haven't reviewed carefully yet), this is the best solution!

Good to hear!
I hope there is no hidden issue, though. It works here; I will think about it more.

Could you make the 'parametrization' code general and move it into the utils/ subpackage so we can use it for all modules? I think that would be best done as a separate PR. Or we can just finish up this one and then move it to the general case later.

I think I prefer the second option: that way I test everything carefully on one case, while also fixing that case, and then move it in another PR.
Right now, a few tests still fail, but I suspect it might be because they were never really run before.

@volodymyrss
Contributor Author

Ok, I am done with the tests, though I am not sure the approach is the best possible.

Concerning the parametrization: after this is done, can we discuss more in another PR?

For example, maybe it's strange that the [save] test variation will always be skipped in CI. Maybe that's not a problem, since remote tests are also often skipped. Or there could be an option to deselect them, or even a way to run them in another CI workflow to rebuild all the saved data when possible. Maybe that's too much.

A minor challenge I found here is that sometimes, with different dependencies, the request for the same test was slightly different (due to some rounding/conversion assumptions), even though the response is the same.

  • It could be addressed by storing two variations of test data for these cases. If [save] is only triggered when there is no test data at all, it's necessary to keep a bit more track of which data should be available. There is then a minor danger of storing more data than is ever used; maybe which files were used can be inspected automatically.
  • In this PR I just modified the request in the one case (the same strange 0,0 request) which led to this condition. That request was kind of pathological anyway.
  • Using an option like --remote-data could also affect this process, but that is again more invasive.

Member

@bsipocz bsipocz left a comment

I get an error for a missing file, and there are a few fixable issues (and some nitpick comments). Overall looks great, thanks!

E           FileNotFoundError: [Errno 2] No such file or directory: '/private/var/folders/dc/hsm7tqpx2d57n7vb3k1l81xw0000gq/T/astroquery-test-sqy_voa0/lib/python3.9/site-packages/astroquery/heasarc/tests/data/8fe6c881.dat'

Comment on lines 123 to 126
),
],
),
],
Member

My pet peeve nitpick, but it would be nice not to use black, to avoid things like this: basically, all 10 lines could be condensed into ~3-4, and that would even improve readability.

Contributor Author

I personally find black's version here quite readable for such a nested structure.
Of course, no problem to adjust as you wish!
But I struggle to put it in 3-4 short and readable lines; I got it down to 10.
Could you please make a suggestion?

return data_path(fileid, output=output)


# TODO: are get_mockreturn args up-to-date in example in https://astroquery.readthedocs.io/en/latest/testing.html ?
Member

Possibly the docs at that link are outdated.

Contributor Author

@volodymyrss volodymyrss Dec 17, 2021

Should I make a separate issue/PR and delete this comment?

Contributor

yes, and let's correct the get_mockreturn documentation at that URL at the same time (https://github.com/astropy/astroquery/blob/main/docs/testing.rst)

@@ -15,6 +16,14 @@
__all__ = ['Heasarc', 'HeasarcClass']


def Table_read(*args, **kwargs):
# why does if commons.ASTROPY_LT_5_0 not work on Windows?
Member

That should work; I can't recall running into issues with it in previous PRs. Though for unit_parse_strict I think it should be LT_5_1, as it's a new feature of the upcoming version rather than the already-released 5.0.

Member

So if needed, do add an ASTROPY_LT_5_1 and use that for if/else rather than the try/except.
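The suggested if/else gate might look like the sketch below. The crude version parsing is a simplified stand-in for astropy's own `minversion` helper, and `read_func` abstracts over `Table.read`; `unit_parse_strict` is the astropy 5.1 feature under discussion, so check the astropy docs before relying on it for a given reader:

```python
def version_tuple(version):
    # "5.0.4" -> (5, 0, 4); pre-release pieces are parsed crudely here,
    # which astropy's real minversion helper handles properly.
    parts = []
    for piece in version.split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        if not digits:
            break
        parts.append(int(digits))
    return tuple(parts)


def is_lt(installed, threshold):
    return version_tuple(installed) < version_tuple(threshold)


# In astroquery.utils.commons this would be computed once, e.g.
# ASTROPY_LT_5_1 = is_lt(astropy.__version__, "5.1")

def read_table(read_func, *args, astropy_lt_5_1, **kwargs):
    # Only pass unit_parse_strict where it exists (astropy >= 5.1),
    # instead of wrapping the call in try/except.
    if astropy_lt_5_1:
        return read_func(*args, **kwargs)
    return read_func(*args, unit_parse_strict="silent", **kwargs)
```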

Contributor Author

Ok, sorry, I had a problem with some recent minversion elsewhere, so I misinterpreted this.

So you suggest I add ASTROPY_LT_5_1 in utils? It will not be satisfied before the 5.1 release, right? Is it ok that the dev astropy test will fail?

Contributor Author

I changed it as suggested, and it does fail on astropy dev, as expected.

Member

No, I think we'd better hack around it temporarily until the 5.1 release is out.

As I see it, this is the only remaining issue, and there are a few merge commits, so I'll go ahead, rebase the PR, and add a commit that addresses this variable, and then we're good to go. Ultimately we'd better fix the unit parsing, as the issue came up recently in #2333, too.

README.rst Outdated
@@ -1,196 +0,0 @@
`Documentation`_ | Blog_ | `View on Github`_ | `Download Stable ZIP`_ | `Download Stable TAR`_
Member

Please do not remove this file. If possible, squash the add-back commit into the one that removed it.

Contributor Author

Hm, file restored. I already squashed. I will squash at the end too, right? Then an extra commit now does not matter.

@bsipocz bsipocz added this to the v0.4.5 milestone Dec 17, 2021
@bsipocz
Member

bsipocz commented Dec 17, 2021

@volodymyrss - to reflect on your question about local vs remote tests: yes, currently we only fail builds on non-remote test failures, but we run the remote tests from a weekly cron and also manually. E.g., here is a list of modules with failing remote tests: #2203. As you see, we fixed the vast majority already, but unfortunately for heasarc it wasn't clear whom to ping.

@volodymyrss
Contributor Author

@volodymyrss - to reflect on your question about local vs remote tests: yes, currently we only fail builds on non-remote test failures, but we run the remote tests from a weekly cron and also manually.

Good to know! But only with one Python version and one platform, right?
Of course, remote tests check that the assumptions about the services did not break.
So you repeat this sort of PR regularly, as services and dependencies change?
Maybe it's not always obvious how to disentangle breaking services from breaking dependencies; it was not completely trivial in this heasarc PR.

It would be very useful for me to know whether heasarc (and some other modules) are usable with recent astropy on a variety of platforms.

So I suppose it's still quite useful to add all possible responses for multi-platform, multi-version mock tests?

So what do you think about adding this parametrization to utils?
I think it might work (perhaps with small modifications) for most services, and it will make the modules that use it more reliable, I hope.
For example, I suppose heasarc will no longer break as easily as it did, right?

I could propose a change for some other modules I care about, in part some of those from #2203.

E.g., here is a list of modules with failing remote tests: #2203. As you see, we fixed the vast majority already, but unfortunately for heasarc it wasn't clear whom to ping.

Thanks for the information!

I am not from HEASARC, so I cannot be fully responsible for this.
But we supply data for one of the missions (INTEGRAL). Also, we maintain our own HEASARC site (already included in astroquery), which was a special kind of US contribution to INTEGRAL.
Distributing INTEGRAL data is our (ISDC) core function, and in recent years we started advising people to use astroquery to access it.
So perhaps I care about this module more than most users of HEASARC, and I would be happy to help when and if feasible.

@volodymyrss volodymyrss requested a review from bsipocz December 17, 2021 20:46
@volodymyrss
Contributor Author

Should there be a changelog entry for heasarc? It's only tests; no functionality is added or changed.

As suggested by @bsipocz, I added ASTROPY_LT_5_1 since it was not there. Should there be a changelog entry for that?

@bsipocz bsipocz modified the milestones: v4, v0.4.7 Mar 19, 2022
@bsipocz
Member

bsipocz commented Mar 25, 2022

OK, so I've rebased this PR to remove the merge commits, and to squash into one all the commits that add, remove, and otherwise manipulate the added datasets, to avoid unnecessarily growing the git repository. However, I'm unable to push back to the branch of the PR, as it was opened from under an org (maintainers get write access to PR branches if a tickbox is checked when opening a PR, but as far as I know this only works for PRs opened from forks under personal accounts rather than from under orgs).

@volodymyrss - there are two ways to work around this; please let me know which one you prefer:

  • make another PR from a personal fork that I can edit. I'm happy to do this from the rebased branch; the commits will still have you as author.
  • force push the branch that I rebased back to this PR (https://github.com/bsipocz/astroquery/tree/heasarc-unit-issue_rebased), you or anyone with write access to the oda-hub org can do that.
  • (you could consider adding write access for the three of us maintainers to the org's fork; in that case we could push code back to the branches of PRs if this ever comes up again. Some orgs choose to do this; others switched to opening PRs from personal forks instead).

@volodymyrss
Contributor Author

@bsipocz perfect, thank you! I force-pushed the branch, and also gave you access to oda-hub/astroquery.

@bsipocz bsipocz merged commit cf7fe2e into astropy:main Mar 25, 2022
@bsipocz
Member

bsipocz commented Mar 25, 2022

Thanks @volodymyrss!

@bsipocz
Member

bsipocz commented Mar 28, 2022

@volodymyrss - could you have a look at these failing CI builds? e.g. https://github.com/astropy/astroquery/runs/5726304297?check_suite_focus=true

It was all passing before merging this, but I suspect there may be a bug in the hashing part, as a few tests are now failing with missing files, while they are fine for most of the version combinations, and I'm unable to reproduce the failure locally.
Thanks!

@bsipocz bsipocz mentioned this pull request Mar 29, 2022
@volodymyrss
Contributor Author

@volodymyrss - could you have a look at these failing CI builds? e.g. https://github.com/astropy/astroquery/runs/5726304297?check_suite_focus=true

It was all passing before merging this, but I suspect there may be a bug in the hashing part, as a few tests are now failing with missing files, while they are fine for most of the version combinations, and I'm unable to reproduce the failure locally. Thanks!

The problem is that the requests are actually different. The difference is in the rounding of floats when degrees are converted from astropy SkyCoord values. It might have some effect on the actual response too, so perhaps it's good that it was caught.

This should help:

#2347

It's tricky to reproduce, and also not obvious from the error, so I suggest #2348, which helps to understand cases when the request is subtly different.
