Dead batteries packages: standard-aifc, standard-sunau, standard-chunk, and audioop-lts #29400
Conversation
Hi! This is the staged-recipes linter and your PR looks excellent! 🚀
Hi! This is the friendly automated conda-forge-linting service. I just wanted to let you know that I linted all conda-recipes in your PR and found them in excellent condition.
@conda-forge/help-python 👋 I could use some input on how this particular set of packages should work in conda-forge. Summary of the problem:
The major problem that I'm anticipating is that these packages collide with standard-library modules in python <=3.12. (That is, aifc, sunau, chunk, and audioop still ship with the standard library there; they were only removed in python 3.13.) On PyPI, downstream projects handle this by depending on these packages only on python >=3.13, via environment markers. To my understanding, we can do this in conda-forge as well, but not with noarch packages. This effectively means that any downstream package that depends on these would have to stop using noarch because we can't use selectors in the requirements spec.

Any input on how to resolve this situation would be much appreciated.
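To make the problem concrete, this is roughly the selector a downstream recipe would need if it gave up noarch; the fragment below is only an illustration of the pattern, not taken from any real feedstock:

```yaml
# hypothetical fragment of a downstream (non-noarch) recipe:
# the selector limits the dependency to pythons where the
# module was removed from the standard library
requirements:
  run:
    - python
    - standard-aifc  # [py>=313]
```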
sha256: 64e249c7cb4b3daf2fdba4e95721f811bde8bdfc43ad9f936589b7bb2fae2e43

build:
  script: {{ PYTHON }} -m pip install . -vv --no-deps --no-build-isolation
This one can probably be noarch, right?
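That is, something along these lines (just sketching the change, untested):

```yaml
build:
  noarch: python
  script: {{ PYTHON }} -m pip install . -vv --no-deps --no-build-isolation
```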
In principle yes, but see my comment/question in the main PR thread about noarch vs selectors #29400 (comment)
Sorry, missed that. You need to set the min python variable; add
{% set python_min = "3.13" %}
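If I remember the noarch convention correctly, the variable then gets used roughly like this (a sketch of the usual pattern; adjust to the actual recipe):

```yaml
{% set python_min = "3.13" %}

requirements:
  host:
    # exact minimum used for building and testing
    - python {{ python_min }}
    - pip
  run:
    # at runtime, anything at or above the minimum is fine
    - python >={{ python_min }}

test:
  requires:
    - python {{ python_min }}
```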
This effectively means that any downstream package that depends on these would have to stop using noarch because we can't use selectors in the requirements spec.
Ah, I misread. OK. If this module is done "right", you can probably install it without clobbering; it will become a stub. If not, you will have to keep this as-is.
If this module is done "right" you probably can install it without clobbering
To clarify, do you mean the implementation of the module, or the packaging of the module?
My understanding is that the module code itself is a direct extraction from old standard-lib code, and doesn't have the smarts to stub itself out. At the same time, I don't think the deadlib maintainers are open to contributions that would add the stub logic, since they want to keep the code essentially as an archive.
Quick follow-up:
- I checked the deadlib aifc source, and verified that it's not doing anything smart to detect collisions (i.e., to detect that it's running on an old python where the module already exists in the standard library).
- OTOH, I also installed it via pip in a python 3.7 environment, and it seems to work. That is, import aifc hits the standard-library module first and does not try to load the deadlib version. I expect that even if it did hit the deadlib version, it would still work, but I'd rather not chance it.
This then raises the question: is it legit to package these deadlibs as noarch that can be installed on any python version? Maybe that's a question for the deadlib maintainers?
This then raises the question: is it legit to package these deadlibs as noarch that can be installed on any python version? Maybe that's a question for the deadlib maintainers?
Given the fact that it clobbers existing files? No. It is better to sacrifice the noarch advantages here. Let's keep these as-is. Thanks for the clarifications.
Given the fact that it clobbers existing files?
It's not clobbering the existing files, but "correct" behavior (i.e., using the standard-library implementation instead of the external package) depends on the module import search order. While this is (to my knowledge) well-defined, it does make me a little nervous.
It is better to sacrifice the noarch advantages here.
For these packages, that's fine. I'm more concerned about the downstream packages that may want to stay noarch and depend on these deadlibs where needed. This is where I'm stuck, and I don't see a simple path forward with the usual packaging pipeline.
I do wonder if it's possible to do something clever here: what if these deadlibs are not built as noarch, and we create stub packages for python <=3.12 that do not install anything? For 3.13+ the packages would install as we currently have them in this PR.
This would make it so that the downstream dependents could still be noarch and depend on these libs, but installing, say, standard-aifc in a 3.12 environment would act as a no-op.
Is such a thing possible?
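As a very rough, untested sketch of what I mean (it relies on conda-build dropping the line whose selector is false, so the repeated script key resolves to a single entry per build):

```yaml
# hypothetical sketch: a non-noarch recipe that installs the module
# only on python >= 3.13 and produces an empty, no-op package otherwise
build:
  number: 0
  script: {{ PYTHON }} -m pip install . -vv --no-deps --no-build-isolation  # [py>=313]
  script: echo "nothing to install; the module ships with python <= 3.12"   # [py<313]

requirements:
  host:
    - python
    - pip  # [py>=313]
  run:
    - python
```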
While this is (to my knowledge) well-defined, it does make me a little nervous.
Same.
This is where I'm stuck, and I don't see a simple path forward with the usual packaging pipeline.
Yeah, better to have this one "arched" and the many others noarch.
Is such a thing possible?
Probably. Not sure if it is worth it, though. IMO, we should fix the "semi-arch" concept so we can do things like this as noarch going forward. However, that is a fix in the recipe format and in conda itself.
sha256: b319a1ac95a09a2378a8442f403c66f4fd4b36616d6df6ae82b8e536ee790908

build:
  script: {{ PYTHON }} -m pip install . -vv --no-deps --no-build-isolation
Maybe this one can be noarch too.
Checklist
- A tarball (url) rather than a repo (e.g. git_url) is used in your recipe (see here for more details).