Feature/devops ci #3799
base: develop2
Conversation
Resolved review threads (outdated):
devops/continuous_integration/packages_pipeline/single_configuration.rst
devops/continuous_integration/packages_pipeline/multi_configuration.rst
Some thoughts
$ conan remove "*" -c  # Make sure no packages from last run
$ conan remote remove "*"  # Make sure no other remotes defined
# Add develop repo, you might need to adjust this URL
$ conan remote add develop http://localhost:8081/artifactory/api/conan/develop
While for the purpose of the tutorial we need to run such commands individually, it could be useful to promote awareness of better organizational tools such as conan config install https://path-to-config.git
There are more such places further on; tweaking configuration at intermediary steps should be considered a bad practice.
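As a hedged illustration of that suggestion (the repository layout sketched in the comments is hypothetical, but conan config install does recognize these file names and copies them into the Conan home):

$ conan config install https://path-to-config.git
# such a config repo would typically contain, e.g.:
#   remotes.json  - the approved remotes, replacing manual `conan remote add` calls
#   global.conf   - organization-wide configuration defaults
#   profiles/     - shared build profiles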
$ cd ai
$ conan create . --build="missing:ai/*" -s build_type=Release --format=json > graph.json
$ conan list --graph=graph.json --graph-binaries=build --format=json > built.json
Side note: if we do --build=missing, it'd fail to pre-upload some of our dependencies whenever the root package build itself fails. That could be frustrating if the dependencies are something like Qt or bigger.
.. code-block:: bash
   :caption: Debug build

   $ conan create . --build="missing:ai/*" -s build_type=Debug --format=json > graph.json
In our practice, we've found that it's easier to export on one single machine before building, since some local [mis]configuration may otherwise affect the result.
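A possible sketch of that "export once" step, assuming the ai/1.1.0 reference and develop remote from the tutorial (conan upload --only-recipe uploads the recipe without binaries):

$ conan export .                                     # create the recipe revision once
$ conan upload "ai/1.1.0" -r=develop --only-recipe   # publish the recipe only
# build agents then all build from that single shared recipe revision:
$ conan install --requires=ai/1.1.0 --build=ai/1.1.0 -s build_type=Release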
.. code-block:: bash

   $ conan remove "*" -c  # Make sure no packages from last run
   $ conan remote remove "*"  # Make sure no other remotes defined
The cleanup routine is, in practice, an implementation detail of a specific CI setup. For a workshop, it'd be handier to pack everything tutorial-related into something like source cleanup.sh instead of explaining it every time.
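A hypothetical cleanup.sh bundling the tutorial's recurring steps, as suggested:

#!/bin/bash
# cleanup.sh - reset the local Conan state for the tutorial (sketch)
conan remove "*" -c       # no packages left over from the last run
conan remote remove "*"   # no other remotes defined
conan remote add develop http://localhost:8081/artifactory/api/conan/develop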
As we described before while presenting the different server binary repositories, the idea is that package builds will use by default the ``develop`` repo only, which is considered the stable one for developer and CI jobs.

Let's make sure we start from a clean state:
For a text tutorial, it'd be a better idea to cover this point at the beginning, where you could provide different implementation experiences, insights, and alternatives. That way you'd keep the examples simpler without confusing readers.
$ conan install --requires=engine/1.0 --build=engine/1.0
$ conan install --requires=game/1.0 --build=game/1.0

We are executing these commands manually, but in practice, it would be a ``for`` loop in CI executing over the json output. We will see some Python code later for this. At this point we wanted to focus on the ``conan graph build-order`` command, but we haven't really explained how the build is distributed.
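For illustration, a minimal shell sketch of such a loop; the JSON layout assumed here (an "order" list of levels whose entries carry a "ref" field) follows the conan graph build-order --format=json output, which may vary between Conan versions:

$ conan graph build-order --requires=game/1.0 --build=missing --order-by=recipe --format=json > build_order.json
$ for ref in $(jq -r '.order[][].ref' build_order.json); do
>     conan install --requires="$ref" --build="$ref"
> done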
Maybe some re-ordering of these parts is in order, since we name it distributed build but do not in fact explain it.
- It is not recommended to use other package reference fields, such as the ``user`` and ``channel``, to represent changes in the source code, or other information like the git branch, as this becomes "viral", requiring changes in the ``requires`` of the consumers. Furthermore, they don't implement any logic in the build model with respect to which consumers need to be rebuilt.
A good idea would be to provide some insight into where user and channel can be used. After all these years, I still believe override is a useful dev tool which matches nicely with a custom user/channel, helping to make dev packages public without exposing the dev remote.
Great job! I have a few suggestions, but otherwise this is mostly a wording review
A magnificent read! Really glad that what I had in mind for our CI so far fits nicely with many points described here! Left a couple of comments on something that I think the tutorial should explain a bit more.
Out of curiosity: how does a CCI fork fit into this picture? Does it essentially become just one more source of packages that then get promoted into products?
also conditioned to the same logic, and such logic also evolves and changes with every new revision and version.

In C and C++ projects the "products" pipeline becomes more necessary and critical than in other languages due to the compilation model with headers textual inclusions becoming part of the consumers' binary artifacts and due to the native artifacts linkage models.
I feel that some conclusion is needed after everything that was said above. 🤔 I get that the advice is "don't make your CI deduce that if mathlib changes then you need to rebuild ai, and then rebuild engine since ai has changed, and then rebuild game...", but... the "question" kinda remains open.
Do I, as a CI engineer, just talk to the Important People and then proclaim "game and mapviewer have been declared as the main and only products, and thus we have CI only for them"?
Or should this be a more dynamic and programmatic approach, in essence determining all the root nodes in the dependency graph?
Or must I not even think about traversing the graph at all, since I will inevitably fall victim to the same issue of conditional dependencies?
This state of the ``develop`` repository will have the following behavior:

- Developers installing ``game/1.0`` or ``engine/1.0`` will by default resolve to latest ``ai/1.1.0`` and use it. They will find pre-compiled binaries for the dependencies too, and they can continue developing using the latest set of dependencies.
- Developers and CI that were using a lockfile that was locking ``ai/1.0`` version, will still be able to keep working with that dependency without anything breaking, as the new versions and package binaries do not break or invalidate the previous existing binaries.
I think the doc may also clarify what happens with the lockfile later in this workflow... Is it supposed to be updated later, once the Important People decide that updating game to use ai/1.1.0 is a Good Idea? Should there be some scripting magic that would automatically bump the versions and revisions in the lockfile, once the products -> develop promotion completes?
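One conceivable answer, sketched with existing Conan 2 commands (whether this is the intended workflow is exactly the open question here): regenerate the lockfile against the refreshed develop repo once the promotion completes:

$ conan lock create --requires=game/1.0 --lockfile-out=conan.lock -r=develop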
The **packages pipeline** will build, create and upload the package binaries for the different configurations and platforms, when some developer is submitting some changes to one of the organization repositories source code. For example if a developer is doing some changes to the ``ai`` package, improving some of the library functionality, and bumping the version to ``ai/1.1.0``. If the organization needs to support both Windows and Linux platforms, then the package pipeline will build the new ``ai/1.1.0`` both for Windows and Linux, before considering the changes are valid. If some of the configurations fail to build under a specific platform, it is common to consider the changes invalid and stop the processing of those changes, until the code is fixed.
What I'm missing in this workflow is a solution for 'nightly' and MR/PR builds. We're usually implementing new features on feature branches which then run MR pipelines to verify compilation and unit test integrity. Packages generated from these feature branches should be consumable by others for integration testing, etc. Also, after integrating an MR into the mainline branch we don't want to create a release yet. We want to have a nightly build and package that can also be consumed by other projects. In Conan 1 we used channels to distinguish between the different package types, but as far as I understood, these should no longer be used for this purpose. However, I don't see a way to do the same thing by leveraging multiple repositories, because in this scenario there is no way to mix dependencies from release, feature branch and nightly channels without creating temporary repositories and fiddling around with the remote order.
It's also unclear how to version nightly and MR builds. At that point we haven't decided about a release version yet, so we could either use:
- prerelease suffixes like pkg/1.2-pre.1, but they can only be enabled/disabled in the consumer conanfile - which means changing the conanfile all the time - or globally for all packages, which we don't want because it's not selective enough
- 'nightly' as a version string, like pkg/nightly - this is difficult for the consumer because it's unclear what the current nightly revision is based on, the string is semver-incompatible, older revisions cannot be told apart easily, and there's no automatic resolving possible against '*' or version ranges
- user or channels, like pkg/1.2@nightly (discouraged)
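For reference, the global (non-selective) switch alluded to in the first bullet is an existing Conan 2 conf; the per-package selectivity is the part that is missing:

$ echo "core.version_ranges:resolve_prereleases=True" >> ~/.conan2/global.conf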
And another open question is how to disambiguate between packages with the same name coming from different contexts. For example, we internally mirror Conan Center Index and make these recipe available for consumption in our recipes. However, there are some cases where we have existing company-specific recipes that happen to have the same name as recipes in the Conan Center Index. This would cause clashes when trying to use both the index mirror and the internal deployment remotes in the same configuration. I think it would be good to be able to specify a source remote for each of the requirements to make Conan consider only that remote. Other dependency managers use this solution (e.g. Cargo).
> This would cause clashes when trying to use both the index mirror and the internal deployment remotes in the same configuration. I think it would be good to be able to specify a source remote for each of the requirements to make Conan consider only that remote. Other dependency managers use this solution (e.g. Cargo).
There are a few ways of handling that with Conan currently - when solving a graph to generate a lockfile, you can specify which remotes are considered - so the lockfile would then only show the revisions that are available in the remote you want, and subsequent calls would only resolve those.
Alternatively, the remote configurations do support a filter of which packages are allowed to be considered from a remote, see the --allowed-packages option in conan remote add and conan remote update.
You can restrict a remote to only serve certain packages, or to exclude some packages; for example, the following would prevent the cmake package from being considered from the Conan Center remote:
conan remote update --allowed-packages='~cmake/*' conancenter
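A couple of pattern variants for the same flag (the internal remote name is hypothetical; the leading ~ negates a pattern, as in the cmake example above):

$ conan remote update --allowed-packages='mylib/*' internal      # only serve mylib from this remote
$ conan remote update --allowed-packages='~cmake/*' conancenter  # serve everything except cmake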
@jcar87 Thanks for the quick answer. Both options could work but I think they're moving the decision to the wrong layer, from an architectural point of view. I'd rather have the recipe decide about the package sources because only the recipe knows what package it needs to be buildable. This should not be injected via environment or Conan config. It is good that it CAN be injected that way, because sometimes you want to influence the build from the pipeline for example, but it should not be the recommended way for handling package source selection for recipe dependencies.
Thanks very much for all the feedback! As this PR is very large and covers deep topics, I'll suggest the following:
Many thanks!
- Developers installing ``game/1.0`` or ``engine/1.0`` will by default resolve to latest ``ai/1.1.0`` and use it. They will find pre-compiled binaries for the dependencies too, and they can continue developing using the latest set of dependencies.
- Developers and CI that were using a lockfile that was locking ``ai/1.0`` version, will still be able to keep working with that dependency without anything breaking, as the new versions and package binaries do not break or invalidate the previous existing binaries.
It'd be nice to have some commentary on what failure modes are likely to arise and how they should be handled.
e.g. what if an intermediate dependency build fails due to one of the changed packages during the product pipeline? In this case specifically, it's unclear how a user would develop, test, and deploy the fix to the intermediate package.
Also relevant would be the workflow for a user to adjust the version ranges for a product. If the version ranges are updated and the resulting build fails, what is the workflow to develop, test, and deploy fixes for the dependent packages?
@memsharded More specific question/example about rolling out a major version change.
If the packages pipeline only uses the develop repository, and never the products repository, how are major versions rolled out? e.g. if we create a breaking API change in ai/2.0.
Based on my read, this would be the order of events:
- ai/2.0 passes the packages pipeline, using mathlib/1.0 from develop
- ai/2.0 is promoted to the products repository
- products pipeline would run and pass, but not use ai/2.0 b/c engine depends on ai/[>=1.0 <2]
- so ai/2.0 is stuck in products repository until engine updates its version range
- engine updates its version range to ai/[>=2.0 <3], but its package pipeline won't be able to find ai/2.0 b/c it is only looking in develop
It seems like the packages pipeline needs to look at both develop and product repositories so changes like this can work up the chain.
My questions are:
- does this match your experience? should packages pipeline look at both develop and product repositories? (if so, probably good to mention that in the tutorial)
- What order should it look at develop/product repositories? Currently the product pipeline adds the product repo first, then develop. This makes sense for the product pipeline that is trying to promote packages from product to develop. However, it may make sense for the packages pipeline to be reversed: first look at the stable develop, second look at the products repo only if a suitable stable package isn't found.
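For concreteness, the ordering proposed in the second question would look like this (URLs follow the tutorial's local Artifactory; Conan consults remotes in their configured order):

$ conan remote remove "*"
$ conan remote add develop http://localhost:8081/artifactory/api/conan/develop
$ conan remote add products http://localhost:8081/artifactory/api/conan/products
# stable develop is consulted first; products only when no match is found there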
These are the tasks that the above Python code is doing:

- For every ``package`` in the build-order, a ``conan install --require=<pkg> --build=<pkg>`` is issued, and the result of this command is stored in a ``graph.json`` file
Is there a special reason why conan install is used here, instead of conan create?
I'm also asking because conan install does not seem to execute test...
Co-authored-by: Carlos Zoido <[email protected]> Co-authored-by: Michael Farrell <[email protected]> Co-authored-by: Abril Rincón Blanco <[email protected]> Co-authored-by: Artalus <[email protected]>
"testing" to the "release" repository. | ||
|
||
|
||
There are different ways to implement and execute a package promotion. Artifactoy has some APIs that can be |
- There are different ways to implement and execute a package promotion. Artifactoy has some APIs that can be
+ There are different ways to implement and execute a package promotion. Artifactory has some APIs that can be
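As an aside, a promotion can also be sketched purely with client commands instead of Artifactory's APIs (repository names follow the tutorial; in a real pipeline the package list would come from the build's JSON output):

$ conan download "game/1.0" -r=products   # fetch recipe and binaries from the source repo
$ conan upload "game/1.0" -r=release -c   # push them to the target repo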
Very initial draft of the first sections, to get early team feedback.
It is better reviewed by generating the html docs, not in code.
Depends on conan-io/examples2#153
Close #3849
Close conan-io/conan#3003
Close conan-io/conan#6589
Close conan-io/conan#6045
Close conan-io/conan#8587
Close conan-io/conan#16309