
Fix broken links (#1066)
* fix links

* fix links

* fix links

* fix links

* fix links

* fix links

* fix links

* fix links
TessFerrandez authored Aug 26, 2024
1 parent ae79ac7 commit 03d60d2
Showing 104 changed files with 286 additions and 429 deletions.
1 change: 0 additions & 1 deletion docs/.pages
@@ -3,7 +3,6 @@ nav:
- Engineering Fundamentals Checklist: engineering-fundamentals-checklist.md
- The First Week of an ISE Project: the-first-week-of-an-ise-project.md
- Who is ISE?: ISE.md
- Contributing: contributing.md
- Agile Development: agile-development
- Automated Testing: automated-testing
- CI/CD: CI-CD
4 changes: 2 additions & 2 deletions docs/CI-CD/README.md
@@ -1,10 +1,10 @@
# Continuous Integration and Continuous Delivery

[**Continuous Integration (CI)**](continuous-integration.md) is the engineering practice of frequently committing code in a shared repository, ideally several times a day, and performing an automated build on it. These changes are built with other simultaneous changes to the system, which enables early detection of integration issues between multiple developers working on a project. Build breaks due to integration failures are treated as the highest priority issue for all the developers on a team and generally work stops until they are fixed.
[**Continuous Integration (CI)**](./continuous-integration.md) is the engineering practice of frequently committing code in a shared repository, ideally several times a day, and performing an automated build on it. These changes are built with other simultaneous changes to the system, which enables early detection of integration issues between multiple developers working on a project. Build breaks due to integration failures are treated as the highest priority issue for all the developers on a team and generally work stops until they are fixed.

Paired with an automated testing approach, continuous integration also allows us to test the integrated build, so we can verify that not only does the code base still build correctly, but that it is also still functionally correct. This is also a best practice for building robust and flexible software systems.

[**Continuous Delivery (CD)**](continuous-delivery.md) takes the **Continuous Integration (CI)** concept further to also test deployments of the integrated code base on a replica of the environment it will ultimately be deployed on. This enables us to learn as early as possible about any unforeseen operational issues that arise from our changes, and about gaps in our test coverage.
[**Continuous Delivery (CD)**](./continuous-delivery.md) takes the **Continuous Integration (CI)** concept further to also test deployments of the integrated code base on a replica of the environment it will ultimately be deployed on. This enables us to learn as early as possible about any unforeseen operational issues that arise from our changes, and about gaps in our test coverage.

The goal of all of this is to ensure that the main branch is always shippable, meaning that we could, if we needed to, take a build from the main branch of our code base and ship it to production.
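
As an aside (not part of the linked pages), a minimal sketch of a CI workflow that builds and tests every change might look like the following, assuming GitHub Actions and placeholder `make` targets:

```yaml
# Minimal CI sketch: build and test every change pushed or proposed against main.
# The build and test commands are placeholders for the project's own tool chain.
name: ci
on:
  push:
    branches: [main]
  pull_request:

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build
        run: make build
      - name: Test
        run: make test
```

A gate like this on every commit is what keeps the main branch shippable in practice.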

9 changes: 4 additions & 5 deletions docs/CI-CD/continuous-integration.md
@@ -44,8 +44,8 @@ An automated build should encompass the following principles:
### Code Style Checks

- Code across an engineering team must be formatted to agreed coding standards. Such standards keep code consistent and, most importantly, easy for the team and customer(s) to read and refactor. Code styling consistency encourages collective ownership for project scrum teams and our partners.
- There are several open source code style validation tools available to choose from ([code style checks](https://github.com/checkstyle/checkstyle), [StyleCop](https://en.wikipedia.org/wiki/StyleCop)). The [Code Review recipes section](../code-reviews/recipes/README.md) of the playbook has suggestions for linters and preferred styles for a number of languages.
- Your code and documentation should avoid the use of non-inclusive language wherever possible. Follow the [Inclusive Linting section](recipes/inclusive-linting.md) to ensure your project promotes an inclusive work environment for both the team and for customers.
- There are several open source code style validation tools available to choose from ([code style checks](https://github.com/checkstyle/checkstyle), [StyleCop](https://en.wikipedia.org/wiki/StyleCop)). The [Code Review recipes section](../code-reviews/recipes/) of the playbook has suggestions for linters and preferred styles for a number of languages.
- Your code and documentation should avoid the use of non-inclusive language wherever possible. Follow the [Inclusive Linting section](./recipes/inclusive-linting.md) to ensure your project promotes an inclusive work environment for both the team and for customers.
- We recommend incorporating security analysis tools within the build stage of your pipeline such as: code credential scanner, security risk detection, static analysis, etc. For Azure DevOps, you can add a security scan task to your pipeline by installing the [Microsoft Security Code Analysis Extension](https://secdevtools.azurewebsites.net/#pills-onboard). GitHub Actions supports a similar extension with the [RIPS security scan solution](https://github.com/marketplace/actions/rips-security-scan).
- Code standards are maintained within a single configuration file. There should be a step in your build pipeline that asserts that the code in the latest commit conforms to the known style definition (see the sketch below).
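
As a hedged sketch of that last point (the linter, configuration file and commands are assumptions; substitute the tools recommended in the Code Review recipes for your language):

```yaml
# Sketch of a style-check job that fails the build when the latest commit
# does not conform to the shared style definition. Tool names are assumptions.
name: code-style
on: [push, pull_request]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Check code style against the shared configuration file
        run: |
          python3 -m pip install flake8
          python3 -m flake8 --config=setup.cfg .
```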

@@ -59,7 +59,7 @@ An automated build should encompass the following principles:

### DevOps Security Checks

- Introduce security to your project at early stages. Follow the [DevSecOps section](dev-sec-ops/README.md) to introduce security practices, automation, tools and frameworks as part of the CI.
- Introduce security to your project at early stages. Follow the [DevSecOps section](./dev-sec-ops/README.md) to introduce security practices, automation, tools and frameworks as part of the CI.

## Build Environment Dependencies

@@ -68,7 +68,7 @@ An automated build should encompass the following principles:
- We encourage maintaining a consistent developer experience for all team members. There should be a central automated manifest / process that streamlines the installation and setup of any software dependencies. This way developers can replicate the same build environment locally as the one running on a CI server.
- Build automation scripts often require specific software packages and versions pre-installed within the runtime environment of the OS. This presents some challenges, as build processes typically version-lock these dependencies.
- All developers on the team should be able to emulate the build environment from their local desktop regardless of their OS.
- For projects using VS Code, leveraging [Dev Containers](../developer-experience/devcontainers.md) can really help standardize the local developer experience across the team.
- For projects using VS Code, leveraging [Dev Containers](../developer-experience/devcontainers-getting-started.md) can really help standardize the local developer experience across the team.
- Well-established software packaging tools like Docker, Maven, npm, etc. should be considered when designing your build automation tool chain (see the sketch after this list).
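
As one hedged way to achieve this (assuming a containerized GitHub Actions build; the image tag and npm commands are illustrative), the CI job can run inside the same pinned image developers pull locally:

```yaml
# Sketch: pin the build environment to a container image so CI and local builds match.
name: containerized-build
on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    container: node:20-bullseye   # reproducible locally with `docker run -it node:20-bullseye`
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm test
```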

### Document Local Setup
@@ -172,7 +172,6 @@ Implementing schema validation is divided in two - the generation of the schemas
There are two options to generate a schema:

- [From code](https://json-schema.org/implementations.html#from-code) - we can leverage the existing models and objects in the code and generate a customized schema.

- [From data](https://json-schema.org/implementations.html#from-data) - we can take yaml/json samples which reflect the configuration in general and use the various online tools to generate a schema.

### Validation
@@ -51,7 +51,7 @@ Since Service Connections can have a lot of permissions in the external service,
To prevent accidental misuse of Service Connections, there are several checks that can be configured. These checks are configured on the Service Connection itself and can therefore only be configured by the owner or administrator of that Service Connection. A user of a certain YAML Pipeline cannot modify these checks, since the checks are not defined in the YAML file itself.
Configuration can be done in the Approvals and Checks menu on the Service Connection.
![ApprovalsAndChecks](images/approvals-and-checks.png)
![ApprovalsAndChecks](./images/approvals-and-checks.png)
### Branch Control
@@ -64,4 +64,4 @@ With Branch Control in place, in combination with Branch Protections, it is not
> **Note:** When setting a wildcard for the Allowed Branches, anyone could still create a branch matching that wildcard and would be able to use the Service Connection. Using [git permissions](https://learn.microsoft.com/en-us/azure/devops/repos/git/require-branch-folders#enforce-permissions) it can be configured so that only administrators are allowed to create certain branches, like release branches.
![BranchControl](images/branch-control.png)
![BranchControl](./images/branch-control.png)
2 changes: 1 addition & 1 deletion docs/CI-CD/dev-sec-ops/secrets-management/README.md
@@ -188,4 +188,4 @@ The following steps lay out a clear pathway to creating new secrets and then uti

### Validation

Automated [credential scanning](credential_scanning.md) can be performed on the code regardless of the programming language.
Automated [credential scanning](./credential_scanning.md) can be performed on the code regardless of the programming language.
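
For illustration only (the scanner choice and action inputs are assumptions, not guidance from the linked page), such a scan might be wired into CI like this:

```yaml
# Sketch of an automated credential-scanning job; gitleaks is used here purely as
# an example of a language-agnostic scanner.
name: credential-scan
on: [push, pull_request]

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0              # full history so earlier commits are scanned too
      - uses: gitleaks/gitleaks-action@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```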
2 changes: 1 addition & 1 deletion docs/CI-CD/gitops/deploying-with-gitops.md
@@ -9,7 +9,7 @@
GitOps allows faster deployments by putting git repositories at the center, offering a clear audit trail via git commits and requiring no direct environment access. Read more on [Why should I use GitOps?](https://www.gitops.tech/#why-should-i-use-gitops)

The diagram below compares a traditional CI/CD workflow with a GitOps workflow:
![push based vs pull based deployments](images/GitopsWorflowVsTraditionalPush.jpg)
![push based vs pull based deployments](./images/GitopsWorflowVsTraditionalPush.jpg)
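
For illustration, in a pull-based setup an in-cluster agent watches the git repository and reconciles the cluster to match it. A minimal manifest (Argo CD is used here as an example; the repository URL, path and namespaces are assumptions) might look like:

```yaml
# Sketch of a pull-based GitOps deployment with Argo CD.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sample-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/gitops-config   # the git repo is the source of truth
    targetRevision: main
    path: environments/dev
  destination:
    server: https://kubernetes.default.svc
    namespace: sample-app
  syncPolicy:
    automated:
      prune: true      # remove resources that were deleted from git
      selfHeal: true   # revert changes made directly in the cluster
```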

## Tools for GitOps

6 changes: 3 additions & 3 deletions docs/CI-CD/gitops/github-workflows.md
@@ -2,19 +2,19 @@

A workflow is a configurable automated process made up of one or more jobs where each of these jobs can be an action in GitHub. Currently, a YAML file format is supported for defining a workflow in GitHub.

Additional information on GitHub Actions and GitHub Workflows can be found in the links posted in the [references](#references) section below.
Additional information on GitHub Actions and GitHub Workflows can be found in the links posted in the [resources](#resources) section below.

## Workflow per Environment

The general approach is to have one pipeline, where the code is built, tested and deployed, and the artifact is then promoted to the next environment, eventually to be deployed into production.

There are multiple ways to set up environments in GitHub. One option is to use a single workflow for multiple environments; the complexity increases as additional processes and jobs are added to the workflow, but this can still work well for small pipelines. The advantage of having one workflow is that, when an artifact flows from one environment to another, the state and environment values can be passed between the deployment environments easily.

![Workflow-Designs-Dependent-Workflows](images/Workflow-Designs-Dependent-Workflows.png)
![Workflow-Designs-Dependent-Workflows](./images/Workflow-Designs-Dependent-Workflows.png)

One way to get around the complexity of a single workflow is to have separate workflows for different environments, making sure that only artifacts that have been created and validated are promoted from one environment to another, and keeping each workflow small enough to debug any issues seen in it. In this case, the state and environment values need to be passed from one deployment environment to another. Multiple workflows also help keep deployments to the environments independent, reducing the time to deploy and surfacing issues earlier in the process. And since the environments are independent of each other, a failure to deploy to one environment does not block deployments to the other environments. The tradeoff of this method is that, with a different workflow for each environment, maintenance increases as the workflows grow more complex over time.

![Workflow-Designs-Independent-Workflows](images/Workflow-Designs-Independent-Workflows.png)
![Workflow-Designs-Independent-Workflows](./images/Workflow-Designs-Independent-Workflows.png)
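
As a rough sketch of the single-workflow approach (job names, environment names and scripts are assumptions), one artifact is built once and then promoted through the environments:

```yaml
# Sketch: one workflow that builds a single artifact and promotes it through
# staging and production. Scripts and environment names are assumptions.
name: build-and-promote
on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./build.sh                    # placeholder build step
      - uses: actions/upload-artifact@v4
        with:
          name: app-package
          path: dist/

  deploy-staging:
    needs: build
    runs-on: ubuntu-latest
    environment: staging                   # GitHub environment with its own secrets and approvals
    steps:
      - uses: actions/checkout@v4
      - uses: actions/download-artifact@v4
        with:
          name: app-package
      - run: ./deploy.sh staging           # placeholder deployment step

  deploy-production:
    needs: deploy-staging                  # only the artifact validated in staging is promoted
    runs-on: ubuntu-latest
    environment: production
    steps:
      - uses: actions/checkout@v4
      - uses: actions/download-artifact@v4
        with:
          name: app-package
      - run: ./deploy.sh production
```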

## Resources

2 changes: 1 addition & 1 deletion docs/CI-CD/gitops/secret-management/README.md
@@ -64,7 +64,7 @@ All the below tools share the following:
- Easily scalable with multi-cluster and larger teams
- Both solutions support either Azure Active Directory (Azure AD) [service principal](https://learn.microsoft.com/en-us/azure/active-directory/develop/app-objects-and-service-principals) or [managed identity](https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview) for [authentication with the Key Vault](https://learn.microsoft.com/en-us/azure/key-vault/general/authentication).

For secret rotation ideas, see [Secrets Rotation on Environment Variables and Mounted Secrets](secret-rotation-in-pods.md)
For secret rotation ideas, see [Secrets Rotation on Environment Variables and Mounted Secrets](./secret-rotation-in-pods.md)

For how to authenticate private container registries with a service principal see: [Authenticated Private Container Registry](#authenticated-private-container-registry)

@@ -6,7 +6,7 @@ When using [Azure DevOps Pipelines](https://azure.microsoft.com/en-us/services/d

- *Pipeline variables are global shared state.* This can lead to confusing situations and hard to debug problems when developers make concurrent changes to the pipeline variables which may override each other. Having a single global set of pipeline variables also makes it impossible for secrets to vary per environment (e.g. when using a branch-based deployment model where 'master' deploys using the production secrets, 'development' deploys using the staging secrets, and so forth).

A solution to these limitations is to manage secrets in the Git repository jointly with the project's source code. As described in [secrets management](README.md), don't check secrets into the repository in plain text. Instead we can add an encrypted version of our secrets to the repository and enable our CI/CD agents and developers to decrypt the secrets for local usage with some pre-shared key. This gives us the best of both worlds: a secure storage for secrets as well as side-by-side management of secrets and code.
A solution to these limitations is to manage secrets in the Git repository jointly with the project's source code. As described in [secrets management](./README.md), don't check secrets into the repository in plain text. Instead we can add an encrypted version of our secrets to the repository and enable our CI/CD agents and developers to decrypt the secrets for local usage with some pre-shared key. This gives us the best of both worlds: a secure storage for secrets as well as side-by-side management of secrets and code.

```sh
# first, make sure that we never commit our plain text secrets and generate a strong encryption key
2 changes: 0 additions & 2 deletions docs/CI-CD/recipes/ci-with-jupyter-notebooks.md
@@ -10,7 +10,6 @@ This document aims to automate this process in Azure DevOps, so the DSs don't ne
A Data Science repository has this folder structure:

```bash

.
├── notebooks
│   ├── Machine Learning Experiments - 00.ipynb
Expand All @@ -22,7 +21,6 @@ A Data Science repository has this folder structure:
   ├── Machine Learning Experiments - 01.py
   ├── Machine Learning Experiments - 02.py
   └── Machine Learning Experiments - 03.py

```

The Python files are needed to allow Pull Request reviewers to add comments to the notebooks; they can add comments
16 changes: 6 additions & 10 deletions docs/CI-CD/recipes/github-actions/runtime-variables/README.md
@@ -16,13 +16,13 @@ We assume that you, as a CI/CD engineer, want to inject environment variables or

Many integration or end-to-end workflows require specific environment variables that are only available at runtime. For example, a workflow might be doing the following:

![Workflow Diagram](images/workflow-diagram.png)
![Workflow Diagram](./images/workflow-diagram.png)

In this situation, testing the pipeline is extremely difficult without having to make external calls to the resource. In many cases, making external calls to the resource can be expensive or time-consuming, significantly slowing down inner loop development.

Azure DevOps, as an example, offers a way to define pipeline variables on a manual trigger:

![AzDo Example](images/AzDoExample.PNG)
![AzDo Example](./images/AzDoExample.PNG)

GitHub Actions does not do so yet.

@@ -83,8 +83,6 @@ jobs:
run: echo "Flag is available and true"
```
Available as a .YAML [here](examples/commit-example.yaml).
Code Explanation:
The first part of the code is setting up Push triggers on the working branch and checking out the repository, so we will not explore that in detail.
@@ -153,7 +151,7 @@ Including the Variable

2. This triggers the workflow (as will any push). As the `[commit var]` is in the commit message, the `${COMMIT_VAR}` variable in the workflow will be set to `true` and result in the following:

![Commit True Scenario](images/CommitTrue.PNG)
![Commit True Scenario](./images/CommitTrue.PNG)

Not Including the Variable

@@ -167,7 +165,7 @@ Not Including the Variable

2. This triggers the workflow (as will any push). As the `[commit var]` is **not** in the commit message, the `${COMMIT_VAR}` variable in the workflow will be set to `false` and result in the following:

![Commit False Scenario](images/CommitFalse.PNG)
![Commit False Scenario](./images/CommitFalse.PNG)
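
The full workflow is in the linked example above; as a hedged paraphrase of the technique (workflow and step names are assumptions), the flag is derived from the head commit message and then gates later steps:

```yaml
# Paraphrased sketch: derive a runtime flag from the commit message and use it
# to gate a later step. Names are assumptions, not the article's exact code.
name: commit-flag-example
on: push

jobs:
  example:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Derive COMMIT_VAR from the commit message
        run: |
          if git log -1 --pretty=%B | grep -q "\[commit var\]"; then
            echo "COMMIT_VAR=true" >> "$GITHUB_ENV"
          else
            echo "COMMIT_VAR=false" >> "$GITHUB_ENV"
          fi
      - name: Use the flag
        if: env.COMMIT_VAR == 'true'
        run: echo "Flag is available and true"
```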

## PR Body Variables

@@ -211,8 +209,6 @@ jobs:
run: echo "Flag is available and true"
```

Available as a .YAML [here](examples/pr-example.yaml).

Code Explanation:

The first part of the YAML file simply sets up the Pull Request Trigger. The majority of the following code is identical, so we will only explain the differences.
@@ -256,15 +252,15 @@ There are many real world scenarios where controlling environment variables can

Developer A is in the process of writing and testing an integration pipeline. The integration pipeline needs to make a call to an external service such as Azure Data Factory or Databricks, wait for a result, and then echo that result. The workflow could look like this:

![Workflow A](images/DevAWorkflow.png)
![Workflow A](./images/DevAWorkflow.png)

The workflow inherently takes time and is expensive to run, as it involves maintaining a Databricks cluster while also waiting for the response. This external dependency can be removed by mocking the response while writing and testing other parts of the workflow, and in situations where the actual response either does not matter or is not being directly tested.
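
A sketch of what that mocking could look like (the `[mock external]` marker, helper script and canned response are assumptions, not part of the article's own example):

```yaml
# Sketch: swap the expensive external call for a canned response when a runtime
# flag is set. The marker, script and response format are assumptions.
name: integration-with-mock
on: push

jobs:
  integration:
    runs-on: ubuntu-latest
    env:
      MOCK_EXTERNAL: ${{ contains(github.event.head_commit.message, '[mock external]') }}
    steps:
      - uses: actions/checkout@v4
      - name: Call the real external service
        if: env.MOCK_EXTERNAL != 'true'
        run: ./scripts/trigger-databricks-job.sh   # hypothetical helper that writes result.json
      - name: Write a canned response instead
        if: env.MOCK_EXTERNAL == 'true'
        run: echo '{"status": "Succeeded"}' > result.json
      - name: Echo the result
        run: cat result.json
```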

### Skipping Long CI processes

Developer B is in the process of writing and testing a CI/CD pipeline. The pipeline has multiple CI stages, each of which runs sequentially. The workflow might look like this:

![Workflow B](images/DevBWorkflow.png)
![Workflow B](./images/DevBWorkflow.png)

In this case, each CI stage needs to run before the next one starts, and errors in the middle of the process can cause the entire pipeline to fail. While this might be intended behavior for the pipeline in some situations (perhaps you don't want to run a more involved, longer build or a time-consuming test coverage suite if the CI process is failing), it means that steps need to be commented out or deleted when testing the pipeline itself.
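
One hedged way to handle this (the `[skip long ci]` marker and `make` targets are assumptions) is to gate the long-running stage on a runtime flag so it can be skipped while iterating on the pipeline itself:

```yaml
# Sketch: skip the time-consuming stage when the commit message opts out.
# The marker and build commands are assumptions.
name: ci-with-optional-stages
on: push

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make build                    # placeholder quick build

  coverage-suite:
    needs: build
    if: ${{ !contains(github.event.head_commit.message, '[skip long ci]') }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make coverage                 # placeholder long-running stage
```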


