Nameless deploy tools (ndt) are a set of tools that implement a true Infrastructure as Code workflow with various cloud infrastructure management tools. Currently supported tools are:
- CloudFormation
- AWS CDK
- Serverless Framework
- Terraform
- Azure Resource Manager (with YAML syntax)
- Bicep
A common analogy for cloud infrastructure has been the move from pets, which have names and need lots of looking after, to cattle, which have at most IDs. It's time to move from the agrarian era to the industrial age. The infrastructure our applications run on now comes and goes, and we know at most some statistical information about the actual executions: run times, memory usage, bandwidth used and the like. We no longer know even the IDs of the things that actually run the code. Hence: nameless.
We at Nitor are software engineers, mostly with a developer or architect background, but many of us have had to work closely with various operations teams around the world. DevOps has a natural appeal to us, and "infrastructure as code" immediately meant for us that we should apply the best development practices to infrastructure development. It starts with version control and continues with testing new features in isolation and a workflow that supports this. Our teams usually adopt a feature branch workflow when feasible, and we expect all the tools and practices to support it. For infrastructure, this type of branching means that you should be able to spin up enough of the infrastructure to verify the changes you want to implement in production. The testing environment should also be close enough to the target environment for the results to be valid, so the differences between testing and production environments should be minimized and reviewable.
With popular tools like Ansible, Terraform and Chef, you need to come up with and implement your own ways of achieving the goals above. As far as we know, no tool besides ndt has a thought-out branching model for infrastructure development at its core.
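To make the branching model concrete, here is a sketch of what a feature-branch cycle can look like with ndt. The branch, image and stack names are hypothetical, and the sketch uses the `deploy-stack`/`undeploy-stack` commands described below:

```
# create an isolated branch for the infrastructure change
git checkout -b feature/bigger-instances

# spin up a test copy of the stack to verify the change
ndt deploy-stack webserver frontend <AMI-id>

# ...verify the change against the test stack...

# tear the test copy down, then merge and deploy from the main branch
ndt undeploy-stack webserver frontend
git checkout master
git merge feature/bigger-instances
```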
nameless-deploy-tools works by defining Amazon Machine Images, Docker containers and Serverless services, and by deploying CloudFormation stacks of resources. CloudFormation stacks can also be defined with AWS CDK applications, and all of the above can also be deployed using Terraform.
Requires Python 3.9 or newer.
Use pipx or uv to install it globally in an isolated environment. pipx is the older, stable tool; uv is a newer, much faster alternative.
```
pipx install nameless-deploy-tools
# or
uv tool install nameless-deploy-tools
```
Directly installing with pip is no longer supported by most Python distributions.
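After installation the `ndt` command should be on your PATH; depending on your setup you may need to run `pipx ensurepath` or open a new shell first. As a quick sanity check, this should print the available commands:

```
ndt --help
```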
To use nameless-deploy-tools you need to set up a project repository that describes the images you want to build, and the stacks you want to deploy them in. See ndt-project-template for an example.
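For orientation, a minimal project repository might look roughly like this. The layout below is illustrative only; see ndt-project-template for the authoritative structure:

```
my-infra/
├── infra.properties        # parameters shared by all components
└── webserver/              # one component
    ├── infra.properties    # component-level parameters
    ├── image/              # AMI definition used by bake-image
    └── stack-frontend/
        └── template.yaml   # CloudFormation template used by deploy-stack
```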
Here are a few commands you can use. All of these are run in your project repository root, and you need to have AWS credentials for command line access set up. A combined example follows the list.
- To bake a new version of an image: `ndt bake-image <image-name>`
- To build a new Docker container image: `ndt bake-docker <component> <docker-name>`
- To deploy a stack:
  - with a known AMI id: `ndt deploy-stack <image-name> <stack-name> <AMI-id>`
  - with the newest AMI id from a given bake job: `ndt deploy-stack <image-name> <stack-name> "" <bake-job-name>`
- To undeploy a stack: `ndt undeploy-stack <image-name> <stack-name>`
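Putting these together, a typical bake-and-deploy round might look like the following; the component, stack and bake job names here are hypothetical:

```
# bake a new AMI for the webserver component
ndt bake-image webserver

# deploy the stack with the newest AMI from that bake job
ndt deploy-stack webserver frontend "" webserver-bake
```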
For a full list of commands, see the Command Reference linked below.
You can additionally use a faster register-complete by running `./faster_register_complete.sh`. This compiles C++ programs from the files `n_utils/nameless-dt-register-complete.cpp` and `n_utils/nameless-dt-print-aws-profiles.cpp`, and replaces the Python versions of `nameless-dt-register-complete` and `nameless-dt-print-aws-profiles` with the much faster compiled binaries.
- Command Reference
- ndt workspace tooling
- Template Pre-Processing
- Multifactor Authentication
- Common parameters
This library uses a simplified semantic versioning scheme: the major version changes for backwards-incompatible changes (we are not expecting these) and the minor version for all backwards-compatible changes. We won't make the distinction between new functionality and bugfixes, since we don't think it matters and it isn't worth spending time on. We will release often, and if we need incompatible changes, we will fork the next major version and release alpha versions of it until we are happy to release the next major version, trying to provide a painless upgrade path.
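In practice this means you can pin to a major version and safely pick up all backwards-compatible releases; for example (the version bound below is hypothetical):

```
# any 1.x release should be a safe upgrade; 2.x may break compatibility
pipx install 'nameless-deploy-tools<2'
```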
uv is the recommended way to handle virtual environments for development.
Create a venv and install all dependencies:
```
uv sync --all-extras
```
You can then run commands directly in the venv with `uv run`, or activate the venv manually first. The default uv venv location is `.venv`.
```
source .venv/bin/activate
# or on Windows
.venv\Scripts\activate
```
Python dependencies are specified in `pyproject.toml`. The `requirements.txt` file is generated by `pip-compile` and should not be modified manually. Use the provided shell script to update the requirements file.
First install uv (recommended), or alternatively pip-tools, using pipx. Then run:
```
./compile-requirements.sh
# See help
./compile-requirements.sh -h
```
Run the tests with pytest:

```
uv run python -m pytest -v .
```

Alternatively, install the test requirements and run pytest directly:

```
pip install -r dev-requirements.txt
python -m pytest -v .
```
Code formatting and linting are handled with ruff, configured with a custom line length limit of 120. The configuration can be found in `pyproject.toml`.
Usage:
```
ruff format
ruff check --fix
```
Using with pre-commit:
```
# set up to run automatically on git commit
pre-commit install
# run manually
pre-commit run --all-files
```
Use the provided shell script. Note that you need an active venv with the extra dependencies installed when running the script.
```
./release.sh
# See help
./release.sh -h
```