This repository has been archived by the owner on Nov 14, 2024. It is now read-only.

Build Upgrade and Misc. #5

Open
wants to merge 3 commits into main

Conversation

ambsw-technology
Contributor

This PR includes several significant changes/upgrades:

  1. Passes ${shell basename ${PWD}} (i.e. folder name) throughout the system so the Lambda Function, Log Group, etc. are all named correctly.
  2. Refactors the build logic into build.sh. This file is now used by Dockerfile.lambda, but it can also be called "locally" inside a Docker container built from Dockerfile.build (e.g. the build environment on my Windows machine) or, presumably, on your local system. For Dockerfile.build, you need to pass PACKAGE as a build arg so the internal folder is named correctly.
  3. Explicitly constructs the Log Group and limits log permissions so the CloudFormation stack for the custom resource will clean up those logs when it is deleted (e.g. this issue).
  4. Includes permissions so the Lambda can invoke itself, to support the upgrades in cfn-resource-provider.
  5. Allows users to override AWS_REGION and S3_BUCKET_PREFIX by setting them in the environment. This makes it easy to change the region and namespace globally before you start running commands.
  6. Finally, it improves the resource file with comments, TODOs, and stubs to make it more obvious what the best practices are.

If you object to any of these changes, I can revert them. I'll update the README to reflect any of the changes that are acceptable. I'm sorry for the mega PR, but this was going to be very messy to break out into non-conflicting commits.

@ambsw-technology
Contributor Author

P.S. the new infrastructure is demonstrated in the in-progress cfn-workspaces-provider, which will eventually bridge the gap between the Directory Service and WorkSpaces resources by registering a directory with WorkSpaces (create/delete implemented) and creating workspace users (still WIP).
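
For illustration, the registration step that provider automates boils down to a couple of WorkSpaces API calls. The snippet below is only a rough boto3 sketch with placeholder IDs, not the provider's actual code:

```python
# Rough sketch of the AWS calls behind "registering a directory with WorkSpaces";
# not the cfn-workspaces-provider code itself, and the directory ID is a placeholder.
import boto3

workspaces = boto3.client("workspaces")

# Create: register an existing Directory Service directory with WorkSpaces.
workspaces.register_workspace_directory(
    DirectoryId="d-1234567890",   # placeholder directory ID
    EnableWorkDocs=False,
)

# Delete: deregister the directory again when the custom resource is removed.
workspaces.deregister_workspace_directory(DirectoryId="d-1234567890")
```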

I even refactored the commits so the first is this repo with this PR included and the second (and subsequent) is an implementation on top of it.

@ambsw-technology
Contributor Author

When testing my derived template, I realized that the bare error for an incorrect Custom:: name in provider.py wasn't returning a response to CloudFormation. This commit resolves that by using the base Provider class to respond to this situation in (hopefully) sane ways.
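
To make the failure mode concrete, here is a minimal sketch of the CloudFormation custom-resource response protocol (not the actual commit, which delegates this to the base Provider class; the Custom::MyResource type and helper names are placeholders). CloudFormation waits for a PUT to the request's pre-signed ResponseURL, so a bare exception leaves the stack hanging until it times out, while an explicit FAILED response fails it fast with a readable reason:

```python
# Minimal sketch -- not the actual provider.py change -- of why an unrecognized
# Custom:: type still needs an answer: CloudFormation waits for a PUT to the
# pre-signed ResponseURL and otherwise hangs until the stack operation times out.
import json
import urllib.request


def send_failed(request, reason):
    """PUT a FAILED response so CloudFormation fails the resource immediately."""
    body = json.dumps({
        "Status": "FAILED",
        "Reason": reason,
        "PhysicalResourceId": request.get("PhysicalResourceId", "could-not-create"),
        "StackId": request["StackId"],
        "RequestId": request["RequestId"],
        "LogicalResourceId": request["LogicalResourceId"],
    }).encode()
    urllib.request.urlopen(
        urllib.request.Request(request["ResponseURL"], data=body, method="PUT")
    )


def create_my_resource(request, context):
    """Placeholder for the real provider logic."""


def handler(request, context):
    if request["ResourceType"] == "Custom::MyResource":  # placeholder type name
        return create_my_resource(request, context)
    # A bare exception here never reaches CloudFormation; reporting the typo
    # back instead makes the stack fail fast with a readable reason.
    send_failed(request, "unknown resource type: " + request["ResourceType"])
```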

@mvanholsteijn
Member


This is not necessary. Just pass the request to the one provider you have. It will reject it for you.
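
In other words, something like the sketch below, where MyResourceProvider stands in for the single provider class and (per the comment above) its base class is assumed to reject unknown resource types and answer CloudFormation itself:

```python
# Sketch of the suggested shape: no dispatch logic in the Lambda handler.
# MyResourceProvider is a stand-in for the one provider class in this repo.

class MyResourceProvider:
    def handle(self, request, context):
        # create/update/delete the resource, or reject an unsupported
        # ResourceType and send the FAILED response itself
        ...

provider = MyResourceProvider()

def handler(request, context):  # the Lambda entry point
    return provider.handle(request, context)
```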

@ambsw-technology
Contributor Author

ambsw-technology commented Jul 17, 2020

That's how cfn-certificate-provider works and I found the error very confusing. I spent a lot of time trying to figure out what was wrong inside that provider when I actually had a simple typo in the Custom:: name. PR #1 (already merged) actually changed it to an exception. At the time it didn't occur to me that I had eliminated the return message to CF.

@ambsw-technology
Contributor Author

P.S. I can certainly rip it out if you don't like it, but this works really well:

[screenshot]

@mvanholsteijn
Member

Thank you for your hard work on the template. My design philosophy is to minimize variation in order to improve the reproducibility of the artifacts and reduce the mental strain of software development. So things which do not need to be variable, I prefer to have fixed, because every variable setting may produce a different result.

  1. Passes ${shell basename ${PWD}} (i.e. folder name) throughout the system so the Lambda Function, Log Group, etc. are all named correctly.

The design choice is to hardcode it, as I do not want the name of the component to change just because somebody checked it out to a different directory or changed the name of the root directory.

I would prefer a cookiecutter template; check out https://github.com/binxio/cookiecutter-sam-custom-resource-provider

  2. Refactors the build logic into build.sh. This file is now used by Dockerfile.lambda but can also be called "locally" inside a docker container built using Dockerfile.build (e.g. the build environment on my windows machine) or presumably your local system.

There is no need for a separate build.sh; the provider can be compiled and unit tested using pipenv. The zip file you create locally may not have the correct binaries in there. The reason for the Dockerfile.lambda is to create a zip file with the correct binaries.

2a. For Dockerfile.build, you need to pass the PACKAGE as a build arg so the internal folder is named correctly.

The Dockerfile works for any provider; it is blissfully unaware and just makes a zip for you. I do not see the need to pass in the name. I also saw that the CloudFormation template was parameterized.

I would prefer a cookiecutter template; check out https://github.com/binxio/cookiecutter-sam-custom-resource-provider

  3. Explicitly constructs the log and limits log permissions so the CloudFormation stack for the custom resource will clean up those logs when deleted (e.g. this issue).

ok.

  4. Includes permissions so the Lambda can invoke itself to support the upgrades in cfn-resource-provider

ok.

  5. Allows users to override AWS_REGION and S3_BUCKET_PREFIX by setting them in the environment. This makes it easy to change the region and namespace globally before you start running commands.

It is already easy if you really want that: make AWS_REGION= S3_BUCKET_PREFIX=. This makes it a deliberate choice and not an accidental one.

  6. Finally, it improves the resource file with comments, TODOs, and stubs to make it more obvious what the best practices are.

Check the review comments.

As you mentioned, your PR is too big; I will not merge the changes to the build system.

@ambsw-technology
Contributor Author

ambsw-technology commented Jul 19, 2020

  1. I understand the cookiecutter logic, but it looked like it was going to create collisions and confusion inside the AWS deployment process. For example, everything was going to be named cfn-custom-resource -- the zip file, the provider stack, the Lambda, and the Log Group. How are you supposed to create a second resource? A global find-and-replace? That's not automation and, if you miss e.g. the Log Group, everything will deploy fine but logs for all your providers will end up in the same Log Group.

    You actually use shell basename in the current package to name the Docker image. I was inspired to generalize it so a user didn't have to touch a Makefile (or figure out how .release works -- as I did) and still get good deployment defaults.

    In my experience, the default behavior for git is to name the folder after the repo name. Once the base package is renamed, you should get a very reliable experience -- and a hard fork that gets renamed would automatically namespace itself. A good alternative (in the spirit of Build Upgrade and Misc. #5) would be to force the user to provide a single, explicit value in e.g. a .package file, but I found the status quo untenable.

  2. Dockerfile.build was developed as a Windows shim, but it'd be even better if there were a single cross-platform build strategy. I'm pretty sure build.sh demonstrates this possibility -- run it locally w/o Docker, run it in Dockerfile.build, or let Dockerfile.lambda use it. In theory, the manual step goes away by wrapping it in e.g. a build_env.bat (and/or an equivalent Linux script) that populates this argument automatically. I just wasn't going to invest energy making it work better if it was going to get rejected.

  3. (this is item 5; the list is auto-renumbering) If you prefer no default, that's fine. My goal, once again, was to ensure the user did not need to touch the Makefile to get well-behaved and package-specific results. If they don't have to touch the build files, it would also make it easy/possible to pull/merge in upstream fixes/changes to these files.

@ambsw-technology
Contributor Author

ambsw-technology commented Jul 19, 2020

P.S. and (2) also goes away if the repo uses a .package (or similar) file because the Dockerfile would just ADD that file and it wouldn't matter what the internal directory structure was.
