What's the best pattern for creating your prod container given your devcontainer? #4
-
Hi all (first discussion?!) I've been using devcontainers for about a year at this point and have found them very effective for development/CI. I'm now wondering what's the best way of building a prod container which is consistent/compatible with my dev container. e.g. should we use the builder pattern and copy certain dirs out of our dev container to ship them to prod? For ref: I work with Python/ML tools and need to rely on Python/Kubernetes/other system binaries being in place, then adding Python packages to ship to production. Ideas/links to docs would be great.
Replies: 1 comment 1 reply
-
Hey @alex-treebeard, yes - we're in the process of getting feedback channels ironed out. We also spun up a Slack channel (see https://github.com/orgs/devcontainers/discussions/3). Many conversations have ended up in previous VS Code or GitHub specific spots just based on history while we start to route people to the open spec.

Anyway, generally the way I've been doing this is with a combination of multi-stage Dockerfiles and, recently, Dev Container Features. Dev Container Features provide a quick way to add development- and CI-specific layers that you would not actually use in production. Each Feature effectively adds an image layer (or set of layers). So, you'd typically start with your prod image, then add to it. For Python, the pre-built `python`/`python-slim` images work well as a base. I do tend to like the builder pattern as you describe - but there's no strict requirement that you do that either, since you can do things inside a running container that you can't in an image build. Let's look at both variations.

**The builder pattern**

Consider this scaffold of a Dockerfile:

```Dockerfile
# Base that everything uses - add any libraries, dependencies, etc. that are common here - you can also do global pip installs
FROM python-slim as base
RUN apt-get update \
    && apt-get install -y --no-install-recommends ca-certificates curl netbase wget tzdata libxml2 libyaml-0-2 \
    && apt-get clean -y && rm -rf /var/lib/apt/lists/*

# The non-slim python images include more libraries but extend from python-slim, and you can add more to it if needed.
# You can also do "FROM base" and install all needed packages for building in this stage instead. This is just a shortcut.
FROM python as build-base
RUN apt-get update \
    && apt-get install -y build-essential cmake cppcheck valgrind clang lldb llvm gdb \
    && apt-get clean -y && rm -rf /var/lib/apt/lists/*

# NOTE: For the Dev Container, we'll take advantage of Dev Container Features to add contents. But
# we'll add a dedicated stage in this example since you can add other contents this way too.
FROM build-base as devcontainer
RUN groupadd devcontainer --gid 1000 \
    && useradd --uid 1000 --gid 1000 -m -s /bin/bash devcontainer

# The actual build - via a script called "your-build-script-goes-here.sh" that copies into a folder called "/app"
FROM build-base as build
RUN --mount=target=/src,source=./,type=bind,ro \
    /src/scripts/your-build-script-goes-here.sh /app

FROM base as production
COPY --from=build /app /app
CMD [ "/app/start.sh" ]
```

In this case, the dev container is built from the `devcontainer` stage, while the image you ship comes from the `production` stage. Now, let's add additional tools to the devcontainer.json:

```jsonc
{
    "build": {
        "dockerfile": "Dockerfile",
        "target": "devcontainer"
    },
    "features": {
        "ghcr.io/devcontainers/features/common-utils:1": { "username": "devcontainer" },
        "ghcr.io/devcontainers/features/docker-in-docker:1": {},
        "ghcr.io/devcontainers/features/kubectl-helm-minikube:1": {},
        "ghcr.io/devcontainers/features/python:1": { "version": "none", "installTools": "true" }
    }
}
```

Any Dev Container you spin up will have the `devcontainer` stage plus the above Features added. Also, FYI, to improve perf, you can use the dev container CLI or GitHub Action to build a dev container image and push it to a registry, e.g. via the CLI:
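A minimal sketch of that CLI invocation (the image name and registry here are placeholders, not from the original post):

```shell
# Build the dev container image defined by the devcontainer.json in the
# current folder and push it to a registry. The image name is a placeholder.
devcontainer build --workspace-folder . \
    --image-name ghcr.io/your-org/your-devcontainer:latest \
    --push true
```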
You can then reference that pre-built image from the `image` property of your devcontainer.json instead of building it locally each time.

**Using the dev container for the build process**

The process is a bit different when a running dev container is used for the actual build. In this case, there's a simplified Dockerfile:

```Dockerfile
FROM python-slim as base
RUN apt-get update \
    && apt-get install -y --no-install-recommends ca-certificates curl netbase wget tzdata libxml2 libyaml-0-2 \
    && apt-get clean -y && rm -rf /var/lib/apt/lists/*

FROM python as devcontainer
RUN apt-get update \
    && apt-get install -y build-essential cmake cppcheck valgrind clang lldb llvm gdb \
    && apt-get clean -y && rm -rf /var/lib/apt/lists/*
RUN groupadd devcontainer --gid 1000 \
    && useradd --uid 1000 --gid 1000 -m -s /bin/bash devcontainer

FROM base as production
COPY ./out /app
CMD [ "/app/start.sh" ]
```

The `devcontainer` stage includes all the build tooling directly, while the `production` stage just copies pre-built output from a local `out` folder.

Now you can use the dev container CLI to execute the build and put the result in a folder called `out`:
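A rough sketch of those steps, assuming a hypothetical build script named like the one in the earlier Dockerfile (script and image names are placeholders):

```shell
# Start the dev container for the current workspace folder.
devcontainer up --workspace-folder .

# Run the build inside the dev container; since the workspace is bind-mounted,
# output written to ./out is visible on the host afterwards.
devcontainer exec --workspace-folder . ./scripts/your-build-script-goes-here.sh ./out

# Then build the prod image on the host, which only copies ./out into /app.
docker build --target production -t your-prod-image .
```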
FWIW, since this example installs docker-in-docker via a Dev Container Feature, you can opt to do the prod image build inside the dev container as well - but this model still works even if you haven't put Docker in your dev container. That help?