Containerizing the CDK: Quick Look

I personally think there are few greater evils than language-based virtual environments. My CDK work is usually in Python, and the recommended method involves a Pipenv step, which I find inelegant and icky.

So instead, why not just use a container for your CDK work? Here’s how I did it.

First, you need a Dockerfile. This example assumes you have a directory named cdk that houses your CDK app. I also install a few things beyond what is strictly required, so I can exec into the container later on for diagnostics.

FROM alpine

RUN pwd && \
    apk update && \
    apk add docker python3 nodejs npm && \
    ln -sf python3 /usr/bin/python && \
    python --version && \
    node --version && \
    npm --version && \
    apk add py-pip && \
    python -m pip install awscli && \
    aws --version && \
    npm install -g aws-cdk@1.101.0 && \
    cdk --version && \
    mkdir /opt/tool && \
    mkdir /opt/tool/scripts && \
    mkdir /opt/tool/tmp && \
    echo "BASE COMPLETE"

COPY cdk /opt/tool/cdk

RUN pwd && \
    cd /opt/tool/cdk && \
    python -m pip install -r requirements.txt

You can now run docker build -t cdk-app:local . from the directory containing that Dockerfile and have a handy container image. My docker run commands look something like this:

docker run -t --rm \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v "${HOME}/.aws":/root/.aws:ro \
    cdk-app:local cdk deploy --force --require-approval never

(The docker socket mount is there so you can build docker images inside the container, which certain CDK processes require. The .aws mount means I don't have to pass credentials in as environment variables, which really only makes sense for my local use case.)

I actually have a bunch of wrapper scripts that make this even easier, but that's the gist of it. This way my local disk never gets mucked up with a virtual environment, and the whole setup is more portable and scriptable since fewer dependencies are required locally.
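To give a flavor of those wrappers, here is a minimal sketch of one. The file and function names are illustrative, not my actual scripts; it just reuses the image tag and mounts from the docker run command above:

```shell
#!/bin/sh
# cdk-wrapper.sh -- illustrative sketch, not the author's actual script.
# Source this file, then call run_cdk with any cdk subcommand.

# Image tag from the docker build step above; override via CDK_IMAGE.
IMAGE="${CDK_IMAGE:-cdk-app:local}"

run_cdk() {
    # Same mounts as the docker run example: the docker socket so CDK can
    # build images from inside the container, and read-only AWS credentials.
    docker run -t --rm \
        -v /var/run/docker.sock:/var/run/docker.sock \
        -v "${HOME}/.aws":/root/.aws:ro \
        "$IMAGE" cdk "$@"
}
```

Then something like `. ./cdk-wrapper.sh && run_cdk diff` keeps the long docker incantation out of your shell history.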