Automating Python Lambda Deployments with Docker

I recently started working more with AWS Lambda functions, some of them with external dependencies, and I quickly ran into an issue: dependencies built on macOS (particularly ones with compiled native extensions) don’t always play nicely with the Amazon Linux environment that Lambda runs on.

Back in the early days of Lambda, the solution was to spin up an EC2 instance and build and zip your dependencies there. Luckily, things are much easier now: Amazon offers a Docker image for Amazon Linux that you can use to build your dependencies locally.

I found a blog post by Quilt that discusses automating the process of creating your Lambda deployment package, but their solution stops at building the zip file you need. I wanted to take it a bit further and get the entire deployment process down to just one command, so I made some changes and additions.

First up is the Dockerfile. It’s basically identical to Quilt’s, except that I’ve also installed the awscli package.

# Amazon Linux base image, matching the Lambda execution environment
FROM amazonlinux:2017.03
# Install Python 3.6, pip, git, and zip, then clean up the yum cache
RUN yum -y install git \
    python36 \
    python36-pip \
    zip \
    && yum clean all
# Upgrade pip and install boto3 plus the AWS CLI, which we use to push the package
RUN python3 -m pip install --upgrade pip \
    && python3 -m pip install boto3 awscli

Then you need your package.sh file. This is the script that runs inside the Docker container once it’s built. Note the $LAMBDA_FUNC environment variable; we’ll set that when we run the container via the Makefile.

#!/bin/bash

# Stage everything in a scratch directory inside the container
mkdir tmp666

# If the function has a requirements.txt, install its dependencies into the staging directory
[ -f /io/requirements.txt ] && python3 -m pip install -r /io/requirements.txt -t tmp666

# Copy the function code in next to its dependencies and zip it all up
rm -f /io/lambda.zip
cp -r /io/* tmp666
cd tmp666
zip -r /io/lambda.zip *

# Push the new package to Lambda, then clean up the zip
cd /io
aws lambda update-function-code --function-name "$LAMBDA_FUNC" --zip-file fileb://lambda.zip

rm -f /io/lambda.zip

Before we tie it all together with the Makefile, I should clarify that I wanted this automation to handle deploying multiple different Lambda functions, so the directory structure is pretty important. In my setup, it looks like this:

lambda
├── hash-function
│   └── lambda_function.py
├── preview-function
│   ├── lambda_function.py
│   └── requirements.txt
├── Dockerfile
├── Makefile
└── package.sh

You’ll notice that a function may or may not have dependencies. Even without dependencies, this automation still makes developing in VS Code and deploying to Lambda a little easier and faster than copying, pasting, and saving code by hand. The main trick is that each folder name exactly matches the name of the function in Lambda, so hash-function deploys to hash-function in Lambda, and so on.
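
For reference, each of those lambda_function.py files just needs to define a handler. Here’s a minimal sketch of what one might look like, assuming the function is configured with the common lambda_function.lambda_handler handler name (use whatever handler name your function actually has):

import json


def lambda_handler(event, context):
    # Lambda calls this with the event payload and a context object.
    # This placeholder just logs the event and returns a simple response.
    print("Received event:", json.dumps(event))
    return {"message": "Hello from Lambda"}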

So our final step is to create our Makefile.

# Usage: make deploy-lambda p=<function-folder-name>
deploy-lambda:
ifdef p
	cp -f package.sh $(shell pwd)/$(p)/package.sh
	docker build -t lambda .
	docker run --rm \
		-e LAMBDA_FUNC=$(p) \
		-e AWS_DEFAULT_REGION=us-east-1 \
		-e AWS_ACCESS_KEY_ID=$(shell aws --profile default configure get aws_access_key_id) \
		-e AWS_SECRET_ACCESS_KEY=$(shell aws --profile default configure get aws_secret_access_key) \
		-v $(shell pwd)/$(p):/io \
		-t lambda bash /io/package.sh
	rm -f $(shell pwd)/$(p)/package.sh
endif

Note that we’re automatically grabbing the necessary AWS access keys from the default profile on your local machine (via aws configure get) and injecting them into the Docker container as environment variables, so that the AWS CLI inside the container can deploy the package to Lambda for us.

Then deploying to Lambda is as simple as writing…

$ make deploy-lambda p=preview-function

That’s about it! Doing this has saved me a lot of time that would have otherwise been spent running a couple different commands to deploy my Lambda functions. Now it’s fast, easy, and worry-free. Again, kudos to Quilt and their blog post for doing more than half the work for me 😉