Amazon Web Services (AWS) Lambda is a handy, scalable tool for running code on demand. Originally, the only way to deploy Lambda code was to upload a .zip file of it through the AWS management console, a manual process that is error-prone and relies on someone remembering to do it. Today, developers can package Lambda code as container images, a helpful way to run much more complex code in a familiar format. Container images also come with a rich ecosystem of tools for automated building and deployment, which avoids manual errors and saves developers the chore of zipping up the code and clicking through the AWS console every time the code bundle changes.
However, there are some non-obvious differences from normal containers which can make building Python app images for AWS Lambda somewhat frustrating:
When building container images for their apps, developers tend to use a common base image such as debian:buster or ubuntu:20.04, or a language-targeted base image such as python:3.10.0rc1-slim-buster. A container image intended for use with AWS Lambda can use these base images, but there are very specific assumptions that the Lambda service makes which need to be taken into account. Complying with these specifications can be annoyingly difficult to get right.
Fortunately, there are base images made specifically for Lambda deployments. Lambda-specific base images are slimmed-down derivatives of Amazon Linux. Additionally, they share the common prefix public.ecr.aws/lambda, followed by the targeted language and version, such as public.ecr.aws/lambda/python:3.8.
It’s important to note that these base images are only published for certain Python versions, since supporting and certifying every possible interpreter release would be prohibitively expensive, even for Amazon.
Once a base image has been selected, the normal path (and the one offered in the AWS documentation) is to use pip to install the Python packages your code depends on from a requirements file:
FROM public.ecr.aws/lambda/python:3.8

# Create function directory
WORKDIR /app

# Install the function's dependencies
# Copy file requirements.txt from your project folder and install
# the requirements in the app directory.
COPY requirements.txt .
RUN pip3 install -r requirements.txt

# Copy handler function (from the local app directory)
COPY app.py .

# Overwrite the command by providing a different command directly in the template.
CMD ["/app/app.handler"]
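For reference, the handler named in the CMD above is an ordinary Python function that accepts an event and a context object. A minimal sketch of app.py might look like the following (the handler name matches the CMD above, but the event shape is purely illustrative):

# app.py -- a minimal Lambda handler sketch; the payload shown here is
# only an example of what an invocation event might contain.
import json


def handler(event, context):
    # 'event' is the invocation payload; 'context' carries runtime metadata
    # such as the request ID and remaining execution time.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }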
Using pip to install the Python packages will often work just fine, and if your dependencies are pure Python, you're good to go with this method.
However, many Python packages require more integrated support with the operating system (e.g., needing to compile one or more C extensions for performance or needing to interface with a system library). Compiling these C extensions requires a C compiler and its support infrastructure. Unfortunately, the optimized Lambda images don’t support installing this infrastructure, in the interest of efficiency.
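A quick way to gauge whether your dependencies are pure Python is to look at the filenames of the wheels pip fetches: a wheel tagged none-any is pure Python, while a platform-tagged wheel (manylinux and friends) ships compiled extensions, and anything only available as a source tarball will need the compiler toolchain at build time. Here is a rough local sketch, assuming you have already run pip download -d ./wheels -r requirements.txt on your machine:

# check_wheels.py -- rough heuristic: a wheel whose filename ends in
# "none-any.whl" is pure Python; anything carrying a platform tag
# (manylinux, linux_x86_64, win_amd64, ...) ships compiled extensions.
# Source-only packages (.tar.gz) in the same directory are not listed here
# and will also need the build toolchain.
from pathlib import Path


def classify_wheels(wheel_dir="./wheels"):
    for wheel in sorted(Path(wheel_dir).glob("*.whl")):
        kind = "pure Python" if wheel.name.endswith("none-any.whl") else "compiled"
        print(f"{kind:12} {wheel.name}")


if __name__ == "__main__":
    classify_wheels()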
The solution to this dilemma is to use a full Amazon Linux image (which the optimized Lambda images are based on) to build the dependencies. Developers can then use Docker's multi-stage build capability to copy only the needed artifacts into the final stage, which uses the slimmed-down Lambda-specific image:
FROM amazonlinux:2 as build

RUN yum groupinstall -y "Development Tools"
RUN amazon-linux-extras enable python3.8
RUN yum clean metadata
RUN yum install -y python38 python38-devel

# We create an /app directory with a virtual environment in it to store our
# application in.
RUN set -x \
    && python3.8 -m venv /app

# Setting the PATH ensures that our pip commands below use the pip inside the
# virtual environment, adding the compiled wheels to the collection we will
# later copy to the final image.
ENV PATH="/app/bin:${PATH}"

RUN mkdir /app/wheels

# Next, we want to update pip, setuptools, and wheel inside of this virtual
# environment to ensure that we have the latest versions of them.
RUN pip --no-cache-dir --disable-pip-version-check install --upgrade pip setuptools wheel

# We now grab the requirements files (the specification of which packages we depend on).
# A pip-tools style layout is assumed: requirements.in lists the top-level
# dependencies, and requirements.txt is the fully pinned compilation of it.
COPY requirements /tmp/requirements

# This builds wheels for every pinned package (compiling any C extensions with
# the toolchain installed above) and collects them in the /app/wheels directory,
# and this directory and its contents are what we will copy to the final image.
RUN pip wheel --no-cache-dir --disable-pip-version-check --no-deps \
    --wheel-dir /app/wheels -r /tmp/requirements/requirements.txt

FROM public.ecr.aws/lambda/python:3.8

ENV PYTHONUNBUFFERED 1
ENV PYTHONPATH /app
ENV PATH="/app/bin:${PATH}"

WORKDIR /app

ARG DEVEL=no

COPY requirements /tmp/requirements

# We now copy all the wheels we created in the build phase
COPY --from=build /app/wheels /app/wheels

# And now we install them (into the image's system Python), specifying that only these wheels
# should be used, not any external downloads such as from PyPI.
RUN pip install --no-cache-dir --disable-pip-version-check --no-index \
    -f /app/wheels -r /tmp/requirements/requirements.in

COPY app/app.py /app

# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "app.handler" ]
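Once the image is built, it can be exercised locally before being pushed to Amazon ECR: the AWS-provided base images bundle the Lambda Runtime Interface Emulator, so starting the container with a port mapping (for example, docker run -p 9000:8080 my-lambda-image, where my-lambda-image is whatever tag you used for the build) exposes a local invocation endpoint. A minimal test script against that endpoint might look like this sketch:

# invoke_local.py -- a small sketch for invoking the containerized function
# through the Runtime Interface Emulator bundled with the AWS base images.
# Assumes the container is running locally with port 9000 mapped to 8080.
import json
import urllib.request

RIE_URL = "http://localhost:9000/2015-03-31/functions/function/invocations"


def invoke(payload):
    request = urllib.request.Request(
        RIE_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read().decode("utf-8"))


if __name__ == "__main__":
    # The payload shape here matches the illustrative handler sketched earlier.
    print(invoke({"name": "Lambda"}))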
In conclusion, although there are complexities involved in using container images to run code on AWS Lambda, the well-defined development pathway and the rich ecosystem of tools for working with container images make this a much more streamlined approach than the earlier ZIP-file method, saving developers time, money, and effort.
Want more information? Explore Six Feet Up’s AWS case studies, and sign up for our newsletters to get tech tips in your inbox.