
5 Tips to Get Started with Docker


When setting up applications on a fresh, clean developer laptop, it’s common to run into issue after issue getting the configuration and dependencies just right.

Everybody's computer is configured differently. Meet Joe and Sarah, two developers sitting next to each other. Joe has been developing for years on the same laptop, and Sarah just got a brand new one. She clones the code, tries to run it according to the instructions Joe wrote, and hits roadblock after roadblock. She might be missing dependencies; Joe’s machine may rely on specific libraries and binaries that Sarah simply doesn’t have installed yet. Hours later, Sarah might have a working setup, but it will never exactly reproduce Joe’s laptop, which in turn may not match the environment the application will ultimately be released onto.

Enter Docker.

Docker is a packaging tool that lets you produce a single container image that can be downloaded and run anywhere, on any platform, as long as that server or workstation can run Docker.

In the example above, you would create a single file, a Dockerfile, which describes to Docker everything it needs to do to package and run your Django, Plone, or other application consistently across multiple platforms. The right document conversion libraries, image conversion libraries, JSON tools, or any other libraries that Python doesn’t handle for you out of the box are always captured in this Dockerfile.
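To make that concrete, here is a minimal Dockerfile sketch for a Django-style Python application. The base image, system packages, file names, and port are placeholders for whatever your project actually needs.

    # Minimal Dockerfile sketch for a Django-style app (names and port are placeholders).
    FROM python:3.12-slim

    # System libraries your app needs that Python doesn't ship with,
    # e.g. image or document conversion tools.
    RUN apt-get update \
        && apt-get install -y --no-install-recommends libjpeg62-turbo \
        && rm -rf /var/lib/apt/lists/*

    WORKDIR /app
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt

    COPY . .
    EXPOSE 8000
    CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]

Build it once with docker build, and the resulting image runs the same way on any machine that has Docker installed.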

How does it work?

Docker builds an image that runs in a container on Linux, so whether you’re on Mac, Windows, or Linux, the container always runs in its own sandbox. If you are on Linux or WSL2 on Windows, Docker can run your containers without the overhead of a virtual machine (VM); on other platforms, such as macOS or Windows without WSL2, it runs your containers inside a VM for you.

Essentially, Docker downloads the container image onto your machine and launches it in Docker Engine as a container. We like to think of it as a jail, or a chrooted environment, that can operate securely and independently of any other applications running on the same server.
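In practice, that lifecycle looks roughly like this from the command line, using the public Python image as an example:

    # Download a container image from a registry onto your machine...
    docker pull python:3.12-slim

    # ...then launch it in Docker Engine as a container, in its own sandbox.
    docker run --rm -it python:3.12-slim python --version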

5 tips on how to get started with Docker

There’s a little bit of overhead that goes into putting a Docker container image together properly. Expect to get a little turned around on your first attempt at working Docker into your workflow. Here are five tips to get you started:

1. Read. The. Docs.

Someone took the time to create fantastic documentation about Docker and how you can leverage their tools. So don’t try to reinvent the wheel. Docker has a lot of features and functionality built in, so we highly recommend taking the time to read and understand the full power Docker can offer you.

2. Start on a greenfield application and start small

You need to build up some understanding of Docker before you can take your giant monolithic enterprise application and turn it into a fully Dockerized app. Take it slow and don’t shoehorn your app in as-is: it will be just as much of a pain to maintain and service inside a container as it is in its current state.

Instead, remember the Unix philosophy that originated with Ken Thompson and Dennis Ritchie: “make each program do one thing well”. It definitely applies here: build your application (or image) to do ONE thing. After that, you can compose your full application out of many of these building blocks, as sketched below. This allows you to split up the release cycle, track dependencies more easily, and reduce tight coupling between components. You are far less likely to break the entire application this way than by building one giant monolithic tool.
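For illustration, here is a hypothetical docker-compose.yml that wires a single-purpose web image to a separate database image; the image names, port, and environment values are placeholders.

    # Hypothetical docker-compose.yml: two single-purpose services
    # composed into one application.
    services:
      web:
        build: .              # your application image, doing ONE thing
        ports:
          - "8000:8000"
        depends_on:
          - db
      db:
        image: postgres:16    # the database is its own building block
        environment:
          POSTGRES_PASSWORD: example

Each service can then be built, released, and scaled on its own schedule.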

3. Go small

Don’t waste space placing a full-blown Ubuntu image (with your application inside) into a Docker container. Instead, try deploying an Alpine Linux image or a slim edition of Debian, add your application to it, and then deploy. That will put you in the tens to hundreds of megabytes instead of gigabytes. Leverage intermediate build images and copy the build’s output onto a new image to keep image sizes down and deploy/scale quickly across the network.
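One common way to do this is a multi-stage build: compile in a full-sized build image, then copy only the results onto a slim runtime image. This is a sketch; the file names and packages are placeholders.

    # Stage 1: build in a full image where compilers and headers are available.
    FROM python:3.12 AS build
    WORKDIR /app
    COPY requirements.txt .
    RUN pip wheel --no-cache-dir -r requirements.txt --wheel-dir /wheels

    # Stage 2: run from a slim image that only receives the built artifacts.
    FROM python:3.12-slim
    WORKDIR /app
    COPY --from=build /wheels /wheels
    RUN pip install --no-cache-dir /wheels/*
    COPY . .
    CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]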

4. Clean up your build tools

This goes along with #3, but is worth mentioning again. Don’t be sloppy and leave all of the build tools in a container. From a security standpoint, an image without build tools is harder to exploit: an attacker can’t compile malicious applications inside your container if those tools aren’t present. Keeping your images small also means that deploy times will be faster as your application scales or is released many times a day.
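Besides multi-stage builds, another common pattern on Alpine-based images is to install build dependencies, use them, and remove them all in the same layer, so they never end up in the final image. The package names below are placeholders for whatever your build actually needs.

    FROM python:3.12-alpine
    WORKDIR /app
    COPY requirements.txt .
    # Install build tools, compile the dependencies, then delete the tools,
    # all in one layer so they never ship with the image.
    RUN apk add --no-cache --virtual .build-deps gcc musl-dev libffi-dev \
        && pip install --no-cache-dir -r requirements.txt \
        && apk del .build-deps
    COPY . .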

5. Do Cheat!

It’s a good idea to copy what others have done. A reference application using Docker that we highly recommend checking out is the Python Packaging Authority’s Warehouse project (it powers PyPI!). They’ve truly put a lot of thought into the components, files, and information in their Docker files. They also have a great example of using Docker Compose, which basically says: you’ve got 11 different services, here’s how they line up, and here’s how they talk to each other.

This is a well-documented application, so it is simple to get started, and anyone can easily contribute to this open source project. Clone the repository, type a couple of make commands, and you’re off to the races. Doing this work by hand would have meant days of setup just to get everything right on your workstation before you could even start on the Warehouse project.
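Getting a local copy running looks roughly like this; check Warehouse’s development documentation for the current repository location and make targets, since these may change.

    git clone https://github.com/pypi/warehouse.git
    cd warehouse
    # Build and start the Compose services; see their docs for the exact targets.
    make serve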

 

Have a question? Need help? Let us know; we are happy to help.
