With Docker you can easily deploy a web application along with its dependencies, environment variables, and configuration settings – everything you need to recreate your environment quickly and efficiently.
This tutorial looks at just that.
Updated 02/28/2015: Added Docker Compose and upgraded Docker and boot2docker to the latest versions.
We’ll start by creating a Docker container for running a Python Flask application. From there, we’ll look at a nice development workflow to manage the local development of an app as well as continuous integration and delivery, step by step …
I (Michael Herman) originally presented this workflow at PyTennessee on February 8th, 2015. You can view the slides here, if interested.
Workflow
- Code locally on a feature branch
- Open a pull request on Github against the master branch
- Run automated tests against the Docker container
- If the tests pass, manually merge the pull request into master
- Once merged, the automated tests run again
- If the second round of tests pass, a build is created on Docker Hub
- Once the build is created, it’s then automatically (err, automagically) deployed to production
This tutorial is meant for Mac OS X users, and we’ll be utilizing the following tools/technologies – Python v2.7.9, Flask v0.10.1, Docker v1.5.0, Docker Compose v1.1.0, boot2docker v1.5.0, Redis v2.8.19.
Let’s get to it…
First, some Docker-specific terms:
- A Dockerfile is a file that contains a set of instructions used to create an image.
- An image is used to build and save snapshots (the state) of an environment.
- A container is an instantiated, live image that runs a collection of processes.
Be sure to check out the Docker documentation for more info on Dockerfiles, images, and containers.
Why Docker?
You can truly mimic your production environment on your local machine. No more having to debug environment specific bugs or worrying that your app will perform differently in production.
- Version control for infrastructure
- Easily distribute/recreate your entire development environment
- Build once, run anywhere – aka The Holy Grail!
Docker Setup
Since Darwin (the kernel for OS X) does not have the Linux kernel features required to run Docker containers, we need to install boot2docker – a lightweight Linux distribution designed specifically to run Docker. In essence, it starts a small VM that’s configured to run Docker containers.
Create a new directory called “fitter-happier-docker” to house your Flask project.
Next, follow the instructions from the guide Installing Docker on Mac OS X to install both Docker and the official boot2docker package.
Test the install:
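For example (your exact version and build details will differ):

```sh
$ boot2docker version
Boot2Docker-cli version: v1.5.0
```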
Compose Up!
Docker Compose is an orchestration framework that handles the building and running of multiple services (via separate containers) using a simple .yml file. It makes it super easy to link services together running in different containers.
Following along with me? Grab the code in a pre-Compose state from the repository.
Start by installing the requirements via pip and then make sure Compose is installed:
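Something like the following should do it (the path to the requirements file depends on your clone, and pinning Compose to the version used in this post is optional):

```sh
# adjust the path if requirements.txt lives elsewhere in your clone
$ pip install -r requirements.txt
$ pip install docker-compose==1.1.0
```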
Now let’s get our Flask application up and running along with Redis.
Add a new file called docker-compose.yml to the root directory:
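A Compose file along these lines matches the description below (the paths and the pinned Redis tag are assumptions based on the versions listed above):

```yaml
# NOTE: reconstructed sketch – paths and the Redis tag are assumptions
web:
  build: ./web        # build the image from web/Dockerfile
  volumes:
    - ./web:/code     # mount the app code into the container
  ports:
    - "80:5000"       # host port 80 -> container port 5000
  links:
    - redis
  command: python app.py

redis:
  image: redis:2.8.19 # official Redis image from Docker Hub
  ports:
    - "6379:6379"
```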
Here we add the services that make up our stack:
- web: First, we build the image from the “web” directory and then mount that directory to the “code” directory within the Docker container. The Flask app is run via the python app.py command. Port 5000 is exposed on the container and forwarded to port 80 on the host environment.
- redis: Next, the Redis service is built from the Docker Hub “Redis” image. Port 6379 is exposed and forwarded.
Did you notice the Dockerfile in the “web” directory? This file is used to build our image: starting from an Ubuntu base, it installs the required dependencies and builds the app.
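If you’re recreating that file from scratch, a minimal sketch might look like this (the base image and package names are assumptions, not the exact file from the repo):

```dockerfile
# NOTE: illustrative sketch – not the exact Dockerfile from the repo
FROM ubuntu:14.04

# Install Python and pip
RUN apt-get update && apt-get install -y python python-pip

# Add the app code and install its dependencies
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt

# The Flask app listens on port 5000
EXPOSE 5000
CMD ["python", "app.py"]
```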
Build and run
With one simple command we can build the image and run the container:
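```sh
# run from the directory containing docker-compose.yml
$ docker-compose up
```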
This command builds an image for our Flask app, pulls the Redis image, and then starts everything up.
Grab a cup of coffee. Or two. This will take some time the first time you build the container. That said, since Docker caches each step (or layer) of the build process from the Dockerfile, rebuilding will happen much quicker because only the steps that have changed since the last build are rebuilt.
If you do change a line/step/layer in your Dockerfile, that layer – and every layer after it – will be rebuilt, so be mindful of this when you structure your Dockerfile.
Docker Compose brings the containers up in parallel. Each container has a unique name, and each process within the log output is color-coded for readability.
Ready to test?
Open your web browser and navigate to the IP address associated with the DOCKER_HOST variable – i.e., http://192.168.59.103/, in this example. (Run boot2docker ip to get the address.)
You should see the text, “Hello! This page has been seen 1 times.” in your browser:
Refresh. The page counter should have incremented.
Kill the processes (Ctrl-C), and then bring everything back up in the background.
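The -d flag detaches, so the containers keep running without tying up your terminal:

```sh
$ docker-compose up -d
```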
Want to view the currently running processes?
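The output should look something like this (container names are derived from the project directory, so yours may differ):

```sh
$ docker-compose ps
           Name                  Command       State              Ports
--------------------------------------------------------------------------------------
fitterhappierdocker_redis_1   redis-server    Up      6379/tcp, 0.0.0.0:6379->6379/tcp
fitterhappierdocker_web_1     python app.py   Up      0.0.0.0:80->5000/tcp
```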
Each process is running in a separate container, connected via Docker Compose!
Next Steps
Once done, kill the processes via docker-compose stop, then run boot2docker down to gracefully shut down the VM. Commit your changes locally, and then push to Github.
So, what did we accomplish?
We set up our local environment, detailing the basic process of building an image from a Dockerfile and then creating an instance of the image called a container. We tied everything together with Docker Compose to build and connect different containers for both the Flask app and Redis process.
Now, let’s look at a nice continuous integration workflow powered by CircleCI.
Still with me? You can grab the updated code from the repository.
Docker Hub
Thus far we’ve worked with Dockerfiles, images, and containers (abstracted by Docker Compose, of course).
Are you familiar with the Git workflow? Images are like Git repositories, while containers are similar to a cloned repository. Sticking with that metaphor, Docker Hub, which is a repository of Docker images, is akin to Github.
- Sign up here, using your Github credentials.
- Then add a new automated build, pointing it at the Github repo that you created and pushed to. Just accept all the default options, except for “Dockerfile Location” – change this to “/web”.
Once added, this will trigger an initial build. Make sure the build is successful.
Docker Hub for CI
Docker Hub, in itself, acts as a continuous integration server since you can configure it to create an automated build every time you push a new commit to Github. In other words, it ensures you do not cause a regression that completely breaks the build process when the code base is updated.
There are some drawbacks to this approach – namely that you cannot push (via docker push) updated images directly to Docker Hub. Docker Hub must pull in changes from your repo and create the images itself to ensure that there are no errors. Keep this in mind as you go through this workflow. The Docker documentation is not clear with regard to this matter.
Let’s test this out. Add an assert to the test suite:
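Any simple assertion will do, since the goal is just to change the code base – for example, inside one of the existing test methods:

```python
# any trivial assertion works – this is just to trigger a new build
self.assertEqual(2 + 2, 4)
```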
Commit and push to Github to generate a new build on Docker Hub. Success?
Bottom line: It’s good to know that if a commit does cause a regression, Docker Hub will catch it, but since this is the last line of defense before deploying (to either staging or production), you ideally want to catch any breaks before generating a new build on Docker Hub. Plus, you also want to run your unit and integration tests from a true continuous integration server – which is exactly where CircleCI comes into play.
CircleCI
CircleCI is a continuous integration and delivery platform that supports testing within Docker containers. Given a Dockerfile, CircleCI builds an image, starts a new container, and then runs tests inside that container.
Remember the workflow we want? See the list at the top of this post.
Let’s take a look at how to achieve just that…
Setup
The best place to start is the excellent Getting started with CircleCI guide…
Sign up with your Github account, then add the Github repo to create a new project. This will automatically add a webhook to the repo so that anytime you push to Github a new build is triggered. You should receive an email once the hook is added.
Next we need to add a configuration file to the root folder of the repo so that CircleCI can properly create the build.
circle.yml
Add the following build commands/steps:
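A configuration along these lines covers the steps described below (the file paths are assumptions based on this project’s layout):

```yaml
# NOTE: paths are assumptions based on this project's layout
machine:
  services:
    - docker          # enable the Docker service on the build machine

dependencies:
  override:
    - pip install -r requirements.txt

test:
  override:
    - docker-compose run -d --no-deps web   # Redis is already provided by CircleCI
    - python web/tests.py
```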
Essentially, we create a new image, run the container, then run our unit tests.
Notice how we’re using the command docker-compose run -d --no-deps web to run the web process, instead of docker-compose up. This is because CircleCI already has Redis running and available for our tests, so we just need to run the web process.
With the circle.yml file created, push the changes to Github to trigger a new build. Remember: this will also trigger a new build on Docker Hub.
Success?
Before moving on, we need to change our workflow since we won’t be pushing directly to the master branch anymore.
Feature Branch Workflow
For those unfamiliar with the Feature Branch workflow, check out this excellent introduction.
Let’s run through a quick example…
Create the feature branch
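The branch name is arbitrary – for example:

```sh
# branch name is just an example
$ git checkout master
$ git checkout -b circle-test
```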
Update the app
Add a new assert in tests.py:
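Again, any simple assertion works – for example:

```python
# another trivial assertion, just to change the code base
self.assertNotEqual(4, 5)
```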
Issue a Pull Request
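Commit and push the branch (the commit message and branch name here are just examples):

```sh
$ git add tests.py
$ git commit -m "added a new assert"
$ git push origin circle-test
```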
Even before you create the actual pull request, CircleCI starts creating the build. Go ahead and create the pull request, then once the tests pass on CircleCI, press the Merge button. Once merged, the build is triggered on Docker Hub.
Refactoring the workflow
If you jump back to the overall workflow at the top of this post, you’ll see that we don’t actually want to trigger a new build on Docker Hub until the tests pass against the master branch. So, let’s make some quick changes to the workflow:
- Open your repository on Docker Hub, and then under Settings click Automated Build.
- Uncheck the Active box: “When active we will build when new pushes occur”.
- Save.
- Click Build Triggers under Settings.
- Change the status to on.
- Copy the example curl command – i.e.,
$ curl --data "build=true" -X POST https://registry.hub.docker.com/u/mjhea0/fitter-happier-docker/trigger/84957124-2b85-410d-b602-b48193853b66/
Now add the following code to the bottom of your circle.yml file:
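CircleCI’s deployment section runs its commands only after a green build on the specified branch; something like this (the “hub” label is arbitrary):

```yaml
deployment:
  hub:                # the "hub" label is arbitrary
    branch: master    # only run after a green build on master
    commands:
      - $DEPLOY       # the curl trigger, stored as an env variable
```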
Here we fire the $DEPLOY variable after we merge to master and the tests pass. We’ll add the actual value of this variable as an environment variable on CircleCI:
- Open up the Project Settings, and click Environment variables.
- Add a new variable with the name “DEPLOY” and paste the example curl command (that you copied) from Docker Hub as the value.
Now let’s test.
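For example, from a feature branch (the commit message and branch name are illustrative):

```sh
$ git add circle.yml
$ git commit -m "added deployment trigger"
$ git push origin circle-test
```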
Open a new pull request, and then once the tests pass on CircleCI, merge to master. Another build is triggered. Then once the tests pass again, a new build will be triggered on Docker Hub via the curl command. Nice.
Remember how I said that I configured Docker Hub to pull updated code to create a new image? Well, you could also set it up so that you push images directly to Docker Hub. So once the tests pass, you could simply push the image to update Docker Hub and then deploy to staging or production directly from CircleCI. Since I have it set up differently, I handle the push to production from Docker Hub, not CircleCI. There are positives and negatives to both approaches, as you will soon find out.
Conclusion
So, we went over a nice development workflow that included setting up a local environment coupled with continuous integration via CircleCI (steps 1 through 6):
- Code locally on a feature branch
- Open a pull request on Github against the master branch
- Run automated tests against the Docker container
- If the tests pass, manually merge the pull request into master
- Once merged, the automated tests run again
- If the second round of tests pass, a build is created on Docker Hub
- Once the build is created, it’s then automatically (err, automagically) deployed to production
What about the final piece – delivering this app to the production environment (step 7)? You can actually follow another one of my Docker blog posts to extend this workflow to include delivery.
Comment below if you have questions. Grab the final code here. Cheers!
If you have a workflow of your own, please let us know. I am currently experimenting with Salt as well as Tutum to better handle orchestration and delivery on Digital Ocean and Linode.