Deployment Demo

Contributors: Alicia Wang, Conner Swenberg, Alanna Zhou

1. Setup Google Cloud Server

After successfully claiming your Google Cloud credits, make your way over to the console to begin setting up a server instance to deploy our code onto.

We begin on the left side menu by clicking Compute Engine -> VM Instances. Here are the settings we used for our setup.

  • Name: demo

  • Region: us-east1

  • Machine configuration

    • Machine family: General purpose

    • Machine type: g1-small

  • Boot disk: Ubuntu 18.04

  • Firewall: Allow HTTP traffic

After setting up your machine, you should see it listed in your VM Instances console page as a single row.
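
If you prefer the command line to the console, roughly the same instance can be created with the gcloud CLI. The command below is a sketch: the flags are our best mapping of the console settings above, the specific zone within us-east1 is an arbitrary choice, and the http-server tag corresponds to the "Allow HTTP traffic" checkbox.

>>> gcloud compute instances create demo \
        --zone=us-east1-b \
        --machine-type=g1-small \
        --image-family=ubuntu-1804-lts \
        --image-project=ubuntu-os-cloud \
        --tags=http-server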

2. Open SSH Web Interface

On the right side of your newly created VM Instance row (within the dashboard), there should be a button labeled SSH. Click it. This will open a pop-up window with a command line interface. You are now "inside" your cloud server and can treat its command line just like your own computer's. Now that we have control of the server, we can set it up to run our application code, starting with installing Docker and Docker Compose.

3. Install Docker and Docker Compose

Docker

Follow Step 1 of this guide.

Continuing with Step 2 is optional and only gives us the convenience of running Docker commands without prepending sudo to each one. It raises your permissions on the server to those of a root user, which can do things like run Docker containers without a manual override on each command. We generally recommend skipping this step and instead making use of the sudo keyword on relevant commands, which runs just that single command with root privileges.

Docker Compose

Follow Step 1 of this guide.
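
Once both are installed, a quick sanity check (not part of the linked guides) is to print their versions. These particular commands do not need sudo because they only print version information rather than talking to the Docker daemon:

>>> docker --version
>>> docker-compose --version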

4. Setup Docker Compose File

Just as we demonstrated in the previous chapter, we can streamline our use of Docker containers by specifying a docker-compose.yml file. We can use the same file from the last demo, which already points to the image we pushed to DockerHub.

To create a new file from the SSH'd command line, we will use the nano text editor. Create the file by typing:

>>> nano docker-compose.yml

This will pull up an editor interface for the file. Fill it in with the same contents as before:

version: "3"
services:
  demo:
    image: username/image
    ports:
      - "80:5000"

Take note that our ports mapping now goes from 80 to 5000: it routes port 80 on the host to port 5000 inside the container. We want HTTP requests that hit our server's IP address to be routed into our container, and the reserved port for HTTP is port 80. This is not strictly a requirement, because we could just as easily build our frontend to send requests to xxx.xx.xx.xxx:5000, but that is more verbose than simply sending requests to the bare IP.
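
In other words, because 80 is the default HTTP port, a client can leave the port off entirely; the two requests below (with the placeholder standing in for your server's external IP) are equivalent, whereas mapping to any other host port would force every client to spell it out:

>>> curl http://xxx.xx.xx.xxx/
>>> curl http://xxx.xx.xx.xxx:80/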

5. Run Docker Container

As demonstrated last chapter, we can run our Docker containers with:

>>> sudo docker-compose up -d

Note that we now need to prefix the command with sudo to use root permissions. Running this command is functionally equivalent to:

>>> sudo docker run -it -p "80:5000" username/image

However, we now have the ability to run the container in detached mode (the -d flag), which means quitting out of our SSH window will not terminate the running container. Because this server does not have the indicated image tag locally, Docker will automatically search DockerHub for a match and download it onto this machine to run. While pulling the image from DockerHub, you will see Docker download the layers of our application from the ground up (including, for example, the container's base Python installation).

You can confirm the containers actively running on your server (or on your personal computer) with:

>>> sudo docker ps

This will output identifying information like a unique id and the image tag. This confirms that our code is running on the external server. The final test of whether our deployment succeeded is to visit the external IP address listed in Google Cloud's VM Instances console and hit our API with real requests.
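
For example, from your own machine you can hit the root route with curl (substituting your instance's external IP for the placeholder) and you should get your API's response back:

>>> curl http://xxx.xx.xx.xxx/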

Other Tips

Redeploying after updating code

Let’s say we want to make a local change to our code, such as changing a route response.

app.py
@app.route("/")
def hello_world(): 
    return "Hello World!", 200

We can rebuild the image and then push it to Docker Hub:

>>> docker build -t username/image .
>>> docker push username/image

Note that a lot of the image layers will say “Layer already exists”. Docker recognizes unchanged layers and won’t re-push them. Because the image includes large dependencies like the Python runtime and OS libraries, a lot of time would be wasted if Docker re-uploaded them every time.

Check that the image was successfully pushed on Docker Hub. The latest tag should say that it was updated a few seconds ago.

Go back to the Google Cloud server and re-pull the new image (similar to pulling in Git). You must stop the currently running container before redeploying the newer image because port 80 is already in use.

>>> sudo docker-compose down
>>> sudo docker pull username/image
>>> sudo docker-compose up -d

When you refresh the page at the server's IP address, you should see Hello World!

Using Docker volumes

In our current setup, whenever we redeploy our application code, a new .db file comes along with it in the bundled image. This means we wipe the deployed .db file's contents with each redeployment, which is a problem. To solve this, we need a way to maintain our sqlite3 database outside the scope of the Docker container on our external server. We do this with a Docker volume. Just as a port mapping in docker-compose.yml takes a port outside the container and exposes it as a different port inside the container, a volume mapping links a file outside the container to a file inside the container.

First we need to decide where to place this file: the same level as our docker-compose.yml will do just fine. Add a new file with:

>>> touch todo.db

Later we will need to know the exact path to this file, so you can discover it by running cd .. a couple of times to see how the remote server's directories are structured. To help you get this path right, here is the default directory structure for Google Cloud's VM Instances:

/
|-home/
  |-<username>/
    |-docker-compose.yml
    |-todo.db

Note that <username> will be your netid by default when making your own VM Instance, so the full path to our new database file is /home/<netid>/todo.db. We also need to know the exact path to the database file within the container, which should be /usr/app/todo.db, given that our Dockerfile previously defined this directory for our application code.
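
If you are unsure of the host-side path, a quick check is to print the working directory from the folder where you created todo.db; it should match the structure above:

>>> pwd
/home/<netid>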

To connect the volume, update your compose file like so:

docker-compose.yml
version: "3"
services:
  demo:
    image: username/image
    ports:
      - "80:5000"
    volumes:
      - /home/<netid>/todo.db:/usr/app/todo.db

Now whenever we redeploy, the todo.db file bundled into the image has no impact on the remote server's todo.db, because the file outside the container is untouched. We can safely update our application code without changing our database's contents.
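
As a quick check that the volume is wired up (not part of the original demo), you can list the file from inside the running container; this assumes the container image provides ls, which the Python base image does:

>>> sudo docker-compose exec demo ls -l /usr/app/todo.db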

Using environment variables

If you’re implementing authentication or you have some sort of client secrets, you may have a local environment (.env) file that you don’t want pushed to any publicly available repository like GitHub. Let’s test using an environment variable locally before we use it in deployment:

secrets.env
export SECRET_KEY=helloworld

Source secrets.env into the environment, i.e. load the variables into your shell session. The export keyword is necessary to make the environment variables available to subprocesses as well; otherwise, a KeyError would be raised in Python when the app tries to retrieve SECRET_KEY.

>>> source secrets.env
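
You can confirm the variable was loaded into your shell by echoing it:

>>> echo $SECRET_KEY
helloworld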

We can update our app.py to make use of this environment variable, such as returning the secret as a response. Don’t actually do this in any of your own apps!

app.py
import os

...

@app.route("/")
def hello_world():  
    return os.environ["SECRET_KEY"], 200

Now, before we rebuild and push our code to Docker Hub, let's exclude any environment files from the image using a .dockerignore file:

.dockerignore
__pycache__
venv
*.env

Notice that we were already ignoring our Python cache and virtual environment folder to avoid packaging useless files. Another trick is using *.env to match all files with the .env extension. Now we can safely rebuild and re-push our image to Docker Hub:

>>> docker build -t username/image .
>>> docker push username/image

We can then modify our docker-compose.yml on the GC server to include a corresponding secrets.env file:

docker-compose.yml
version: "3"
services:
  demo:
    image: username/image
    env_file:
      - secrets.env
    ports:
      - "80:5000"

Create the same secrets.env file on the Google Cloud server, but without the export keyword; Docker's env_file option expects plain KEY=VALUE lines.

secrets.env
SECRET_KEY=helloworld

Pull the newest image and restart the container; the / path should now return helloworld.
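
Concretely, this is the same redeploy sequence as before; if curl is installed on the server, you can also check the response without leaving the SSH session:

>>> sudo docker-compose down
>>> sudo docker pull username/image
>>> sudo docker-compose up -d
>>> curl http://localhost/
helloworld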
