Deployment Demo
Contributors: Alicia Wang, Conner Swenberg, Alanna Zhou
1. Setup Google Cloud Server
After successfully claiming your Google Cloud credits, make your way over to the console to begin setting up a server instance to deploy our code onto.
We begin on the left side-menu by clicking Compute Engine -> VM Instances. Here are the settings we used for our setup.
Name: demo
Region: us-east1
Machine configuration
Machine family: General purpose
Machine type: g1-small
Boot disk: Ubuntu 18.04
Firewall: Allow HTTP traffic
After setting up your machine, you should see it in your VM Instances console page as a single row.
2. Open SSH Web Interface
On the right side of your newly created VM Instance row (within the dashboard), there should be a button titled SSH; click it. Doing so will open a new pop-up window with a command line interface. You are now "inside" your cloud server and can treat its command line analogously to your own computer's. Now that we have control within the server, we can begin to set up the machine to run your application code, starting with installing Docker and Docker Compose.
3. Install Docker and Docker Compose
Docker
Follow Step 1 of this guide.
Continuing with Step 2 is optional and only gives us the convenience of running commands without prepending sudo to everything relating to Docker. It increases your permissions on the server to those of a root user, which can do things like run Docker containers without manual override on each command. We generally recommend skipping this step and instead making use of the sudo keyword on relevant commands, which runs just that single command under root privileges.
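If you just want a quick reference for what the installation roughly looks like, one common route on Ubuntu 18.04 is to install Docker from Ubuntu's own package repository (the linked guide's Step 1 uses Docker's official repository instead; either works for this demo):

```bash
# Install Docker from Ubuntu's package repository (assumes Ubuntu 18.04)
sudo apt-get update
sudo apt-get install -y docker.io

# Verify the installation
docker --version
```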
Docker Compose
Follow Step 1 of this guide.
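Again, as a rough reference for what that step looks like, Docker Compose is a single binary you download and mark as executable (the version number below is just an example; check the releases page for the latest):

```bash
# Download a Docker Compose binary and make it executable
# (1.29.2 is an example version, not a requirement)
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

# Verify the installation
docker-compose --version
```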
4. Setup Docker Compose File
Just as we demonstrated in the previous chapter, we can streamline our use of Docker containers by specifying a docker-compose.yml file. We can use the same file used in the last demo, which already points to our image pushed to DockerHub.
To create a new file from the SSH'd command line, we will use Linux's built-in nano tool. To create our new file, type:
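```bash
nano docker-compose.yml
```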
This will pull up a new interface for you to edit the file. Specify the file the same as the one used previously.
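If you no longer have that file handy, a minimal sketch looks something like this (the service name app and the image tag are placeholders; use whatever image you pushed to DockerHub in the last demo):

```yaml
version: "3"
services:
  app:
    image: <dockerhub-username>/<image-name>:latest
    ports:
      - "80:5000"
```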
Take note that now our ports mapping goes from 80 to 5000. We want HTTP requests that hit our server's IP address to be routed into our container, and the reserved port for HTTP is port 80. This is not necessarily a requirement, because we could just as easily build our frontend to send requests to xxx.xx.xx.xxx:5000, but this is just more verbose than simply sending requests directly to the IP.
5. Run Docker Container
As demonstrated last chapter, we can run our Docker containers with:
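```bash
sudo docker-compose up -d
```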
Note that now we need to prefix the command with sudo to utilize root permissions. Running this command is functionally equivalent to:
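```bash
# Roughly equivalent docker run command (the image name is a placeholder
# for whatever you pushed to DockerHub in the last demo)
sudo docker run -p 80:5000 <dockerhub-username>/<image-name>:latest
```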
However, now we have the ability to run the container in a detached mode (the -d flag), which means quitting out of our SSH window will not terminate the running container. Because this server does not have our indicated image tag locally, Docker will automatically search DockerHub for a match and download it onto this machine to run. After pulling our image from DockerHub, you will see Docker downloading the layers of our application from the ground up (including, for example, the base Python installation for the container).
You can confirm the active running containers on your server (and personal computer) with:
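```bash
sudo docker ps
```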
This will output identifying information like a unique ID and the image tag. This confirms that our code is running on an external server. The final test of whether our deployment was successful is to visit the IP address indicated in Google Cloud's VM Instances console and hit our API with real requests.
Other Tips
Redeploying after updating code
Let’s say we want to make a local change to our code, such as changing a route response.
We can rebuild the image and then push it to Docker Hub:
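```bash
# Rebuild the image and push it to Docker Hub
# (the repository name is a placeholder; use your own Docker Hub repo)
docker build -t <dockerhub-username>/<image-name>:latest .
docker push <dockerhub-username>/<image-name>:latest
```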
Note that a lot of the image layers will say “Layer already exists”. Docker can recognize unchanged layers and won’t re-push them. Because the application includes a lot of libraries like Python and OS libraries, a lot of time could be wasted if Docker re-uploaded them.
Check that the image was successfully pushed on Docker Hub. The latest tag should say that it was updated a few seconds ago.
Go back to the GC server and repull the new image (similar to Git). You must kill the currently running container before redeploying the newer image because the port is already in use.
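One way to do this, assuming the same docker-compose.yml as above:

```bash
sudo docker ps                  # note the ID of the running container
sudo docker kill <container-id> # stop it so port 80 is freed up
sudo docker-compose pull        # fetch the newest image from Docker Hub
sudo docker-compose up -d       # redeploy with the updated image
```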
When you refresh at the server IP address, you should see Hello World!
Using Docker volumes
In our current setup, whenever we redeploy our application code, a new .db file comes along with it in the bundled image. This means we wipe our deployed .db file's contents with each redeployment, which is a problem. To solve this, we need a way to maintain our sqlite3 database outside of the scope of the Docker container on our external server. We do this with a Docker volume. Just like a port mapping within docker-compose.yml takes a port outside the container and maps it to a different port inside the container, we can make a volume mapping to link a file outside of the container to a file inside the container.
First we need to decide where to place this file: the same level as our docker-compose.yml will do just fine. Add a new file with:
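```bash
touch todo.db
```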
Later we will need to know the exact path to this file, which you can discover by running cd .. a couple of times to see how the remote server's directories are structured. To help you get this path correct, here is the default directory structure for Google Cloud's VM Instances:
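(Only the relevant part of the filesystem is sketched here, showing the two files we created.)

```
/
└── home
    └── <username>
        ├── docker-compose.yml
        └── todo.db
```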
Note that <username> will be your netid by default when making your own VM Instance. Therefore, our full exact path to our new database file is /home/<netid>/todo.db. We also need to know the exact path to our database file within the container, which should be /usr/app/todo.db given how our Dockerfile previously defined this new directory for our application code.
To connect the volume, update your compose file like so:
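```yaml
version: "3"
services:
  app:
    image: <dockerhub-username>/<image-name>:latest
    ports:
      - "80:5000"
    volumes:
      - /home/<netid>/todo.db:/usr/app/todo.db
```

As with the port mapping, the value to the left of the colon is the path on the server and the value to the right is the path inside the container (the service and image names above are the same placeholders as before).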
Now whenever we redeploy, the contents of our local todo.db file have no impact on the remote server's todo.db because the file outside the container is untouched. We can now safely update our application code without changing our database's contents.
Using environment variables
If you're implementing authentication or you have some sort of client secrets, you may have a local environment (.env) file that you don't want pushed to any publicly available repository like GitHub. Let's test using an environment variable locally before we use it on deployment:
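A minimal secrets.env might look like this (the helloworld value is just for this demo):

```bash
# secrets.env (local version)
export SECRET_KEY=helloworld
```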
Source secrets.env into the environment, i.e. load the variables into the command prompt. The export keyword is necessary to make the environment variables available to subprocesses as well; otherwise an error would be raised in Python when the app tries to retrieve SECRET_KEY.
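From the same directory as the file:

```bash
source secrets.env
```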
We can update our app.py to make use of this environment variable, such as returning the secret as a response. Don't actually do this in any of your own apps!
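A sketch of what that could look like, assuming the Flask setup from earlier chapters (only SECRET_KEY comes from the original setup; everything else here is illustrative):

```python
import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # For demonstration only -- never return a secret in a real app!
    # Raises a KeyError if SECRET_KEY was not exported into the environment.
    return os.environ["SECRET_KEY"]

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```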
Now before we rebuild and push our code to Docker Hub, let's exclude any environment files from packaging using a .dockerignore file:
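For example (the cache and virtual environment entries are whatever you were already ignoring; the exact folder names may differ in your project):

```
__pycache__/
venv/
*.env
```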
Notice that before, we were already ignoring our Python cache and virtual environment folder to limit pushing useless files. Another trick is to use *.env to encompass all files with the .env extension. Now we can safely rebuild and repush our image to Docker Hub:
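```bash
# Same build/push commands as before (the repository name is a placeholder)
docker build -t <dockerhub-username>/<image-name>:latest .
docker push <dockerhub-username>/<image-name>:latest
```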
We can then modify our docker-compose.yml on the GC server to include a corresponding secrets.env file:
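```yaml
version: "3"
services:
  app:
    image: <dockerhub-username>/<image-name>:latest
    ports:
      - "80:5000"
    volumes:
      - /home/<netid>/todo.db:/usr/app/todo.db
    env_file:
      - secrets.env
```

The only new piece is the env_file entry; the service and image names are the same placeholders as before.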
Create the same secrets.env file on the Google Cloud server, but without the export keyword; Docker Compose's env_file option expects a plain VARIABLE=value format.
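So on the server, secrets.env would look like:

```bash
# secrets.env (server version -- note there is no export keyword)
SECRET_KEY=helloworld
```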
Pull the newest image, and when you start the server, the / path should now return helloworld.