In this guide I will describe how to build a Docker containerized web server from scratch using the individual concepts we have covered. Once this is complete, we will add services and commit the container to a new base image that we can use to further customize our web server instances.

To follow this guide, please first read my other tutorial on getting started with Docker.

First we need to have Docker installed on our operating system. In my examples I am using a CentOS 7 machine, but the commands will also work on Ubuntu, Debian, etc.

First we pull our base image, which in my case will be CentOS 6:

docker pull centos:centos6

Then we run the image:

 docker run -it centos:centos6 /bin/bash

While inside the container, we run the following commands to install the necessary packages, and we modify /root/.bashrc so that the services are started every time the container starts. Also note that we are installing the epel-release package, which enables EPEL (Extra Packages for Enterprise Linux), a repository of additional high-quality packages created, maintained, and managed by a Fedora group.

 

 yum update -y
 yum install -y wget
 yum install -y epel-release
 yum update -y
 yum install -y which sudo httpd php openssh-server

Then modify /root/.bashrc with the following contents:

vi /root/.bashrc

# .bashrc
# User specific aliases and functions
alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'
# Source global definitions
if [ -f /etc/bashrc ]; then
 . /etc/bashrc
fi
##add services to start
/sbin/service httpd start
/sbin/service sshd start

We then exit our container and commit it to a new image. (Note that the ID 555bacc2c40d was the one we saw inside our container; it can also be retrieved by running docker ps -l.)

docker commit 555bacc2c40d centos:baseweb
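If you don't want to copy the container ID by hand, a small helper can capture it for you. This is only a sketch: commit_latest is a hypothetical function name, and it assumes docker is installed and on the PATH.

```shell
# Hypothetical helper: commit the most recently created container to a tag.
# Assumes docker is installed and on the PATH.
commit_latest() {
  local tag="$1"
  local cid
  cid=$(docker ps -lq)        # -l: latest created container, -q: ID only
  docker commit "$cid" "$tag"
}
```

Calling `commit_latest centos:baseweb` then performs the same commit as above without pasting the ID.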

 

Now that we have a base image to inherit a web server instance from, we need to grab some web site code from Open Source Web Design (oswd.org). We will create a static working directory on our local host and spin up a container that mounts that content inside the container. We will show how to browse that instance from our local host and what the mounts look like.

Now we need to create a directory on our host operating system:

mkdir /srv/dockerwww

We can then use a website sample from the Open Source Web Design site, which we will download and unzip in the directory we just created:

cd /srv/dockerwww
wget http://static.oswd.org/designs/3681/bluefreedom2.zip
unzip bluefreedom2.zip

Then we simply start our container, mounting the directory we just created onto the default directory Apache serves from:

 docker run --name=webtest -it -v /srv/dockerwww:/var/www/html centos:baseweb /bin/bash
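If you launch containers like this often, the flags can be wrapped in a small function. This is just a sketch: run_web is a made-up name, and the echo prints the command instead of executing it so it is easy to inspect; drop the echo to actually launch the container.

```shell
# Sketch: build the docker run command for a web container. 'run_web' is a
# hypothetical helper; 'echo' shows the command rather than executing it.
run_web() {
  local name="$1" hostdir="$2"
  echo docker run --name="$name" -it \
    -v "$hostdir":/var/www/html centos:baseweb /bin/bash
}
cmd=$(run_web webtest /srv/dockerwww)
echo "$cmd"
```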

 

To be even more realistic, if this were a development environment you would commit the website you just created to Git for version control. You would run the following commands inside the dockerwww directory to create a Git repository and commit your website development:

 apt-get install git   # on Debian/Ubuntu
 yum install git       # on CentOS
 git init .
 git add *
 git commit -m "initial commit"
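As a concrete illustration of the sequence above, here it is run end to end. A temporary directory stands in for dockerwww, and the file content and Git identity are placeholders, since this is only a sketch:

```shell
# Sketch: initialise a repo and make the first commit, as in the steps above.
# A temp directory stands in for dockerwww; user/email are placeholders.
set -e
repo=$(mktemp -d)
cd "$repo"
echo '<h1>placeholder site</h1>' > index.html
git init -q .
git add -A
git -c user.email=dev@example.com -c user.name=dev commit -qm "initial commit"
git log --oneline
```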

We can install Git within the container as well, so we can make changes to the code from within the container, inside /var/www/html.

We will now commit our container changes, add system services to start automatically, and redirect the container port to a local port so that others outside the local host can see the website. We will talk about how to manage the static content base from within the container through Git, and then create our final base image for use in our web farm.

First, let's commit the container we have worked on to an image, so we can reuse it later on:

docker commit webtest centos6:server1

Now we can start a container and forward host port 8081 to container port 80 using the following command. Again, note that we are mounting the shared volume:

docker run --name=externalweb -it -v /srv/dockerwww:/var/www/html -p 8081:80 centos6:server1 /bin/bash

In this final stage of our web farm, we will harden and protect our environment by changing our mount from a git repository to a git clone (protecting the source of record) and starting multiple container instances, each redirected to a different local host port. Finally, we will install and configure an instance of NginX web server to serve as a proxy redirect and load balancer for our environment and discuss how that can help us in testing locally and remotely.

First we start two containers sharing the same volume, each listening on port 80; to one we forward host port 8081 and to the other port 8082:

docker run --name=devweb1 -itd -v /srv/dockerwww:/var/www/html -p 8081:80 centos6:server1 /bin/bash
 
docker run --name=devweb2 -itd -v /srv/dockerwww:/var/www/html -p 8082:80 centos6:server1 /bin/bash
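The two commands differ only in the name suffix and the host port, so scaling to more instances is a short loop. In this sketch the commands are collected and printed rather than executed, so you can review them first; pipe the output to sh, or call docker directly inside the loop, to actually start the containers:

```shell
# Sketch: generate one docker run command per instance, each on its own port.
# Commands are printed, not executed; review them, then run via 'sh' if happy.
cmds=()
for i in 1 2; do
  port=$((8080 + i))   # devweb1 -> 8081, devweb2 -> 8082
  cmds+=("docker run --name=devweb$i -itd -v /srv/dockerwww:/var/www/html -p $port:80 centos6:server1 /bin/bash")
done
printf '%s\n' "${cmds[@]}"
```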

If we navigate to the host IP on each forwarded port, e.g. 192.168.0.16:8081 and 192.168.0.16:8082, we should be able to see our test website.

Now, to do the load balancing we will use nginx, so we install nginx on the host operating system:

yum install nginx

Then we create a configuration file, either inside the /etc/nginx/sites-enabled directory or, if that doesn't exist, inside /etc/nginx/conf.d/.

We can call that file default.conf and inside we add the following:

upstream containerapp {
    server 192.168.0.16:8081;
    server 192.168.0.16:8082;
}
server {
    listen *:80;
    server_name 192.168.0.16;
    index index.html index.htm index.php;
    access_log /var/log/localweblog.log;
    error_log /var/log/localerrorlog.log;
    location / {
        proxy_pass http://containerapp;
    }
}

In the second and third lines we add the host IP with the port that each container is mapped to. This configuration does the load balancing for nginx. There are different methods of load balancing, such as client-IP based, round robin (the default), etc.; more information can be found in the nginx documentation.
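For example, to switch from the default round robin to client-IP based balancing (so each client sticks to one backend container between requests), the only change is a single directive in the upstream block. A sketch, using the same addresses as above:

```nginx
# Same upstream as above, but with client-IP "sticky" balancing:
# ip_hash keeps each client on the same backend between requests.
upstream containerapp {
    ip_hash;
    server 192.168.0.16:8081;
    server 192.168.0.16:8082;
}
```

After editing the configuration, `nginx -t` validates it and `service nginx reload` applies it without dropping connections.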