
Improving Your PHP Application Deployment with Docker Part 1: Setting Up Docker



Docker is a great tool for building containers for applications with whatever tools you need, like PHP, MySQL, or Nginx, while using resources such as CPU and RAM far more efficiently.

Read this article to learn more about Docker and how you can set it up to run your PHP application development or production environment.





Contents

Introduction

What is Docker?

Setup Docker

Docker is Not a Virtualization Tool

Docker is an Object Oriented Design Tool for an Infrastructure

Docker Does Not Execute Containers, but Manages Them

Usage

Beginning

Docker is a Client-Server System Service

Conclusion


Introduction

Docker logo

Everybody talks about Docker. I think I know what you might say: "It's just something for fun", "You can prepare an image for a cloud and launch it there the same way", "You could just set up LXC, chroot or AppArmor". One more trendy toy that you are too lazy to study. But if you are curious about what it is and why everybody is talking about it, this article is for you.

What is Docker?

Docker is a tool for building lightweight containers that contain everything you need to make a whole application run: PHP or other languages, databases, Web servers, and so on.

This blog has already published articles about lightweight containers like LXC.

If you have never heard about containers in Linux, it is worth reading an introduction to Linux containers first to understand what this is all about.

Setup Docker

Setting up Docker is not difficult. On Windows you can use Docker Toolbox, or set it up yourself in your favorite virtual machine. Take some time to learn as much as you can from the manual. However, the manual is not clear on many matters, so this article contains important information that the documentation is missing.
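
For example, on a recent Ubuntu system a quick way to try Docker is the official convenience script. This is just a sketch; the exact commands depend on your distribution and Docker version, so check the installation manual for your platform:

# Download and run the Docker installation script (review it before piping to sh)
curl -fsSL https://get.docker.com/ | sh

# Let your user run docker without sudo (log out and back in afterwards)
sudo usermod -aG docker $USER

# Verify the installation with a throwaway test container
docker run --rm hello-world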

Docker is Not a Virtualization Tool

Docker does not emulate hardware. It does not simply change the file system root of your environment like chroot, although Docker partially matches the functionality of chroot. It is not a security system like AppArmor. Docker uses the same kind of containers as LXC, but those containers are not what makes it interesting.

For me, Docker turned out to be nothing like what I had imagined before I read the documentation.

Here is my regular Linux distribution:

Welcome to Ubuntu 15.04 (GNU/Linux 3.19.0-15-generic x86_64)

Last login: Tue Aug 18 00:43:50 2015 from 192.168.48.1
gri@ubuntu:~$ uname -a
Linux ubuntu 3.19.0-15-generic #15-Ubuntu SMP Thu Apr 16 23:32:37 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
gri@ubuntu:~$ free -h
             total       used       free     shared    buffers     cached
Mem:          976M       866M       109M        11M       110M       514M
-/+ buffers/cache:       241M       735M
Swap:         1.0G       1.0M       1.0G

Here is CentOS container started by Docker:

gri@ubuntu:~$ docker run -ti centos
[root@301fc721eeb9 /]# uname -a
Linux 301fc721eeb9 3.19.0-15-generic #15-Ubuntu SMP Thu Apr 16 23:32:37 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
[root@301fc721eeb9 /]# cat /etc/redhat-release
CentOS Linux release 7.1.1503 (Core)
[root@301fc721eeb9 /]# free -h
              total        used        free      shared  buff/cache   available
Mem:           976M         85M        100M         12M        790M        677M
Swap:          1.0G        1.0M        1.0G

As you may see, it is the same kernel, memory and file system, but the distributions, libraries, processes and users are different. They can be the same as well, if you want.

Docker is an Object Oriented Design Tool for an Infrastructure

A common point of disagreement is whether Nginx configuration files are part of a web application. Software architects plan systems that require infrastructure dependencies that system administrators want to avoid, and that conflict often surfaces right before the launch of a software project.

Conflicts of this kind drag projects down, cause missed deadlines and sometimes impose significant financial losses. Then come the DevOps people (developers responsible for operations), replacing conventional procedural bash shell calls with an OOP design applied to the whole infrastructure.

Docker provides encapsulation, inheritance and polymorphism for system components, such as a database or its data. You can decompose a whole information system: the application, web server, database, system libraries and data can all be independent components of the whole. You can inject dependencies from configuration and make them work together as a group, identically on different servers.
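
As a rough sketch of this idea, an application stack can be assembled from separate containers wired together through configuration. The image tags, container names and password below are only illustrative, and the --link option shown here was the usual way to connect containers around Docker 1.8:

# Database: configuration is injected through environment variables
docker run -d --name app-db -e MYSQL_ROOT_PASSWORD=secret mysql

# PHP-FPM: receives the database as a dependency via --link
docker run -d --name app-php --link app-db:db php:5.6-fpm

# Web server: only knows about the PHP container, not about the database
docker run -d --name app-web --link app-php:php -p 80:80 nginx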

Docker Does Not Execute Containers, but Manages Them

Containers are executed by kernel features such as cgroups and namespaces. The docker service starts a container using a command received from a client application, such as the docker command-line tool itself, and waits until the container releases its standard I/O streams. That is why you can read in the documentation for the official Nginx image:

Be sure to include daemon off; in your custom configuration to ensure that Nginx stays in the foreground so that Docker can track the process properly (otherwise your container will stop immediately after starting)!
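
In practice this usually means mounting your own configuration file, with daemon off; inside it, into the official image. The host path below is just an example:

# Run Nginx with a custom configuration that keeps it in the foreground
docker run --name my-nginx -v /host/path/nginx.conf:/etc/nginx/nginx.conf:ro -d nginx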

When a container finishes its execution, it is not deleted unless you explicitly configured it to be. Every "docker run image_name" command issued without the --name or --rm parameters creates a new container with a unique ID.

Such a container stays in the system until it is deleted, so Docker is a system prone to littering. Container names are unique within a system, and I recommend naming each permanent container. The ones that do not need to store any data I recommend running with the --rm parameter.

Containers are created with the "docker run" and "docker create" commands. You can see all existing containers with the command "docker ps -a".
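
The difference is easy to see with a few throwaway commands (the container name below is just an example):

# Each run without --rm leaves a stopped container behind
docker run centos echo hello
docker run centos echo hello
docker ps -a    # two stopped containers with unique IDs and generated names

# A named container can be referenced and removed explicitly
docker run --name hello-test centos echo hello
docker rm hello-test

# With --rm the container is removed automatically when it exits
docker run --rm centos echo hello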

Usage

The best known scenario for using Docker is building microservices, but there is more. We use Docker to avoid vendor lock-in, to get an application working when a library such as OpenSSL on a production server does not support a cipher required by a government API, or to make an application work independently of the PHP or Python version installed on a customer's server. You can also stop paying expensive front-end developers to set up a web server and a database.
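
For instance, a script can be executed with a specific PHP version regardless of what is installed on the host. This sketch follows the pattern suggested for the official php images; the image tag and file name are just examples:

# Run script.php from the current directory with PHP 5.6, without installing PHP on the host
docker run --rm -v "$PWD":/app -w /app php:5.6-cli php script.php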

We create an open source project not just as code, but as a composition of pre-configured packages of different applications, written in different languages and working at different OSI layers.

Docker can also be used in existing old applications, when we need new features and want to mitigate the growth of complexity. We achieve better security and reliability by running the critical parts of our application in independent containers.

For example, I built a billing module with a simple REST API and left it working for more than a year. None of the weekly deployments, bugs and rollbacks resulted in direct money loss; the most critical part of the application stayed secure and stable.

Another good idea is running third-party untrusted code, such as phpBB or custom extensions, in limited containers without even a shell. Of course, each of these features can be implemented with other tools as well, and the choice is always yours.
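
As a sketch of such a restricted container, Docker lets you drop privileges, make the root file system read-only and cut off the network entirely. The image and command names here are placeholders:

# Run untrusted code as an unprivileged user, on a read-only file system, with no network
docker run --rm --read-only --net=none --user nobody untrusted-image some-command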

Beginning

I use Mac OS X, so I opened the respective Getting Started page, installed Docker, went through several exercises, and still felt lost.

The first questions were: where are the Docker application and its data located? What format is used for storing container data? How is it all arranged? Later I found a blog post with the answers.

In short, Docker uses one of several drivers to work with the file system; usually it is AUFS. The files of all containers live in /var/lib/docker/aufs/diff/, while /var/lib/docker/containers/ contains service information, not the containers themselves.
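
You can check which storage driver your own installation uses with docker info; the aufs paths above only apply when the driver is AUFS (on other systems it may be devicemapper, overlay and so on):

docker info | grep -i 'storage driver'
# typical output on this setup: Storage Driver: aufs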

Images are like classes, and containers are like objects created from those classes. The difference is that a container can be committed and thus form a new image. Images consist of so-called layers, which are in fact folders inside /var/lib/docker/aufs/diff/.
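
Both directions of this analogy can be tried on the command line: docker commit turns a container back into an image, and docker history lists the layers an image consists of. The names below are illustrative:

# Create an "object" from a "class": start an interactive container, change it, then exit
docker run -ti --name my-centos centos /bin/bash

# Commit the stopped container as a new image ("class") and inspect its layers
docker commit my-centos my/centos-custom
docker history my/centos-custom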

Most application images inherit from some ready-made official system image. When Docker downloads an image, it only needs the layers it is missing. For instance, here is what pulling the official nginx image looks like:

docker@dev:~$ docker pull nginx
latest: Pulling from nginx
aface2a79f55: Pull complete
72b67c8ad0ca: Downloading [=============>                                     ] 883.6 kB/3.386 MB
9108e25be489: Download complete
902b87aaaec9: Already exists
9a61b6b1315e: Already exists

It contains nginx version 1.9.4. The image is 52 MB, but in fact I am downloading only about 3 MB, because nginx is built on debian:jessie, which "Already exists" in my storage. There are also a lot of images based on Ubuntu. Of course, it makes sense to build all the images of an application stack from the same ancestor image.

Docker is a Client-Server System Service

As a client-server system, Docker can freeze. If you tell it to download an image, the only way to interrupt the process is to restart the service. The authors have been discussing how to solve this problem for two years already, but no solution has been provided.

For example, there is a bug in Docker 1.8.1:

docker@dev:~$ docker pull debian
Using default tag: latest
latest: Pulling from library/debian
2c49f83e0b13: Downloading [===================>                               ] 19.89 MB/51.37 MB

Press Ctrl-C, then start the download again.

docker@dev:~$ docker pull debian
Using default tag: latest

Here we are, frozen. Restart the daemon.

docker@dev:~$ sudo /etc/init.d/docker restart
Need TLS certs for dev, 127.0.0.1, 10.0.2.15, 192.168.99.104
-------------------
docker@dev:~$ sudo /etc/init.d/docker status
Docker daemon is running
docker@dev:~$ docker pull debian
Using default tag: latest
latest: Pulling from library/debian
...
Status: Downloaded newer image for debian:latest

Sometimes docker does not want to die and does not release the port it is listening on. The init script does not handle these boundary cases yet. So do not forget to check its status with "sudo /etc/init.d/docker status" and "sudo netstat -ntpl" to see whether it is still running.

One more important notice: the order of the parameters of the docker command is significant. If you write "docker create nginx --name=nginx", the --name=nginx parameter is treated as a command to execute in the container, not as the container name.
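
The correct form puts the options before the image name and the optional command after it (the container name below is just an example):

# Options first, then the image, then (optionally) the command to run inside
docker create --name=nginx nginx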

Conclusion

Well, with the explanations above I hope it will be easier for you to understand the official Docker documentation, get started and set it up successfully.

The next parts of this article will cover the setup of more specific application environments, such as PHP.

For now, if you liked this article or you have a question about setting up and using Docker, post a comment here.




