I. Welcome to the Docker World

 1. Docker and virtualization


Before Docker, we used hardware virtualization (virtual machines) to provide isolation. VMs share the resources of the host machine through an intermediate software layer, the hypervisor, which runs on the host operating system and uses the physical machine's resources to emulate multiple virtual hardware environments; applications then run on each VM's own kernel. However, virtual machines make inefficient use of hardware, because it is difficult for a VM to dynamically adjust the resources it occupies according to the current workload. This is why containerization technology became popular. Docker, in particular, is an open-source application container engine that lets developers package an application together with its dependencies into a portable container and then distribute it to any popular Linux machine.


Docker containers do not use hardware virtualization; the Docker daemon is an ordinary process on the host, and, in other words, applications in containers run directly on the host kernel. Because there is no extra layer between the program running in the container and the host operating system, no resources are wasted on redundant software or virtual hardware emulation.
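A quick way to see this for yourself (a sketch; it assumes Docker is installed and some container, here called web, is already running a program such as Nginx):

```shell
# processes inside the container, as reported by Docker
docker top web

# the very same processes are ordinary host processes,
# so they also show up in the host's own process list
ps aux | grep -v grep | grep nginx
```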

 The advantages of Docker go beyond that, so let’s compare.

                      Docker                                                  Virtual machine
 Start-up speed       Seconds                                                 Minutes
 Delivery/deployment  Consistent development, test, production environments   No mature system
 Performance          Close to the physical machine                           High performance loss
 Size                 Minimal (MB)                                            Larger (GB)
 Migration/extension  Cross-platform, replicable                              More complex

 2. Images, containers and repositories


Docker consists of three parts: the image (Image), the container (Container), and the repository (Repository).


A Docker image can be likened to the installation disc used to set up a computer: it contains the operating system and the necessary software. For example, an image can contain a complete CentOS operating system environment with Nginx and Tomcat installed. Note that an image is read-only, which makes sense: the installation disc we burn is likewise read-only. We can use docker images to see the list of local images.


A Docker container can be simply understood as providing the system's runtime environment: it is what actually runs the program, consumes machine resources, and provides services. For example, we can loosely think of a container as a Linux computer that runs directly. Containers are started from images, and each container is isolated from the others. Note that when a container starts, it adds a writable layer on top of the image as its topmost layer. We can use docker ps -a to see which containers have run locally.


Docker repositories are used to store images, which is very similar to Git. We can download images from the central repository or from our own repository, and we can also commit an image locally and push it to a remote repository. Repositories are divided into public and private ones; the largest public repository is the official Docker Hub, and there are many public repository choices in China, such as Aliyun.
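The push/pull workflow looks roughly like this (a sketch; the registry host, namespace, and image name are placeholders):

```shell
# tag a local image for a remote repository
docker tag myapp:1.0 registry.example.com/myteam/myapp:1.0

# push it to the remote repository
docker push registry.example.com/myteam/myapp:1.0

# later, pull it down on any other machine
docker pull registry.example.com/myteam/myapp:1.0
```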

 3. Docker enables changes in the development process


In my opinion, Docker's main impact on the development process is that it standardizes the environment. For example, suppose we have three environments: a development (daily) environment, a testing environment, and a production environment. We need to deploy the same software, scripts, and running programs to each environment, as shown in the figure. In practice, the startup scripts have the same contents but are not maintained in one place, which often causes problems. In addition, if the underlying runtime environments the programs depend on are inconsistent, that also causes trouble and anomalies.


Now, with Docker, we only need to maintain one Docker image. In other words: multiple environments, one image; build once, run anywhere at the system level. At this point we have standardized the run scripts, baked the underlying software into the image, and standardized the deployment of the programs. Docker therefore gives us a standardized operations model and solidifies the operations steps and processes.


With this process improvement, it becomes easier to achieve our DevOps goals, because once an image is built it runs on any system and deploys quickly. In addition, a big motivation for adopting Docker is to implement elastic scheduling on top of it, so as to utilize machine resources more fully and save costs.


I have also found some nice wins while using Docker. For example, to roll back a release, we only need to switch the TAG and restart. Likewise, to upgrade the environment, we only need to upgrade the base image, and newly built application images automatically reference the new version. (Feel free to add more!)
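A rollback of this kind can be sketched as follows (container and image names are hypothetical; 1.3.9 is the previous known-good tag):

```shell
# replace the running container with one started from the previous tag
docker stop web && docker rm web
docker run -d --name web -p 8080:80 myapp:1.3.9
```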

 II. Starting with building a Web server

 1. Environment first, install Docker

 Now, we install Docker in the following steps.


  • Register an account at hub.docker.com/.
  •  Download and install


Official download address: (Mac): download.docker.com/mac/stable/… AliCloud download address (Mac): mirrors.aliyun.com/docker-tool… AliCloud download address (Windows): mirrors.aliyun.com/docker-tool…


  • Installation guide: here, double-click the Docker.dmg installation package you just downloaded to install it.


Once the installation is complete and launched, an icon appears in the top navigation bar of the Mac with a menu that allows you to do things like docker configuration and exit.


Official Guide: docs.docker.com/install/ AliCloud Guide (Linux): yq.aliyun.com/articles/11…

  •  Setting up the acceleration service


There are many acceleration service providers on the market, such as DaoCloud, AliCloud, and so on. Here, the author uses AliCloud. (Note that the author's operating system is macOS; for other operating systems, see the AliCloud documentation.)


Right-click the Docker icon in the top bar of your desktop, select Preferences, and under the Daemon tab (the Advanced tab in Docker versions prior to 17.03), add https://xxx.mirror.aliyuncs.com to the "registry-mirrors" list. Then click the Apply & Restart button and wait for Docker to restart and apply the configured registry mirror (accelerator).
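On Linux, the equivalent setting lives in the daemon configuration file /etc/docker/daemon.json (the mirror URL below is the same placeholder as above):

```json
{
  "registry-mirrors": ["https://xxx.mirror.aliyuncs.com"]
}
```

After editing the file, restart the Docker daemon for the change to take effect.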


Aliyun Operation Documentation: cr.console.aliyun.com/cn-hangzhou…

  •  View Version


At this point, we are done with the installation. Here, let’s check the version.

docker version

  View the results as shown below.


2. Hands-on, starting by building a web server


Let's get practical and build a web server first. Then, the author will walk you step by step through what happens in the process. First, we start a container from the centos image (Docker will pull it if it is not present locally).

docker run -p 80 --name web -i -t centos /bin/bash


Immediately after that, inside the container, we add the Nginx package source by executing the following command:

rpm -ivh http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm


After installing the Nginx sources, you can officially install Nginx.

yum install -y nginx


At this point, we can see the installation path by typing whereis nginx. Finally, we need to get Nginx running.

nginx


Now, we press Ctrl + P followed by Ctrl + Q to detach and leave the container running in the background. Then, run docker ps -a to see the randomly assigned port.


Here, the port assigned by the author is 32769 , then accessing http://127.0.0.1:32769 through the browser will work.
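Instead of scanning the docker ps -a output, the mapping can also be queried directly (assuming the container is named web, as above):

```shell
# which host port was mapped to container port 80?
docker port web 80
# → e.g. 0.0.0.0:32769
```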

 It's done!

 3. Reviewing and understanding the whole process


Now, let's understand the flow. First, we entered the docker run -p 80 --name web -i -t centos /bin/bash command, which runs an interactive container. The -i option tells Docker to keep the container's standard input stream open even if no terminal is attached, and the -t option tells Docker to allocate a pseudo-terminal for the container, so that we can install the Nginx server next. (Author's note: Docker also supports the -d option, which tells Docker to run the container in the background as a daemon.)


Docker automatically generates a random name for every container we create. While convenient, this hurts readability and makes later maintenance harder to reason about, so we tell Docker to create the container with the name web via the --name web option. In addition, we tell Docker to expose port 80 via -p 80, so that Nginx can be accessed and serve requests externally. However, the host machine maps this port automatically; for example, the port assigned above was 32769. Note that this port changes if the container is shut down or restarted. How to pin a fixed port is something I will analyze in detail later.
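As a quick preview, the host port can also be pinned explicitly by giving -p both sides of the mapping (host:container):

```shell
# always map host port 8080 to container port 80
# (remove the old container first if the name web is already taken)
docker run -p 8080:80 --name web -i -t centos /bin/bash
```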


Here there is another very important knowledge point: docker run. Docker starts a new container with the run command. Docker first looks for the image on the local machine; if it is not installed, Docker looks for it on Docker Hub, downloads it, and installs it locally; finally, Docker creates a new container and starts the program.


However, when docker run is executed a second time, since Docker already has that image installed locally, Docker will simply create a new container and start the program.


Note that docker run creates a new container each time it is used, so later we only need docker start to start this container again. Here, docker start restarts an existing container, while docker run is equivalent to creating a container from the image (docker create) and then starting it (docker start), as shown in the figure.
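In other words, the earlier one-liner could also be sketched as two steps (the name web2 is chosen here so as not to clash with the existing web container):

```shell
# step 1: create the container from the image (does not start it)
docker create -p 80 --name web2 -i -t centos /bin/bash

# step 2: start the container that was just created
docker start web2
```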


Now, building on the case above, we can shut down the container with the exit command. Of course, if it is running as a daemon in the background, we can also stop it with docker stop web . Note that docker stop differs slightly from docker kill: docker stop sends a SIGTERM signal, while docker kill sends a SIGKILL signal. Then, we restart the container using docker start .

docker start web


After restarting, the container runs with the parameters originally specified by the docker run command; however, it is now running in the background. We can switch back to the interactive session with the docker attach command.

docker attach web

  4. More than that: more commands


Docker provides a very rich set of commands. As the saying goes, a picture is worth a thousand words, and the table below gives a good overview of the commands and their uses. (You can skip it for now; we recommend bookmarking it for extended reading.)

 If you want more information, you can read the official documentation.

Command Description
docker attach Attach local standard input, output, and error streams to a running container
docker build Build an image from a Dockerfile
docker builder Manage builds
docker checkpoint Manage checkpoints
docker commit Create a new image from a container’s changes
docker config Manage Docker configs
docker container Manage containers
docker cp Copy files/folders between a container and the local filesystem
docker create Create a new container
docker deploy Deploy a new stack or update an existing stack
docker diff Inspect changes to files or directories on a container’s filesystem
docker engine Manage the docker engine
docker events Get real time events from the server
docker exec Run a command in a running container
docker export Export a container’s filesystem as a tar archive
docker history Show the history of an image
docker image Manage images
docker images List images
docker import Import the contents from a tarball to create a filesystem image
docker info Display system-wide information
docker inspect Return low-level information on Docker objects
docker kill Kill one or more running containers
docker load Load an image from a tar archive or STDIN
docker login Log in to a Docker registry
docker logout Log out from a Docker registry
docker logs Fetch the logs of a container
docker manifest Manage Docker image manifests and manifest lists
docker network Manage networks
docker node Manage Swarm nodes
docker pause Pause all processes within one or more containers
docker plugin Manage plugins
docker port List port mappings or a specific mapping for the container
docker ps List containers
docker pull Pull an image or a repository from a registry
docker push Push an image or a repository to a registry
docker rename Rename a container
docker restart Restart one or more containers
docker rm Remove one or more containers
docker rmi Remove one or more images
docker run Run a command in a new container
docker save Save one or more images to a tar archive (streamed to STDOUT by default)
docker search Search the Docker Hub for images
docker secret Manage Docker secrets
docker service Manage services
docker stack Manage Docker stacks
docker start Start one or more stopped containers
docker stats Display a live stream of container(s) resource usage statistics
docker stop Stop one or more running containers
docker swarm Manage Swarm
docker system Manage Docker
docker tag Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
docker top Display the running processes of a container
docker trust Manage trust on Docker images
docker unpause Unpause all processes within one or more containers
docker update Update configuration of one or more containers
docker version Show the Docker version information
docker volume Manage volumes
docker wait Block until one or more containers stop, then print their exit codes


Official reading link: docs.docker.com/engine/refe…

 5. Going further: repositories and simplified software installation


Remember the "images, containers and repositories" introduced at the beginning of this article: Docker repositories are used to store images. We can download images from the central repository or from our own repository. We can also push a locally created image to a remote repository.


First, let me introduce a knowledge point: a Docker image is its file system. One image can be layered on top of another; the image in the lower layer is its parent image. Docker images therefore consist of many layers, each of which is read-only and never changes. When we create a new container, Docker builds up this image stack and adds a read/write layer at the top, as shown here.


Now, we can view the local images with the command docker images .

docker images

  The result of the query, as shown in Fig.

 Here, the meaning of a few terms is explained.

  •  REPOSITORY: The name of the repository.

  • TAG: The image tag, where latest indicates the latest version. Note that one image can have multiple tags, so we can conveniently manage version and feature tags.
  •  IMAGE ID: The unique ID of the image.
  •  CREATED: The creation time.
  •  SIZE: The size of the image.


So, if we first pull the image via docker pull centos:latest , then when we run docker run -p 80 --name web -i -t centos /bin/bash , Docker won't fetch it remotely again because it is already installed locally; Docker will simply create a new container and start the program.


In fact, there is already an official Nginx image that we can use directly. Now, let's rebuild a web server by pulling this image. First, we search for images with docker search and get a list of Nginx images.

docker search nginx


To add to this, we can also search for repositories by visiting Docker Hub ( hub.docker.com/); the higher the star count, the more reliable the image generally is.


Now, we pull the latest image of Nginx via docker pull nginx . Of course, we can also do this via docker pull nginx:latest .

docker pull nginx


We then create and run a container. Unlike before, we tell Docker to run the container in the background via the -d option. And via -p 8080:80 we tell Docker to map host port 8080, which is open to the public, to port 80 inside the container.

docker run -p 8080:80 -d --name nginx nginx


Looking at docker ps -a again, we see that the container is running in the background, executing the nginx command, with port 8080 open to the public.

 So just go to http://127.0.0.1:8080 through your browser.
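From the command line, the same check can be sketched with curl (assuming curl is available on the host):

```shell
# ask nginx for the response headers only
curl -I http://127.0.0.1:8080
```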

 6. Other options: using alternative registries


Docker Hub is not the only source of software; we can also switch to alternative registries in China, such as AliCloud. We can log into cr.console.aliyun.com to search for and pull publicly available images.


Now, let’s enter the docker pull command to pull it.

docker pull registry.cn-hangzhou.aliyuncs.com/qp_oraclejava/orackejava:8u172_DCEVM_HOTSWAPAGEN_JCE


Here, the author adds one more knowledge point: the registry address. There is a set of conventions for image addresses; the full format is [registry host/][username/]image name[:tag]. Here, the registry host is registry.cn-hangzhou.aliyuncs.com, the username is qp_oraclejava, the image name is orackejava, and the tag is 8u172_DCEVM_HOTSWAPAGEN_JCE. In fact, the image we pulled earlier via docker pull centos:latest is equivalent to docker pull registry.hub.docker.com/centos:latest .

 III. Building my mirror image


With the above, I believe you have a general understanding of how to use Docker: it is as if we installed a system through VMware and got it running, and can now do whatever we want on top of that Linux system (CentOS or Ubuntu). In practice, we often take a snapshot of an installed VMware system and clone it for quick replication later. Similarly, Docker lets us build customized Docker images, such as the official image with Nginx installed that we used above. Note that we are simply building a new image by adding a layer on top of an existing base image.


To summarize, Docker provides the ability to customize images, which allows us to save changes to the base image and use it again. Then, we can package the operating system, runtime environment, scripts and programs together and serve them externally on the host.


There are two ways to build an image with Docker: one is the docker commit command, the other is the docker build command together with a Dockerfile. The docker commit command is not recommended because it does not standardize the process, so in enterprise settings we recommend docker build with a Dockerfile. Using a Dockerfile makes image builds reproducible and ensures that the startup scripts and runtime procedures are standardized.


1. Build the first Dockerfile file


Now, let's move on to practice. Here, we build an image of the web server we set up at the beginning. First, we need to create an empty Dockerfile.

mkdir dockerfile_test
cd dockerfile_test/
touch Dockerfile
nano Dockerfile


Immediately after that, we write the Dockerfile with the following code listing.

FROM centos:7
MAINTAINER LiangGzone "[email protected]"
RUN rpm -ivh http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm
RUN yum install -y nginx
EXPOSE 80


Finally, we build via the docker build command.

docker build -t="lianggzone/nginx_demo:v1" .


Now, let’s take a look at our new image at docker images .


2. Understanding the Dockerfile Process


Wow — we built a new image just by writing a Dockerfile, and the process was remarkably simple. Now, let's understand the whole flow. First of all, FROM centos:7 is the necessary first step of a Dockerfile: it builds on an existing image; in other words, Docker needs a base image to build upon. Here, we specify centos as the base image, at version 7 (CentOS 7). Then, MAINTAINER LiangGzone "[email protected]" declares that the image's author is LiangGzone with the contact address [email protected], which informs users of the author and contact information. Next, we execute two RUN commands to download and install Nginx, and finally expose port 80 of the Docker container via EXPOSE 80 . Note that Docker executes a Dockerfile from top to bottom, so be clear about the order of the whole process. In addition, Docker creates and commits a new image layer after each command.


We build using the docker build command, where -t tells Docker the name and version of the image. Note that Docker automatically assigns the latest tag if none is specified. Also, the . at the end tells Docker to look for the Dockerfile in the current directory. Note that Docker commits the result of each build step to an image and treats previous image layers as a cache, so when we rebuild a similar image layer, the previous one is simply reused. If we need to skip this, we can use the --no-cache option to tell Docker not to use the cache.
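For example, to force a clean rebuild of the image from the earlier example:

```shell
# rebuild without reusing any cached layers
docker build --no-cache -t="lianggzone/nginx_demo:v1" .
```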

 3. Dockerfile Command Explained


Dockerfile provides a great many instructions. The author has compiled a list here, and recommends bookmarking it for reference.


Official address: docs.docker.com/engine/refe…


Command comparison one: RUN, CMD, ENTRYPOINT


The purposes of RUN, CMD, and ENTRYPOINT are very similar. The difference is that RUN executes a command while the image is being built, whereas CMD and ENTRYPOINT specify the command executed when the container starts. CMD is overridden by arguments given to docker run, while ENTRYPOINT is not; in fact, any arguments specified on docker run are appended as arguments to the ENTRYPOINT command. CMD and ENTRYPOINT can also be used together: for example, use the exec form of ENTRYPOINT to set a fixed default command and its parameters, then use CMD to set additional defaults that are more likely to be changed.

FROM ubuntu
ENTRYPOINT ["top", "-b"]
CMD ["-c"]
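Assuming the three-line Dockerfile above is built into an image (the tag demo-top is arbitrary), the override behavior can be sketched as:

```shell
docker build -t demo-top .

# no arguments: ENTRYPOINT + CMD, i.e. top -b -c
docker run demo-top

# arguments replace CMD but not ENTRYPOINT, i.e. top -b -H
docker run demo-top -H
```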

  Command comparison two: ADD, COPY


ADD and COPY are almost identical. The differences are that ADD can auto-extract local archives (tar, gzip, bzip2, etc.) into the destination and can fetch remote URLs, while COPY only copies local files and directories as-is. When those extras are not needed, COPY is generally preferred because its behavior is more predictable.
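A minimal sketch of the difference (the archive name and paths are hypothetical):

```dockerfile
# COPY copies the archive as-is
COPY app.tar.gz /opt/app/app.tar.gz

# ADD auto-extracts a local tar archive into the destination directory
ADD app.tar.gz /opt/app/
```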

By lzz
