Since changes to the deployment method can disrupt normal access for online users, the adjustments and tests could only be scheduled for a low-traffic window (the early morning).


I didn’t expect to hit so many pitfalls along the way; as late as midnight yesterday I was still hunting bugs together with engineers from other teams:


In this article I’ll share how we upgraded our project’s deployment method, the process we followed, and the pitfalls we ran into; it may come in handy for you someday too~

 Why the upgrade?


In the beginning, almost all of our projects were deployed directly on servers, and many projects shared a single server, like this:

Why do it that way?


The answer is simple: low cost! I happen to have a couple of high-spec servers, and if each hosted only one or two projects, their CPU, RAM, and bandwidth would go under-utilized, which would be a real waste of resources.


Besides, small companies and individuals can now deploy projects directly with the Pagoda Linux panel, which is very convenient.


So unless it was necessary, we tried not to use CDNs, pay-as-you-go container platforms, and the like.


It’s been over a year since I started the business, so why overhaul the way we deploy our projects now?

 A few of the main reasons:


(1) As the business grows, a single node may no longer keep up. We may want to deploy the same project across multiple nodes for load balancing and fault tolerance, and manual deployment is far too much trouble. This calls for the ability to flexibly scale machine nodes up and down and to deploy through a pipeline.


(2) Multiple projects are deployed on the same server, so if that server goes down, several projects are affected at once.


(3) Projects compete for resources. For example, when one project runs a heavy promotion and consumes most of the bandwidth, the other projects are left with very little, and access to them becomes very slow.


(4) Permission risk. Once a developer is given access to the server, they can modify every project on it, and there is also the possibility of accidental misuse.


For these reasons, plus the fact that there had already been a few incidents, we decided to upgrade the way our projects are deployed.

 Change in deployment mode

 Previously, we deployed as shown below:


When a user requests the website, DNS resolution first maps the domain name to the server’s IP, and the request reaches the Nginx web server after passing through the anti-DDoS (high-defense) server. Nginx then routes by request path: if the request is for a static file, it serves the front-end site’s files; if it is an API request, it reverse-proxies to the back-end service.
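To make the routing concrete, here is a minimal sketch of that kind of Nginx configuration (the domain, file path, and back-end port are hypothetical placeholders, not our actual settings):

server {
    listen 80;
    server_name www.example.com;

    # Static requests: serve the built front-end files from disk
    location / {
        root /www/frontend/dist;
        index index.html;
        try_files $uri $uri/ /index.html;
    }

    # API requests: reverse-proxy to the back-end service
    location /api/ {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}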

 After the upgrade, we deployed as shown below:

 There are 3 main changes:


(1) We added a CDN with security protection and resource-acceleration capabilities, which improves the loading speed of the front-end site.


(2) The back-end is deployed on a container platform, which provides dynamic scaling and load balancing.


(3) The front-end and back-end are deployed separately. We no longer rely on Nginx for forwarding; instead, requests are distinguished by domain name and resolved via DNS to different CDNs.


For the CDN platform I used both Tencent Cloud CDN and Blue Cloud CDN (tsycdn.com), choosing different CDNs for different projects. Blue Cloud CDN is not as famous as Tencent Cloud, but it is more cost-effective and effectively fends off DDoS attacks. They also helped me a great deal during the period when my website was under frequent attack.


What impressed me most was their technical support, who patiently spent hours with me troubleshooting, second to none:


For the container platform, we put some of our services on WeChat Cloud Hosting, where it is easy to configure a pipeline so that pushing code to GitHub automatically triggers release and deployment:

You can also view service logs and resource usage:


Although the WeChat Cloud Hosting platform feels like it hasn’t been updated in a long time, and container configuration isn’t as flexible as it could be, it satisfies the needs of most developers.

Upgrade process

 1. Back-end service migration


Since the back-end services are moving to a container platform, we definitely need to build the project into a Docker image.


The method is simple: create a Dockerfile in the root directory of the back-end project and write the instructions for building the image.


For example, a Spring Boot project can use a configuration similar to the following:

 
FROM maven:3.8.1-jdk-8-slim AS builder

# Set the container timezone to Asia/Shanghai; otherwise the container
# may run 8 hours behind local time (see the pitfall below)
RUN ln -snf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo Asia/Shanghai > /etc/timezone

WORKDIR /app
COPY pom.xml .
COPY src ./src

# Build the jar; tests are skipped to speed up image builds
RUN mvn package -DskipTests

# Start the service with the production profile
CMD ["java","-jar","/app/target/server.jar","--spring.profiles.active=prod"]


One pitfall here: watch the container environment’s time. It can differ from the real (local) time by 8 hours, which skews log timestamps and writes wrong times into the database.
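If you’d rather not change the container’s system timezone, a sketch of an alternative fix is to set the timezone at the JVM level instead, reusing the jar path from the example above:

CMD ["java","-Duser.timezone=Asia/Shanghai","-jar","/app/target/server.jar","--spring.profiles.active=prod"]

Either way, log and database timestamps stay consistent with Beijing time.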

 2. Configure CDN


The key to configuring a CDN is setting the origin server address. A CDN is essentially a cache: when a user requests data that can’t be found on the CDN, the CDN node fetches it from the origin, so the origin configuration must not be wrong.


The back-to-origin setting in the figure above controls how the CDN requests the origin, including the protocol, domain, port number, and so on.
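Before relying on the CDN, it helps to confirm the origin actually answers a back-to-origin style request. A quick sanity check with curl (the IP and domain below are placeholders):

# Send the site's Host header straight to the origin IP and inspect the status line
curl -I http://203.0.113.10/ -H "Host: www.example.com"

A 200 response here suggests the origin side is fine; anything else points at a back-to-origin misconfiguration.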

 There are 2 caveats here:


(1) Avoid adding any redirect logic at the origin; otherwise the origin address may be exposed outright by the redirect.


For example, say the CDN address is “yupi.icu” and the origin address is “base.yupi.icu”. The origin address is normally kept hidden; otherwise users could bypass the CDN and attack the origin directly. But if the origin is configured with redirect logic, such as routing the path “/” to “/aaa”, then a user visiting “yupi.icu/” may be automatically redirected to “base.yupi.icu/aaa”. Exposed! (A config-level sketch of this leak follows after the second caveat.)


(2) If the CDN site enables HTTPS, prefer HTTP for the back-to-origin protocol; otherwise inconsistent SSL configuration across subdomains sharing the same certificate can cause a 421 error (Misdirected Request). That error is genuinely obscure; unless you’ve put a project online yourself, odds are you’ve never even heard of it.
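Returning to the first caveat: in Nginx terms, the leak usually comes from an absolute redirect whose Location header carries the origin host. A sketch of the problematic pattern, using the domain names from the example above:

server {
    listen 80;
    server_name base.yupi.icu;

    # An absolute redirect like this can put the origin host into the
    # Location header, sending users to base.yupi.icu/aaa
    location = / {
        return 301 /aaa;
    }

    # One possible mitigation: make Nginx emit relative redirects
    # absolute_redirect off;
}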

 3. Configure DNS


After opening up the path from the CDN to the origin (the container platform), the final step is to configure DNS so that the domain users visit (e.g., www.code-nav.cn) resolves to the CDN.
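Concretely, this usually means adding a CNAME record per domain pointing at the hostname the CDN assigns, which is also how change (3) above plays out: front-end and back-end domains resolve to different CDNs. A sketch (the api subdomain and both CDN hostnames are invented for illustration):

; front-end and back-end resolve to different CDN providers
www.code-nav.cn.   CNAME   www.code-nav.cn.cdn-a.example.net.
api.code-nav.cn.   CNAME   api.code-nav.cn.cdn-b.example.net.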


Note that DNS changes do not propagate at the same speed across the country, so after switching the resolution it’s possible that users in Beijing can’t access the site while users in Shanghai can. So don’t be in a hurry to take the old service offline!


Just when I thought everything was going smoothly, the CDN’s back-to-origin requests unexpectedly failed, and the origin returned a 444 status code (connection closed)! Another obscure error!


This error had me stumped. Why would my server reject connections from domestic CDN nodes? My first guess was an IP block, so I checked the anti-DDoS service and the server firewall, and asked the cloud provider’s support; all of them said the IPs were not blocked.

 So, I went and looked at the Nginx logs:

  - - [29/Apr/2024:00:06:41 +0800] "GET /favicon.ico HTTP/1.1" 444 0 "https://www.code-nav.cn/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36"


Since Nginx had already received the request, there was a good chance the Nginx configuration itself was refusing the connection. But I dug through the entire Nginx configuration and couldn’t find any IP-blocking rules.
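For context: 444 is a non-standard code specific to Nginx. A return 444; directive makes Nginx close the connection without sending anything back, which matches exactly what the CDN nodes were experiencing. Had the blocking lived in my own config, it would have looked something like this sketch (placeholder IP and domain; nothing of the kind was actually there):

server {
    listen 80;
    server_name www.example.com;

    # Close the connection with no response; the access log records 444
    if ($remote_addr = 203.0.113.10) {
        return 444;
    }
}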


Then guess what? I suddenly remembered that a few years ago I had bought an Nginx firewall for this server. Although it expired long ago, it apparently could still auto-block some IPs for me. My guess is it triggered because I was testing the CDN yesterday and hit the origin too frequently.


So I uninstalled the Nginx firewall, and the error hasn’t appeared since.

By lzz
