Nine Ways to Implement Redirects, and a Performance Comparison
Redirects need no introduction; one of the most common scenarios is the short links scattered across articles and social media.
I have recently been planning to overhaul my old short-link service. Before the rewrite, I ran a simple comparison of the common web languages (Java, PHP, Python, Node, Ruby, Go) and the server tools Nginx, Caddy, and Traefik.
Hopefully this article helps you build a more rounded picture of the baseline performance differences between these languages and tools in this particular scenario.
Before We Start
For the tests I used two pay-as-you-go cloud servers: one acts as the test server and runs the code for each language/tool, while the other acts as the load generator and simulates large numbers of user requests against it.
Both machines have identical specs: 4-core / 4 GB Alibaba Cloud compute-intensive instances (type "ecs.ic5.xlarge") with Skylake-generation CPUs running at 2.5 GHz (2.7 GHz turbo). The operating system is the darling of the container era: Ubuntu Server.
Before running the tests, let's agree on the following ground rules:
- All languages and tools are tested inside containers, orchestrated with compose running the corresponding docker-compose.yml, and every image uses an Alpine variant.
- No frameworks are used for the redirect logic; each implementation relies on the language's "native modules" as much as possible.
- The code and configuration for every language and tool should be as short and as simple as possible.
- Every language and tool is tested with 100,000 requests at both 100 and 1,000 concurrency.
Setting Up the Test Environment
The base environment on both machines doesn't need to be complicated; installing docker and compose is enough. The commands below set up the initial system:
apt update && apt upgrade -y
apt remove docker docker-engine docker.io
apt install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
apt-key fingerprint 0EBFCD88
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt install -y docker-ce
curl -L https://github.com/docker/compose/releases/download/1.27.4/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
After installation, verify the software versions (both machines should report the following) and check the ulimit setting while you are at it:
# docker -v
Docker version 19.03.13, build 4484c46d9d
# docker-compose -v
docker-compose version 1.27.4, build 40524192
# ulimit
unlimited
The ulimit result is unlimited, which means system resources are not being capped.
Since one of the machines needs to fire the test requests, run apt install apache2-utils on it to get the general-purpose benchmarking tool ab. The command below shows the tool's version and other details:
# apt show apache2-utils
Package: apache2-utils
Version: 2.4.41-4ubuntu3.1
...
Installing the Linux Dash Monitoring Panel
To get a more direct view of the server's overall performance and resource usage during the tests, we use the open-source tool linux-dash, running its Golang version so that it consumes as few system resources as possible and barely influences the results.
Installing Golang on Ubuntu is extremely simple: download, extract, and set the environment variable:
wget https://golang.org/dl/go1.15.5.linux-amd64.tar.gz
tar -C /usr/local -xzf go1.15.5.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin
Once installed, verify the Golang version:
go version
go version go1.15.5 linux/amd64
Then download the Linux Dash source and do a quick build and run.
git clone --depth 1 https://github.com/afaqurk/linux-dash.git
cd linux-dash/app/server
go build index.go
After running ./index you will see the startup log: "Starting http server at: 0.0.0.0:80". Open a browser and the real-time performance dashboard is right there.
Test Commands
The test commands are very simple; only three are needed:
- curl -I -X "GET" http://192.168.23.55:8080/ to verify that the service is reachable and to inspect the server's response.
- ab -n 100000 -c 100 http://192.168.23.55:8080/ to measure throughput at 100 concurrent connections.
- ab -n 100000 -c 1000 http://192.168.23.55:8080/ to measure throughput at 1,000 concurrent connections.
PHP
Legend has it that PHP is the best language in the world, so PHP goes first.
PHP – Official General-Purpose Image
Since every PHP release brings sizable performance improvements, we use the latest official php:8.0.0RC4-apache image for the test.
The compose configuration used for the test:
version: "3"
services:
php-apache:
image: php:8.0.0RC4-apache
volumes:
- ./src:/var/www/html/
ports:
- 8080:80
Save the above as docker-compose.yml, create an index.php in the same directory, and we are ready to test. A redirect in PHP takes a single line:
<?php exit(header('Location: http://localhost:1024', true, 301));?>
Fire a single request first; the server responds as follows:
HTTP/1.1 301 Moved Permanently
Date: Mon, 16 Nov 2020 02:17:21 GMT
Server: Apache/2.4.38 (Debian)
X-Powered-By: PHP/8.0.0RC4
Location: http://localhost:1024
Content-Type: text/html; charset=UTF-8
Next, 100,000 requests at a concurrency of 100:
Server Software: Apache/2.4.38
Server Hostname: 192.168.23.55
Server Port: 8080
Document Path: /
Document Length: 0 bytes
Concurrency Level: 100
Time taken for tests: 11.683 seconds
Complete requests: 100000
Failed requests: 0
Non-2xx responses: 100000
Total transferred: 24200000 bytes
HTML transferred: 0 bytes
Requests per second: 8559.09 [#/sec] (mean)
Time per request: 11.683 [ms] (mean)
Time per request: 0.117 [ms] (mean, across all concurrent requests)
Transfer rate: 2022.75 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 6
Processing: 0 12 5.1 11 118
Waiting: 0 11 3.7 11 102
Total: 1 12 5.1 11 119
Percentage of the requests served within a certain time (ms)
50% 11
66% 11
75% 12
80% 12
90% 14
95% 17
98% 26
99% 35
100% 119 (longest request)
Then 100,000 requests at a concurrency of 1,000:
Server Software: Apache/2.4.38
Server Hostname: 192.168.23.55
Server Port: 8080
Document Path: /
Document Length: 0 bytes
Concurrency Level: 1000
Time taken for tests: 13.549 seconds
Complete requests: 100000
Failed requests: 0
Non-2xx responses: 100000
Total transferred: 24200000 bytes
HTML transferred: 0 bytes
Requests per second: 7380.66 [#/sec] (mean)
Time per request: 135.489 [ms] (mean)
Time per request: 0.135 [ms] (mean, across all concurrent requests)
Transfer rate: 1744.26 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 7 91.3 0 3043
Processing: 6 115 825.7 59 13519
Waiting: 1 115 825.7 59 13519
Total: 23 122 832.2 59 13544
Percentage of the requests served within a certain time (ms)
50% 59
66% 61
75% 62
80% 64
90% 67
95% 73
98% 95
99% 1064
100% 13544 (longest request)
Watching the monitoring panel, CPU usage sits around 70%, system load spikes to 49, and memory usage rises from 9% to 12%.
PHP – Built-in Web Server
As mentioned up front, we want to rely on native capabilities wherever possible. PHP has shipped a built-in web server for a long time, so let's try serving requests with it.
The compose configuration barely changes; we only adjust the command directive:
version: "3.6"
services:
php8:
image: php:8.0.0RC4-apache
volumes:
- ./src/:/var/www/html/
ports:
- 8080:80
command: php -S 0.0.0.0:80 -t /var/www/html/
Looking at the response, there is an extra Connection header compared with before; otherwise not much has changed.
HTTP/1.1 301 Moved Permanently
Host: 192.168.23.55:8080
Date: Mon, 16 Nov 2020 02:34:39 GMT
Connection: close
X-Powered-By: PHP/8.0.0RC4
Location: http://localhost:1024
Content-type: text/html; charset=UTF-8
Next, 100,000 requests at a concurrency of 100:
Server Software:
Server Hostname: 192.168.23.55
Server Port: 8080
Document Path: /
Document Length: 0 bytes
Concurrency Level: 100
Time taken for tests: 15.348 seconds
Complete requests: 100000
Failed requests: 0
Non-2xx responses: 100000
Total transferred: 21700000 bytes
HTML transferred: 0 bytes
Requests per second: 6515.38 [#/sec] (mean)
Time per request: 15.348 [ms] (mean)
Time per request: 0.153 [ms] (mean, across all concurrent requests)
Transfer rate: 1380.70 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.2 0 13
Processing: 2 15 1.5 15 30
Waiting: 0 15 1.4 15 30
Total: 2 15 1.4 15 31
Percentage of the requests served within a certain time (ms)
50% 15
66% 15
75% 15
80% 15
90% 18
95% 18
98% 19
99% 19
100% 31 (longest request)
Then 100,000 requests at a concurrency of 1,000:
ab -n 100000 -c 1000 http://192.168.23.55:8080/
This is ApacheBench, Version 2.3 <$Revision: 1843412 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 192.168.23.55 (be patient)
Completed 10000 requests
Completed 20000 requests
Completed 30000 requests
Completed 40000 requests
Completed 50000 requests
Completed 60000 requests
Completed 70000 requests
Completed 80000 requests
Completed 90000 requests
apr_socket_recv: Connection reset by peer (104)
Total of 99594 requests completed
Watching the monitoring panel, CPU usage sits around 49%, system load spikes to 44, and memory usage barely changes.
Here the built-in PHP server starts to fall behind: some requests were cut off by timeouts and the run did not complete. Even so, taken together with the 100-concurrency results, it is clear this mode is fine for quick tests or personal use, but should be treated with caution in production.
PHP with OPcache Enabled
When deploying PHP we normally enable OPcache to squeeze out better performance.
In the container, the default PHP configuration ships as a template at /usr/local/etc/php/php.ini-production. Find the OPcache section in that file:
[opcache]
; Determines if Zend OPCache is enabled
;opcache.enable=1
; Determines if Zend OPCache is enabled for the CLI version of PHP
;opcache.enable_cli=0
; The OPcache shared memory storage size.
;opcache.memory_consumption=128
...
Remove the leading comment semicolons from all the OPcache-related settings, set opcache.enable_cli to 1, change ;zend_extension=opcache to zend_extension=opcache, and save the result as php.ini.
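For reference, the OPcache-related part of the edited php.ini would look roughly like this (my own sketch of just the lines touched above; everything else stays at the template defaults):
zend_extension=opcache

[opcache]
opcache.enable=1
opcache.enable_cli=1
opcache.memory_consumption=128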
Next, the compose file:
version: "3"
services:
php-apache:
image: php:8.0.0RC4-apache
volumes:
- ./src:/var/www/html/
- ./php.ini:/usr/local/etc/php/php.ini
ports:
- 8080:80
After starting the service, we get a response like the following:
HTTP/1.1 301 Moved Permanently
Date: Mon, 16 Nov 2020 02:51:44 GMT
Server: Apache/2.4.38 (Debian)
X-Powered-By: PHP/8.0.0RC4
Location: http://localhost:1024
Content-Type: text/html; charset=UTF-8
Next, 100,000 requests at a concurrency of 100:
Server Software: Apache/2.4.38
Server Hostname: 192.168.23.55
Server Port: 8080
Document Path: /
Document Length: 0 bytes
Concurrency Level: 100
Time taken for tests: 10.885 seconds
Complete requests: 100000
Failed requests: 0
Non-2xx responses: 100000
Total transferred: 24200000 bytes
HTML transferred: 0 bytes
Requests per second: 9186.73 [#/sec] (mean)
Time per request: 10.885 [ms] (mean)
Time per request: 0.109 [ms] (mean, across all concurrent requests)
Transfer rate: 2171.08 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 7
Processing: 0 11 8.9 9 211
Waiting: 0 10 6.6 9 209
Total: 0 11 8.9 9 212
Percentage of the requests served within a certain time (ms)
50% 9
66% 10
75% 10
80% 11
90% 17
95% 27
98% 41
99% 50
100% 212 (longest request)
Then 100,000 requests at a concurrency of 1,000:
Server Software: Apache/2.4.38
Server Hostname: 192.168.23.55
Server Port: 8080
Document Path: /
Document Length: 0 bytes
Concurrency Level: 1000
Time taken for tests: 13.379 seconds
Complete requests: 100000
Failed requests: 0
Non-2xx responses: 100000
Total transferred: 24200000 bytes
HTML transferred: 0 bytes
Requests per second: 7474.21 [#/sec] (mean)
Time per request: 133.793 [ms] (mean)
Time per request: 0.134 [ms] (mean, across all concurrent requests)
Transfer rate: 1766.37 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 17 153.6 0 7112
Processing: 4 96 633.4 55 13352
Waiting: 0 94 633.4 55 13352
Total: 20 112 656.0 55 13373
Percentage of the requests served within a certain time (ms)
50% 55
66% 57
75% 60
80% 62
90% 71
95% 89
98% 290
99% 1086
100% 13373 (longest request)
Compared with the first PHP test, with no code changes and only configuration adjusted while still using the official image, throughput at 100 and 1,000 concurrency improved by roughly 7% and 1% respectively, and response times improved somewhat as well.
Watching the monitoring panel, CPU usage sits around 67%, system load spikes to 40, and memory usage rises from 9% to 12%.
Java
Java is arguably the strongest VM-based language around, and one of the most broadly applicable languages in the web space.
Java – OpenJDK Image, Uncompiled
Let's see how Java performs with its built-in web server module, running the source without a separate compile step.
As usual, the compose configuration first.
version: "3"
services:
java:
image: openjdk:16-jdk-alpine3.12
ports:
- 8080:80
volumes:
- ./main.java:/app/main.java
command: java /app/main.java
Then we write a script of fewer than thirty lines and save it as main.java:
package com.soulteary.test;

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;

import java.io.IOException;
import java.net.InetSocketAddress;

public class Main {
    public static void main(String[] args) throws Exception {
        HttpServer httpServer = HttpServer.create(new InetSocketAddress(80), 0);
        httpServer.createContext("/", new HttpHandler() {
            @Override
            public void handle(HttpExchange httpExchange) throws IOException {
                httpExchange.getResponseHeaders().add("Content-Type", "text/html; charset=UTF-8");
                httpExchange.getResponseHeaders().add("Location", "http://localhost:1024");
                httpExchange.sendResponseHeaders(301, 0);
                httpExchange.close();
            }
        });
        httpServer.start();
    }
}
With the service running, Java's response contains the two headers we set in code and, by default, a little less than PHP's.
HTTP/1.1 301 Moved Permanently
Date: Mon, 16 Nov 2020 05:07:37 GMT
Content-type: text/html; charset=UTF-8
Location: http://localhost:1024
100,000 requests at a concurrency of 100:
Server Software:
Server Hostname: 192.168.23.55
Server Port: 8080
Document Path: /
Document Length: 0 bytes
Concurrency Level: 100
Time taken for tests: 10.501 seconds
Complete requests: 100000
Failed requests: 0
Non-2xx responses: 100000
Total transferred: 16300000 bytes
HTML transferred: 0 bytes
Requests per second: 9522.61 [#/sec] (mean)
Time per request: 10.501 [ms] (mean)
Time per request: 0.105 [ms] (mean, across all concurrent requests)
Transfer rate: 1515.81 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 4 61.9 0 3046
Processing: 1 6 39.2 5 3304
Waiting: 0 6 39.2 5 3304
Total: 2 10 83.2 5 4324
Percentage of the requests served within a certain time (ms)
50% 5
66% 6
75% 6
80% 6
90% 6
95% 7
98% 8
99% 10
100% 4324 (longest request)
Then 100,000 requests at a concurrency of 1,000:
Server Software:
Server Hostname: 192.168.23.55
Server Port: 8080
Document Path: /
Document Length: 0 bytes
Concurrency Level: 1000
Time taken for tests: 16.461 seconds
Complete requests: 100000
Failed requests: 0
Non-2xx responses: 100000
Total transferred: 16300000 bytes
HTML transferred: 0 bytes
Requests per second: 6074.99 [#/sec] (mean)
Time per request: 164.609 [ms] (mean)
Time per request: 0.165 [ms] (mean, across all concurrent requests)
Transfer rate: 967.02 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 32 367.2 0 7205
Processing: 1 37 547.7 6 13379
Waiting: 0 36 547.7 6 13379
Total: 1 68 761.1 6 16418
Percentage of the requests served within a certain time (ms)
50% 6
66% 6
75% 6
80% 6
90% 6
95% 7
98% 76
99% 1217
100% 16418 (longest request)
Watching the monitoring panel, CPU usage sits around 24%, system load reaches 12.56, and memory usage rises from 9% to 13%.
Compared with the earlier results, Java executes very efficiently: 50% of requests finish within 5 ms and 98% within 8 ms.
Fast as it is, at 100 concurrency it is only about 3.6% ahead of PHP with OPcache, and at 1,000 concurrency the result looks worse because roughly 2% of requests take far too long. The conclusion mirrors the PHP built-in server: perfectly fine for quick tests and personal use, but be careful in production, ideally pairing it with other server software.
Java – Compiled Program
Without changing the code, let's test how the compiled Java program performs.
First, adjust the compose configuration:
version: "3"
services:
java:
image: openjdk:16-jdk-alpine3.12
ports:
- 8080:80
volumes:
- ./Main.java:/app/Main.java
command: sh -c "javac -encoding UTF-8 -nowarn -source 1.8 -target 1.8 -d . Main.java && java com/soulteary/test/Main"
The server response is identical to before, apart from the timestamp.
HTTP/1.1 301 Moved Permanently
Date: Mon, 16 Nov 2020 08:48:17 GMT
Content-type: text/html; charset=UTF-8
Location: http://localhost:1024
100,000 requests at a concurrency of 100:
Server Software:
Server Hostname: 192.168.23.55
Server Port: 8080
Document Path: /
Document Length: 0 bytes
Concurrency Level: 100
Time taken for tests: 9.904 seconds
Complete requests: 100000
Failed requests: 0
Non-2xx responses: 100000
Total transferred: 16300000 bytes
HTML transferred: 0 bytes
Requests per second: 10097.39 [#/sec] (mean)
Time per request: 9.904 [ms] (mean)
Time per request: 0.099 [ms] (mean, across all concurrent requests)
Transfer rate: 1607.30 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 4 63.7 0 3044
Processing: 1 6 30.0 5 3351
Waiting: 0 6 30.0 5 3351
Total: 2 10 72.9 5 3353
Percentage of the requests served within a certain time (ms)
50% 5
66% 6
75% 6
80% 6
90% 6
95% 6
98% 7
99% 7
100% 3353 (longest request)
100,000 requests at a concurrency of 1,000:
Server Software:
Server Hostname: 192.168.23.55
Server Port: 8080
Document Path: /
Document Length: 0 bytes
Concurrency Level: 1000
Time taken for tests: 27.707 seconds
Complete requests: 100000
Failed requests: 0
Non-2xx responses: 100000
Total transferred: 16300000 bytes
HTML transferred: 0 bytes
Requests per second: 3609.15 [#/sec] (mean)
Time per request: 277.074 [ms] (mean)
Time per request: 0.277 [ms] (mean, across all concurrent requests)
Transfer rate: 574.50 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 22 240.5 0 7212
Processing: 1 58 970.8 5 26658
Waiting: 0 58 970.8 5 26658
Total: 2 81 1044.2 5 27691
Percentage of the requests served within a certain time (ms)
50% 5
66% 6
75% 6
80% 6
90% 6
95% 6
98% 209
99% 1226
100% 27691 (longest request)
Watching the monitoring panel, CPU usage sits around 26%, system load is 2.5, and memory usage rises from 9% to 12%.
Compared with the uncompiled run, the gains are noticeable: at 100 concurrency throughput is up about 5% and 99% of requests finish within 7 ms, while at 1,000 concurrency 95% of requests complete within 6 ms. The use case and conclusion do not change, though: the built-in server suits quick tests and personal use, and production use calls for caution.
Go
Next up, the darling of the cloud-native world: Go.
Go – Uncompiled
First, let's see how Golang performs when run straight from source.
As usual, the compose configuration first:
version: "3"
services:
golang:
image: golang:1.15.5-alpine3.12
ports:
- 8080:80
command: go run /app/main.go
volumes:
- ./main.go:/app/main.go
The Go implementation is even leaner; fewer than twenty lines of code are needed.
package main

import (
    "log"
    "net/http"
)

func redirect(w http.ResponseWriter, r *http.Request) {
    http.Redirect(w, r, "http://localhost:1024", 301)
}

func main() {
    http.HandleFunc("/", redirect)
    err := http.ListenAndServe(":80", nil)
    if err != nil {
        log.Fatal("ListenAndServe: ", err)
    }
}
Save the code above as main.go and start the service; the response is just as lean.
HTTP/1.1 301 Moved Permanently
Content-Type: text/html; charset=utf-8
Location: http://localhost:1024
Date: Mon, 16 Nov 2020 04:09:09 GMT
100,000 requests at a concurrency of 100:
Server Software:
Server Hostname: 192.168.23.55
Server Port: 8080
Document Path: /
Document Length: 56 bytes
Concurrency Level: 100
Time taken for tests: 5.245 seconds
Complete requests: 100000
Failed requests: 0
Non-2xx responses: 100000
Total transferred: 22000000 bytes
HTML transferred: 5600000 bytes
Requests per second: 19064.01 [#/sec] (mean)
Time per request: 5.245 [ms] (mean)
Time per request: 0.052 [ms] (mean, across all concurrent requests)
Transfer rate: 4095.78 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 0.7 1 7
Processing: 0 4 1.9 4 25
Waiting: 0 4 2.0 3 25
Total: 0 5 1.8 5 26
Percentage of the requests served within a certain time (ms)
50% 5
66% 5
75% 6
80% 6
90% 7
95% 8
98% 10
99% 12
100% 26 (longest request)
100,000 requests at a concurrency of 1,000:
Server Software:
Server Hostname: 192.168.23.55
Server Port: 8080
Document Path: /
Document Length: 56 bytes
Concurrency Level: 1000
Time taken for tests: 5.379 seconds
Complete requests: 100000
Failed requests: 0
Non-2xx responses: 100000
Total transferred: 22000000 bytes
HTML transferred: 5600000 bytes
Requests per second: 18591.49 [#/sec] (mean)
Time per request: 53.788 [ms] (mean)
Time per request: 0.054 [ms] (mean, across all concurrent requests)
Transfer rate: 3994.26 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 17 8.5 19 34
Processing: 8 36 11.5 34 200
Waiting: 0 30 12.9 27 192
Total: 8 53 10.4 53 200
Percentage of the requests served within a certain time (ms)
50% 53
66% 56
75% 59
80% 61
90% 65
95% 69
98% 78
99% 85
100% 200 (longest request)
Watching the monitoring panel, CPU usage sits around 36%, system load is 1.29, and memory usage rises from 9% to 10%.
The uncompiled Go program performs very well at both 100 and 1,000 concurrency. Casually pushing 18,000 to 19,000 requests per second goes some way toward explaining why more and more open-source tools are written in Go.
Go – Compiled Program
The results before compilation were already excellent; now let's look at the compiled program.
The code stays the same; we only adjust the command directive in the compose file:
version: "3"
services:
golang:
image: golang:1.15.5-alpine3.12
ports:
- 8080:80
command: sh -c "go build /app/main.go && ./main"
volumes:
- ./main.go:/app/main.go
100,000 requests at a concurrency of 100:
Server Software:
Server Hostname: 192.168.23.55
Server Port: 8080
Document Path: /
Document Length: 56 bytes
Concurrency Level: 100
Time taken for tests: 5.271 seconds
Complete requests: 100000
Failed requests: 0
Non-2xx responses: 100000
Total transferred: 22000000 bytes
HTML transferred: 5600000 bytes
Requests per second: 18972.32 [#/sec] (mean)
Time per request: 5.271 [ms] (mean)
Time per request: 0.053 [ms] (mean, across all concurrent requests)
Transfer rate: 4076.08 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 0.7 1 7
Processing: 0 4 2.1 4 28
Waiting: 0 4 2.1 4 27
Total: 0 5 2.0 5 28
Percentage of the requests served within a certain time (ms)
50% 5
66% 5
75% 6
80% 6
90% 7
95% 9
98% 11
99% 13
100% 28 (longest request)
100,000 requests at a concurrency of 1,000:
Server Software:
Server Hostname: 192.168.23.55
Server Port: 8080
Document Path: /
Document Length: 56 bytes
Concurrency Level: 1000
Time taken for tests: 5.405 seconds
Complete requests: 100000
Failed requests: 0
Non-2xx responses: 100000
Total transferred: 22000000 bytes
HTML transferred: 5600000 bytes
Requests per second: 18499.78 [#/sec] (mean)
Time per request: 54.055 [ms] (mean)
Time per request: 0.054 [ms] (mean, across all concurrent requests)
Transfer rate: 3974.56 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 13 10.7 16 33
Processing: 0 40 16.5 38 214
Waiting: 0 35 18.1 31 209
Total: 0 54 15.1 54 219
Percentage of the requests served within a certain time (ms)
50% 54
66% 57
75% 60
80% 61
90% 67
95% 77
98% 92
99% 106
100% 219 (longest request)
Watching the monitoring panel, CPU usage sits around 35%, system load is 1.3, and memory usage stays at 9%.
The results show little difference before and after compilation, if anything a slight dip, which does not change the verdict that Go is usable and dependable here. The small dip may be related to how the binary performs when built on Alpine; I will dig into that when I have time.
Node.js
Next, one of my favorite languages: Node.js.
The compose configuration:
version: "3"
services:
nginx:
image: node:15.2.0-alpine3.12
volumes:
- ./index.js:/app/index.js
ports:
- 8080:80
command: node /app/index.js
Compared with the earlier programs (PHP aside), the example can be written even more concisely:
var http = require('http');

http.createServer(function (request, response) {
  response.writeHead(301, { 'Location': 'http://localhost:1024/' }).end();
}).listen(80);
Once the service is running, the default response looks like this:
HTTP/1.1 301 Moved Permanently
Location: http://localhost:1024/
Date: Mon, 16 Nov 2020 03:41:15 GMT
Connection: keep-alive
Keep-Alive: timeout=5
100,000 requests at a concurrency of 100:
Server Software:
Server Hostname: 192.168.23.55
Server Port: 8080
Document Path: /
Document Length: 0 bytes
Concurrency Level: 100
Time taken for tests: 11.449 seconds
Complete requests: 100000
Failed requests: 0
Non-2xx responses: 100000
Total transferred: 12400000 bytes
HTML transferred: 0 bytes
Requests per second: 8734.66 [#/sec] (mean)
Time per request: 11.449 [ms] (mean)
Time per request: 0.114 [ms] (mean, across all concurrent requests)
Transfer rate: 1057.71 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.4 0 8
Processing: 2 11 2.6 11 43
Waiting: 1 7 1.9 7 35
Total: 3 11 2.5 11 48
Percentage of the requests served within a certain time (ms)
50% 11
66% 12
75% 12
80% 12
90% 13
95% 14
98% 18
99% 23
100% 48 (longest request)
100,000 requests at a concurrency of 1,000:
Server Software:
Server Hostname: 192.168.23.55
Server Port: 8080
Document Path: /
Document Length: 0 bytes
Concurrency Level: 1000
Time taken for tests: 13.628 seconds
Complete requests: 100000
Failed requests: 0
Non-2xx responses: 100000
Total transferred: 12400000 bytes
HTML transferred: 0 bytes
Requests per second: 7337.80 [#/sec] (mean)
Time per request: 136.281 [ms] (mean)
Time per request: 0.136 [ms] (mean, across all concurrent requests)
Transfer rate: 888.56 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 29 183.8 1 3035
Processing: 29 107 31.6 110 507
Waiting: 4 80 30.7 87 475
Total: 29 136 184.8 113 3105
Percentage of the requests served within a certain time (ms)
50% 113
66% 124
75% 128
80% 130
90% 146
95% 153
98% 1089
99% 1129
100% 3105 (longest request)
Watching the monitoring panel, CPU usage sits around 22%, system load is 0.59, and memory usage rises from 9% to 10%.
The result is a passing grade: performance does not look as good as Go or Java, but at 1,000 concurrency there were no dropped requests and no extremely long timeouts.
Python – Built-in Web Server
The default compose configuration for Python:
version: "3"
services:
python:
image: python:3.9.0-alpine3.12
volumes:
- ./main.py:/main.py
command: python /main.py
ports:
- 8080:80
The implementation is also quite concise compared with the other languages; fewer than ten lines cover the requirement:
from http.server import HTTPServer, BaseHTTPRequestHandler

class Redirect(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(301)
        self.send_header('Location', 'http://localhost:1024')
        self.end_headers()

HTTPServer(("0.0.0.0", 80), Redirect).serve_forever()
After the service starts, the command used to inspect the response needs the extra "-X GET" flag to force a GET request, because curl -I sends a HEAD request by default and our handler only implements do_GET:
curl -I -X "GET" http://192.168.23.55:8080/
HTTP/1.0 301 Moved Permanently
Server: BaseHTTP/0.6 Python/3.9.0
Date: Mon, 16 Nov 2020 03:48:32 GMT
Location: http://localhost:1024
At just 100 concurrency the test already runs into the problem below, so for Python we can skip any further testing.
Benchmarking 192.168.23.55 (be patient)
Completed 10000 requests
Completed 20000 requests
Completed 30000 requests
Completed 40000 requests
Completed 50000 requests
Completed 60000 requests
Completed 70000 requests
Completed 80000 requests
Completed 90000 requests
apr_pollset_poll: The timeout specified has expired (70007)
Total of 99936 requests completed
...
apr_pollset_poll: The timeout specified has expired (70007)
Total of 99954 requests completed
Watching the monitoring panel, CPU usage sits around 29%, system load is 0.72, and memory usage stays at 9%.
Python's built-in module is a bit weaker than the PHP and Java equivalents. Its resource usage actually looks fine, but that is probably in large part because the test never completed, which objectively reduced the amount of work done.
Ruby
The last language contestant is Ruby.
The compose configuration:
version: "3"
services:
ruby:
image: ruby:3.0.0-preview1-alpine3.12
ports:
- 8080:80
command: ruby /main.rb
volumes:
- ./main.rb:/main.rb
As with the other languages, we need a short script, under twenty lines:
require 'socket'

server = TCPServer.new 80

while session = server.accept
  request = session.gets
  puts request

  session.print "HTTP/1.1 301\r\n"
  session.print "Content-Type: text/html\r\n"
  session.print "Location: http://localhost:1024\r\n"
  session.print "\r\n" # blank line terminates the response headers
  session.close
end
Save the code as main.rb, start the service, and the response looks like this:
HTTP/1.1 301
Content-Type: text/html
Location: http://localhost:1024
100,000 requests at a concurrency of 100:
Server Software:
Server Hostname: test-server
Server Port: 8080
Document Path: /
Document Length: 0 bytes
Concurrency Level: 100
Time taken for tests: 5.528 seconds
Complete requests: 100000
Failed requests: 0
Non-2xx responses: 100000
Total transferred: 7400000 bytes
HTML transferred: 0 bytes
Requests per second: 18088.15 [#/sec] (mean)
Time per request: 5.528 [ms] (mean)
Time per request: 0.055 [ms] (mean, across all concurrent requests)
Transfer rate: 1307.15 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 0.6 0 7
Processing: 1 5 1.0 5 10
Waiting: 0 5 1.1 5 10
Total: 2 6 0.8 5 10
WARNING: The median and mean for the initial connection time are not within a normal deviation
These results are probably not that reliable.
WARNING: The median and mean for the total time are not within a normal deviation
These results are probably not that reliable.
Percentage of the requests served within a certain time (ms)
50% 5
66% 5
75% 5
80% 6
90% 7
95% 8
98% 8
99% 8
100% 10 (longest request)
Running 100,000 requests at a concurrency of 1,000, however, hits the same situation as Python:
Benchmarking test-server (be patient)
Completed 10000 requests
Completed 20000 requests
Completed 30000 requests
Completed 40000 requests
Completed 50000 requests
Completed 60000 requests
Completed 70000 requests
Completed 80000 requests
Completed 90000 requests
apr_pollset_poll: The timeout specified has expired (70007)
Total of 99822 requests completed
Watching the monitoring panel, CPU usage sits around 14%, system load is 0.1, and memory usage stays at 9%.
For small-scale use, Ruby actually exceeded expectations.
Server Software
Having tested all the languages that can implement a redirect in a few lines of code, let's look at the "regular army": software purpose-built and optimized for handling web requests.
Nginx
Starting with 1.19, the official Nginx image supports configuration templates: environment variables are substituted into templates mounted under /etc/nginx/templates when the container starts. The compose file below uses that mechanism:
version: "3"
services:
nginx:
image: nginx:1.19.4-alpine
volumes:
- ./templates:/etc/nginx/templates
ports:
- 8080:80
environment:
- NGINX_HOST=localhost
- NGINX_PORT=80
Then create a file named default.conf.template:
server {
    listen ${NGINX_PORT};
    server_name ${NGINX_HOST};

    location / {
        rewrite .* http://localhost:1024 permanent;
    }
}
Start the service, and the response carries a bit more content than the others:
HTTP/1.1 301 Moved Permanently
Server: nginx/1.19.4
Date: Mon, 16 Nov 2020 03:13:32 GMT
Content-Type: text/html
Content-Length: 169
Connection: keep-alive
Location: http://localhost:1024
Results for 100,000 requests at a concurrency of 100:
Server Software: nginx/1.19.4
Server Hostname: test-server
Server Port: 8080
Document Path: /
Document Length: 169 bytes
Concurrency Level: 100
Time taken for tests: 5.075 seconds
Complete requests: 100000
Failed requests: 0
Non-2xx responses: 100000
Total transferred: 36000000 bytes
HTML transferred: 16900000 bytes
Requests per second: 19705.04 [#/sec] (mean)
Time per request: 5.075 [ms] (mean)
Time per request: 0.051 [ms] (mean, across all concurrent requests)
Transfer rate: 6927.55 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 2 0.6 2 6
Processing: 0 3 1.4 3 26
Waiting: 0 3 1.4 3 26
Total: 0 5 1.4 5 26
Percentage of the requests served within a certain time (ms)
50% 5
66% 5
75% 5
80% 5
90% 6
95% 7
98% 10
99% 11
100% 26 (longest request)
Results for 100,000 requests at a concurrency of 1,000:
Server Software: nginx/1.19.4
Server Hostname: test-server
Server Port: 8080
Document Path: /
Document Length: 169 bytes
Concurrency Level: 1000
Time taken for tests: 4.839 seconds
Complete requests: 100000
Failed requests: 0
Non-2xx responses: 100000
Total transferred: 36000000 bytes
HTML transferred: 16900000 bytes
Requests per second: 20663.50 [#/sec] (mean)
Time per request: 48.395 [ms] (mean)
Time per request: 0.048 [ms] (mean, across all concurrent requests)
Transfer rate: 7264.51 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 21 79.6 16 1052
Processing: 6 27 11.9 25 429
Waiting: 0 22 11.6 20 425
Total: 15 48 80.3 41 1153
Percentage of the requests served within a certain time (ms)
50% 41
66% 45
75% 48
80% 50
90% 57
95% 65
98% 84
99% 109
100% 1153 (longest request)
Watching the monitoring panel, CPU usage sits around 3%, system load is 0.1, and memory usage stays at 9%.
Unsurprisingly, Nginx, software built to handle far more complex web workloads than this, effortlessly outclasses every language implementation above. The trade-off is that writing Nginx configuration is often less flexible than writing the logic in a language.
Caddy
Next, let's try Caddy, a newer server written in Golang.
The compose configuration is similar to Nginx's; the configuration file needs to be mounted into the container:
version: "3"
services:
caddy:
image: caddy:2.1.1-alpine
volumes:
- ./Caddyfile:/etc/caddy/Caddyfile
ports:
- 8080:80
Then create a file named Caddyfile. Compared with a traditional nginx.conf, the content is simple enough to fit on one line:
redir http://localhost:1024/ permanent
After the service starts we get a simple response, but note one detail: Caddy did not return a "301":
HTTP/1.1 308 Permanent Redirect
Connection: close
Location: https://192.168.23.55/
Server: Caddy
Date: Mon, 16 Nov 2020 03:33:25 GMT
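Caddy's redir directive also accepts an explicit status code, so if a 301 is specifically required, a Caddyfile along these lines should do it (my untested sketch based on Caddy 2's documented Caddyfile syntax, with the :80 site address added):
:80

redir http://localhost:1024/ 301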
100,000 requests at a concurrency of 100:
Server Software: Caddy
Server Hostname: test-server
Server Port: 8080
Document Path: /
Document Length: 0 bytes
Concurrency Level: 100
Time taken for tests: 6.194 seconds
Complete requests: 100000
Failed requests: 0
Non-2xx responses: 100000
Total transferred: 15700000 bytes
HTML transferred: 0 bytes
Requests per second: 16143.41 [#/sec] (mean)
Time per request: 6.194 [ms] (mean)
Time per request: 0.062 [ms] (mean, across all concurrent requests)
Transfer rate: 2475.11 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.2 0 5
Processing: 0 6 3.5 5 44
Waiting: 0 6 3.4 5 44
Total: 0 6 3.5 6 44
Percentage of the requests served within a certain time (ms)
50% 6
66% 6
75% 7
80% 8
90% 10
95% 13
98% 17
99% 20
100% 44 (longest request)
100,000 requests at a concurrency of 1,000:
Server Software: Caddy
Server Hostname: test-server
Server Port: 8080
Document Path: /
Document Length: 0 bytes
Concurrency Level: 1000
Time taken for tests: 6.185 seconds
Complete requests: 100000
Failed requests: 0
Non-2xx responses: 100000
Total transferred: 15700000 bytes
HTML transferred: 0 bytes
Requests per second: 16168.60 [#/sec] (mean)
Time per request: 61.848 [ms] (mean)
Time per request: 0.062 [ms] (mean, across all concurrent requests)
Transfer rate: 2478.97 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 2.5 0 29
Processing: 0 60 21.1 60 277
Waiting: 0 60 21.0 59 273
Total: 0 61 20.9 61 277
Percentage of the requests served within a certain time (ms)
50% 61
66% 66
75% 70
80% 74
90% 85
95% 97
98% 113
99% 124
100% 277 (longest request)
Watching the monitoring panel, CPU usage sits around 23%, system load is 0.1, and memory usage rises from 9% to 10%.
Caddy's numbers come respectably close to those of our hand-written, compiled Go program, and given how flexible its configuration is, it should get plenty of chances to shine going forward.
Traefik
Finally, let's look at Traefik, which we have long used heavily. It differs slightly from the other programs and scripts: as cloud-native software, one of its defining traits is a complete set of startup flags plus the ability to pull configuration from environment variables and container labels, so a single docker-compose.yml is all we need to start testing:
version: '3'
services:
  traefik:
    image: traefik:v2.3.2
    ports:
      - 8080:80
    command:
      - "--log.level=ERROR"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=true"
      - "--entrypoints.http.address=:80"
    labels:
      - "traefik.http.middlewares.redir.redirectregex.regex=^.*"
      - "traefik.http.middlewares.redir.redirectregex.replacement=http://localhost:1024"
      - "traefik.http.routers.redir-test.rule=HostRegexp(`{any:.*}`)"
      - "traefik.http.routers.redir-test.entrypoints=http"
      - "traefik.http.routers.redir-test.middlewares=redir"
      - "traefik.http.services.backend.loadbalancer.server.scheme=http"
      - "traefik.http.services.backend.loadbalancer.server.port=80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
Once the service is up, the response headers are similar to Caddy's, not the "301" we expected. Fortunately, today's browsers mostly just follow the Location header and check that the status code is in the 3xx family, so in practice this should not cause any surprises.
HTTP/1.1 307 Temporary Redirect
Location: http://localhost:1024
Date: Mon, 16 Nov 2020 07:26:18 GMT
Content-Length: 18
Content-Type: text/plain; charset=utf-8
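If a permanent redirect is preferred, the redirectregex middleware also has a permanent option (per the Traefik v2 documentation; not enabled in the test above), which would be one more label:
- "traefik.http.middlewares.redir.redirectregex.permanent=true"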
100,000 requests at a concurrency of 100:
Server Software:
Server Hostname: test-server
Server Port: 8080
Document Path: /
Document Length: 5 bytes
Concurrency Level: 100
Time taken for tests: 11.258 seconds
Complete requests: 100000
Failed requests: 0
Non-2xx responses: 100000
Total transferred: 15700000 bytes
HTML transferred: 500000 bytes
Requests per second: 8882.85 [#/sec] (mean)
Time per request: 11.258 [ms] (mean)
Time per request: 0.113 [ms] (mean, across all concurrent requests)
Transfer rate: 1361.92 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.2 0 6
Processing: 0 11 6.2 10 63
Waiting: 0 8 5.3 7 63
Total: 0 11 6.3 10 64
Percentage of the requests served within a certain time (ms)
50% 10
66% 13
75% 15
80% 16
90% 19
95% 23
98% 27
99% 30
100% 64 (longest request)
100,000 requests at a concurrency of 1,000:
Server Software:
Server Hostname: test-server
Server Port: 8080
Document Path: /
Document Length: 5 bytes
Concurrency Level: 1000
Time taken for tests: 11.975 seconds
Complete requests: 100000
Failed requests: 0
Non-2xx responses: 100000
Total transferred: 15700000 bytes
HTML transferred: 500000 bytes
Requests per second: 8350.87 [#/sec] (mean)
Time per request: 119.748 [ms] (mean)
Time per request: 0.120 [ms] (mean, across all concurrent requests)
Transfer rate: 1280.36 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 2.4 0 25
Processing: 1 118 48.5 113 491
Waiting: 0 103 46.5 101 452
Total: 1 119 48.3 114 491
Percentage of the requests served within a certain time (ms)
50% 114
66% 130
75% 142
80% 152
90% 182
95% 208
98% 241
99% 268
100% 491 (longest request)
Watching the monitoring panel, CPU usage sits around 57%, system load is 1.5, and memory usage rises from 9% to 10%.
Traefik is clearly not the fastest tool here; compared with Nginx it still has a long way to go. But compared with hand-rolling the logic in a high-level language, it offers far more stability and reliability, and it supports a wide range of general web-handling tasks out of the box.
Final Words
The example code is open-sourced at https://github.com/soulteary/redirect-test; feel free to use it or contribute additions.
This was written in a bit of a hurry; suggestions and discussion are welcome.
–EOF
This article is licensed under Attribution 4.0 International (CC BY 4.0). You are welcome to repost or adapt it, but please credit the source.
Author: 苏洋
Created: 2020-11-01 | Word count: 12,976 | Reading time: 26 minutes | Link: https://soulteary.com/2020/11/01/use-nginx-to-build-a-front-end-log-statistics-service-service.html