[Repost] How to performance-test and analyze MinIO


Source: https://developer.aliyun.com/article/1006775

 

 

Environment

Servers (a 4-node cluster, erasure coding EC 12:4)

host       ip            drives
storage01  172.16.50.1   12 x 10 TB
storage02  172.16.50.2   12 x 10 TB
storage03  172.16.50.3   12 x 10 TB
storage04  172.16.50.4   12 x 10 TB

Clients

host      ip
headnode  172.16.50.5
node02    172.16.50.6
node03    172.16.50.7
node04    172.16.50.8

Why choose speedtest and warp as the test tools?

speedtest is an easy-to-use tool: it runs PUTs first, then GETs, ramping up the load incrementally to find the maximum throughput. warp, on the other hand, is a full toolchain with independent test modes, so GET, PUT, DELETE, and other operations can each be measured on their own. Its client/server design is also closer to a real deployment, which yields results closer to what applications actually see and makes performance analysis easier.

The warp architecture is shown in the figure below.

[figure: warp client/server architecture]


Running the tests

speedtest

  1. Before starting, download the MinIO client (mc):
wget https://dl.min.io/client/mc/release/linux-amd64/mc
chmod +x mc
mv mc /usr/local/bin/
  2. Write the host entries to /etc/hosts, for example:
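Based on the server list above (adjust the addresses to your own environment):

172.16.50.1 storage01
172.16.50.2 storage02
172.16.50.3 storage03
172.16.50.4 storage04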
  3. Configure the client:
/usr/local/bin/mc alias set minio http://172.16.50.1:9000 <YOUR-ACCESS-KEY> <YOUR-SECRET-KEY>
  4. Start the test (this subcommand used to be called speedtest and has been renamed to support perf):
/usr/local/bin/mc support perf minio/<bucket>

This produces results for network, drive, and object throughput:

NODE                    RX              TX
http://storage01:9000   1.8 GiB/s       2.0 GiB/s
http://storage02:9000   1.8 GiB/s       2.0 GiB/s
http://storage03:9000   2.7 GiB/s       2.5 GiB/s
http://storage04:9000   2.4 GiB/s       2.2 GiB/s

NetPerf: ✔

NODE                    PATH    READ            WRITE
http://storage02:9000   /data1  192 MiB/s       235 MiB/s
http://storage02:9000   /data2  219 MiB/s       240 MiB/s
http://storage02:9000   /data3  186 MiB/s       262 MiB/s
http://storage02:9000   /data4  191 MiB/s       230 MiB/s
http://storage02:9000   /data5  179 MiB/s       221 MiB/s
http://storage02:9000   /data6  170 MiB/s       222 MiB/s
http://storage02:9000   /data7  200 MiB/s       219 MiB/s
http://storage02:9000   /data8  198 MiB/s       230 MiB/s
http://storage02:9000   /data9  166 MiB/s       207 MiB/s
http://storage02:9000   /data10 206 MiB/s       209 MiB/s
http://storage02:9000   /data11 198 MiB/s       213 MiB/s
http://storage02:9000   /data12 164 MiB/s       205 MiB/s
http://storage04:9000   /data1  172 MiB/s       259 MiB/s
http://storage04:9000   /data2  201 MiB/s       250 MiB/s
http://storage04:9000   /data3  218 MiB/s       256 MiB/s
http://storage04:9000   /data4  188 MiB/s       245 MiB/s
http://storage04:9000   /data5  157 MiB/s       197 MiB/s
http://storage04:9000   /data6  155 MiB/s       205 MiB/s
http://storage04:9000   /data7  154 MiB/s       210 MiB/s
http://storage04:9000   /data8  143 MiB/s       185 MiB/s
http://storage04:9000   /data9  181 MiB/s       207 MiB/s
http://storage04:9000   /data10 174 MiB/s       214 MiB/s
http://storage04:9000   /data11 173 MiB/s       218 MiB/s
http://storage04:9000   /data12 178 MiB/s       206 MiB/s
http://storage03:9000   /data1  194 MiB/s       337 MiB/s
http://storage03:9000   /data2  204 MiB/s       267 MiB/s
http://storage03:9000   /data3  212 MiB/s       261 MiB/s
http://storage03:9000   /data4  200 MiB/s       235 MiB/s
http://storage03:9000   /data5  168 MiB/s       216 MiB/s
http://storage03:9000   /data6  209 MiB/s       221 MiB/s
http://storage03:9000   /data7  179 MiB/s       222 MiB/s
http://storage03:9000   /data8  170 MiB/s       220 MiB/s
http://storage03:9000   /data9  144 MiB/s       186 MiB/s
http://storage03:9000   /data10 142 MiB/s       172 MiB/s
http://storage03:9000   /data11 149 MiB/s       171 MiB/s
http://storage03:9000   /data12 132 MiB/s       206 MiB/s
http://storage01:9000   /data1  99 MiB/s        119 MiB/s
http://storage01:9000   /data2  103 MiB/s       114 MiB/s
http://storage01:9000   /data3  104 MiB/s       114 MiB/s
http://storage01:9000   /data4  98 MiB/s        186 MiB/s
http://storage01:9000   /data5  186 MiB/s       261 MiB/s
http://storage01:9000   /data6  224 MiB/s       251 MiB/s
http://storage01:9000   /data7  188 MiB/s       240 MiB/s
http://storage01:9000   /data8  222 MiB/s       252 MiB/s
http://storage01:9000   /data9  185 MiB/s       247 MiB/s
http://storage01:9000   /data10 209 MiB/s       236 MiB/s
http://storage01:9000   /data11 191 MiB/s       224 MiB/s
http://storage01:9000   /data12 222 MiB/s       250 MiB/s

DrivePerf: ✔

        THROUGHPUT      IOPS
PUT     1.3 GiB/s       21 objs/s
GET     4.2 GiB/s       67 objs/s

MinIO 2022-08-25T07:17:05Z, 4 servers, 48 drives, 64 MiB objects, 62 threads

ObjectPerf: ✔

We can also tailor the run as needed and test components independently. For example, to measure only object read/write speed:

mc support perf object minio

Similarly, the individual tests can be run on their own; below is another standalone object run followed by a network-throughput run (a drive-only test is also available as mc support perf drive minio):

# object
root@storage01:~# mc support perf object minio
        THROUGHPUT      IOPS
PUT     1.6 GiB/s       26 objs/s
GET     4.0 GiB/s       63 objs/s
MinIO 2022-08-25T07:17:05Z, 4 servers, 48 drives, 64 MiB objects, 27 threads
ObjectPerf: ✔
# network
root@storage01:~# mc support perf net minio
NODE                    RX              TX
http://storage01:9000   1.9 GiB/s       2.1 GiB/s
http://storage02:9000   2.4 GiB/s       2.2 GiB/s
http://storage03:9000   2.6 GiB/s       2.4 GiB/s
http://storage04:9000   2.1 GiB/s       2.3 GiB/s
NetPerf: ✔

We can also test the read/write speed of objects of a specified size over a specified duration:

root@storage01:~# mc support perf object minio --duration 20s --size 128MiB
        THROUGHPUT      IOPS
PUT     2.3 GiB/s       18 objs/s
GET     3.8 GiB/s       30 objs/s
MinIO 2022-08-25T07:17:05Z, 4 servers, 48 drives, 128 MiB objects, 41 threads
ObjectPerf: ✔

warp

Before testing, install warp on each client. Several release formats are available at GitHub - minio/warp: S3 benchmarking tool; the prebuilt binary package is the most convenient.

wget https://github.com/minio/warp/releases/download/v0.6.2/warp_0.6.2_Linux_x86_64.tar.gz
mkdir warp
tar zxf warp_0.6.2_Linux_x86_64.tar.gz -C warp
# start the warp client
./warp client

Once started, warp client listens on port 7761 by default. If a firewall is enabled, this port must be opened; my machines have no firewall configured, so that step is not shown here.
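If you do need to open it, a minimal sketch with firewalld (assumed here; the original setup has no firewall) would be:

firewall-cmd --permanent --add-port=7761/tcp
firewall-cmd --reload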

With warp running on all clients, we can pick one machine to launch the mixed benchmark:

warp mixed --duration=3m --warp-client=headnode --warp-client=node0{2...3} --host=storage0{1...4}:9000 --access-key=minio --secret-key=miniodev
warp: Benchmark data written to "warp-remote-2022-09-03[172425]-Fj1O.csv.zst"
Mixed operations.
Operation: DELETE, 10%, Concurrency: 40, Ran 2m58s.
 * Throughput: 18.47 obj/s
Operation: GET, 45%, Concurrency: 40, Ran 2m58s.
 * Throughput: 831.05 MiB/s, 83.11 obj/s
Operation: PUT, 15%, Concurrency: 40, Ran 2m58s.
 * Throughput: 276.45 MiB/s, 27.65 obj/s
Operation: STAT, 30%, Concurrency: 40, Ran 2m58s.
 * Throughput: 55.30 obj/s
Cluster Total: 1106.95 MiB/s, 184.46 obj/s over 2m58s.
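The percentages above are warp's default mix. To change the ratios, warp mixed exposes per-operation distribution flags; the invocation below is a sketch from memory, so verify the flag names against warp mixed --help:

warp mixed --duration=3m --get-distrib=60 --put-distrib=20 --stat-distrib=15 --delete-distrib=5 --warp-client=headnode --warp-client=node0{2...3} --host=storage0{1...4}:9000 --access-key=minio --secret-key=miniodev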

We can also stress-test the GET operation on its own to find the maximum read throughput.
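The original omits the output of this run; following the same pattern as the other commands in this post, the invocation would presumably be:

root@headnode:~# warp get --duration=3m --warp-client=headnode --warp-client=node0{2...3} --host=storage0{1...4}:9000 --access-key=minio --secret-key=miniodev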

 

Next we test the DELETE operation. warp first uploads a batch of objects so that there is something to delete, which is why the report below also contains a PUT section:

root@headnode:~# warp delete --duration=3m --warp-client=headnode --warp-client=node0{2...3} --host=storage0{1...4}:9000 --access-key=minio --secret-key=miniodev
warp: Benchmark data written to "warp-remote-2022-09-03[173528]-1d62.csv.zst"
----------------------------------------
Operation: PUT
* Average: 0.56 MiB/s, 576.46 obj/s
Throughput by host:
 * http://storage01:9000: Avg: 0.14 MiB/s, 144.36 obj/s
 * http://storage02:9000: Avg: 0.14 MiB/s, 143.90 obj/s
 * http://storage03:9000: Avg: 0.14 MiB/s, 143.78 obj/s
 * http://storage04:9000: Avg: 0.14 MiB/s, 144.32 obj/s
Throughput, split into 86 x 1s:
 * Fastest: 655.8KiB/s, 655.89 obj/s
 * 50% Median: 584.4KiB/s, 584.43 obj/s
 * Slowest: 438.5KiB/s, 438.49 obj/s
----------------------------------------
Operation: DELETE
* Average: 897.41 obj/s
Throughput by host:
 * http://storage01:9000: Avg: 225.39 obj/s
 * http://storage02:9000: Avg: 221.08 obj/s
 * http://storage03:9000: Avg: 220.70 obj/s
 * http://storage04:9000: Avg: 222.31 obj/s
Throughput, split into 41 x 1s:
 * Fastest: 998.26 obj/s
 * 50% Median: 890.61 obj/s
 * Slowest: 861.53 obj/s
warp: Cleanup done

Next we test the STAT operation (again preceded by a PUT phase that creates the objects):

root@headnode:~# warp stat --autoterm --duration=3m --warp-client=headnode --warp-client=node0{2...3} --host=storage0{1...4}:9000 --access-key=minio --secret-key=miniodev
warp: Benchmark data written to "warp-remote-2022-09-03[173008]-vsV9.csv.zst"
----------------------------------------
Operation: PUT
* Average: 0.56 MiB/s, 584.19 obj/s
Throughput by host:
 * http://storage01:9000: Avg: 0.14 MiB/s, 145.46 obj/s
 * http://storage02:9000: Avg: 0.14 MiB/s, 145.46 obj/s
 * http://storage03:9000: Avg: 0.14 MiB/s, 145.32 obj/s
 * http://storage04:9000: Avg: 0.14 MiB/s, 144.19 obj/s
Throughput, split into 33 x 1s:
 * Fastest: 609.4KiB/s, 624.07 obj/s
 * 50% Median: 585.7KiB/s, 599.81 obj/s
 * Slowest: 499.3KiB/s, 511.37 obj/s
----------------------------------------
Operation: STAT
* Average: 10262.16 obj/s
Throughput by host:
 * http://storage01:9000: Avg: 2549.94 obj/s
 * http://storage02:9000: Avg: 2566.62 obj/s
 * http://storage03:9000: Avg: 2572.47 obj/s
 * http://storage04:9000: Avg: 2572.58 obj/s
Throughput, split into 179 x 1s:
 * Fastest: 10622.95 obj/s
 * 50% Median: 10268.04 obj/s
 * Slowest: 9574.31 obj/s
warp: Cleanup done.

Finally, we benchmark multipart upload, where a file is split into parts and uploaded in batches. With --parts=500 and --part.size=10MiB, each object is roughly 5 GiB:

root@headnode:~# warp multipart --parts=500 --part.size=10MiB --warp-client=headnode --warp-client=node0{2...3} --host=storage0{1...4}:9000 --access-key=minio --secret-key=miniodev
warp: Benchmark data written to "warp-remote-2022-09-03[174331]-AWo5.csv.zst"
----------------------------------------
Operation: PUT
* Average: 559.26 MiB/s, 55.93 obj/s
Throughput by host:
 * http://storage01:9000: Avg: 137.98 MiB/s, 13.80 obj/s
 * http://storage02:9000: Avg: 140.59 MiB/s, 14.06 obj/s
 * http://storage03:9000: Avg: 138.60 MiB/s, 13.86 obj/s
 * http://storage04:9000: Avg: 141.93 MiB/s, 14.19 obj/s
Throughput, split into 16 x 1s:
 * Fastest: 616.2MiB/s, 61.62 obj/s
 * 50% Median: 562.9MiB/s, 56.29 obj/s
 * Slowest: 437.3MiB/s, 43.73 obj/s
----------------------------------------
Operation: GET
* Average: 1026.15 MiB/s, 102.62 obj/s
Throughput by host:
 * http://storage01:9000: Avg: 254.73 MiB/s, 25.47 obj/s
 * http://storage02:9000: Avg: 257.86 MiB/s, 25.79 obj/s
 * http://storage03:9000: Avg: 257.91 MiB/s, 25.79 obj/s
 * http://storage04:9000: Avg: 255.68 MiB/s, 25.57 obj/s
Throughput, split into 298 x 1s:
 * Fastest: 1133.4MiB/s, 113.34 obj/s
 * 50% Median: 1030.0MiB/s, 103.00 obj/s
 * Slowest: 876.0MiB/s, 87.60 obj/s
warp: Cleanup done.
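
Incidentally, each run writes its raw measurements to the .csv.zst file named in the "Benchmark data written to" line, and as far as I know warp can re-analyze such a file offline, e.g.:

warp analyze warp-remote-2022-09-03[174331]-AWo5.csv.zst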

That wraps up my tests and notes; I hope they are useful.
