I. Introduction to Redis
- Redis (REmote DIctionary Server) is an open-source, in-memory data store commonly used as a database, cache, or message broker. It can store and manipulate rich data types such as lists, hashes, sets, and sorted sets.
- Because Redis accepts keys in many formats, operations can be performed on the server side, which reduces the client's workload.
- It keeps the entire dataset in memory and uses disk only for persistence.
- Redis is a popular data-storage solution, used by companies such as GitHub, Pinterest, Snapchat, Twitter, StackOverflow, and Flickr.
- It is very fast. It is written in ANSI C and runs on POSIX systems such as Linux, macOS, and Solaris.
- Redis is regularly ranked as the most popular key/value database and the most popular NoSQL database used with containers.
- As a caching layer, it reduces the number of calls made to a cloud database backend.
- Applications access it through client API libraries.
- Redis clients are available for all popular programming languages.
- It is open source and stable.
- Redis Cluster is a group of Redis instances designed to scale the database by partitioning it, which also makes it more resilient.
- Each member of the cluster, whether a primary or a replica, manages a subset of the hash slots. If a master becomes unreachable, its replica is promoted to master. In a minimal Redis Cluster of three master nodes, each with one replica (for minimal failover), every master is assigned a range of the 16,384 hash slots (0-16383): node A holds slots 0-5000, node B 5001-10000, and node C 10001-16383. A quick way to see which slot a given key maps to is shown right after this list.
- Communication inside the cluster goes over an internal bus, using a gossip protocol to propagate cluster information and to discover new nodes.
- Cluster topology: (diagram)
- Storage layout: (diagram)
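As a quick check of the slot mapping described above, any cluster node can report the hash slot for a key (CRC16 of the key modulo 16384). A minimal sketch, assuming redis-cli can reach a cluster node; the key name is only an example:

# Ask any cluster node which of the 16384 slots a key hashes to
redis-cli -h <any-cluster-node-ip> -p 6379 cluster keyslot user:1000
# Returns an integer in 0..16383; the master whose slot range contains it serves the key.
# Once the cluster built later in this article is up, the same check works via kubectl:
kubectl exec -it redis-cluster-0 -n wiseco -- redis-cli cluster keyslot user:1000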
1. Prepare NFS-backed storage

1) Create the shared directory on the NFS server (k8s-harbor01, which serves NFS at 172.16.60.238):

[root@k8s-harbor01 ~]# mkdir -p /data/storage/k8s/redis
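The directory also has to be exported by the NFS server. That step is not shown in the original session, so the entry below is only a sketch; the export options and the allowed subnet are assumptions to adjust for your environment:

# /etc/exports on k8s-harbor01 (example entry; subnet and options are assumptions)
/data/storage/k8s/redis 172.16.60.0/24(rw,sync,no_root_squash)

# reload the export table and verify
exportfs -arv
showmount -e 172.16.60.238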
2) Create the RBAC objects for the NFS provisioner
[root@k8s-master01 ~]# mkdir -p /opt/k8s/k8s_project/redis
[root@k8s-master01 ~]# cd /opt/k8s/k8s_project/redis
[root@k8s-master01 redis]# vim nfs-rbac.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: wiseco
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-runner
  namespace: wiseco
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "create", "list", "watch", "update"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: wiseco
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
Apply and verify:
[root@k8s-master01 redis]# kubectl apply -f nfs-rbac.yaml
serviceaccount/nfs-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-provisioner created

[root@k8s-master01 redis]# kubectl get sa -n wiseco|grep nfs
nfs-provisioner          1         24s

[root@k8s-master01 redis]# kubectl get clusterrole -n wiseco|grep nfs
nfs-provisioner-runner   2021-02-04T02:21:11Z

[root@k8s-master01 redis]# kubectl get clusterrolebinding -n wiseco|grep nfs
run-nfs-provisioner      ClusterRole/nfs-provisioner-runner     34s
3) Create the StorageClass backed by the NFS provisioner

[root@k8s-master01 redis]# ll
total 4
-rw-r--r-- 1 root root 1216 Feb  4 10:20 nfs-rbac.yaml
[root@k8s-master01 redis]# vim redis-nfs-class.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: redis-nfs-storage
  namespace: wiseco
provisioner: redis/nfs
reclaimPolicy: Retain
Apply and verify:
[root@k8s-master01 redis]# kubectl apply -f redis-nfs-class.yaml
storageclass.storage.k8s.io/redis-nfs-storage created

[root@k8s-master01 redis]# kubectl get sc -n wiseco
NAME                PROVISIONER   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION
redis-nfs-storage   redis/nfs     Retain          Immediate           false
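Note that storage.k8s.io/v1beta1 has been removed in newer Kubernetes releases (1.22+), and StorageClass is cluster-scoped, so the namespace field is ignored anyway. On a current cluster the equivalent manifest would look like the sketch below; this is an adaptation, not part of the original deployment:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: redis-nfs-storage
provisioner: redis/nfs
reclaimPolicy: Retain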
4) Deploy the NFS client provisioner

[root@k8s-master01 redis]# ll
total 8
-rw-r--r-- 1 root root 1216 Feb  4 10:20 nfs-rbac.yaml
-rw-r--r-- 1 root root  155 Feb  4 10:24 redis-nfs-class.yaml
[root@k8s-master01 redis]# vim redis-nfs.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-nfs-client-provisioner
  namespace: wiseco
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis-nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: redis-nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
        - name: redis-nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: redis/nfs
            - name: NFS_SERVER
              value: 172.16.60.238
            - name: NFS_PATH
              value: /data/storage/k8s/redis
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.16.60.238
            path: /data/storage/k8s/redis
Apply and verify:
[root@k8s-master01 redis]# kubectl apply -f redis-nfs.yml
deployment.apps/redis-nfs-client-provisioner created

[root@k8s-master01 redis]# kubectl get pods -n wiseco|grep nfs
redis-nfs-client-provisioner-58b46549dd-h87gg   1/1     Running   0          40s
2. Deploy the Redis Cluster
This deployment uses the wiseco namespace.
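If that namespace does not exist yet, create it first (a trivial step, shown here only for completeness):

kubectl create namespace wiseco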
1) Prepare the image
Copy the redis-trib.rb tool from the Redis source tree into the current directory, then build the image.
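One way to obtain the script, assuming the build host has internet access (the download URL and paths below reflect the Redis 4.0 source layout and are illustrative):

# download the matching Redis source and copy the cluster helper script
wget http://download.redis.io/releases/redis-4.0.11.tar.gz
tar xzf redis-4.0.11.tar.gz
cp redis-4.0.11/src/redis-trib.rb /opt/k8s/k8s_project/redis/image/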
[root@k8s-master01 redis]# pwd
/opt/k8s/k8s_project/redis
[root@k8s-master01 redis]# ll
total 12
-rw-r--r-- 1 root root 1216 Feb  4 15:31 nfs-rbac.yaml
-rw-r--r-- 1 root root  155 Feb  4 15:32 redis-nfs-class.yaml
-rw-r--r-- 1 root root 1006 Feb  4 15:32 redis-nfs.yml
[root@k8s-master01 redis]# mkdir image
[root@k8s-master01 redis]# cd image
[root@k8s-master01 image]# ll
total 64
-rw-r--r-- 1 root root   191 Feb  4 18:14 Dockerfile
-rwxr-xr-x 1 root root 60578 Feb  4 15:49 redis-trib.rb
[root@k8s-master01 image]# cat Dockerfile
FROM redis:4.0.11
RUN apt-get update -y
RUN apt-get install -y ruby \
    rubygems
RUN apt-get clean all
RUN gem install redis
RUN apt-get install dnsutils -y
COPY redis-trib.rb /usr/local/bin/
Build the image and push it to the Harbor registry:
[root@k8s-master01 image]# docker build -t 172.16.60.238/wiseco/redis:4.0.11 .
[root@k8s-master01 image]# docker push 172.16.60.238/wiseco/redis:4.0.11
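If the Harbor registry at 172.16.60.238 is served over plain HTTP or with a self-signed certificate (an assumption about this environment), the Docker daemon on the build host and on every node needs it whitelisted before the push/pull will succeed:

# /etc/docker/daemon.json (only needed for HTTP / self-signed Harbor)
{
  "insecure-registries": ["172.16.60.238"]
}

# then restart the daemon
systemctl restart docker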
2) Create the Redis ConfigMap (startup script and redis.conf)

[root@k8s-master01 redis]# pwd
/opt/k8s/k8s_project/redis
[root@k8s-master01 redis]# ll
total 12
drwxr-xr-x 2 root root   45 Feb  4 18:14 image
-rw-r--r-- 1 root root 1216 Feb  4 15:31 nfs-rbac.yaml
-rw-r--r-- 1 root root  155 Feb  4 15:32 redis-nfs-class.yaml
-rw-r--r-- 1 root root 1006 Feb  4 15:32 redis-nfs.yml
[root@k8s-master01 redis]# mkdir conf
[root@k8s-master01 redis]# cd conf/
[root@k8s-master01 conf]# vim redis-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-cluster
  namespace: wiseco
data:
  fix-ip.sh: |
    #!/bin/sh
    CLUSTER_CONFIG="/data/nodes.conf"
    if [ -f ${CLUSTER_CONFIG} ]; then
      if [ -z "${POD_IP}" ]; then
        echo "Unable to determine Pod IP address!"
        exit 1
      fi
      echo "Updating my IP to ${POD_IP} in ${CLUSTER_CONFIG}"
      sed -i.bak -e '/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/'${POD_IP}'/' ${CLUSTER_CONFIG}
    fi
    exec "$@"
  redis.conf: |
    cluster-enabled yes
    cluster-config-file /data/nodes.conf
    cluster-node-timeout 10000
    protected-mode no
    daemonize no
    pidfile /var/run/redis.pid
    port 6379
    tcp-backlog 511
    bind 0.0.0.0
    timeout 3600
    tcp-keepalive 1
    loglevel verbose
    logfile /data/redis.log
    databases 16
    save 900 1
    save 300 10
    save 60 10000
    stop-writes-on-bgsave-error yes
    rdbcompression yes
    rdbchecksum yes
    dbfilename dump.rdb
    dir /data
    #requirepass yl123456
    appendonly yes
    appendfilename "appendonly.aof"
    appendfsync everysec
    no-appendfsync-on-rewrite no
    auto-aof-rewrite-percentage 100
    auto-aof-rewrite-min-size 64mb
    lua-time-limit 20000
    slowlog-log-slower-than 10000
    slowlog-max-len 128
    #rename-command FLUSHALL ""
    latency-monitor-threshold 0
    notify-keyspace-events ""
    hash-max-ziplist-entries 512
    hash-max-ziplist-value 64
    list-max-ziplist-entries 512
    list-max-ziplist-value 64
    set-max-intset-entries 512
    zset-max-ziplist-entries 128
    zset-max-ziplist-value 64
    hll-sparse-max-bytes 3000
    activerehashing yes
    client-output-buffer-limit normal 0 0 0
    client-output-buffer-limit slave 256mb 64mb 60
    client-output-buffer-limit pubsub 32mb 8mb 60
    hz 10
    aof-rewrite-incremental-fsync yes
Apply and verify:

[root@k8s-master01 conf]# kubectl apply -f redis-configmap.yaml
[root@k8s-master01 conf]# kubectl get cm -n wiseco|grep redis
redis-cluster   2      8m55s
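The fix-ip.sh script exists because a pod gets a new IP after every restart while /data/nodes.conf still records the old one; the sed line rewrites the IP on the "myself" line before redis-server starts. A quick local check of that substitution (the sample nodes.conf line is hypothetical, modeled on the node IDs created later in this article):

# simulate a stale nodes.conf entry and apply the same substitution fix-ip.sh uses
POD_IP=172.30.85.217
echo 'e5a3154a17131075f35fb32953b8cf8d6cfc7df0 172.30.217.83:6379@16379 myself,master - 0 0 1 connected 0-5460' > /tmp/nodes.conf
sed -i.bak -e '/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/'${POD_IP}'/' /tmp/nodes.conf
cat /tmp/nodes.conf   # the address on the "myself" line now starts with 172.30.85.217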
3) Create the headless Service and the StatefulSet

[root@k8s-master01 redis]# pwd
/opt/k8s/k8s_project/redis
[root@k8s-master01 redis]# ll
total 12
drwxr-xr-x 2 root root   34 Feb  4 18:52 conf
drwxr-xr-x 2 root root   45 Feb  4 18:14 image
-rw-r--r-- 1 root root 1216 Feb  4 15:31 nfs-rbac.yaml
-rw-r--r-- 1 root root  155 Feb  4 15:32 redis-nfs-class.yaml
-rw-r--r-- 1 root root 1006 Feb  4 15:32 redis-nfs.yml
[root@k8s-master01 redis]# mkdir deploy
[root@k8s-master01 redis]# cd deploy/
[root@k8s-master01 deploy]# cat redis-cluster.yml
---
apiVersion: v1
kind: Service
metadata:
  namespace: wiseco
  name: redis-cluster
spec:
  clusterIP: None
  ports:
  - port: 6379
    targetPort: 6379
    name: client
  - port: 16379
    targetPort: 16379
    name: gossip
  selector:
    app: redis-cluster
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: wiseco
  name: redis-cluster
spec:
  serviceName: redis-cluster
  replicas: 6
  selector:
    matchLabels:
      app: redis-cluster
  template:
    metadata:
      labels:
        app: redis-cluster
    spec:
      containers:
      - name: redis
        image: 172.16.60.238/wiseco/redis:4.0.11
        ports:
        - containerPort: 6379
          name: client
        - containerPort: 16379
          name: gossip
        command: ["/etc/redis/fix-ip.sh", "redis-server", "/etc/redis/redis.conf"]
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        volumeMounts:
        - name: conf
          mountPath: /etc/redis/
          readOnly: false
        - name: data
          mountPath: /data
          readOnly: false
      volumes:
      - name: conf
        configMap:
          name: redis-cluster
          defaultMode: 0755
  volumeClaimTemplates:
  - metadata:
      name: data
      annotations:
        volume.beta.kubernetes.io/storage-class: "redis-nfs-storage"
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 10Gi
Apply and verify:
[root@k8s-master01 deploy]# kubectl apply -f redis-cluster.yml
[root@k8s-master01 deploy]# kubectl get pods -n wiseco|grep redis-cluster
redis-cluster-0    1/1     Running   0          10m
redis-cluster-1    1/1     Running   0          10m
redis-cluster-2    1/1     Running   0          10m
redis-cluster-3    1/1     Running   0          10m
redis-cluster-4    1/1     Running   0          9m35s
redis-cluster-5    1/1     Running   0          9m25s
[root@k8s-master01 deploy]# kubectl get svc -n wiseco|grep redis-cluster
redis-cluster   ClusterIP   None   <none>   6379/TCP,16379/TCP   10m
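Because the StatefulSet is backed by a headless Service, each pod also gets a stable DNS name of the form <pod>.<service>.<namespace>.svc.<cluster-domain> (cluster.local is assumed here). The image bundles dnsutils, so this can be checked from inside a pod, for example:

kubectl exec -it redis-cluster-0 -n wiseco -- nslookup redis-cluster-1.redis-cluster.wiseco.svc.cluster.local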
Check the PVs and PVCs:
[root@k8s-master01 deploy]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM                         STORAGECLASS        REASON   AGE
pvc-20bcb3be-90e1-4354-bd11-4f442a3bd562   10Gi       RWX            Delete           Bound         wiseco/data-redis-cluster-0   redis-nfs-storage            19m
pvc-3b53a31b-9a53-4bd4-93ff-2cf9fed551de   10Gi       RWX            Delete           Bound         wiseco/data-redis-cluster-2   redis-nfs-storage            12m
pvc-43c0cba2-54a9-4416-afb6-8b7730a199dc   10Gi       RWX            Delete           Bound         wiseco/data-redis-cluster-1   redis-nfs-storage            12m
pvc-66daade5-1b97-41ce-a9e0-4cf88d63894d   10Gi       RWX            Delete           Terminating   wiseco/data-redis-cluster-5   redis-nfs-storage            11m
pvc-dd62a086-1802-446a-9f9d-35620f7f0b4a   10Gi       RWX            Delete           Bound         wiseco/data-redis-cluster-4   redis-nfs-storage            11m
pvc-e5aa9802-b983-471c-a7da-32eebc497610   10Gi       RWX            Delete           Bound         wiseco/data-redis-cluster-3   redis-nfs-storage            12m

[root@k8s-master01 deploy]# kubectl get pvc -n wiseco
NAME                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
data-redis-cluster-0   Bound    pvc-20bcb3be-90e1-4354-bd11-4f442a3bd562   10Gi       RWX            redis-nfs-storage   19m
data-redis-cluster-1   Bound    pvc-43c0cba2-54a9-4416-afb6-8b7730a199dc   10Gi       RWX            redis-nfs-storage   12m
data-redis-cluster-2   Bound    pvc-3b53a31b-9a53-4bd4-93ff-2cf9fed551de   10Gi       RWX            redis-nfs-storage   12m
data-redis-cluster-3   Bound    pvc-e5aa9802-b983-471c-a7da-32eebc497610   10Gi       RWX            redis-nfs-storage   12m
data-redis-cluster-4   Bound    pvc-dd62a086-1802-446a-9f9d-35620f7f0b4a   10Gi       RWX            redis-nfs-storage   11m
data-redis-cluster-5   Bound    pvc-66daade5-1b97-41ce-a9e0-4cf88d63894d   10Gi       RWX            redis-nfs-storage   11m
Check the corresponding data directories on the NFS server:

[root@k8s-harbor01 redis]# pwd
/data/storage/k8s/redis
[root@k8s-harbor01 redis]# ll
total 0
drwxrwxrwx 2 root root 63 Feb  4 18:59 wiseco-data-redis-cluster-0-pvc-20bcb3be-90e1-4354-bd11-4f442a3bd562
drwxrwxrwx 2 root root 63 Feb  4 18:59 wiseco-data-redis-cluster-1-pvc-43c0cba2-54a9-4416-afb6-8b7730a199dc
drwxrwxrwx 2 root root 63 Feb  4 18:59 wiseco-data-redis-cluster-2-pvc-3b53a31b-9a53-4bd4-93ff-2cf9fed551de
drwxrwxrwx 2 root root 63 Feb  4 19:00 wiseco-data-redis-cluster-3-pvc-e5aa9802-b983-471c-a7da-32eebc497610
drwxrwxrwx 2 root root 63 Feb  4 19:00 wiseco-data-redis-cluster-4-pvc-dd62a086-1802-446a-9f9d-35620f7f0b4a
drwxrwxrwx 2 root root 63 Feb  4 19:00 wiseco-data-redis-cluster-5-pvc-66daade5-1b97-41ce-a9e0-4cf88d63894d
[root@k8s-harbor01 redis]# ls ./*
./wiseco-data-redis-cluster-0-pvc-20bcb3be-90e1-4354-bd11-4f442a3bd562:
appendonly.aof  nodes.conf  redis.log
./wiseco-data-redis-cluster-1-pvc-43c0cba2-54a9-4416-afb6-8b7730a199dc:
appendonly.aof  nodes.conf  redis.log
./wiseco-data-redis-cluster-2-pvc-3b53a31b-9a53-4bd4-93ff-2cf9fed551de:
appendonly.aof  nodes.conf  redis.log
./wiseco-data-redis-cluster-3-pvc-e5aa9802-b983-471c-a7da-32eebc497610:
appendonly.aof  nodes.conf  redis.log
./wiseco-data-redis-cluster-4-pvc-dd62a086-1802-446a-9f9d-35620f7f0b4a:
appendonly.aof  nodes.conf  redis.log
./wiseco-data-redis-cluster-5-pvc-66daade5-1b97-41ce-a9e0-4cf88d63894d:
appendonly.aof  nodes.conf  redis.log
3. Initialize the Redis Cluster
Next, form the Redis Cluster: run the command below and type yes to accept the proposed configuration. The first three nodes become masters and the last three become their replicas.

Note: redis-trib.rb must be given IP addresses when initializing the cluster; using DNS names fails with an error like:
*******/redis/client.rb:126:in `call': ERR Invalid node address specified: redis-cluster-0.redis-headless.sts-app.svc.cluster.local:6379 (Redis::CommandError)

The command used to initialize the cluster:

kubectl exec -it redis-cluster-0 -n wiseco -- redis-trib.rb create --replicas 1 $(kubectl get pods -l app=redis-cluster -n wiseco -o jsonpath='{range.items[*]}{.status.podIP}:6379 ')
First, get the IP addresses of the six Redis Cluster pods:

[root@k8s-master01 redis]# kubectl get pods -n wiseco -o wide|grep redis-cluster
redis-cluster-0    1/1     Running   0    4h34m   172.30.217.83    k8s-node04   <none>   <none>
redis-cluster-1    1/1     Running   0    4h34m   172.30.85.217    k8s-node01   <none>   <none>
redis-cluster-2    1/1     Running   0    4h34m   172.30.135.181   k8s-node03   <none>   <none>
redis-cluster-3    1/1     Running   0    4h34m   172.30.58.251    k8s-node02   <none>   <none>
redis-cluster-4    1/1     Running   0    4h33m   172.30.85.216    k8s-node01   <none>   <none>
redis-cluster-5    1/1     Running   0    4h33m   172.30.217.82    k8s-node04   <none>   <none>

[root@k8s-master01 redis]# kubectl get pods -l app=redis-cluster -n wiseco -o jsonpath='{range.items[*]}{.status.podIP}:6379 '
172.30.217.83:6379 172.30.85.217:6379 172.30.135.181:6379 172.30.58.251:6379 172.30.85.216:6379 172.30.217.82:6379

Pay particular attention here: there must be a space before the closing single quote in the command above, because the ip:port pairs of the cluster nodes have to be separated by spaces when they are passed to the initialization command.

[root@k8s-master01 redis]# kubectl exec -it redis-cluster-0 -n wiseco -- redis-trib.rb create --replicas 1 $(kubectl get pods -l app=redis-cluster -n wiseco -o jsonpath='{range.items[*]}{.status.podIP}:6379 ')
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
172.30.217.83:6379
172.30.85.217:6379
172.30.135.181:6379
Adding replica 172.30.58.251:6379 to 172.30.217.83:6379
Adding replica 172.30.85.216:6379 to 172.30.85.217:6379
Adding replica 172.30.217.82:6379 to 172.30.135.181:6379
M: e5a3154a17131075f35fb32953b8cf8d6cfc7df0 172.30.217.83:6379
   slots:0-5460 (5461 slots) master
M: 961398483262f505a115957e7e4eda7ff3e64900 172.30.85.217:6379
   slots:5461-10922 (5462 slots) master
M: 2d1440e37ea4f4e9f6d39d240367deaa609d324d 172.30.135.181:6379
   slots:10923-16383 (5461 slots) master
S: 0d7bf40bf18d474509116437959b65551cd68b03 172.30.58.251:6379
   replicates e5a3154a17131075f35fb32953b8cf8d6cfc7df0
S: 8cbf699a850c0dafe51524127a594fdbf0a27784 172.30.85.216:6379
   replicates 961398483262f505a115957e7e4eda7ff3e64900
S: 2987a33f4ce2e412dcc11c1c1daa2538591cd930 172.30.217.82:6379
   replicates 2d1440e37ea4f4e9f6d39d240367deaa609d324d
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join......
>>> Performing Cluster Check (using node 172.30.217.83:6379)
M: e5a3154a17131075f35fb32953b8cf8d6cfc7df0 172.30.217.83:6379
   slots:0-5460 (5461 slots) master
M: 961398483262f505a115957e7e4eda7ff3e64900 172.30.85.217:6379
   slots:5461-10922 (5462 slots) master
M: 2d1440e37ea4f4e9f6d39d240367deaa609d324d 172.30.135.181:6379
   slots:10923-16383 (5461 slots) master
M: 0d7bf40bf18d474509116437959b65551cd68b03 172.30.58.251:6379
   slots: (0 slots) master
   replicates e5a3154a17131075f35fb32953b8cf8d6cfc7df0
M: 8cbf699a850c0dafe51524127a594fdbf0a27784 172.30.85.216:6379
   slots: (0 slots) master
   replicates 961398483262f505a115957e7e4eda7ff3e64900
M: 2987a33f4ce2e412dcc11c1c1daa2538591cd930 172.30.217.82:6379
   slots: (0 slots) master
   replicates 2d1440e37ea4f4e9f6d39d240367deaa609d324d
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Check the cluster status and the role of each node:

[root@k8s-master01 redis]# kubectl exec -it redis-cluster-0 -n wiseco -- redis-cli cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:130
cluster_stats_messages_pong_sent:137
cluster_stats_messages_sent:267
cluster_stats_messages_ping_received:132
cluster_stats_messages_pong_received:130
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:267

[root@k8s-master01 redis]# for x in $(seq 0 5); do echo "redis-cluster-$x"; kubectl exec redis-cluster-$x -n wiseco -- redis-cli role; echo; done
redis-cluster-0
master
168
172.30.58.251
6379
168

redis-cluster-1
master
168
172.30.85.216
6379
168

redis-cluster-2
master
182
172.30.217.82
6379
168

redis-cluster-3
slave
172.30.217.83
6379
connected
182

redis-cluster-4
slave
172.30.85.217
6379
connected
168

redis-cluster-5
slave
172.30.135.181
6379
connected
182
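As a final smoke test (not part of the original session), write a key through one node and read it back through another with cluster mode enabled (-c), so that redis-cli follows MOVED redirections; the key name is arbitrary:

# write through node 0 and read back through node 2; -c follows cluster redirections
kubectl exec -it redis-cluster-0 -n wiseco -- redis-cli -c set smoke_test_key hello
kubectl exec -it redis-cluster-2 -n wiseco -- redis-cli -c get smoke_test_key
# the SET should reply OK and the GET should return "hello" if slots and replication are healthy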