[Repost] Scaling out TiKV nodes in TiDB 4.0.4 with TiUP


https://blog.csdn.net/mchdba/article/details/108896766

 

Environment: CentOS 7, TiDB 4.0.4, TiUP v1.0.8

Goal: add two TiKV nodes, 172.21.210.37 and 172.21.210.38.

Approach: initialize the two servers and set up SSH trust -> write the scale-out topology file -> run the scale-out command -> restart Grafana.

1. Initialize the servers and set up SSH trust

1. Synchronize system time on both new hosts
2. Set up SSH trust:
ssh-copy-id root@172.21.210.37
ssh-copy-id root@172.21.210.38
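
A sketch of what this initialization might look like on CentOS 7, assuming chrony is used for time synchronization and the control machine may not yet have an SSH key pair (adjust to your environment):

# On each new host (172.21.210.37 / 172.21.210.38): keep clocks in sync
yum install -y chrony
systemctl enable chronyd && systemctl start chronyd
chronyc tracking                          # verify the host is synchronized

# On the control machine: create a key pair if one does not exist yet
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa

# Push the public key to both new hosts, then confirm passwordless login
ssh-copy-id root@172.21.210.37
ssh-copy-id root@172.21.210.38
ssh root@172.21.210.37 hostname
ssh root@172.21.210.38 hostname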

2. Write the scale-out topology file

tiup cluster list                        # list the existing cluster names
tiup cluster edit-config <cluster-name>  # view the cluster config and copy the matching settings
 
vi scale-out.yaml
tikv_servers:
- host: 172.21.210.37
  ssh_port: 22
  port: 20160
  status_port: 20180
  deploy_dir: /data1/tidb-deploy/tikv-20160
  data_dir: /data1/tidb-data/tikv-20160
  arch: amd64
  os: linux
- host: 172.21.210.38
  ssh_port: 22
  port: 20160
  status_port: 20180
  deploy_dir: /data1/tidb-deploy/tikv-20160
  data_dir: /data1/tidb-data/tikv-20160
  arch: amd64
  os: linux
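
Before running the scale-out, it is worth confirming that the ports and directories declared above are actually free on both target hosts. A minimal pre-check with standard Linux tools (not part of TiUP itself) might be:

for h in 172.21.210.37 172.21.210.38; do
  ssh root@$h "ss -lnt | grep -E ':(20160|20180)' || echo 'ports free'"   # no listener expected on 20160/20180
  ssh root@$h "df -h /data1"                                              # confirm the data disk is mounted
done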

3. Run the scale-out command

This assumes the user running the command already has passwordless SSH access to the new machines. If not, pass -p to enter the new machines' password, or -i to specify a private key file.
tiup cluster scale-out <cluster-name> scale-out.yaml
The expected output ends with "Scaled cluster <cluster-name> out successfully", which indicates the scale-out succeeded.
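
For reference, the two variants just mentioned look like this (the private key path is illustrative):

tiup cluster scale-out <cluster-name> scale-out.yaml -p                    # prompt for the remote machines' password
tiup cluster scale-out <cluster-name> scale-out.yaml -i /root/.ssh/id_rsa  # use a specific private key file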
 
[root@host-172-21-210-32 tidb_config]# tiup cluster scale-out tidb scale-out.yaml
Starting component `cluster`:  scale-out tidb scale-out.yaml
Please confirm your topology:
TiDB Cluster: tidb
TiDB Version: v4.0.4
Type  Host           Ports        OS/Arch       Directories
----  ----           -----        -------       -----------
tikv  172.21.210.37  20160/20180  linux/x86_64  /data1/tidb-deploy/tikv-20160,/data1/tidb-data/tikv-20160
tikv  172.21.210.38  20160/20180  linux/x86_64  /data1/tidb-deploy/tikv-20160,/data1/tidb-data/tikv-20160
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]:  y
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/tidb/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/tidb/ssh/id_rsa.pub
 
 
  - Download tikv:v4.0.4 (linux/amd64) ... Done
+ [ Serial ] - RootSSH: user=root, host=172.21.210.38, port=22, key=/root/.ssh/id_rsa
+ [ Serial ] - EnvInit: user=tidb, host=172.21.210.38
+ [ Serial ] - RootSSH: user=root, host=172.21.210.37, port=22, key=/root/.ssh/id_rsa
+ [ Serial ] - EnvInit: user=tidb, host=172.21.210.37
+ [ Serial ] - Mkdir: host=172.21.210.37, directories='/data1/tidb-deploy','/data1/tidb-data'
+ [ Serial ] - Mkdir: host=172.21.210.38, directories='/data1/tidb-deploy','/data1/tidb-data'
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.32
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.39
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.33
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.34
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.32
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.33
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.35
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.32
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.36
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.32
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.32
+ [ Serial ] - UserSSH: user=tidb, host=172.21.210.38
 
+ [ Serial ] - UserSSH: user=tidb, host=172.21.210.37
+ [ Serial ] - Mkdir: host=172.21.210.38, directories='/data1/tidb-deploy/tikv-20160','/data1/tidb-deploy/tikv-20160/log','/data1/tidb-deploy/tikv-20160/bin','/data1/tidb-deploy/tikv-20160/conf','/data1/tidb-deploy/tikv-20160/scripts'
+ [ Serial ] - Mkdir: host=172.21.210.37, directories='/data1/tidb-deploy/tikv-20160','/data1/tidb-deploy/tikv-20160/log','/data1/tidb-deploy/tikv-20160/bin','/data1/tidb-deploy/tikv-20160/conf','/data1/tidb-deploy/tikv-20160/scripts'
 
 
  - Copy blackbox_exporter -> 172.21.210.37 ... ? Mkdir: host=172.21.210.37, directories='/data1/tidb-deploy/monitor-9100','/data1/t...
  - Copy node_exporter -> 172.21.210.37 ... ? CopyComponent: component=node_exporter, version=v0.17.0, remote=172.21.210.37:/data1/t...
  - Copy blackbox_exporter -> 172.21.210.37 ... ? MonitoredConfig: cluster=tidb, user=tidb, node_exporter_port=9100, blackbox_export...
  - Copy node_exporter -> 172.21.210.38 ... Done
+ [ Serial ] - ScaleConfig: cluster=tidb, user=tidb, host=172.21.210.37, service=tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=
+ [ Serial ] - ScaleConfig: cluster=tidb, user=tidb, host=172.21.210.38, service=tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=
+ [ Serial ] - ClusterOperate: operation=StartOperation, options={Roles:[] Nodes:[] Force:false SSHTimeout:0 OptTimeout:120 APITimeout:0 IgnoreConfigCheck:false RetainDataRoles:[] RetainDataNodes:[]}
Starting component pd
        Starting instance pd 172.21.210.33:2379
        Starting instance pd 172.21.210.32:2379
        Start pd 172.21.210.33:2379 success
        Start pd 172.21.210.32:2379 success
Starting component node_exporter
        Starting instance 172.21.210.32
        Start 172.21.210.32 success
Starting component blackbox_exporter
        Starting instance 172.21.210.32
        Start 172.21.210.32 success
Starting component node_exporter
        Starting instance 172.21.210.33
        Start 172.21.210.33 success
Starting component blackbox_exporter
        Starting instance 172.21.210.33
        Start 172.21.210.33 success
Starting component tikv
        Starting instance tikv 172.21.210.35:20160
        Starting instance tikv 172.21.210.34:20160
        Starting instance tikv 172.21.210.39:20160
        Starting instance tikv 172.21.210.36:20160
        Start tikv 172.21.210.39:20160 success
        Start tikv 172.21.210.34:20160 success
        Start tikv 172.21.210.35:20160 success
        Start tikv 172.21.210.36:20160 success
Starting component node_exporter
        Starting instance 172.21.210.35
        Start 172.21.210.35 success
Starting component blackbox_exporter
        Starting instance 172.21.210.35
        Start 172.21.210.35 success
Starting component node_exporter
        Starting instance 172.21.210.34
        Start 172.21.210.34 success
Starting component blackbox_exporter
        Starting instance 172.21.210.34
        Start 172.21.210.34 success
Starting component node_exporter
        Starting instance 172.21.210.39
        Start 172.21.210.39 success
Starting component blackbox_exporter
        Starting instance 172.21.210.39
        Start 172.21.210.39 success
Starting component node_exporter
        Starting instance 172.21.210.36
        Start 172.21.210.36 success
Starting component blackbox_exporter
        Starting instance 172.21.210.36
        Start 172.21.210.36 success
Starting component tidb
        Starting instance tidb 172.21.210.33:4000
        Starting instance tidb 172.21.210.32:4000
        Start tidb 172.21.210.32:4000 success
        Start tidb 172.21.210.33:4000 success
Starting component prometheus
        Starting instance prometheus 172.21.210.32:9090
        Start prometheus 172.21.210.32:9090 success
Starting component grafana
        Starting instance grafana 172.21.210.32:3000
        Start grafana 172.21.210.32:3000 success
Starting component alertmanager
        Starting instance alertmanager 172.21.210.32:9093
        Start alertmanager 172.21.210.32:9093 success
Checking service state of pd
        172.21.210.32      Active: active (running) since Fri 2020-10-16 22:50:31 CST; 2 weeks 5 days ago
        172.21.210.33      Active: active (running) since Fri 2020-10-16 22:50:22 CST; 2 weeks 5 days ago
Checking service state of tikv
        172.21.210.34      Active: active (running) since Fri 2020-10-16 22:50:19 CST; 2 weeks 5 days ago
        172.21.210.35      Active: active (running) since Fri 2020-10-16 22:50:19 CST; 2 weeks 5 days ago
        172.21.210.36      Active: active (running) since Sat 2020-10-17 02:25:23 CST; 2 weeks 5 days ago
        172.21.210.39      Active: active (running) since Fri 2020-10-16 23:34:13 CST; 2 weeks 5 days ago
Checking service state of tidb
        172.21.210.32      Active: active (running) since Fri 2020-10-16 22:50:49 CST; 2 weeks 5 days ago
        172.21.210.33      Active: active (running) since Fri 2020-10-16 22:50:40 CST; 2 weeks 5 days ago
Checking service state of prometheus
        172.21.210.32      Active: active (running) since Sat 2020-10-17 02:25:27 CST; 2 weeks 5 days ago
Checking service state of grafana
        172.21.210.32      Active: active (running) since Fri 2020-10-16 23:55:07 CST; 2 weeks 5 days ago
Checking service state of alertmanager
        172.21.210.32      Active: active (running) since Fri 2020-10-16 22:51:06 CST; 2 weeks 5 days ago
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.38
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.37
+ [ Serial ] - save meta
+ [ Serial ] - ClusterOperate: operation=StartOperation, options={Roles:[] Nodes:[] Force:false SSHTimeout:0 OptTimeout:120 APITimeout:0 IgnoreConfigCheck:false RetainDataRoles:[] RetainDataNodes:[]}
Starting component tikv
        Starting instance tikv 172.21.210.38:20160
        Starting instance tikv 172.21.210.37:20160
        Start tikv 172.21.210.37:20160 success
        Start tikv 172.21.210.38:20160 success
Starting component node_exporter
        Starting instance 172.21.210.37
        Start 172.21.210.37 success
Starting component blackbox_exporter
        Starting instance 172.21.210.37
        Start 172.21.210.37 success
Starting component node_exporter
        Starting instance 172.21.210.38
        Start 172.21.210.38 success
Starting component blackbox_exporter
        Starting instance 172.21.210.38
        Start 172.21.210.38 success
Checking service state of tikv
        172.21.210.37      Active: active (running) since Thu 2020-11-05 11:33:46 CST; 3s ago
        172.21.210.38      Active: active (running) since Thu 2020-11-05 11:33:46 CST; 2s ago
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.32, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/alertmanager-9093.service, deploy_dir=/data1/tidb-deploy/alertmanager-9093, data_dir=[/data1/tidb-data/alertmanager-9093], log_dir=/data1/tidb-deploy/alertmanager-9093/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.36, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.32, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tidb-4000.service, deploy_dir=/data1/tidb-deploy/tidb-4000, data_dir=[], log_dir=/data1/tidb-deploy/tidb-4000/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.32, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/pd-2379.service, deploy_dir=/data1/tidb-deploy/pd-2379, data_dir=[/data1/tidb-data/pd-2379], log_dir=/data1/tidb-deploy/pd-2379/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.37, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.33, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tidb-4000.service, deploy_dir=/data1/tidb-deploy/tidb-4000, data_dir=[], log_dir=/data1/tidb-deploy/tidb-4000/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.35, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.32, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/prometheus-9090.service, deploy_dir=/data1/tidb-deploy/prometheus-9090, data_dir=[/data1/tidb-data/prometheus-9090], log_dir=/data1/tidb-deploy/prometheus-9090/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.34, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.32, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/grafana-3000.service, deploy_dir=/data1/tidb-deploy/grafana-3000, data_dir=[], log_dir=/data1/tidb-deploy/grafana-3000/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.38, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.33, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/pd-2379.service, deploy_dir=/data1/tidb-deploy/pd-2379, data_dir=[/data1/tidb-data/pd-2379], log_dir=/data1/tidb-deploy/pd-2379/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.39, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - ClusterOperate: operation=RestartOperation, options={Roles:[prometheus] Nodes:[] Force:false SSHTimeout:0 OptTimeout:120 APITimeout:0 IgnoreConfigCheck:false RetainDataRoles:[] RetainDataNodes:[]}
Stopping component prometheus
        Stopping instance 172.21.210.32
        Stop prometheus 172.21.210.32:9090 success
Starting component prometheus
        Starting instance prometheus 172.21.210.32:9090
        Start prometheus 172.21.210.32:9090 success
Starting component node_exporter
        Starting instance 172.21.210.32
        Start 172.21.210.32 success
Starting component blackbox_exporter
        Starting instance 172.21.210.32
        Start 172.21.210.32 success
Checking service state of pd
        172.21.210.33      Active: active (running) since Fri 2020-10-16 22:50:22 CST; 2 weeks 5 days ago
        172.21.210.32      Active: active (running) since Fri 2020-10-16 22:50:31 CST; 2 weeks 5 days ago
Checking service state of tikv
        172.21.210.35      Active: active (running) since Fri 2020-10-16 22:50:19 CST; 2 weeks 5 days ago
        172.21.210.39      Active: active (running) since Fri 2020-10-16 23:34:13 CST; 2 weeks 5 days ago
        172.21.210.34      Active: active (running) since Fri 2020-10-16 22:50:19 CST; 2 weeks 5 days ago
        172.21.210.36      Active: active (running) since Sat 2020-10-17 02:25:23 CST; 2 weeks 5 days ago
Checking service state of tidb
        172.21.210.32      Active: active (running) since Fri 2020-10-16 22:50:49 CST; 2 weeks 5 days ago
        172.21.210.33      Active: active (running) since Fri 2020-10-16 22:50:40 CST; 2 weeks 5 days ago
Checking service state of prometheus
        172.21.210.32      Active: active (running) since Thu 2020-11-05 11:33:53 CST; 2s ago
Checking service state of grafana
        172.21.210.32      Active: active (running) since Fri 2020-10-16 23:55:07 CST; 2 weeks 5 days ago
Checking service state of alertmanager
        172.21.210.32      Active: active (running) since Fri 2020-10-16 22:51:06 CST; 2 weeks 5 days ago
+ [ Serial ] - UpdateTopology: cluster=tidb
Scaled cluster `tidb` out successfully
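
After the command completes, PD rebalances regions onto the new stores in the background. One way to watch the progress, assuming the pd-ctl component is available through this TiUP version, is to list the stores against this cluster's PD endpoint and watch region_count grow on the two new nodes:

tiup ctl pd -u http://172.21.210.32:2379 store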

4. Check the cluster status and restart Grafana

Check the cluster status:
    tiup cluster display <cluster-name>
Restart Grafana:
    tiup cluster restart tidb -R grafana
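
For a quick verification afterwards (Grafana's standard /api/health endpoint; the host and port come from this cluster's topology):

tiup cluster display tidb | grep -E '172.21.210.3[78]'   # the two new TiKV stores should report status Up
curl -s http://172.21.210.32:3000/api/health             # Grafana should return a small JSON health document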
