Blue-green deployment means deploying the new version alongside the old one, without stopping the old version, and then switching traffic over in one step.
Pros: no downtime; relatively low risk.
Cons: the switch is all-or-nothing, so if v2 has a problem every user is affected immediately; it also requires double the machine resources.
Deployment process:
mkdir /root/bluegreen
cat > /root/bluegreen/blue.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue
spec:
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  selector:
    matchLabels:
      app: bluegreen
  replicas: 4
  template:
    metadata:
      labels:
        app: bluegreen
        version: v1.0
    spec:
      containers:
      - name: bluegreen
        image: registry.cn-hangzhou.aliyuncs.com/ray-docker/ray-demo-docker:v1
        ports:
        - containerPort: 80
EOF
[root@kubemaster ~]# kubectl apply -f /root/bluegreen/blue.yaml
cat > /root/bluegreen/bluegreenservice.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: bluegreen
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: bluegreen
    version: v1.0
  type: ClusterIP
EOF
[root@kubemaster ~]# kubectl apply -f /root/bluegreen/bluegreenservice.yaml
cat > /root/bluegreen/green.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: green
spec:
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  selector:
    matchLabels:
      app: bluegreen
  replicas: 4
  template:
    metadata:
      labels:
        app: bluegreen
        version: v2.0
    spec:
      containers:
      - name: bluegreen
        image: registry.cn-hangzhou.aliyuncs.com/ray-docker/ray-demo-docker:v2
        ports:
        - containerPort: 80
EOF
[root@kubemaster ~]# kubectl apply -f /root/bluegreen/green.yaml
[root@kubemaster ~]# kubectl get pod,svc
NAME READY STATUS RESTARTS AGE
pod/blue-599dd97cf7-74fqm 1/1 Running 0 95m
pod/blue-599dd97cf7-cs6mc 1/1 Running 0 95m
pod/blue-599dd97cf7-ddcf5 1/1 Running 0 95m
pod/blue-599dd97cf7-z47hv 1/1 Running 0 95m
pod/green-9fd69c4bc-c6jcd 1/1 Running 0 94m
pod/green-9fd69c4bc-grt7x 1/1 Running 0 94m
pod/green-9fd69c4bc-w7tkj 1/1 Running 0 94m
pod/green-9fd69c4bc-zx6pz 1/1 Running 0 94m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/bluegreen ClusterIP 10.97.172.131 <none> 80/TCP 95m
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d17h
The response currently shows the V1 version:
[root@kubemaster ~]# curl http://10.97.172.131/api/home
Webapplication - V1
Edit /root/bluegreen/bluegreenservice.yaml, change the Service selector from version: v1.0 to version: v2.0, and re-apply:
[root@kubemaster ~]# kubectl apply -f /root/bluegreen/bluegreenservice.yaml
Now the response shows the V2 version:
[root@kubemaster ~]# curl http://10.97.172.131:80/api/home
Webapplication - V2
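Instead of editing the file and re-applying it, the selector can also be flipped with a single patch. A sketch, assuming the Service name bluegreen from above:

```shell
# Flip the Service selector to the green (v2.0) pods in one step.
# A strategic-merge patch only touches the keys given, so "app: bluegreen" stays.
kubectl patch service bluegreen -p '{"spec":{"selector":{"version":"v2.0"}}}'
# If v2 misbehaves, switching back to blue is just as fast:
# kubectl patch service bluegreen -p '{"spec":{"selector":{"version":"v1.0"}}}'
```

This is what makes blue-green rollback cheap: the old pods are still running, so reverting is only a label-selector change.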
A canary release (also called a gray release) deploys part of the new version into the production environment, so some user requests are served by the new version; this can go one of two ways, success or failure, and the last step below covers both.
Deployment process:
Prepare the artifacts for each stage of the deployment
Remove the "canary" server from the load-balancer list
Upgrade the "canary" application (drain its existing traffic, then deploy)
Run automated tests against the application
Add the "canary" server back to the load-balancer list (connectivity and health checks)
If the canary passes live testing, upgrade the remaining servers (otherwise roll back)
Pros: small impact on user experience; a problem during the canary rollout only affects a subset of users.
mkdir /root/canary
cat > /root/canary/canary.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue
spec:
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  selector:
    matchLabels:
      app: bluegreen
  replicas: 6
  template:
    metadata:
      labels:
        app: bluegreen
        version: v1.0
    spec:
      containers:
      - name: bluegreen
        image: registry.cn-hangzhou.aliyuncs.com/ray-docker/ray-demo-docker:v1
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: green
spec:
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  selector:
    matchLabels:
      app: bluegreen
  replicas: 4
  template:
    metadata:
      labels:
        app: bluegreen
        version: v2.0
    spec:
      containers:
      - name: bluegreen
        image: registry.cn-hangzhou.aliyuncs.com/ray-docker/ray-demo-docker:v2
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: bluegreen
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: bluegreen    # no version label: both v1 and v2 pods receive traffic
  type: ClusterIP
EOF
[root@kubemaster ~]# kubectl apply -f /root/canary/canary.yaml
[root@kubemaster ~]# curl http://10.97.172.131:80/api/home
Webapplication - V2
[root@kubemaster ~]# curl http://10.97.172.131:80/api/home
Webapplication - V2
[root@kubemaster ~]# curl http://10.97.172.131:80/api/home
Webapplication - V1
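For the mixed responses above, the Service selector must match only app: bluegreen (no version label), so requests are spread across all 10 pods. With 6 v1 pods and 4 v2 pods, the expected split is roughly 60/40. A quick sanity check of that arithmetic (pod counts taken from the manifests above):

```shell
# Expected canary traffic split, derived from replica counts
v1_pods=6; v2_pods=4
total=$((v1_pods + v2_pods))
echo "v1: $((100 * v1_pods / total))%  v2: $((100 * v2_pods / total))%"
```

To shift more traffic to the canary, scale the green Deployment up and the blue one down; the ratio of replicas is the ratio of traffic.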
A rolling update takes a portion of the servers out of service, updates them, and puts them back, repeating until every instance in the cluster runs the new version. By default both maxUnavailable and maxSurge are 25%.
Compared with blue-green deployment, this is more resource-efficient: it does not require running two full clusters or twice the instance count. It can also be done in batches, e.g. taking out and upgrading only 25% of the cluster at a time.
mkdir /root/rollingUpdate
cat > /root/rollingUpdate/rollingUpdate.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rolling-update
  labels:
    app: rolling-update-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: rolling-update-pod
  template:
    metadata:
      labels:
        app: rolling-update-pod
    spec:
      containers:
      - name: rolling-update-container
        image: registry.cn-hangzhou.aliyuncs.com/ray-docker/ray-demo-docker:v1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: rolling-update
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: rolling-update-pod    # the pods carry no version label, so none is selected here
  type: ClusterIP
EOF
[root@kubemaster ~]# kubectl apply -f /root/rollingUpdate/rollingUpdate.yaml
Open a new bash window and watch the Deployment:
[root@kubemaster ~]# kubectl get deployment rolling-update -w
[root@kubemaster ~]# kubectl set image deployment/rolling-update rolling-update-container=registry.cn-hangzhou.aliyuncs.com/ray-docker/ray-demo-docker:v2 --record
[root@kubemaster ~]# kubectl get deployment rolling-update -w
NAME READY UP-TO-DATE AVAILABLE AGE
rolling-update 3/3 3 3 41m
rolling-update 3/3 3 3 41m
rolling-update 3/3 3 3 41m
rolling-update 3/3 0 3 41m
rolling-update 3/3 1 3 41m
rolling-update 4/3 1 4 41m
rolling-update 3/3 1 3 41m
rolling-update 3/3 2 3 41m
rolling-update 4/3 2 4 41m
rolling-update 3/3 2 3 41m
rolling-update 3/3 3 3 41m
rolling-update 4/3 3 4 41m
rolling-update 3/3 3 3 41m
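The 4/3 READY entries in the watch output above come from percentage rounding: with 3 replicas and the default 25%, Kubernetes rounds maxUnavailable down (to 0) and maxSurge up (to 1), so the rollout proceeds one surge pod at a time. The rounding can be checked with shell arithmetic:

```shell
replicas=3
pct=25
# Kubernetes rounds maxUnavailable DOWN and maxSurge UP for percentages
maxUnavailable=$(( replicas * pct / 100 ))     # floor(3 * 0.25) = 0
maxSurge=$(( (replicas * pct + 99) / 100 ))    # ceil(3 * 0.25) = 1
echo "maxUnavailable=$maxUnavailable maxSurge=$maxSurge"
```

This is why the pod count briefly rises to 4 and never drops below 3 during the update.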
kubectl describe deployment rolling-update
kubectl rollout history deployment rolling-update
## Roll back to the previous revision
kubectl rollout undo deployment rolling-update
## Roll back to a specific revision
kubectl rollout undo deployment rolling-update --to-revision=1
maxSurge and maxUnavailable can each be set as an absolute number or as a percentage. Note: the two cannot both be 0 at the same time.
This is the default configuration we give users in our production environment. It follows the smoothest "one up, one down, up first" principle: only after 1 new-version pod is Ready (in combination with readiness probes) is an old-version pod destroyed. It suits rollouts that must stay smooth and keep the service stable; the drawback is that it is slow.
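The "one up, one down" behaviour described above corresponds to this strategy block (a fragment to merge into a Deployment spec; the replica count of 10 is just an illustration):

```yaml
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # bring up at most 1 new pod beyond the desired count
      maxUnavailable: 0  # never kill an old pod before its replacement is Ready
```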
When the DeploymentController adjusts ReplicaSet sizes, it strictly paces the rollout according to this formula:
(desired replicas - maxUnavailable) <= actual Ready replicas <= (desired replicas + maxSurge)
Example:
Suppose the desired replica count is 10 and at least 80% of replicas must be able to serve, and (as recommended) maxSurge is kept equal to maxUnavailable.
Then maxUnavailable = 10 × 20% = 2 and maxSurge = 2, which gives 10 - 2 = 8 <= actual Ready replicas <= 10 + 2 = 12.
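Plugging the example's numbers into the formula (desired = 10; at least 80% serving means at most 20% may be down):

```shell
desired=10
maxUnavailable=$(( desired * 20 / 100 ))   # 20% of 10 = 2
maxSurge=$maxUnavailable                   # recommended: keep the two equal
echo "$(( desired - maxUnavailable )) <= ready <= $(( desired + maxSurge ))"
```

So during the rollout the controller keeps the Ready count between 8 and 12 at all times.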