The biggest difference between a KVM environment and other virtualization platforms or real production hardware lies in the preparation work before installation:
specifically, the preparation of the DB nodes and of the storage environment, which this article describes in detail.
The remaining base software installation and patch application steps are no different, so refer to the earlier articles if anything is unclear.
Let's look at these two areas of preparation:
To minimize configuration work, first apply some basic settings to the existing db1 environment.
Mount the /u01 directory:
[root@localhost ~]# pvcreate /dev/vdb
[root@localhost ~]# vgcreate ora /dev/vdb
[root@localhost ~]# lvcreate -l 25599 -n u01 ora
[root@localhost ~]# mkfs.xfs /dev/mapper/ora-u01
[root@localhost ~]# mkdir /u01
[root@localhost ~]# vi /etc/fstab   -- append the following line at the end:
/dev/mapper/ora-u01 /u01 xfs defaults 0 0
[root@localhost ~]# mount -a
[root@localhost ~]# df -h /u01
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/ora-u01 100G 33M 100G 1% /u01
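A note on the lvcreate above: -l counts physical extents (4 MiB each by default), so 25599 extents is roughly 100 GiB. To avoid hard-coding the extent count, the allocation can be expressed as a percentage instead (an equivalent sketch for this VG):
-- Check the free extents, then allocate all of them:
vgdisplay ora | grep Free
lvcreate -l 100%FREE -n u01 ora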
For this initial setup, set the hostname to db01xx:
hostnamectl set-hostname db01xx
Append one line to the end of /etc/hosts:
192.168.1.6 db01xx
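A quick optional check that the new name resolves locally:
getent hosts db01xx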
yum -y install bc binutils compat-libcap1 compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
fontconfig-devel glibc glibc-devel ksh libaio libaio-devel libX11 libXau libXi libXtst \
libXrender libXrender-devel libgcc libstdc++ libstdc++-devel libxcb make net-tools nfs-utils python \
python-configshell python-rtslib python-six targetcli smartmontools sysstat chrony ntp gcc unixODBC \
gcc-c++ psmisc unzip perl perl-devel policycoreutils-python policycoreutils
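Optionally, verify nothing failed to install (a minimal sketch; extend the list to any packages you care about):
for p in binutils ksh libaio libaio-devel sysstat targetcli; do rpm -q $p >/dev/null 2>&1 || echo "missing: $p"; done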
Currently the VM has only the bridged NIC; add a private NIC for cluster interconnect (heartbeat) traffic. Steps:
virsh domiflist db1
virsh attach-interface db1 network default
virsh dumpxml db1 > /etc/libvirt/qemu/db1.xml
virsh define /etc/libvirt/qemu/db1.xml
-- Output is as follows:
[root@bogon vm-images]# virsh domiflist db1
Interface  Type     Source    Model     MAC
-------------------------------------------------------
vnet3 bridge br0 virtio 52:54:00:9d:30:17
[root@bogon vm-images]# virsh attach-interface db1 network default
Interface attached successfully
[root@bogon vm-images]# virsh domiflist db1
Interface  Type     Source    Model     MAC
-------------------------------------------------------
vnet3 bridge br0 virtio 52:54:00:9d:30:17
vnet4 network default rtl8139 52:54:00:79:88:c6
[root@bogon vm-images]# virsh dumpxml db1 > /etc/libvirt/qemu/db1.xml
[root@bogon vm-images]# virsh define /etc/libvirt/qemu/db1.xml
Domain db1 defined from /etc/libvirt/qemu/db1.xml
-- Test that the NICs survive a VM restart:
[root@bogon vm-images]# virsh shutdown db1
Domain db1 is being shutdown
[root@bogon vm-images]# virsh domiflist db1
Interface  Type     Source    Model     MAC
-------------------------------------------------------
- bridge br0 virtio 52:54:00:9d:30:17
- network default rtl8139 52:54:00:79:88:c6
[root@bogon vm-images]# virsh start db1
Domain db1 started
[root@bogon vm-images]# virsh domiflist db1
Interface  Type     Source    Model     MAC
-------------------------------------------------------
vnet3 bridge br0 virtio 52:54:00:9d:30:17
vnet4 network default rtl8139 52:54:00:79:88:c6
At this step I hit the problem of newly added NICs disappearing after a VM restart; recording the process here for future reference:
[root@bogon ~]# virsh attach-interface db1 network default
Interface attached successfully
[root@bogon ~]# virsh attach-interface db1 bridge br0
Interface attached successfully
[root@bogon ~]# ls -l /etc/libvirt/qemu/db1.xml
-rw-------. 1 root root 4751 Jan  9 15:10 /etc/libvirt/qemu/db1.xml
-- Dump the current configuration and write it to the XML config file
[root@bogon ~]# virsh dumpxml db1 > /etc/libvirt/qemu/db1.xml
Inside the VM the corresponding NICs now show up as well; their configuration files need to be created manually:
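A minimal ifcfg file for the new private NIC (the device name eth1 and the 10.10.10.x heartbeat address below are illustrative assumptions):
cat > /etc/sysconfig/network-scripts/ifcfg-eth1 <<EOF
DEVICE=eth1
TYPE=Ethernet
BOOTPROTO=static
IPADDR=10.10.10.11
PREFIX=24
ONBOOT=yes
EOF
systemctl restart network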
If an added NIC is no longer needed, it can also be removed:
[root@bogon ~]# virsh detach-interface db1 bridge
error: Domain has 2 interfaces. Please specify which one to detach using --mac
error: Failed to detach interface
[root@bogon ~]# virsh domiflist db1
Interface  Type     Source    Model     MAC
-------------------------------------------------------
vnet3 bridge br0 virtio 52:54:00:9d:30:17
vnet4 network default rtl8139 52:54:00:1d:aa:2f
vnet5 bridge br0 rtl8139 52:54:00:12:b3:80
[root@bogon ~]# virsh detach-interface db1 bridge --mac 52:54:00:12:b3:80
Interface detached successfully
[root@bogon ~]# virsh domiflist db1
Interface  Type     Source    Model     MAC
-------------------------------------------------------
vnet3 bridge br0 virtio 52:54:00:9d:30:17
vnet4 network default rtl8139 52:54:00:1d:aa:2f
[root@bogon ~]# virsh detach-interface db1 network
Interface detached successfully
[root@bogon ~]# virsh domiflist db1
Interface  Type     Source    Model     MAC
-------------------------------------------------------
vnet3 bridge br0 virtio 52:54:00:9d:30:17
-- Dump the current configuration and write it to the XML config file
[root@bogon ~]# virsh dumpxml db1 > /etc/libvirt/qemu/db1.xml
The problem at this point: the added NICs still disappear after a reboot, even though virsh dumpxml was run and the result saved to /etc/libvirt/qemu/db1.xml;
in other words, the domain definition never picked up the updated configuration. Browsing virsh help suggested trying:
virsh define /etc/libvirt/qemu/db1.xml
[root@bogon vm-images]# virsh define /etc/libvirt/qemu/db1.xml
Domain db1 defined from /etc/libvirt/qemu/db1.xml
Another stop/start cycle later it was finally stable. The write-ups found online were incomplete, which cost half a day of fiddling and suspecting some bug.
[root@bogon vm-images]# virsh domiflist db1
Interface  Type     Source    Model     MAC
-------------------------------------------------------
vnet3 bridge br0 virtio 52:54:00:9d:30:17
vnet4 network default rtl8139 52:54:00:2c:48:57
[root@bogon vm-images]# virsh shutdown db1
Domain db1 is being shutdown
[root@bogon vm-images]# virsh domiflist db1
Interface  Type     Source    Model     MAC
-------------------------------------------------------
- bridge br0 virtio 52:54:00:9d:30:17
- network default rtl8139 52:54:00:2c:48:57
[root@bogon vm-images]#
[root@bogon vm-images]# virsh start db1
Domain db1 started
[root@bogon vm-images]# virsh domiflist db1
Interface  Type     Source    Model     MAC
-------------------------------------------------------
vnet3 bridge br0 virtio 52:54:00:9d:30:17
vnet4 network default rtl8139 52:54:00:2c:48:57
So the key step missing at the start was loading the modified XML back into the domain. In other words, whether adding or removing a NIC, both of the following steps are required afterwards for the change to persist:
[root@bogon vm-images]# virsh dumpxml db1 > /etc/libvirt/qemu/db1.xml
[root@bogon vm-images]# virsh define /etc/libvirt/qemu/db1.xml
Domain db1 defined from /etc/libvirt/qemu/db1.xml
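As an aside, the dump-and-define dance can be skipped entirely if attach-interface is given its persistence flags up front (supported by the libvirt shipped with RHEL7):
virsh attach-interface db1 network default --live --config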
Of course, if the command line is no fun, devices can also be added and removed graphically with virt-manager.
Once the NIC is added, configure an IP address and verify connectivity between the VM and the host.
I also saw the NIC come up as eth1 and revert to eth0 after a reboot, most likely a side effect of the repeated add/delete cycles; make sure everything is stable before moving on.
Upload the installation and patch media to the /u01/media directory:
mkdir /u01/media
Upload the GI, DB, OPatch, and RU patch media.
With the base configuration in place, shut down db1 and clone two machines from it for the two RAC nodes:
virsh shutdown db1
virt-clone --original db1 --name rac1 --auto-clone
virt-clone --original db1 --name rac2 --auto-clone
Change the hostname and IP address on each clone, then populate /etc/hosts (a per-node sketch follows the list):
192.168.1.11 db01rac1
192.168.1.12 db01rac2
192.168.1.13 db01rac1-vip
192.168.1.14 db01rac2-vip
192.168.1.15 db01rac-scan
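As a per-node sketch of those changes (the connection name eth0 is an assumption; adjust to whatever nmcli con show reports):
-- On the first clone:
hostnamectl set-hostname db01rac1
nmcli con mod eth0 ipv4.method manual ipv4.addresses 192.168.1.11/24
nmcli con up eth0
-- Repeat on the second clone with db01rac2 and 192.168.1.12.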
Clone one machine from vm1 to serve as the RAC shared storage (iSCSI simulation):
virt-clone --original vm1 --name storage1 --auto-clone
Add a 50 GB disk in raw format to storage1.
A previous article mentioned that a disk can be added by editing the XML directly; this time, try adding it with commands:
dd if=/dev/zero of=/flash/vm-images/s1-lun1.img bs=1M count=51200
virsh attach-disk storage1 /flash/vm-images/s1-lun1.img vdb
virsh dumpxml storage1 > /etc/libvirt/qemu/storage1.xml
virsh define /etc/libvirt/qemu/storage1.xml
A disk added this way is not lost on reboot; watching the XML shows the same end result as editing it by hand. Actual output below:
[root@bogon vm-images]# dd if=/dev/zero of=/flash/vm-images/s1-lun1.img bs=1M count=51200
51200+0 records in
51200+0 records out
53687091200 bytes (54 GB) copied, 84.4384 s, 636 MB/s
[root@bogon vm-images]# virsh attach-disk storage1 /flash/vm-images/s1-lun1.img vdb
Disk attached successfully
[root@bogon vm-images]# virsh dumpxml storage1 > /etc/libvirt/qemu/storage1.xml
[root@bogon vm-images]# virsh define /etc/libvirt/qemu/storage1.xml
Domain storage1 defined from /etc/libvirt/qemu/storage1.xml
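Note that dd writes out the full 50 GB up front. If a sparse file is acceptable, qemu-img creates the same raw image almost instantly, and the attach steps are unchanged:
qemu-img create -f raw /flash/vm-images/s1-lun1.img 50G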
This 50 GB disk backs the RAC shared storage, with the following planned allocation (3 x 1 GB + 30 GB + 16 GB = 49 GB, leaving about 1 GB of headroom in the VG):
OCRDG  3 * 1 GB
DATADG 30 GB
ARCHDG 16 GB
From here, the earlier article "RHEL7 配置iSCSI模拟环境" (configuring an iSCSI simulation environment on RHEL7) applies directly:
First, create the LVM volumes according to this plan:
pvcreate /dev/vdb
vgcreate vg_storage /dev/vdb
lvcreate -L 1g -n lv_lun1 vg_storage
lvcreate -L 1g -n lv_lun2 vg_storage
lvcreate -L 1g -n lv_lun3 vg_storage
lvcreate -L 30g -n lv_lun4 vg_storage
lvcreate -L 16g -n lv_lun5 vg_storage
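A quick check that the five LVs match the plan:
lvs vg_storage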
Install targetd and targetcli with yum, then configure the service:
yum -y install targetd targetcli
systemctl status targetd
systemctl start targetd
systemctl enable targetd
systemctl list-unit-files|grep targetd
Run targetcli to enter its shell, then create the block backstores:
cd /backstores/block
create disk1 /dev/mapper/vg_storage-lv_lun1
create disk2 /dev/mapper/vg_storage-lv_lun2
create disk3 /dev/mapper/vg_storage-lv_lun3
create disk4 /dev/mapper/vg_storage-lv_lun4
create disk5 /dev/mapper/vg_storage-lv_lun5
Use targetcli to create the IQN and LUNs:
cd /iscsi
create
cd /iscsi/iqn.2003-01.org.linux-iscsi.storage.x8664:sn.d5fd6c3922b6/tpg1/luns
create /backstores/block/disk1
create /backstores/block/disk2
create /backstores/block/disk3
create /backstores/block/disk4
create /backstores/block/disk5
Use targetcli to create the ACLs, then replace the default portal:
cd /iscsi/iqn.2003-01.org.linux-iscsi.storage.x8664:sn.d5fd6c3922b6/tpg1/acls
create iqn.2003-01.org.linux-iscsi.storage.x8664:sn.d5fd6c3922b6:client
cd /iscsi/iqn.2003-01.org.linux-iscsi.storage.x8664:sn.d5fd6c3922b6/tpg1/portals
delete 0.0.0.0 3260
create 192.168.1.10
The final configuration output is:
/iscsi/iqn.20.../tpg1/portals> ls /
o- / [...]
o- backstores [...]
| o- block [Storage Objects: 5]
| | o- disk1 [/dev/mapper/vg_storage-lv_lun1 (1.0GiB) write-thru activated]
| | | o- alua [ALUA Groups: 1]
| | | o- default_tg_pt_gp [ALUA state: Active/optimized]
| | o- disk2 [/dev/mapper/vg_storage-lv_lun2 (1.0GiB) write-thru activated]
| | | o- alua [ALUA Groups: 1]
| | | o- default_tg_pt_gp [ALUA state: Active/optimized]
| | o- disk3 [/dev/mapper/vg_storage-lv_lun3 (1.0GiB) write-thru activated]
| | | o- alua [ALUA Groups: 1]
| | | o- default_tg_pt_gp [ALUA state: Active/optimized]
| | o- disk4 [/dev/mapper/vg_storage-lv_lun4 (30.0GiB) write-thru activated]
| | | o- alua [ALUA Groups: 1]
| | | o- default_tg_pt_gp [ALUA state: Active/optimized]
| | o- disk5 [/dev/mapper/vg_storage-lv_lun5 (16.0GiB) write-thru activated]
| | o- alua [ALUA Groups: 1]
| | o- default_tg_pt_gp [ALUA state: Active/optimized]
| o- fileio [Storage Objects: 0]
| o- pscsi [Storage Objects: 0]
| o- ramdisk [Storage Objects: 0]
o- iscsi [Targets: 1]
| o- iqn.2003-01.org.linux-iscsi.storage.x8664:sn.d5fd6c3922b6 [TPGs: 1]
| o- tpg1 [no-gen-acls, no-auth]
| o- acls [ACLs: 1]
| | o- iqn.2003-01.org.linux-iscsi.storage.x8664:sn.d5fd6c3922b6:client [Mapped LUNs: 5]
| | o- mapped_lun0 [lun0 block/disk1 (rw)]
| | o- mapped_lun1 [lun1 block/disk2 (rw)]
| | o- mapped_lun2 [lun2 block/disk3 (rw)]
| | o- mapped_lun3 [lun3 block/disk4 (rw)]
| | o- mapped_lun4 [lun4 block/disk5 (rw)]
| o- luns [LUNs: 5]
| | o- lun0 [block/disk1 (/dev/mapper/vg_storage-lv_lun1) (default_tg_pt_gp)]
| | o- lun1 [block/disk2 (/dev/mapper/vg_storage-lv_lun2) (default_tg_pt_gp)]
| | o- lun2 [block/disk3 (/dev/mapper/vg_storage-lv_lun3) (default_tg_pt_gp)]
| | o- lun3 [block/disk4 (/dev/mapper/vg_storage-lv_lun4) (default_tg_pt_gp)]
| | o- lun4 [block/disk5 (/dev/mapper/vg_storage-lv_lun5) (default_tg_pt_gp)]
| o- portals [Portals: 1]
| o- 192.168.1.10:3260 [OK]
o- loopback [Targets: 0]
o- vhost [Targets: 0]
/iscsi/iqn.20.../tpg1/portals>
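Before leaving the targetcli shell, persist the configuration explicitly (targetcli also auto-saves to /etc/target/saveconfig.json on a clean exit):
cd /
saveconfig
exit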
If the firewall is enabled, open TCP port 3260:
firewall-cmd --permanent --add-port=3260/tcp
firewall-cmd --reload
The iSCSI server side is now done; the clients must be configured next, i.e., on both RAC nodes:
yum -y install iscsi-initiator-utils
vi /etc/iscsi/initiatorname.iscsi
#InitiatorName=iqn.1988-12.com.oracle:178a747c44
InitiatorName=iqn.2003-01.org.linux-iscsi.storage.x8664:sn.d5fd6c3922b6:client
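If iscsid was already running when initiatorname.iscsi was edited, restart it so the new initiator name takes effect:
systemctl restart iscsid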
-- Use iscsiadm to discover the available targets and log in
iscsiadm -m discovery -t st -p 192.168.1.10
iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.storage.x8664:sn.d5fd6c3922b6 -p 192.168.1.10 --login
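To verify the session and the new block devices, and to make the login persist across reboots (node.startup already defaults to automatic on RHEL7, so the last command is just a belt-and-braces step):
iscsiadm -m session
lsblk
iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.storage.x8664:sn.d5fd6c3922b6 -p 192.168.1.10 --op update -n node.startup -v automatic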
Finally, fdisk -l on both RAC nodes shows the 5 new disks:
[root@db01rac1 ~]# fdisk -l /dev/sd*
Disk /dev/sda: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdb: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdc: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdd: 32.2 GB, 32212254720 bytes, 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sde: 17.2 GB, 17179869184 bytes, 33554432 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
At this point, all preparation work is complete.