...
Patching should be finished before 23:00; in practice it is usually done by around 20:00.
Patching procedures
Storage nodes
Log in to a Ceph monitor (cephmon0, cephmon1 or cephmon2) and run the command "watch -n 1 ceph -s". Verify the following:
# health: should be ok
  health: HEALTH_OK

# mon: should be 3 daemons and have quorum
# osd: all should be up; in this example 50 of 50 are up
  services:
    mon: 3 daemons, quorum cephmon0,cephmon1,cephmon2
    mgr: cephmon0(active), standbys: cephmon1, cephmon2
    osd: 50 osds: 50 up, 50 in
    rgw: 1 daemon active

  data:
    pools:   10 pools, 880 pgs
    objects: 1.39M objects, 5.59TiB
    usage:   16.8TiB used, 74.2TiB / 91.0TiB avail
    pgs:     878 active+clean
             2   active+clean+scrubbing+deep

  io:
    client: 8.16KiB/s rd, 2.01MiB/s wr, 105op/s rd, 189op/s wr
When everything is OK, reboot the first node and wait for Ceph to return to HEALTH_OK before rebooting the next one.
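The reboot-and-wait cycle above can be sketched roughly as follows. This is a minimal sketch, not the official procedure: `health_is_ok` and `wait_for_health_ok` are hypothetical helper names, the node names in the commented loop are placeholders, and the script assumes the `ceph` CLI is available where it runs.

```shell
#!/bin/sh
# Rough sketch of the per-node patching loop (hypothetical helper names).

health_is_ok() {
  # Succeeds when the given health report starts with HEALTH_OK.
  echo "$1" | grep -q '^HEALTH_OK'
}

wait_for_health_ok() {
  # Polls "ceph health" every 10 seconds until the cluster is healthy again.
  until health_is_ok "$(ceph health)"; do
    sleep 10
  done
}

# For each storage node: reboot it, then wait before touching the next one.
# (Node names below are placeholders.)
# for node in storage01 storage02; do
#   ssh "$node" sudo reboot
#   wait_for_health_ok
# done
```

The point of the `until` loop is exactly the rule above: never start the next reboot while the cluster is still recovering.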
Compute nodes
List the instances running on the compute node:
openstack server list --all --host compute01
+--------------------------------------+--------------------+--------+-----------------------------------------+---------------------------------------------+-----------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+--------------------+--------+-----------------------------------------+---------------------------------------------+-----------+
| 5c32f1d1-2f12-1234-beffe112345ceffe1 | kubertest-master-2 | ACTIVE | kubertest=10.2.0.7, 129.241.152.9 | CoreOS 20190501 | m1.xlarge |
+--------------------------------------+--------------------+--------+-----------------------------------------+---------------------------------------------+-----------+
- Verify that the instances on the node have working network connectivity.
- Verify that no compute node hosts more than one kube master from the same cluster. The masters require quorum, so if two masters of the same cluster end up on one compute node, one of them must be moved to another node.
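One rough way to spot a node with duplicate masters is to filter the server list by instance name. This is a sketch under assumptions: the `-master-` naming pattern and the helper names `masters_on_host` / `check_node` are inventions here, so adapt them to the actual naming scheme.

```shell
#!/bin/sh
# Sketch: warn when a compute node hosts more than one kube master.
# Assumes master instances have "-master-" in their name (an assumption;
# adjust the pattern to match real instance names).

masters_on_host() {
  # stdin: table output of "openstack server list --all --host <node>";
  # prints the Name column of rows whose name contains "-master-".
  awk -F'|' '$3 ~ /-master-/ { gsub(/ /, "", $3); print $3 }'
}

check_node() {
  # $1: compute node name; warns when two or more masters share the node.
  count=$(openstack server list --all --host "$1" | masters_on_host | wc -l)
  if [ "$count" -gt 1 ]; then
    echo "WARNING: $1 hosts $count kube masters; move one to another node"
  fi
}
```

When a duplicate is found, `openstack server migrate` can move the instance, but the exact flags for requesting a live migration and a target host differ between openstackclient versions, so check `openstack server migrate --help` before running it.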