Installation/de-commissioning
...
If a storage node is reinstalled, either because it needs a newer OS or because it moves from old to new infrastructure, there is no need to start with fresh OSDs. As long as the disks have not been reformatted, the existing OSDs can be brought back into the cluster with the following steps:
- Run "ceph-volume lvm list" to verify that all OSDs are recognized by Ceph
- "chown" all the Ceph disks to be owned by the "ceph" user and the "ceph" group
- Restart all the OSDs by running "ceph-volume lvm activate --all", which activates every OSD that "ceph-volume lvm list" reports on the node
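The reinstall steps above can be condensed into a short sketch. This is a sketch only: it assumes a Ceph release where "ceph-volume" has replaced "ceph-disk", and the default OSD data path, which may differ on your nodes.

```shell
# Sketch of bringing existing OSDs back after a node reinstall.
# Assumes the default OSD data path /var/lib/ceph/osd/ceph-*.

# 1. Verify that all OSDs on the node are recognized by Ceph
ceph-volume lvm list

# 2. Give the "ceph" user and group ownership of the OSD directories
chown -R ceph:ceph /var/lib/ceph/osd/ceph-*

# 3. Activate every OSD that ceph-volume discovered
ceph-volume lvm activate --all
```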
Add a storage node
- Ensure that all SSDs are listed in "profile::disk::ssds" in the node-specific hiera
- Install the role "role::storage" on the new node
- Create OSDs, typically 2 per device on a 2TB drive. Details below.
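The "Create OSDs" step can be sketched with ceph-volume's batch mode, which splits each device into the requested number of OSDs. The device names below are hypothetical examples for the new node.

```shell
# Create OSDs in bulk, 2 per listed device.
# /dev/sdb and /dev/sdc are example device names; adapt per node.
ceph-volume lvm batch --osds-per-device 2 /dev/sdb /dev/sdc
```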
...
root@cephmon1:~# ceph osd pool set <POOL> crush_rule <CRUSH-RULE>
Map OSD to physical disk
On a cephmon
- ceph osd tree
  - To narrow the output to down OSDs: ceph osd tree | grep down
- ceph osd find osd.XXX
  - To narrow the output to the host: ceph osd find osd.XXX | grep host
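The "grep down" filter above can be tried against sample output. The OSD tree below is a hypothetical example, not real cluster state:

```shell
# Hypothetical sample of "ceph osd tree" output, for illustration only.
sample=' ID CLASS WEIGHT  TYPE NAME      STATUS REWEIGHT PRI-AFF
 -1       3.63770 root default
 -3       3.63770     host storage1
  0   ssd 1.81885         osd.0        up  1.00000 1.00000
  1   ssd 1.81885         osd.1      down  1.00000 1.00000'

# The runbook filter: keep only the down OSDs
echo "$sample" | grep down

# Pull out just the OSD name (4th field is NAME, 5th is STATUS)
echo "$sample" | awk '$5 == "down" {print $4}'
```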
On the storage node
Find the device
- ceph-volume lvm list
  - To narrow the output to OSD ids and devices: ceph-volume lvm list | grep 'osd id\|devices'
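The filter above can be illustrated on sample output. The excerpt below is a hypothetical, trimmed version of what "ceph-volume lvm list" prints; real output has many more fields:

```shell
# Hypothetical excerpt of "ceph-volume lvm list" output, for illustration.
sample='====== osd.4 =======

  [block]       /dev/ceph-pool/osd-block-4
      osd id                4
      devices               /dev/sdc'

# The runbook filter: keep only the OSD id and its backing device
echo "$sample" | grep 'osd id\|devices'
```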
Find serial number
smartctl -a <device from above> | grep -i "Serial Number"
Find physical drive bay
iDRAC
Use iDRAC to match the serial number of the disk to a drive bay.
Use the OS to trigger the disk activity light, if the disk is still readable:
- dd if=<device from above> of=/dev/null
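Put together, the storage-node steps look roughly like this. It is a sketch: the device path is a hypothetical example standing in for whatever "ceph-volume lvm list" reported in the earlier step.

```shell
# Hypothetical backing device found via "ceph-volume lvm list"
DEV=/dev/sdc

# Serial number, to match against a drive bay in iDRAC
smartctl -a "$DEV" | grep -i "Serial Number"

# If the disk still reads, light up its activity LED from the OS
# (interrupt with Ctrl-C once the bay has been identified)
dd if="$DEV" of=/dev/null bs=1M
```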