
Installation/decommissioning

...

If a storage node is reinstalled, either because it needs a newer OS or because the node moves from old to new infrastructure, there is no need to start with fresh OSDs. The existing OSDs can be brought back into the cluster, as long as they have not been reformatted, with the following steps (a command sketch follows the list):

  1. Run "ceph-disk volume lvm list" to see which disks verify thall all OSDs are recognized by ceph
  2. "chown" all the ceph disk's to be owned by the "ceph" user and the "ceph" group
  3. Restart all the osd by running "ceph-disk volume lvm activate <path -to-disk>" for each disk that "ceph-disk" lists as an "ceph data" disk.-all"
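A minimal sketch of the whole sequence on the reinstalled node (the hostname storage1 and the device path are placeholders, not actual cluster names):

Code Block
root@storage1:~# ceph-volume lvm list
root@storage1:~# chown ceph:ceph /dev/<osd-device>    # repeat for each device shown by the list command
root@storage1:~# ceph-volume lvm activate --all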

Add a storage node

  1. Ensure that all SSDs are listed in profile::disk::ssds in the node-specific hiera
  2. Install the role role::storage on the new node
  3. Create OSDs, typically two per device on a 2 TB drive. Details below; a command sketch follows this list
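As a rough sketch, assuming ceph-volume's batch mode is used and with placeholder hostname and device paths, two OSDs per device can be created like this:

Code Block
root@storage1:~# ceph-volume lvm batch --osds-per-device 2 /dev/<device1> /dev/<device2>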

...

Code Block
root@cephmon1:~# ceph osd pool set <POOL> crush_rule <CRUSH-MAP>
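For example, with a hypothetical pool "volumes" and a hypothetical CRUSH rule "ssd-rule"; "ceph osd crush rule ls" shows which rules exist:

Code Block
root@cephmon1:~# ceph osd crush rule ls
root@cephmon1:~# ceph osd pool set volumes crush_rule ssd-rule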

Map an OSD to a physical disk

On a cephmon

  • ceph osd tree
    • To narrow the output to only what is needed:
    • ceph osd tree | grep down
  • ceph osd find osd.XXX
    • To narrow the output to only what is needed:
    • ceph osd find osd.XXX | grep host
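The same lookups as a session on a cephmon, where osd.XXX stands for the OSD id reported as down:

Code Block
root@cephmon1:~# ceph osd tree | grep down
root@cephmon1:~# ceph osd find osd.XXX | grep host    # the host field shows which storage node to log in to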

On the storage node

Find the device

  • ceph-volume lvm list
    • To narrow the output to only what is needed:
    • ceph-volume lvm list | grep 'osd id\|devices'

Find serial number

  • smartctl -a <device from above>  | grep -i "Serial Number"
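Putting the two lookups together on the storage node (the hostname is a placeholder; the "devices" line from the first command gives the path to use in the second):

Code Block
root@storage1:~# ceph-volume lvm list | grep 'osd id\|devices'
root@storage1:~# smartctl -a <device from above> | grep -i "Serial Number"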

Find physical drive bay

iDRAC

Use iDRAC to match the disk's serial number to its physical drive bay.

Use the OS to trigger the disk's activity light if the disk is still working

  • dd if=<device from above> of=/dev/null
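A sketch of the read loop (the hostname and device path are placeholders; bs=1M keeps the disk busy so the activity LED stays lit, stop it with Ctrl-C):

Code Block
root@storage1:~# dd if=<device from above> of=/dev/null bs=1M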