Installation/de-commissioning

...

If a storage node is reinstalled, either because it needs a newer OS or because it moves from old to new infrastructure, there is no need to start with fresh OSDs. As long as the disks have not been reformatted, the existing OSDs can be brought back into the cluster with the following steps:

  1. Run "ceph-disk volume lvm list" to see which disks are recognized by ceph"chown" all the ceph disk's to be owned by the "ceph" user and the "ceph" groupverify thall all OSDs are recognized by ceph
  2. Restart all the osd by running "ceph-disk volume lvm activate <path -to-disk>" for each disk that "ceph-disk" lists as an "ceph data" disk.all"
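
A minimal sketch of that flow, assuming the OSDs live on /dev/sdb and /dev/sdc (device names are examples):

Code Block
# See which OSDs ceph-volume can find on the reinstalled node
ceph-volume lvm list

# Hand the OSD devices back to the ceph user (example devices)
chown -R ceph:ceph /dev/sdb* /dev/sdc*

# Bring every detected OSD back up in one go
ceph-volume lvm activate --all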

Add a storage node

  1. Ensure that all SSDs are listed in profile::disk::ssds in the node-specific hiera
  2. Install the role role::storage on the new node
  3. Create OSDs, typically 2 per device on a 2 TB drive; details in the code block below
Code Block
# List available disks
ceph-volume inventory

# Dell tends to install EFI stuff on the first disk. Check whether there are any partitions on /dev/sdb; if there are, run
ceph-volume lvm zap /dev/sdb

# Create 2 OSDs on each disk you intend to add
ceph-volume lvm batch --osds-per-device 2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk

# Restart the services
systemctl restart ceph.target
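
A quick sanity check afterwards, e.g. from a cephmon, that the new OSDs have come up under the right host:

Code Block
# The new OSDs should show up under the node's hostname
ceph osd tree

# Overall cluster health while the new OSDs backfill
ceph -s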


Storage management

HDDs vs SSDs in hybrid clusters

...

Code Block
root@cephmon1:~# ceph osd pool set <POOL> crush_rule <CRUSH-RULE>
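
The available rule names can be listed first; the pool and rule names in the second command below are examples only:

Code Block
# List the CRUSH rules defined in this cluster
root@cephmon1:~# ceph osd crush rule ls

# Example: pin a pool named "volumes" to a rule named "replicated-ssd"
root@cephmon1:~# ceph osd pool set volumes crush_rule replicated-ssd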

Map OSD to physical disk

On a cephmon

  • ceph osd tree
    • To narrow the output to OSDs that are down:
    • ceph osd tree | grep down
  • ceph osd find osd.XXX
    • To show just the host line:
    • ceph osd find osd.XXX | grep host
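
Put together on a cephmon (osd.12 is a hypothetical id):

Code Block
# Spot any down OSDs
ceph osd tree | grep down

# Find which host carries the OSD in question
ceph osd find osd.12 | grep host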

On the storage node

Find the device

  • ceph-volume lvm list
    • To show just the OSD ids and their backing devices:
    • ceph-volume lvm list | grep 'osd id\|devices'

Find serial number

  • smartctl -a <device from above> | grep -i "Serial Number"
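
For example, if the OSD turned out to live on /dev/sdc (an example device):

Code Block
# Map OSD ids to their backing devices
ceph-volume lvm list | grep 'osd id\|devices'

# Read the serial number of the device found above
smartctl -a /dev/sdc | grep -i "Serial Number"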

Find physical drive bay

iDRAC

Use iDRAC to trace the serial number of the disk to its drive bay

Use the OS to trigger the disk activity light if the disk still responds

  • dd if=<device from above> of=/dev/null
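
For example, with /dev/sdc as the suspect device (an example name); the sustained reads make the bay's activity LED blink, and the command can be interrupted with Ctrl-C once the bay is identified:

Code Block
# Read the whole device to /dev/null purely to generate disk activity
dd if=/dev/sdc of=/dev/null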