...

  1. Run "ceph-disk list" to see which disks are recognized by Ceph
  2. "chown" all the Ceph disks so they are owned by the "ceph" user and the "ceph" group
  3. Restart all the OSDs by running "ceph-disk activate <path-to-disk>" for each disk that "ceph-disk" lists as a "ceph data" disk (see the sketch below)
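
A sketch of that sequence, assuming the "ceph data" partitions turn out to be /dev/sdb1 and /dev/sdc1 (substitute whatever partitions "ceph-disk list" actually reports):

Code Block
# See which disks are recognized by Ceph
ceph-disk list

# Hand ownership of the data partitions to the ceph user and group
# (/dev/sdb1 and /dev/sdc1 are placeholders for the partitions listed above)
chown ceph:ceph /dev/sdb1 /dev/sdc1

# Re-activate each disk listed as a "ceph data" disk
ceph-disk activate /dev/sdb1
ceph-disk activate /dev/sdc1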


Add a storage node

  1. Ensure that all SSDs are listed in profile::disk::ssds in the node-specific hiera
  2. Apply the role::storage role to the new node
  3. Create the OSDs, typically two per device on a 2 TB drive. Details below
Code Block
# List available disks
ceph-volume inventory

# Dell tends to install EFI data on the first disk. Check whether there are any partitions on /dev/sdb; if there are, wipe it with
ceph-volume lvm zap /dev/sdb

# Create 2 OSDs on each disk you intend to add
ceph-volume lvm batch --osds-per-device 2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk

# Restart the services
systemctl restart ceph.target
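
To confirm that the new OSDs came up and joined the cluster, the standard Ceph status commands can be used (nothing below is specific to this setup):

Code Block
# Verify that the new OSDs are up and in, and that the cluster is healthy
ceph osd tree
ceph -s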


Storage management

HDDs vs SSDs in hybrid clusters

...