...

If a storage node is reinstalled, either because it needs a newer OS or because the node moves from old to new infrastructure, there is no need to start with fresh OSDs. As long as the disks have not been reformatted, the existing OSDs can be brought back into the cluster with the following steps:

  1. Run "ceph-volume lvm list" to see which disks are recognized by Ceph
  2. "chown" all the Ceph disks to be owned by the "ceph" user and the "ceph" group
  3. Verify that all OSDs are recognized by Ceph
  4. Restart all the OSDs by running "ceph-volume lvm activate --all", or activate them one at a time for each OSD that "ceph-volume lvm list" reports
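The steps above can be sketched as follows. This is a minimal sketch assuming the modern "ceph-volume" tool (the deprecated "ceph-disk" has no "lvm" subcommand); the paths are illustrative and should be matched against what "ceph-volume lvm list" reports on the node:

```shell
# List the OSDs that ceph-volume can find on this node's LVM volumes
ceph-volume lvm list

# Restore ownership of the OSD data directories to the ceph user/group
# (illustrative path; adjust to the actual OSD directories on the node)
chown -R ceph:ceph /var/lib/ceph/osd/*

# Activate every OSD found on the node in one go
ceph-volume lvm activate --all

# Verify that the OSDs rejoined the cluster
ceph osd tree
```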

Add a storage node

  1. Ensure that all SSDs are listed in profile::disk::ssds in the node-specific hiera
  2. Install the role role::storage on the new node
  3. Create OSDs, typically two per device on a 2 TB drive; details below
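Assuming OSD creation is done with "ceph-volume" (the exact procedure is in the details referenced above), creating two OSDs per device can be sketched as below. The device names are examples only and should match the SSDs listed in profile::disk::ssds:

```shell
# Illustrative: create two OSDs on each listed device
# (--osds-per-device splits each drive into two OSDs)
ceph-volume lvm batch --osds-per-device 2 /dev/sdb /dev/sdc
```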

...