...
- Run "ceph-disk list" to see which disks are recognized by Ceph
- "chown" all the Ceph disks so they are owned by the "ceph" user and the "ceph" group
- Restart all the OSDs by running "ceph-disk activate <path-to-disk>" for each disk that "ceph-disk" lists as a "ceph data" disk.
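The steps above can be scripted. The sketch below is a dry run: it parses sample "ceph-disk list" output (the output format shown is an assumption, not captured from a real node) and prints the "chown" and "ceph-disk activate" commands instead of executing them.

```shell
#!/bin/sh
# Sample output in the shape "ceph-disk list" prints (assumed format);
# on a real node, replace this variable with:  ceph-disk list
list_output='/dev/sdb1 ceph data, active, cluster ceph, osd.0
/dev/sdc1 ceph data, active, cluster ceph, osd.1
/dev/sdd1 other, ext4'

# Keep only the "ceph data" partitions, then print (dry run, not execute)
# the commands that would fix ownership and reactivate each OSD.
printf '%s\n' "$list_output" | awk '/ceph data/ {print $1}' |
while read -r dev; do
    echo "chown ceph:ceph $dev"
    echo "ceph-disk activate $dev"
done
```

On a live node you would drop the sample variable, pipe the real "ceph-disk list" output through the same awk filter, and run the commands instead of echoing them.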
Add a storage node
- Ensure that all SSDs are listed in profile::disk::ssds in the node-specific hiera
- Install the role role::storage on the new node
- Create OSDs, typically 2 per device on a 2 TB drive. Details below.
```shell
# List available disks
ceph-volume inventory

# Dell tends to install EFI partitions on the first disk. Check whether
# there are any partitions on /dev/sdb; if there are, wipe them:
ceph-volume lvm zap /dev/sdb

# Create 2 OSDs on each disk you intend to add
ceph-volume lvm batch --osds-per-device 2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk

# Restart the services
systemctl restart ceph.target
```
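After the restart it is worth confirming that the expected number of new OSDs came up. A minimal sketch, assuming the ten data disks passed to the batch command above:

```shell
#!/bin/sh
# With 10 data disks and 2 OSDs per device, 20 new OSDs should appear.
disks=10
osds_per_device=2
expected=$((disks * osds_per_device))
echo "expected new OSDs: $expected"   # → expected new OSDs: 20

# On the node, compare against what the cluster actually reports, e.g.:
#   ceph osd tree
#   ceph -s
```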
Storage management
HDDs vs SSDs in hybrid clusters
...